id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
771,470 | First steps with Rust declarative macros! | Macros are one of the ways to extend Rust syntax. As The Book calls it, “a way of writing code that... | 0 | 2021-07-26T14:09:13 | https://dev.to/rogertorres/first-steps-with-rust-declarative-macros-1f8m | beginners, rust, tutorial, codenewbie | Macros are one of the ways to extend Rust syntax. As [_The Book_](https://doc.rust-lang.org/book/ch19-06-macros.html) calls it, “a way of writing code that writes other code”. Here, I will talk about _declarative macros_, or as it is also called, _macros by example_. Examples of declarative macros are `vec!`, `println!` and `format!`.
The macros I will **not** talk about are the _procedural macros_, but you can read about them [here](https://doc.rust-lang.org/reference/procedural-macros.html).
As usual, **I am writing for beginners**. If you want to jump to the next level, check:
- _The Book's_ [section on macros](https://doc.rust-lang.org/book/ch19-06-macros.html).
- The chapter about macros in [_Rust by Example_](https://doc.rust-lang.org/reference/macros.html)
- [_The Little Book of Rust Macros_](https://danielkeep.github.io/tlborm/book/README.html), which is the most complete material I found about the topic (the second chapter is especially amusing).
---
## Why do I need macros?
> The actual coding starts in the [next section](#coding).
Declarative macros (from now on, just “macros”) are not functions, but it would be silly to deny the resemblance. Like functions, we use them to perform actions that would otherwise require too many lines of code or quirky commands (I am thinking about `vec!` and `println!`, respectively). These are (two of) the reasons to _use_ macros, but what about the reasons to _create_ them?
Well, maybe you are developing a crate and want to offer this feature to your users, like warp did with [`path!`](https://docs.rs/warp/0.1.20/warp/macro.path.html). Or maybe you want to use a macro as boilerplate, so you don't have to create several similar functions, as I did [here](https://github.com/rogertorres/mtgsdk/blob/main/src/cards.rs#L197). It might also be the case that you need something that cannot be delivered by usual Rust syntax, like a function with initial values or structurally different parameters (such as `vec!`, which allows calls like `vec![2,2,2]` or `vec![2;3]`—more on this later).
That being said, I believe that the best approach is to learn how to use them, try them a few times, and when the time comes when they might be useful, you will remember this alternative.
---
## <a name="coding"></a>Declaring macros
This is how you declare a macro:
```rust
macro_rules! etwas {
() => {}
}
```
You could call this macro with `etwas!()`, `etwas![]` or `etwas!{}`. There is no way to enforce one of them. When we always call a macro with one delimiter—like parentheses in `println!("text")` or square brackets in `vec![]`—it is just a usage convention (a convention that we should keep).
But what is happening in this macro? Nothing. Let's add something to make it easier to visualize its structure:
```rust
macro_rules! double {
($value:expr) => { $value * 2 }
}
fn main() {
println!("{}", double!(7));
}
```
The left side of `=>` is the **matcher**, the rules that define what the macro can receive as input. The right side is the **transcriber**, the output processing.
> Not very important, but both the matcher and the transcriber can be written using either `()`, `[]` or `{}`.
---
## The matching
This will become clear later, but let me tell you right away: the matching resembles [regex](https://en.wikipedia.org/wiki/Regular_expression). You may ask for specific arguments, fixed values, define acceptable repetition, etc. If you are familiar with regex, you should have no problem picking this up.
Let's go through the most important things you should know about the matching.
### Variable argument
Variable arguments begin with `$` (e.g., `$value:expr`). Their structure is: `$` `name` `:` `designator`.
- Both `$` and `:` are fixed.
- The `name` follows Rust's variable naming conventions. When used in the transcriber (see below), these names are called _metavariables_.
- Designators are **not** variable types. You may think of them as “syntax categories”. Here, I will stick with [expressions](https://doc.rust-lang.org/reference/expressions.html) (`expr`), since Rust is [“primarily an expression language”](https://doc.rust-lang.org/reference/statements-and-expressions.html). A list of possible designators can be found [here](https://doc.rust-lang.org/reference/macros-by-example.html#metavariables).
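As a quick illustration of designators beyond `expr`, the hypothetical macro below uses `ident` to capture a function name and `ty` to capture its return type (the macro and function names here are mine, purely for demonstration):

```rust
// `ident` captures a name, `ty` a type, `expr` a value.
macro_rules! make_getter {
    ($name:ident, $t:ty, $value:expr) => {
        fn $name() -> $t {
            $value
        }
    };
}

// Expands to `fn answer() -> i32 { 40 + 2 }`.
make_getter!(answer, i32, 40 + 2);

fn main() {
    assert_eq!(answer(), 42);
}
```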
> **Note:** There seems to be no consensus on the name "designator". [The little book](https://danielkeep.github.io/tlborm/book/mbe-min-captures-and-expansion-redux.html) calls it "capture"; [The Rust reference](https://doc.rust-lang.org/reference/macros-by-example.html) calls it "fragment-specifier"; and you will also find people referring to them as "types". Just be aware of that when jumping from source to source. Here, I will stick with designator, as proposed in [Rust by Example](https://doc.rust-lang.org/rust-by-example/macros/designators.html).
### Fixed arguments
No mystery here. Just add them without `$`. For example:
```rust
macro_rules! power {
($value:expr, squared) => { $value.pow(2) }
}
fn main() {
println!("{}", power!(3_i32, squared));
}
```
I know there are things here that I have not explained yet. I will talk about them now.
### Separator
Some designators require a specific follow-up token. Expressions must be followed by one of `=>`, `,` or `;`. That is why I had to add a comma between `$value:expr` and the fixed value `squared`. You will find a complete list of follow-ups [here](https://danielkeep.github.io/tlborm/book/mbe-min-captures-and-expansion-redux.html).
### Multiple matching
What if we want our macro to not only calculate a number squared, but also a number cubed? We do this:
```rust
macro_rules! power {
    ($value:expr, squared) => { $value.pow(2) };
    ($value:expr, cubed) => { $value.pow(3) };
}
```
Multiple matching can be used to capture different levels of specificity. Usually, you will want to write the matching rules from the most specific to the least specific, so your call doesn't fall into the wrong arm. A more technical explanation can be found [here](https://danielkeep.github.io/tlborm/book/mbe-min-captures-and-expansion-redux.html).
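A small sketch of why ordering matters (this macro is my own, purely illustrative): the arm that matches the literal token `0` must come before the general `expr` arm, otherwise `describe!(0)` would be captured by `$e:expr` first.

```rust
macro_rules! describe {
    // Most specific first: matches only the literal token `0`.
    (0) => { "zero" };
    // Least specific last: matches any expression.
    ($e:expr) => { "some expression" };
}

fn main() {
    assert_eq!(describe!(0), "zero");
    assert_eq!(describe!(1 + 1), "some expression");
}
```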
### Repetition
Most macros that we use allow for a flexible number of inputs. For example, we may call `vec![2]` or `vec![1, 2, 3]`. This is where the matching resembles Regex the most. Basically, we wrap the variable inside `$()` and follow up with a repetition operator:
- `*` — indicates any number of repetitions.
- `+` — indicates any number, but at least one.
- `?` — indicates an optional, with zero or one occurrence.
Let's say we want to add `n` numbers. We need at least two addends, so we will have a single first value and one or more (`+`) additional values. This is what such a matcher looks like:
```rust
macro_rules! adder {
($left:expr, $($right:expr),+) => {}
}
fn main() {
adder!(1, 2, 3, 4);
}
```
We will work on the transcriber later.
### Repetition separator
As you can see in the example above, I added a comma before the repetition operator `+`. That's how we add a separator between repetitions without allowing a trailing separator. But what if we want a trailing separator? Or maybe we want it to be flexible, letting the user include a trailing separator or not? You can have any of the three possibilities like this:
```rust
macro_rules! no_trailing {
($($e:expr),*) => {}
}
macro_rules! with_trailing {
($($e:expr,)*) => {}
}
macro_rules! either {
($($e:expr),* $(,)*) => {}
}
fn main() {
no_trailing!(1, 2, 3);
with_trailing!(1, 2, 3,);
either!(1, 2, 3);
either!(1, 2, 3,);
}
```
### Versatility
Unlike functions, macros can take rather different shapes of arguments. Let's consider the `vec!` macro as an example. For that, I will omit the transcribers.
```rust
macro_rules! vec {
() => {};
($elem:expr; $n:expr) => {};
($($x:expr),+ $(,)?) => {};
}
```
It deals with three kinds of calls:
- `vec![]`, which creates an empty Vector.
- `vec!["text"; 10]`, which repeats the first value ("text") `n` times, where `n` is the second value (10).
- `vec![1,2,3]`, which creates a vector with all the listed elements.
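We can check that the standard `vec!` really accepts all three shapes:

```rust
fn main() {
    // First arm: `()` — an empty vector.
    let empty: Vec<i32> = vec![];
    assert!(empty.is_empty());

    // Second arm: `($elem:expr; $n:expr)` — repeat a value n times.
    let repeated = vec!["text"; 3];
    assert_eq!(repeated, ["text", "text", "text"]);

    // Third arm: `($($x:expr),+ $(,)?)` — list the elements.
    let listed = vec![1, 2, 3];
    assert_eq!(listed, [1, 2, 3]);
}
```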
> If you want to see the implementation of the `vec!` macro, check [Jon's stream about macros](https://www.youtube.com/watch?v=q6paRBbLgNw).
---
## The transcriber
The magic happens after the `=>`. Most of what you are going to do here is regular Rust, but I would like to bring your attention to some specificities.
### Type
When I called the exponentiation macro `power!`, I did this:
```rust
power!(3_i32, squared);
```
I had to specify the type `i32` because I used the `pow()` function, which cannot be called on a value of ambiguous numeric type; and since we do not declare types in macros, I had to let the compiler know this information somehow. This is something to be aware of when dealing with macros. Of course, I could have forced it by declaring a variable, assigning the metavariable's value to it and thus fixing the type. However, to do such a thing, we need multiple statements.
### Multiple statements
To have more than one line in your transcriber, you have to use double curly brackets:
```rust
macro_rules! etwas {
//v --- this one
($value:expr, squared) => {{
let x: u32 = $value;
x.pow(2)
}}
//^ --- and this one
}
```
Easy.
### Using repetition
Let us finish our `adder!` macro.
```rust
macro_rules! adder {
($($right:expr),+) => {{
let mut total: i32 = 0;
$(
total += $right;
)+
total
}};
}
fn main() {
assert_eq!(adder!(1, 2, 3, 4), 10);
}
```
To handle repetition, all we have to do is place the statement we want to repeat within `$(` and `)+` (the repetition operator should match the one in the matcher—that is why I am using `+` here as well).
But what if we have multiple repetitions? Consider the following code.
```rust
macro_rules! operations {
(add $($addend:expr),+; mult $($multiplier:expr),+) => {{
let mut sum = 0;
$(
sum += $addend;
)*
let mut product = 1;
$(
product *= $multiplier;
)*
println!("Sum: {} | Product: {}", sum, product);
}}
}
fn main() {
operations!(add 1, 2, 3, 4; mult 2, 3, 10);
}
```
How does Rust know that it must repeat _four times_ in the first repetition block and only _three times_ in the second one? By context: it checks which metavariable is being used and figures out what to do. Clever, huh?
Sure, you can make things harder for Rust. In fact, you can make them indecipherable, like this:
```rust
macro_rules! operations {
(add $($addend:expr),+; mult $($multiplier:expr),+) => {{
let mut sum = 0;
let mut product = 1;
$(
sum += $addend;
product *= $multiplier;
)*
println!("Sum: {} | Product: {}", sum, product);
}}
}
```
What does “clever Rust” do with something like this? Well, one of the things it does best: it gives you a clear compile error:
```zsh
error: meta-variable 'addend' repeats 4 times, but 'multiplier' repeats 3 times
--> src/main.rs:43:10
|
43 | $(
| __________^
44 | | sum += $addend;
45 | | product *= $multiplier;
46 | | )*
| |_________^
```
Neat! 🦀
---
## Expand
As mentioned earlier, macros are syntax extensions, which means that Rust will turn them into regular Rust code. Sometimes, to understand what is going ~~wrong~~ on, it is very helpful to see how Rust pulls that transformation off. To do so, use the following command.
```shell
$ cargo rustc --profile=check -- -Zunstable-options --pretty=expanded
```
This command, however, is not only verbose, but it also requires the nightly compiler. To avoid this and get the same result, you may install [`cargo-expand`](https://github.com/dtolnay/cargo-expand):
```shell
$ cargo install cargo-expand
```
Once it is installed, you just have to run the command `cargo expand`.
> **Note**: Although you don't have to be using the nightly compiler, I believe (and you may call me on this) you do have to have it installed. To do so, run the command `rustup install nightly`.
Look at how the macro `operations!` is expanded.
```rust
fn main() {
{
let mut sum = 0;
sum += 1;
sum += 2;
sum += 3;
sum += 4;
let mut product = 1;
product *= 2;
product *= 3;
product *= 10;
{
::std::io::_print(::core::fmt::Arguments::new_v1(
&["Sum: ", " | Product: ", "\n"],
&match (&sum, &product) {
(arg0, arg1) => [
::core::fmt::ArgumentV1::new(arg0, ::core::fmt::Display::fmt),
::core::fmt::ArgumentV1::new(arg1, ::core::fmt::Display::fmt),
],
},
));
};
};
}
```
As you can see, even `println!` was expanded.
---
## Export and import
To use a macro outside the scope where it was defined, you have to export it using `#[macro_export]`.
```rust
#[macro_export]
macro_rules! etwas {
() => {};
}
```
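Since `#[macro_export]` hoists the macro to the crate root, it can also be called through a crate path. A small sketch (the constant `42` is mine, just to make the call observable):

```rust
#[macro_export]
macro_rules! etwas {
    () => {
        42
    };
}

fn main() {
    // Reachable by plain name after its definition...
    assert_eq!(etwas!(), 42);
    // ...and through the crate root, thanks to #[macro_export].
    assert_eq!(crate::etwas!(), 42);
}
```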
You may also make all the macros of a module visible to the code that follows it with `#[macro_use]`.
```rust
#[macro_use]
mod inner {
macro_rules! a {
() => {};
}
macro_rules! b {
() => {};
}
}
a!();
b!();
```
To use a macro that another crate exports, you also use `#[macro_use]`.
```rust
#[macro_use(lazy_static)] // Or #[macro_use] to import all macros.
extern crate lazy_static;
lazy_static!{}
```
The example above is from [The Rust Reference](https://doc.rust-lang.org/reference/macros-by-example.html#the-macro_use-attribute).
---
And that's all for today. There is certainly more to cover, but I will leave you with the readings I recommended earlier.
> _Cover image by [Thom Milkovic](https://unsplash.com/photos/FTNGfpYCpGM)_.
| rogertorres |
771,477 | Assertion in Python Programming | Hello dear readers! welcome back to another section of our tutorial on Python. In this tutorial post,... | 0 | 2021-07-26T01:48:49 | https://www.webdesigntutorialz.com/2020/08/assertion-in-python-programming.html | programming, codepen, python, devops |
Hello dear readers! Welcome back to another section of our tutorial on Python. In this tutorial post, we will be studying assertions in Python.
What is an Assertion?
An assertion is a sanity check that you can turn on or off when you are done testing your program.
The easiest way to think of an assertion is to link it to a raise-if statement (or to be more accurate, a raise-if-not statement). An expression is tested and if the result comes up false, then an exception is raised.
Assertions in Python are carried out by the assert statement, a keyword introduced in version 1.5.
Python programmers often place assertions at the start of a function to check for valid input, and after a function call to check for valid output.
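For example (the function name and values here are mine, purely to illustrate the two placements):

```python
def average(values):
    # Assertion at the start of the function: check for valid input.
    assert len(values) > 0, "average() requires a non-empty sequence"
    return sum(values) / len(values)

result = average([2, 4, 6])
# Assertion after the call: check for valid output.
assert result == 4.0
print(result)
```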
The assert Statement
When an assert statement is encountered, Python evaluates the accompanying expression, which is hopefully true. If the expression is false, Python raises an AssertionError exception.
Syntax
The syntax for assert statement is -
assert Expression[, Arguments]
If the assertion fails, Python uses the optional Arguments expression as the argument for the AssertionError. AssertionError exceptions can, of course, be caught and handled like any other exception using a try-except statement; but if left unhandled, they will terminate the program and produce a traceback.
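A minimal sketch of catching an AssertionError (the message text is arbitrary):

```python
try:
    assert 2 + 2 == 5, "arithmetic went wrong"
except AssertionError as error:
    # The second expression of the assert becomes the exception argument.
    caught = str(error)

print(caught)
```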
Example
Below is a function that converts a temperature from degrees Kelvin to degrees Fahrenheit. Since zero degrees Kelvin is as cold as it gets, the function bails out if it sees a negative temperature -
#!/usr/bin/python
def KelvinToFahrenheit(Temperature):
    assert (Temperature >= 0), "Colder than absolute zero!"
    return ((Temperature - 273) * 1.8) + 32

print(KelvinToFahrenheit(273))
print(int(KelvinToFahrenheit(505.78)))
print(KelvinToFahrenheit(-5))
Output
When the above code is executed, it will produce the following result -
32.0
451
Traceback (most recent call last):
  File "test.py", line 9, in <module>
    print(KelvinToFahrenheit(-5))
  File "test.py", line 4, in KelvinToFahrenheit
    assert (Temperature >= 0), "Colder than absolute zero!"
AssertionError: Colder than absolute zero!
Alright guys! This is where we are rounding up this tutorial post. In our next tutorial, we are going to discuss object-oriented programming in Python.
Feel free to ask your questions where necessary and I will attend to them as soon as possible. If this tutorial was helpful to you, you can use the share button to share it.
Follow us on our various social media platforms to stay updated with our latest tutorials. You can also subscribe to our newsletter to get our tutorials delivered directly to your email.
Thanks for reading and bye for now. | webdesigntutorialz |
771,486 | How to build CRUD Spring Boot JPA with MySQL Container on Ubuntu 18.04 | Spring Boot, Spring Data JPA, MySQL, Docker | 0 | 2021-07-26T02:42:05 | https://dev.to/khanhnhb/how-to-build-crud-spring-boot-jpa-with-mysql-container-on-ubuntu-18-04-47d3 | ---
title: How to build CRUD Spring Boot JPA with MySQL Container on Ubuntu 18.04
published: true
description: Spring Boot, Spring Data JPA, MySQL, Docker
tags:
//cover_image: https://res.cloudinary.com/dvehkdedj/image/upload/v1627267177/banner_fzrosw.jpg
---
## I. Introduction
In this article you will learn how Spring Data JPA lets you build a simple backend application that connects to a MySQL database container defined in a docker-compose file. Spring Data JPA is a great choice for speeding up your development so you can focus on the business logic.
Technologies you can learn in this tutorial:
- Spring Boot JPA
- Build Docker Compose for MySQL
- Simple CRUD operations
Tools used in the implementation:
- IntelliJ IDEA for coding (you can choose another IDE; it is up to you)
- Postman, to send HTTP requests
- Docker and Docker Compose, to run the MySQL container. You can read more about installing them on Ubuntu 18.04: [Install Docker on Ubuntu 18.04](https://khanhhoang.hashnode.dev/how-to-install-docker-on-ubuntu-1804), [Install Docker Compose on Ubuntu 18.04](https://khanhhoang.hashnode.dev/how-to-install-docker-compose-on-ubuntu-1804)
GitHub repository at here [blog repository](https://github.com/KhanhNHB/blog)
Video reference for coding [at here](https://www.youtube.com/watch?v=NxfbU9YTDQI&t=268s)
## II. Implement
### Step 1: Initial Project Spring Boot Maven
- In this project I will use Maven. Maven is a build automation tool used primarily for Java projects.
Open IntelliJ, choose Spring Initializr, edit the project name, and choose the Project SDK and Java version.
> Optionally, you can create the project on the website [at here](https://start.spring.io/), then download, extract and import the project into IntelliJ.

- You can add the dependencies Spring Data JPA, Spring Web, and MySQL Driver when initializing the project, or add them manually in the pom.xml file.
- POM stands for Project Object Model. It is the fundamental unit of work in Maven: an XML file that contains information about the project and configuration details used by Maven. Read more detail [at here](https://maven.apache.org/guides/introduction/introduction-to-the-pom.html).
```
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<scope>runtime</scope>
</dependency>
```
As you can see, each dependency is written in XML. The naming convention:
- **groupId**: A name that uniquely identifies the project or organization.
- **artifactId**: A unique name for the project. It refers to the packages.
- **version**: The version of the project.
- **scope**: The specific scope for the dependency. There are five scopes available:
  - **compile**: The default when no scope is specified.
  - **provided**: The dependency is provided at runtime by the JDK or a container.
  - **runtime**: Dependencies with this scope are required at runtime.
  - **test**: Test dependencies aren't transitive and are only present on the test and execution classpaths.
  - **system**: Similar to the provided scope. The difference is that system requires us to point directly to a specific jar on the system.
#### Step 2: Implement `docker-compose.yml` to start MySQL
Create a file `docker-compose.yml` in the project root to run the MySQL container.
- Docker Compose is a tool for defining and running multi-container applications. A YML file configures your application's services. Using Compose is basically a three-step process:
  - Create a Dockerfile to define your app's environment, so it can be reproduced anywhere.
  - Define the services that make up your app in `docker-compose.yml`, so they can run together in an isolated environment.
  - Start your application's services with the command `docker-compose up`.
> Note: For now I skip the Dockerfile. I will create a Dockerfile in a later session.
```
version: '3'
services:
mysql:
image: mysql:8.0
container_name: blog_mysql
volumes:
- db_mysql:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_USER: blogger
MYSQL_PASSWORD: blogger
MYSQL_DATABASE: blog
ports:
- 3306:3306
volumes:
db_mysql:
```
- **version**: The version of the Compose file format, as a string.
- **services**: Makes up your application's services.
- **mysql**: The unique name of this service.
- **image**: Before the colon is the name of the image you want to use; after the colon is its version. Pinning a specific version is recommended in case the image changes in the future.
- **container_name**: The unique name of the container. A random name is generated if none is provided.
- **volumes**: Makes the data persist. If Compose finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you've created in volumes isn't lost. Before the colon, `db_mysql` is the named volume on the host; after the colon, `/var/lib/mysql` is the destination inside the container.
- **environment**: Sets the credentials used to access the MySQL container and manipulate data.
- **ports**: The ports used to access the container. Before the colon is the port (`3306`) the host machine connects to; after the colon is the port (`3306`) inside the MySQL container.
#### Step 3: Configure `application.properties` to connect to MySQL
To connect to the MySQL container, we configure the Spring Boot application in `application.properties` in the resources folder:
```
spring.jpa.hibernate.ddl-auto=update
spring.datasource.url=jdbc:mysql://localhost:3306/blog
spring.datasource.username=blogger
spring.datasource.password=blogger
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
```
- spring.jpa.hibernate.ddl-auto: Can be none, update, create or create-drop; see more detail in the [Hibernate Document](https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#configurations-hbmddl)
  - **none**: The default for MySQL; no change is made to the database structure. Good for security in production.
  - **update**: Hibernate changes the database according to the given entity structures.
  - **create**: Creates the database every time, but does not drop it on close.
  - **create-drop**: Creates the database and drops it when the `SessionFactory` closes.
- spring.datasource.url: The URL used to connect to the MySQL container.
- spring.datasource.username: The username used to connect to MySQL.
- spring.datasource.password: The password used to connect to MySQL.
- spring.datasource.driver-class-name: The MySQL driver class name for the datasource.
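Depending on your MySQL 8 image and Connector/J version, the JDBC URL may also need extra parameters; for example (the parameter values below are illustrative, not from the original setup):

```
spring.datasource.url=jdbc:mysql://localhost:3306/blog?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC
```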
#### Step 4: Create `@Entity`, Repository, `@Controller`
**Define the Blog entity class. Hibernate will automatically translate the Blog entity into a Blog table in the database.**
```
package com.khanhnhb.blog.model;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
@Entity
public class Blog {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
public Integer id;
public String title;
public String content;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public String getTitle() {
return title;
}
public void setTitle(String title) {
this.title = title;
}
public String getContent() {
return content;
}
public void setContent(String content) {
this.content = content;
}
}
```
- The `@Entity` annotation tells Hibernate to make a table out of this class.
**Define the Blog repository, an interface used to manipulate records with CRUD operations.**
```
package com.khanhnhb.blog.repository;
import com.khanhnhb.blog.model.Blog;
import org.springframework.data.repository.CrudRepository;
public interface BlogRepository extends CrudRepository<Blog, Integer> {
}
```
- CRUD refers to Create, Read, Update, Delete.
- Spring automatically implements this repository interface in a bean with the same name, blogRepository.
**Define the Blog controller to handle HTTP requests.**
```
package com.khanhnhb.blog.controller;
import com.khanhnhb.blog.model.Blog;
import com.khanhnhb.blog.repository.BlogRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.*;
@Controller
@RequestMapping(path = "/blogs")
public class BlogController {
@Autowired
private BlogRepository blogRepository;
@PostMapping()
public @ResponseBody Blog createBlog(@RequestBody Blog newBlog) {
return blogRepository.save(newBlog);
}
@GetMapping()
public @ResponseBody Iterable<Blog> getAll() {
return blogRepository.findAll();
}
@GetMapping(path = "{id}")
public @ResponseBody Blog getOne(@PathVariable Integer id) {
return blogRepository.findById(id).get();
}
@PutMapping(path = "{id}")
public @ResponseBody Blog updateBlog(@PathVariable Integer id, @RequestBody Blog updateBlog) {
Blog blog = blogRepository.findById(id).get();
blog.setTitle(updateBlog.getTitle());
blog.setContent(updateBlog.getContent());
return blogRepository.save(blog);
}
@DeleteMapping(path = "{id}")
public @ResponseBody Integer deleteBlog(@PathVariable Integer id) {
Blog blog = blogRepository.findById(id).get();
blogRepository.delete(blog);
return blog.getId();
}
}
```
- The `@Controller` annotation marks this class as a controller.
- The `@RequestMapping` annotation maps URLs starting with /blogs (after the application path).
- The `@Autowired` annotation injects the bean called blogRepository, which is auto-generated by Spring when the application starts; we use it to handle the data.
- The `@PostMapping` annotation maps only POST requests.
- The `@RequestBody` annotation binds a value passed in the request body.
- The `@ResponseBody` annotation means the returned object is the response data (Blog, Integer, ...).
- The `@GetMapping` annotation maps only GET requests.
- The `@PathVariable` annotation handles template variables in the request URL.
- The `@PutMapping` annotation maps only PUT requests.
- The `@DeleteMapping` annotation maps only DELETE requests.
#### Step 5: Make HTTP request with Postman
**Create Blog**
`POST: localhost:8080/blogs`
```
body:
{
"title": "Exploring Spring Data JPA",
    "content": "Create first blog"
}
```

**Read all Blog**
`GET: localhost:8080/blogs`

**Read blog by id**
`GET: localhost:8080/blogs/1`

**Update blog**
`PUT: localhost:8080/blogs/1`
```
body:
{
"title": "Exploring Spring Data JPA and MySQL",
    "content": "Modify first blog"
}
```

**Delete blog**
`DELETE: localhost:8080/blogs/1`

## III. Conclusion
In this article you learned how Spring Data JPA lets you build a simple backend application that connects to a MySQL database container defined in a docker-compose file.
📚 Recommended Books
Clean Code:[Book](https://amzn.to/2UGDPlX)
HTTP: [The Definitive Guide ](https://amzn.to/2JDVi8s)
Clean architecture: [Book](https://amzn.to/2xOBNXW)
📱 Social Media
Blog: [KhanhNHB Tech](https://hashnode.com/@KhanhHoang)
Twitter: [KhanhNHB](https://twitter.com/khanh_nhb)
Youtube: [KhanhNHB Tech](https://www.youtube.com/channel/UCp_M_ymiNsbu0_Hs1mIgSZg)
GitHub: [KhanhNHB](https://github.com/KhanhNHB)
| khanhnhb | |
771,503 | Introduction to React Context API | Learn how the Context API works in React and the best times to use it to avoid prop-drilling in your... | 0 | 2021-07-26T03:25:07 | https://dev.to/joseprest/introduction-to-react-context-api-7gi | react, javascript, redux | Learn how the Context API works in React and the best times to use it to avoid prop-drilling in your application.
One of the best things about React is that we have a lot of different ways to solve specific problems. We have a few different form libraries, a bunch of CSS libraries and, for the most important part of React, we have a lot of different libraries specific to state data problems in React.
Identifying when to use a certain library in our project is a skill that we develop through experience. Especially in React, where we have so many libraries to choose from, sometimes we might end up installing and using libraries that we don’t need.
Context API is a React API that can solve a lot of problems that modern applications face related to state management and how they’re passing state to their components. Instead of installing a state management library in your project that will eventually cost your project performance and increase your bundle size, you can easily go with Context API and be fine with it.
Let’s understand what the Context API is, the problems it solves and how to work with it.
Why Context API?
One of the concepts of React is to break your application into components, for reusability purposes. So in a simple React application, we have a few different components. As our application grows, these components can become huge and unmaintainable, so we break them into smaller components.
That’s one of the best concepts about React—you can create a bunch of components and have a fully maintainable and concise application, without having to create a super huge component to deal with your whole application.
After breaking components into smaller components for maintainability purposes, these small components might now need some data to work properly. If these small components need data to work with, you will have to pass data through props from the parent component to the child component. This is where we can slow down our application and cause development issues.
Let’s imagine that we have a component called Notes that is responsible to render a bunch of notes.

Just looking at this code, we can notice that we can break this component into smaller components, making our code cleaner and more maintainable. For example, we could create a component called Note and inside that component, we would have three more components: Title, Description and Done.
We now have a few components, and we certainly increased the reusability and maintainability of our example application. But, in the future, if this application grows in size and we feel the need to break these components into smaller components, we might have a problem.
Passing data through props over and over can cause problems for your application. Sometimes you might pass more props than you need, forget to pass props that you do need, or rename props along the way without noticing, etc. If you're passing data through props from the parent component down to a fourth- or fifth-level component, you're not writing reusable, maintainable code, and this might hurt your application in the future.
This is what we call “prop-drilling.” This can frustrate and slow down your development in the mid- to long-term—passing props over and over again to your components will cause future problems in your application.
That’s one of the main problems that Context API came to solve for us.
Context API
The Context API can be used to share data with multiple components, without having to pass data through props manually. For example, some use cases the Context API is ideal for: theming, user language, authentication, etc.
createContext
To start with the Context API, the first thing we need to do is create a context using the createContext function from React.
const NotesContext = createContext([]);
The createContext function accepts an initial value, but this initial value is not required.
After creating your context, that context now has two React components that are going to be used: Provider and Consumer.
Provider
The Provider component is going to be used to wrap the components that are going to have access to our context.
<NotesContext.Provider value={this.state.notes}>
...
</NotesContext.Provider>
The Provider component receives a prop called value, which can be accessed from all the components that are wrapped inside Provider, and it will be responsible for granting access to the context data.
Consumer
After you wrap all the components that are going to need access to the context with the Provider component, you need to tell which component is going to consume that data.
The Consumer component allows a React component to subscribe to the context changes. The component makes the data available using a render prop.
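As a sketch, consuming the notes context with a render prop might look like this (the Notes component is from the example above):

```jsx
<NotesContext.Consumer>
  {notes => <Notes notes={notes} />}
</NotesContext.Consumer>
```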

useContext
You might have been using React Hooks for some time now, but if you don’t know yet what React Hooks are and how they work, let me very briefly explain them to you:
React Hooks allow us to manage state data inside functional components; now we don’t need to create class components just to manage state data.
React has a few built-in hooks such as useState, useCallback, useEffect, etc. But the one that we’re going to talk and learn more about here is the useContext hook.
The useContext hook allows us to connect and consume a context. The useContext hook receives a single argument, which is the context that you want to have access to.
const notes = useContext(NotesContext);
The useContext hook is way better and cleaner than the Consumer component—we can easily understand what’s going on and increase the maintainability of our application.
Let’s now create an example with the Context API and the hook to see how it applies in a real-world application. We’re going to create a simple application to check if the user is authenticated or not.
We’ll create a file called context.js. Inside that file, we’re going to import the useState and useContext hooks from React and create our context, which is going to be called AuthContext, along with its provider. The initial value of our AuthContext will be undefined for now.
import React, { useState, useContext } from "react";
const AuthContext = React.createContext(undefined);
Now, we’re going to create a functional component called AuthProvider, which will receive children as props. Inside this component, we’re going to render more components and handle the state data that we want to share with the other components.
const AuthProvider = ({ children }) => {
...
};
First, we’ll create our auth state. This will be a simple boolean state to check if the user is authenticated or not. Also, we’re going to create a function called handleAuth, which will be responsible for changing our auth state.
const [auth, setAuth] = useState(false);
const handleAuth = () => {
setAuth(!auth);
};
The Provider expects a single value, so we’re going to create an array called data, which will contain our auth state and our handleAuth function. We’re going to pass this data array as our value in our AuthContext.Provider.
const AuthProvider = ({ children }) => {
const [auth, setAuth] = useState(false);
const handleAuth = () => {
setAuth(!auth);
};
const data = [auth, handleAuth];
return <AuthContext.Provider value={data}>{children} </AuthContext.Provider>;
};
Now, inside our context.js file, we’ll also create a simple custom hook called useAuth, which we’ll use to consume our context. If we try to use this hook outside our Provider, it will throw an error.
const useAuth = () => {
const context = useContext(AuthContext);
if (context === undefined) {
throw new Error("useAuth can only be used inside AuthProvider");
}
return context;
};
Then we’re going to export our AuthProvider and useAuth at the end of our file.
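In code, that export is simply:

```javascript
export { AuthProvider, useAuth };
```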
Now, in our index.js component, we need to import the AuthProvider component and wrap the components that we want to give access to the context inside this provider.
import { AuthProvider } from "./context";
ReactDOM.render(
<React.StrictMode>
<AuthProvider>
<App />
</AuthProvider>
</React.StrictMode>,
rootElement
);
Next, inside our App.js file, we’re going to manage our context data. We need first to import the useAuth hook that we created and get the auth and handleAuth from useAuth.
Let’s create a button and, every time we click this button, we’ll invoke the handleAuth function. Let’s also use a ternary rendering of a simple h1 to check if the auth value is changing as we click the button.
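A sketch of that App.js (the exact markup and labels are assumptions):

```jsx
import { useAuth } from "./context";

const App = () => {
  // data = [auth, handleAuth], as defined in our AuthProvider
  const [auth, handleAuth] = useAuth();
  return (
    <div>
      <h1>{auth ? "Authenticated" : "Not authenticated"}</h1>
      <button onClick={handleAuth}>Toggle auth</button>
    </div>
  );
};

export default App;
```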
We now have a simple application using the Context API. Notice that we don’t need to pass any props from parent component to child components.
The Context API can be really helpful in some use cases, such as authentication when you need to check if the user is authenticated in a few unrelated components.
Conclusion
In this article, we learned more about the React Context API. The Context API came to solve a few different problems that we were having in React applications—one of the most important is prop-drilling. We created an example using the Context API in a class component, then in a functional component. Also, we were introduced to how to use the useContext hook.
| joseprest |
771,510 | GraphQL on the client side with Apollo, React, and TypeScript | This article aims to be an introduction to the Apollo Client. It gives an overview of its features... | 0 | 2021-07-26T03:46:10 | https://dev.to/joseprest/graphql-on-the-client-side-with-apollo-react-and-typescript-23jb | This article aims to be an introduction to the Apollo Client. It gives an overview of its features while providing examples with TypeScript.
The most fundamental function of the Apollo Client is making requests to our GraphQL API. It is crucial to understand that it has quite a lot of features built on top of it.
Why you might not need it
An important thing about the Apollo Client is that it is more than just a tool for requesting data. At its core, it is a state management library. It fetches information and takes care of caching, handling errors, and establishing WebSocket connections with GraphQL subscriptions.
If you are adding GraphQL to an existing project, there is a good chance that you are already using Redux, Mobx, or React Context. Those libraries are commonly used to manage the fetched data. If you want to keep them as your single source of truth, you would not want to use Apollo Client.
Consider using a library like graphql-request if the only thing you need is to call your GraphQL API and put the response in your state management library.

Introducing the Apollo Client with TypeScript
Above, you can see me using a process.env.REACT_APP_GRAPHQL_API_URL variable. In this article, we use Create React App with TypeScript. It creates a react-app-env.d.ts file for us. Let’s use it to define our variable.
We need to remember that with Create React App, our environment variables need to have the REACT_APP_ prefix.
react-app-env.d.ts
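A sketch of that declaration file (the reference line is what Create React App generates; the ProcessEnv augmentation is the part we add, and the variable name comes from this article):

```typescript
/// <reference types="react-scripts" />

declare namespace NodeJS {
  interface ProcessEnv {
    readonly REACT_APP_GRAPHQL_API_URL: string;
  }
}
```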

<h3>Setting up the Apollo Client</h3>
The first step in setting up the Apollo Client is installing the necessary libraries.
npm install @apollo/client graphql
The second step is creating an instance of the ApolloClient. This is where we need our environment variable.
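A sketch of what that instance might look like, based on the description in this article (the export name is an assumption):

```typescript
import { ApolloClient, InMemoryCache } from "@apollo/client";

export const client = new ApolloClient({
  uri: process.env.REACT_APP_GRAPHQL_API_URL,
  cache: new InMemoryCache(),
});
```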

One of the features of the Apollo Client is caching. Above, we are initializing the InMemoryCache(). It keeps the cache in memory that disappears when we refresh the page. Even though that’s the case, we can persist it using Web Storage by using the apollo3-cache-persist library.
The last part of setting up the Apollo Client is connecting it to React. To do so, we need the ApolloProvider that works similarly to Context.Provider built into React.
app.tsx
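Reconstructed as a sketch (the import paths and the PostsList component are assumptions), the wiring could look like:

```tsx
import { ApolloProvider } from "@apollo/client";

import { client } from "./apolloClient"; // wherever the instance lives
import { PostsList } from "./PostsList";

const App = () => (
  <ApolloProvider client={client}>
    <PostsList />
  </ApolloProvider>
);

export default App;
```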

<h2>Performing queries</h2>
Once all of the above is ready, we can start using our GraphQL API. The most basic functionality of the Apollo Client is querying data.
In our NestJS course, we create an API for managing posts. Let’s now create a component that can display a list of them. Since we are using TypeScript, the first step would be creating a Post interface.
post.tsx

app.tsx

Since there are quite a few things going on, both with the query and the TypeScript definitions, I suggest creating a separate custom hook for that. It would also make it quite easy to mock for testing purposes.
The useQuery hook returns quite a few things. The most essential of them are:
- data – contains the result of the query (might be undefined),
- loading – indicates whether the request is currently pending,
- error – contains errors that happened when performing the query.
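Such a custom hook might look roughly like this (the query name and the fields are assumptions, since the original snippet is not shown here):

```typescript
import { gql, useQuery } from "@apollo/client";

const GET_POSTS = gql`
  query getPosts {
    posts {
      id
      title
      content
    }
  }
`;

export function usePostsQuery() {
  // exposes { data, loading, error, refetch, ... } to the component
  return useQuery(GET_POSTS);
}
```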
PostsList/index.tsx

The cache mechanism
A significant thing to notice is that the usePostsQuery() hook is called every time the PostsList renders. Fortunately, the Apollo Client caches the results locally.
The 29th part of the NestJS course mentions polling. It is an approach that involves executing a query periodically at a specified interval.
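With the Apollo Client, polling only takes an extra option on useQuery (GET_POSTS stands in for the posts query document; the 5-second interval is an arbitrary example):

```typescript
// Re-run the query every 5 seconds while the component is mounted
const { data, loading } = useQuery(GET_POSTS, { pollInterval: 5000 });
```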

Another approach to refreshing the cache is refetching. Instead of using a fixed interval, we can refresh the cache by calling a function.
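The refetch function is part of the useQuery result (again, GET_POSTS stands in for the posts query document):

```typescript
const { data, refetch } = useQuery(GET_POSTS);

// later, for instance in a click handler:
refetch();
```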

<h2>Mutating data</h2>
The second core functionality in GraphQL after querying data is performing mutations. Let’s create a custom hook for creating posts.
useCreatePostMutation.tsx
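The hook might look along these lines (the mutation's name, arguments and returned fields are assumptions):

```typescript
import { gql, useMutation } from "@apollo/client";

const CREATE_POST = gql`
  mutation createPost($title: String!, $content: String!) {
    createPost(title: $title, content: $content) {
      id
      title
      content
    }
  }
`;

export function useCreatePostMutation() {
  // returns [mutateFunction, { data, loading, error }]
  return useMutation(CREATE_POST);
}
```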

The useMutation hook also returns an array with two elements:
- the mutate function that triggers the mutation; it returns a promise that resolves to the mutation result,
- the result of the mutation with properties such as data, loading, and error.
With that knowledge, let’s create a simple form that allows us to perform the above mutation.
useCreatePostFormData.tsx

Above, we use the useCreatePostMutation hook. As we’ve noted before, it returns an array where the first element is a function that triggers the mutation.
All that’s left is to use the useCreatePostFormData hook with a form.
PostForm/index.tsx

Authenticating with cookies
In our NestJS course, we authenticate by sending the Set-Cookie header with a cookie and the httpOnly flag. This means that the browser needs to attach the cookies because JavaScript can’t access them.
If you are interested in the implementation details of the above authentication, check out API with NestJS #3. Authenticating users with bcrypt, Passport, JWT, and cookies
To achieve this, we might need to modify our apolloClient slightly.
Apollo Client has support for communicating with GraphQL servers using HTTP. By default, it sends the cookies only if the API is in the same origin. Fortunately, we can easily customize it:
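A sketch of that customization (credentials uses the standard fetch values, so "include" sends cookies cross-origin):

```typescript
import { ApolloClient, InMemoryCache, createHttpLink } from "@apollo/client";

const link = createHttpLink({
  uri: process.env.REACT_APP_GRAPHQL_API_URL,
  // send cookies even when the API lives on another origin
  credentials: "include",
});

export const client = new ApolloClient({
  link,
  cache: new InMemoryCache(),
});
```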

Updating the cache after the mutation
The last piece missing is updating the list after the successful mutation. One of the ways to do so is to pass the update parameter when calling the useMutation hook.
With it, we can directly access both the cache and the mutation result. By calling readQuery we get the cache’s current state, and with writeQuery, we overwrite it.
useCreatePostMutation.tsx

Another good way of keeping our application up to date is through subscriptions. It deserves a separate article, though.
<h2>Summary</h2>
Today, we’ve looked into the Apollo Client and learned its basics. We’ve also considered if we need it in the first place because it might not be the best approach for some use-cases.
Looking into the fundamentals of the Apollo Client included both the queries and mutations. We’ve also touched on the subject of cookie-based authentication. There is still quite a lot to cover, so stay tuned!
| joseprest | |
771,517 | Something you may not know about the Javascript "Switch" statement... | The switch statement is used to perform different actions based on different conditions. You use the... | 0 | 2021-07-26T04:25:24 | https://dev.to/roadpilot/something-you-may-not-know-about-the-javascript-switch-statement-390b | The switch statement is used to perform different actions based on different conditions. You use the switch statement to select one of many code blocks to be executed. The following "if then" statement, where x could be any value but will only give a result with 0 or 1:
```
if (x===0) {
text = "Zero";
} else if (x===1) {
text = "One";
} else {
text = "No value found";
}
```
...can be re-written as this "switch" statement:
```
switch (x) {
case 0:
text = "Zero";
break;
case 1:
text = "One";
break;
default:
text = "No value found";
}
```
TIP: You always need to include the "break" command if you want the switch to stop when it finds a match.
One thing to keep in mind, Switch cases use strict comparison (===). The values must be of the same type to match. A strict comparison can only be true if the operands are of the same type.
In the above examples, "0" would not return a match but 0 would.
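A quick runnable check of that strict matching (kind is a hypothetical wrapper function):

```javascript
function kind(x) {
  switch (x) {
    case 0:
      return "number zero";
    default:
      return "no match";
  }
}

console.log(kind(0));   // "number zero"
console.log(kind("0")); // "no match" because "0" is a string, so "0" !== 0
```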
But what if you wanted to switch on something like this:
```
if (time < 10) {
greeting = "Good morning";
} else if (time < 20) {
greeting = "Good day";
} else {
greeting = "Good evening";
}
```
You might think you could do it like this:
```
switch(time){
case (<10):
greeting = "Good morning";
break;
case (<20):
greeting = "Good day";
break;
default:
greeting = "Good evening";
}
```
But switch comparisons do not work that way. Switch follows these rules:
- The switch expression is evaluated once.
- The value of the expression is compared with the values of each case. (Since the values can not be compared, they can not match)
- If there is a match, the associated block of code is executed.
- If there is no match, the default code block is executed.
But here's one thing you can do. Since the value of the expression is compared to the values of each case, if you set the value of the expression to 'true', then you can make any case value an expression that would evaluate to true (or false) to test for a match. For example, we already know the above switch expression will not work. But if it were rewritten as follows:
```
switch(true){
case (time < 10):
greeting = "Good morning";
break;
case (time < 20):
greeting = "Good day";
break;
default:
greeting = "Good evening";
}
```
...then you could use "switch" as alternative to the "if then" JavaScript statement.
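Wrapped in a function for easy testing (greet is a hypothetical name), the pattern behaves as expected:

```javascript
function greet(time) {
  let greeting;
  switch (true) {
    // each case expression evaluates to true or false,
    // and the first one strictly equal to true wins
    case time < 10:
      greeting = "Good morning";
      break;
    case time < 20:
      greeting = "Good day";
      break;
    default:
      greeting = "Good evening";
  }
  return greeting;
}

console.log(greet(8));  // "Good morning"
console.log(greet(15)); // "Good day"
console.log(greet(22)); // "Good evening"
```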
The more you know! | roadpilot | |
771,679 | How to setup OBS for live streaming, presentations, and virtual meetings | Not so long ago, there were these awesome things called conferences and gatherings that happened in... | 17,041 | 2021-07-29T04:26:50 | https://dev.to/mishmanners/how-to-setup-obs-for-live-streaming-presentations-and-virtual-meetings-1chf | obs, tutorial, github, opensource | Not so long ago, there were these awesome things called conferences and gatherings that happened in person. Now - yes we all know why - we are stuck doing everything virtually. From meetings, to standups, conferences, talks, and even game nights.
You've probably seen that one, maybe two people, who come into your virtual meetings, or present at a conference and they have a killer setup. Everything looks super cool, snazzy, and they have these sweet looking overlays.
I'm going to walk you through how to setup your system so you can be _that_ person.
## Open Broadcast Software (OBS)
You may have heard of [OBS](https://obsproject.com/). It's an open source project that many streamers use.
{% github obsproject/obs-studio %}
Did you know, you can also use it for the ultimate virtual meeting. OBS has a feature called "virtual camera". This sets your OBS scene as a "virtual" camera so you can select this instead of using a USB or built in camera.
## 1. Setting up your Virtual Camera
Firstly, you'll need to install [OBS](https://obsproject.com/). Once you have OBS installed, you'll see there is a "Start Virtual Camera" option in the "Controls" panel.
Click this to initialise your OBS Virtual Camera. Now that your camera is "on" and "working", you can select it as a source. If you're in Zoom, Microsoft Teams, Google meet, or on the web, you can select "OBS Virtual Camera" as your video source.

If you're using Zoom, Zoom will horizontally invert your camera, but only for you. Don't freak out! Others will see you normally.
## 2. Setting up your scene/s
Now that you can select your OBS source, it's time to setup your scenes. Anything displayed on the OBS scene canvas will be shown to your audience. Firstly, you'll need to add a "Scene", and then add elements to it:
1. Press the + button under "Scenes" to create a new scene.
2. Type a name for your scene and press ENTER.
3. Press the + button under "Sources".
4. Add your desired sources.

For example, you'll probably want to add a "Video Capture Device" which is your camera. Then you can add overlays, logos, images, and more.
Here you can also select filters for each source, including things like "Chroma Key" if you want to use a greenscreen.
## 3. Setting up your physical space
With your physical space, you need to make sure the background and the technology you are using make you look and sound awesome. Why is this important? Because if no one can see or hear you, it doesn't matter what you're saying or how important it is. This matters whether you're in a meeting or giving a keynote.
### Camera
Number one piece of advice is to NOT use your built in camera. Built in cameras on MacBooks or laptops aren't useful because they are often low in quality and you don't have flexibility to move them around. This means people are probably looking up your nose or right at your forehead.
Get a decent webcam so you can place it in a desired location. My pick is the [Razer Kiyo](https://daily.upcomer.com/streamers-rejoice-the-ultimate-streaming-setup-from-razer/) because it's good quality and has a built in light.

Most webcams these days are good so it's up to personal preference here.
### Sound
Similar to the camera, DO NOT use the built in microphone from a laptop or MacBook. You can't move it to the desired location, it picks up a lot of static, and the quality of the audio is low. Invest in a good microphone and you'll notice a massive difference.
If you are recording or live streaming through OBS, add an "Audio Input" to capture your audio when you stream or record. If you're joining a call then choose "OBS Virtual Camera" as your video, and select your microphone as the "Audio" source.
### Lights
Lighting is important if you're doing presentations or recordings. Good lighting helps to showcase who you are and what you're talking about.
Three points of lighting (in front of you, and each side) will help remove any shadows from your face. Lights with high lumens are great and there's lots on the market now with WiFi capabilities. This means you can control them from your computer while streaming or presenting.
I love the [Elgato Ring light](https://daily.upcomer.com/review-elgato-ring-light/) and [Elgato keylights](https://www.elgato.com/en/key-light). These are great for highlight various physical features, lighting up a green screen, or making your photos look amazing.

### Background
Another thing you'll want to consider when live streaming or presenting is your background. If you're not using a virtual background, think about the types of things being shown on the screen. Do they represent you? Or your work? Are there little Easter eggs in your background? These little things will help engage the audience and resonate with them.
Here's a couple of things in my background. A 3D printed model of my [GitHub Skyline](https://dev.to/github/view-your-github-contribution-graph-as-an-animated-skyline-3d-print-it-2dpl) showcases one of the products we have and also looks really cool! What can you add in that your audience will love?

### Other things to consider
Based on the type of setup you want and your budget, you might want to consider a few other things to add:
- Pop filter to help make your sound clearer
- Green screen if you want to do fancy virtual backgrounds
- Stream deck for ultimate control while streaming
- Audio mixer to fine tune your audio on the go
- Acoustic sound boarding for even clearer sound
If you want to read a little more into my setup specifically, check out my article on the ultimate work from home setup:
{% link https://dev.to/mishmanners/get-the-ultimate-code-from-home-setup-3aoj %}
| mishmanners |
771,699 | Use Netlify to Host your SvelteKit Site | SvelteKit is a new, fast site generator. Netlify is a leading site hosting service. We look at how to use Netlify to host your SvelteKit site. | 0 | 2021-07-26T07:58:14 | https://rodneylab.com/use-netlify-to-host-your-sveltekit-site/ | svelte, webdev, javascript | ---
title: "Use Netlify to Host your SvelteKit Site"
published: "true"
description: "SvelteKit is a new, fast site generator. Netlify is a leading site hosting service. We look at how to use Netlify to host your SvelteKit site."
tags: "svelte, webdev, javascript"
canonical_url: "https://rodneylab.com/use-netlify-to-host-your-sveltekit-site/"
cover_image: "https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywp1ok16smw60sgt6udx.png"
---
## ☁️ Use Netlify to Host your SvelteKit Site
In this post we look at how to use Netlify to host your SvelteKit site. At first, I wanted to include this information in <a aria-label="Jump to the post on getting started with Svelte Kit" href="https://rodneylab.com/getting-started-with-sveltekit/">the post I wrote recently on 10 Tips for getting started with SvelteKit</a>. That post focussed on my experience on getting familiar with SvelteKit and included 10 tips I learned along the journey. At any rate, that post got a bit too long to include Netlify hosting details for SvelteKit. Because I had already done the research, I thought why not have a separate post, just on Netlify and SvelteKit? Anyway the long and the short of this is that you can consider this to be the “Part II” of that earlier post.
## ⚙️ Create the Netlify Config File
If you have used Netlify with other site generators, you will probably already be familiar with the `netlify.toml` file. This contains information like the build command, as well as default directories. Often specifying parameters here makes configuration simpler; rather than having to fish around the web interface to find the option you want, all defined in a single place. Typically the parameters defined here will override those previously set in the web console. Anyway enough talk, let's create the file `netlify.toml` file in the project root directory:
```toml
[build]
command = "npm run build"
functions = "functions"
publish = "build"
[dev]
command = "svelte-kit dev"
[functions]
directory = "netlify/functions"
```
Note the build command just references the build script, as defined in your project `package.json` file. Just tweak the definition in `package.json` if you want to customise what happens on build. If you want to <a aria-label="Learn more about Netlify file based configuration" href="https://docs.netlify.com/configure-builds/file-based-configuration/">learn more about the Netlify configuration file, check out the documentation</a>.
One additional recommendation is to add the `functions` and `publish` directories from the `build` stanza (as defined in lines `3` & `4`) to your `.gitignore` file. As an aside, for the configuration above, `netlify/functions` is where we place any Netlify functions we want our app to use while `functions` is where the functions are copied to when the site is built. You will not normally need to edit the files in the generated `functions` folder.
```plaintext
.DS_Store
node_modules
/.svelte-kit
/package
build
functions
```
## 🔧 The SvelteKit Netlify Adapter
SvelteKit comes with various adapters which facilitate hosting in different environments. You can install the SvelteKit Netlify adapter with `npm install --save-dev @sveltejs/adapter-netlify`, then reference it in your `svelte.config.js`:
```javascript
/** @type {import('@sveltejs/kit').Config} */
import adapter from '@sveltejs/adapter-netlify';
const config = {
kit: {
adapter: adapter(),
// hydrate the <div id="svelte"> element in src/app.html
target: '#svelte'
}
};
export default config
```
## 🧱 Building your SvelteKit Site on Netlify
If you have completed the config, done a local build to check your app is behaving as expected and checked accessibility, you will undoubtedly want to push the site to Netlify. The easiest way to do this is to push your code to GitHub or GitLab and then link Netlify to the git repo. The process varies depending on where your repo is (i.e. GitHub, GitLab or Bitbucket). However, essentially you just need to click **New site from git** and choose the provider. You then have to log into the provider (if you are not already logged in). From here you can follow the on-screen instructions, letting Netlify have read access to your repo.
The next step is to define the environment variables in your project. Significantly, it is best practice not to store any sensitive variables in your repo. See the post on getting started with SvelteKit to <a aria-label="Learn how to use Environment Variables with Svelte Kit" href="https://rodneylab.com/getting-started-with-sveltekit/#sveltekitEnvironmentVariables">learn how to use environment variables in SvelteKit</a>. To set the variables in the web console, open up the relevant site and click **Site settings**. From there, click **Build & deploy** on the left and finally **Environment** in the list that drops down. You simply fill out the variables your site needs to build and save when done.
If a build fails, take a look at the output. I found that the Node version on Netlify was not compatible with one of the SvelteKit packages. If this happens for you, you can force Netlify to use a different version. Just go to your project root folder in the command line and type the following command, adjusting for the Node version you need (the build log should contain this information):
```shell
echo "14" > .nvmrc
```
This creates a `.nvmrc` file containing the desired node version. Netlify respects the file. You can learn more about <a aria-label="Learn more about Netlify build dependencies" href="https://docs.netlify.com/configure-builds/manage-dependencies/">managing build dependencies for Netlify in the docs</a>.
## 🙌🏽 Use Netlify to Host your SvelteKit Site: Recap
In this post we have looked at:
- file based Netlify configuration,
- how to install the SvelteKit Netlify adapter,
- setting up Netlify to host your SvelteKit site in the web console.
I hope the step were clear enough. Let me know if I could change anything to make it easier for anyone else following along to understand. Also let me know if there is something important on this topic, which I should have included. Drop a comment below, or equally you can <a aria-label="Open Rodney Lab's twitter profile" href="https://twitter.com/intent/user?screen_name=askRodney">@ mention me on Twitter</a>.
## 🙏🏽Feedback
Please send me feedback! Have you found the post useful? Would you like to see posts on another topic instead? Get in touch with ideas for new posts. Also if you like my writing style, get in touch if I can write some posts for your company site on a consultancy basis. Read on to find ways to get in touch, further below. If you want to support posts similar to this one and can spare a couple of dollars, rupees, euros or pounds, please <a aria-label="Support Rodney Lab via Buy me a Coffee" href ="https://rodneylab.com/giving/">consider supporting me through Buy me a Coffee</a>.
Finally, feel free to share the post on your social media accounts for all your followers who will find it useful. As well as leaving a comment below, you can get in touch via <a aria-label="Get in touch via Twitter" href="https://twitter.com/messages/compose?recipient_id=1323579817258831875">@askRodney</a> on Twitter and also <a aria-label="Contact Rodney Lab via Telegram" href="https://t.me/askRodney">askRodney on Telegram</a>. Also, see <a aria-label="Get in touch with Rodney Lab" href="https://rodneylab.com/contact">further ways to get in touch with Rodney Lab</a>. I post regularly on <a aria-label="See posts on svelte kit" href="https://rodneylab.com/tags/sveltekit/">SvelteKit</a> as well as other topics. Also <a aria-label="Subscribe to the Rodney Lab newsletter" href="https://rodneylab.com/about/#newsletter">subscribe to the newsletter to keep up-to-date</a> with our latest projects. | askrodney |
771,757 | 9-Steps to Flash Sonoff WiFi Smart Switch with Tasmota (MacOS Catalina) | In this short tutorial, we will change Sonoff WiFi Smart Switch firmware with Tasmota, using MacOS... | 0 | 2021-07-26T08:29:50 | https://dev.to/henri_rion/9-steps-to-flash-sonoff-wifi-smart-switch-with-tasmota-macos-catalina-4in | sonoff, tasmota, macos, wifi | In this short tutorial, we will change Sonoff WiFi Smart Switch firmware with Tasmota, using MacOS Catalina. No worries, it’s very easy. Let’s move on.
Before we start, let’s have short reminders.
#What is Sonoff WiFi Smart Switch?
Sonoff is an ESP8266 based device providing users with smart home control.
It’s a WiFi switch that connects to a variety of devices.
Initially, Sonoff sends data to a cloud platform via a WiFi router, allowing customers to control all connected appliances remotely using the eWeLink smartphone app, which relies on Sonoff’s proprietary firmware.
If you landed here, you want to change this proprietary firmware of Sonoff, with Tasmota.
#What is Tasmota?
Tasmota is an open source firmware for ESP8266 based devices created and maintained by Mr Theo Arends.
For more details, please have a look on the dedicated Tasmota page on Github: https://tasmota.github.io/docs/About/ .
#Material required
| Components | Where To Buy? | Price |
| ------------- |:-------------:| -----:|
| Sonoff Module | [Amazon.fr](https://www.amazon.fr/Interrupteur-Intelligent-Universel-minuterie-Android/dp/B07XYVKHNH), [Reichelt.de](https://www.reichelt.de/de/de/1-kanal-schaltaktor-wlan-sonoff-basic-r2-p288686.html) | ~9 EUR |
| FT232RL Adapter USB to TTL 3.3V | [Amazon.fr](https://www.amazon.fr/gp/product/B01N9RZK6I/)| ~8 EUR |
| USB 2.0 A to USB Mini B cable | It Depends on the connection of the FT232RL you will have, but it’s to connect the FT232RL to the computer. (FYI: I ordered the model referenced with the Amazon link, and the port received was Mini B instead of USB 2.0 as presented. Nevermind ;-)).[Amazon.fr](https://www.amazon.fr/gp/product/B006ZZ1C4M/) |~5 EUR |
|Cables and, Pin Connectors| To connect Sonoff with the FT232RL: [Amazon.de](https://www.amazon.fr/dp/B01JD5WCG2/)| ~1 EUR|
#Materials Overview

#9 Steps
Now, we are ready to start.
##Step 1: 🏗️ Prepare the Sonoff for soldering
Remove the Sonoff’s plastic box (Caution: be careful with static electricity while touching the electronics inside!)

Get a 4 pins connector and plug it on the Sonoff (cf image for the exact location)

Solder the 4 pins (or, if you can, simply hold the connector firmly in place on the device).

##Step 2: 🔌 Connect the Sonoff and the FT232RL adapter together
The figure below shows how to connect the Sonoff and the FT232RL adapter together.
As you can see in the figure and my pictures, the pins are not in the same place. Never mind, it’s just a different model.
Connection Summary (from Sonoff to FT232RL):
- GND → GND
- TX → RX
- RX → TX
- VCC or 3.3V → VCC or 3.3V

##Step 3: 🔌 Plug the USB Cable to the FT232RL
Plug the USB cable into the FT232RL. Don’t connect the cable to the computer yet.
###Overview after connection

Now we are all set, let’s move on the computer part 😄.
##Step 4: 🤓 Switch you Mac On…
For this tutorial, I’m using a Macbook Air with MacOS Catalina installed.
##Step 5: 🔌 Preparing Flashing
We need to flash the internal memory with Tasmota. In other words, we are replacing Sonoff firmware with Tasmota.
Download NodeMCU PyFlasher for Catalina, and Copy the App to your computer: https://github.com/marcelstoer/nodemcu-pyflasher/releases

Download Tasmota: http://ota.tasmota.com/tasmota/release/

Now, open NodeMCU PyFlasher, and set the options as in the screenshot.

## Step 6: 💥 Let’s flash now
Be careful here, because we need to switch the Sonoff into “boot mode”.
Here is the process:
1. Press the Sonoff button (before plugging in the USB!!)
2. While pressing the Sonoff button, plug the USB cable into the computer. Hold the Sonoff button for 2 seconds after plugging in the USB cable.
3. Now you can launch the flash in NodeMCU PyFlasher on your computer. The operation can take up to 1 min.


Note: I had a few issues during the flashing step. My computer was not able to connect to the Sonoff device. I had to repeat the operation of pressing the button and connecting the USB a couple of times to get into boot mode. So be careful doing this operation. After a couple of tries, the flashing operation succeeded.
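A side note from me, not from the original guide: if NodeMCU PyFlasher repeatedly fails to connect, the same flash can be attempted from the terminal with the esptool utility (the Sonoff Basic uses an ESP8266, which esptool supports). The port name below is a placeholder; find yours with `ls /dev/cu.*` after entering boot mode:

```console
$ pip install esptool
$ esptool.py --port /dev/cu.usbserial-XXXX erase_flash
$ esptool.py --port /dev/cu.usbserial-XXXX write_flash 0x0 tasmota.bin
```

Tasmota is flashed at offset 0x0 on the ESP8266; these commands need the same boot-mode button sequence described above.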
## Step 7: ⚡ Power Up
Unplug the Sonoff device from the USB adapter, and connect a male plug on the input side and a female plug on the output side.
Now you can power up.
## Step 8: 🚧 Tasmota Setup
A WiFi network named TasmotaXXXX should now appear in our list of WiFi networks. Select the Tasmota WiFi emitted by the Sonoff device.
After selecting it, a pop-up window from the Tasmota firmware will appear, asking to connect to your WiFi network. Enter your login & password.
## Step 9: 🍿 Enjoy
We can now toggle the Sonoff WiFi switch off & on :-).
Feel free to contact me if you have any questions, or if any step was not clear enough; happy to help. | henri_rion |
771,779 | Interact with Relational Databases using TypeORM (w/JavaScript) | I bet most of the Node.js community has heard of TypeORM at some point in their life. So people... | 0 | 2021-07-26T10:54:34 | https://dev.to/franciscomendes10866/interact-with-relational-databases-using-typeorm-w-javascript-17pb | node, javascript, webdev, beginners | I bet most of the Node.js community has heard of [TypeORM](https://typeorm.io) at some point in their life. So people working with [NestJS](https://nestjs.com/) literally know this ORM from one end to the other.
But generally those who use this ORM enjoy working with TypeScript, and many tutorials and articles cover TypeORM using only TypeScript.
However, you can use it with JavaScript as well; the only thing that changes is the way we define the models, otherwise everything is exactly the same.
In today's example we are going to create an application for a bookstore: an API with simple CRUD operations so that we can insert books, and so on.
The framework I'm going to use today is [Fastify](https://www.fastify.io/), if you're used to Express, you'll feel at home because they're similar in many ways.
But today I won't explain why I prefer Fastify over Express because the focus is on using TypeORM with JavaScript.
# Let's code
The database dialect I'm going to use in this example is SQLite, don't worry because the only thing that changes are the properties in the [configuration](https://typeorm.io/#/connection) object, otherwise everything is the same.
As always, first let's install the dependencies we need:
```sh
npm install fastify typeorm sqlite3
```
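A setup note of my own (this assumption is not stated in the article): the snippets below use ES-module `import` syntax, so if you run them directly with Node, one way to enable that is marking the package as an ES module in `package.json` (alternatively, use a bundler or Babel):

```json
{
  "type": "module"
}
```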
Now let's start by defining our models which in the case of TypeORM are called entities. In order to define our model, we will need to import the [EntitySchema](https://typeorm.io/#/separating-entity-definition) of typeorm, which we will name BookEntity.
```js
// @src/Models/book.js
import { EntitySchema } from "typeorm";
export const BookEntity = new EntitySchema({
// Some things come here.
});
```
We have to define the name of our database table, which we will call Books. Then we have to define the table columns we need. We will have an attribute called id, which will be our primary, auto-incrementing key. Then we will have three other attributes that will be strings, called name, description and format.
```js
// @src/Models/book.js
import { EntitySchema } from "typeorm";
export const BookEntity = new EntitySchema({
name: "Books",
columns: {
id: {
type: Number,
primary: true,
generated: true,
},
name: {
type: String,
},
description: {
type: String,
},
format: {
type: String,
},
},
});
```
In addition to defining our model, we will also need to create a class with the respective attributes of our model.
This is because when we create a new book, we need to take the data obtained in the HTTP request and assign it to the book instance.
This will make more sense later on.
```js
// @src/Models/book.js
import { EntitySchema } from "typeorm";
export class Book {
constructor(name, description, format) {
this.name = name;
this.description = description;
this.format = format;
}
}
export const BookEntity = new EntitySchema({
name: "Books",
columns: {
id: {
type: Number,
primary: true,
generated: true,
},
name: {
type: String,
},
description: {
type: String,
},
format: {
type: String,
},
},
});
```
Now we can move on to configuring the connection to the database. At this point there are several approaches that can be taken, however I will do it in a way that I find simple and intuitive.
First we need to import the `createConnection()` function from typeorm and then we import our BookEntity from our model.
```js
// @src/database/index.js
import { createConnection } from "typeorm";
import { BookEntity } from "../Models/book.js";
// More stuff comes here.
```
The `createConnection()` function is asynchronous and from here on there are several approaches that can be taken, in this example I will create an asynchronous function called connection that will return our connection to the database.
And in `createConnection()` we will pass our connection settings, such as the dialect, our entities, among other things.
```js
// @src/database/index.js
import { createConnection } from "typeorm";
import { BookEntity } from "../Models/book.js";
export const connection = async () => {
return await createConnection({
name: "default",
type: "sqlite",
database: "src/database/dev.db",
entities: [BookEntity],
logging: true,
synchronize: true,
});
};
```
Now, with our model and our connection created, we can start working on the module that will be responsible for running our application.
First we will import the app module of our application which will contain all the logic (which has not yet been created) and our function responsible for connecting to the database.
Afterwards we will create a function that will be responsible for initializing the connection to the database and starting our Api, if an error occurs we will terminate the process.
```js
// @src/main.js
import app from "./app.js";
import { connection } from "./database/index.js";
const start = async () => {
try {
await connection();
await app.listen(3333);
} catch (err) {
console.error(err);
process.exit(1);
}
};
start();
```
Now in our app, we'll start by importing Fastify, as well as TypeORM's `getRepository()` function and our model (BookEntity) along with our Book class.
In TypeORM we can choose between two patterns, Active Record and Data Mapper. When using repositories in this example, we will be using the Data Mapper pattern, to learn more about this pattern click [here](https://en.wikipedia.org/wiki/Data_mapper_pattern).
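To make the distinction concrete, here is a tiny plain-JavaScript sketch of the two patterns. This is an illustration only, with made-up class names; it is not TypeORM's actual API:

```javascript
// Active Record: persistence logic lives on the model itself.
class ActiveBook {
  constructor(name) {
    this.name = name;
  }
  save(store) {
    store.push(this); // the model saves itself
    return this;
  }
}

// Data Mapper: a separate repository mediates between models and storage.
class MapperBook {
  constructor(name) {
    this.name = name;
  }
}

class BookRepository {
  constructor(store) {
    this.store = store;
  }
  save(book) {
    this.store.push(book); // the repository saves the model
    return book;
  }
}

const db = [];
new ActiveBook("Dune").save(db);
new BookRepository(db).save(new MapperBook("Neuromancer"));
console.log(db.map((b) => b.name)); // [ 'Dune', 'Neuromancer' ]
```

In the routes below, `getRepository(BookEntity)` plays the role of the repository: the `Book` instances stay plain data holders while the repository does the persisting.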
```js
// @src/app.js
import Fastify from "fastify";
import { getRepository } from "typeorm";
import { BookEntity, Book } from "./Models/book.js";
const app = Fastify();
// More stuff comes here.
export default app;
```
Now we can start defining our API routes. First I want to know if we have any books stored in our database; for that we will use TypeORM's `.find()` method to get all the data stored in our database table.
```js
// @src/app.js
app.get("/books", async (request, reply) => {
const Books = getRepository(BookEntity);
const data = await Books.find();
return reply.send({ data });
});
```
However, our table is still empty, so we'll have to insert some books first.
In this case, we'll create a route to add a new book to our table. For that we'll instantiate our Book class and map each of the properties we got in the HTTP request onto our instance.
Next, we'll use TypeORM's `.save()` method to insert the new book into our database table.
```js
// @src/app.js
app.post("/books", async (request, reply) => {
const Books = getRepository(BookEntity);
const book = new Book();
book.name = request.body.name;
book.description = request.body.description;
book.format = request.body.format;
const data = await Books.save(book);
return reply.send({ data });
});
```
Now with a book inserted in the table, let's try to find just that book. For this we will create a new route that will have only one parameter, which in this case will be the `id`.
Then we'll use TypeORM's `.findOne()` method to find just the book with that `id`.
```js
// @src/app.js
app.get("/books/:id", async (request, reply) => {
const { id } = request.params;
const Books = getRepository(BookEntity);
const book = await Books.findOne(id);
return reply.send({ book });
});
```
As we already have the book in the table and we can already get the specific book we want, we still need to be able to update that book's data. For that we will use TypeORM's `.update()` method and we will pass it two things: the `id` and the updated book object.
```js
// @src/app.js
app.put("/books/:id", async (request, reply) => {
const { id } = request.params;
const Books = getRepository(BookEntity);
await Books.update({ id }, { ...request.body });
const book = await Books.findOne(id);
return reply.send({ book });
});
```
Last but not least, we need to be able to remove a specific book from the table. To do this, we first find the book we want using the `.findOne()` method, and then pass that same book as the only argument to the `.remove()` method.
```js
// @src/app.js
app.delete("/books/:id", async (request, reply) => {
const { id } = request.params;
const Books = getRepository(BookEntity);
const bookToRemove = await Books.findOne(id);
await Books.remove(bookToRemove);
return reply.send({ book: bookToRemove });
});
```
The final result of the app module should look like the following:
```js
// @src/app.js
import Fastify from "fastify";
import { getRepository } from "typeorm";
import { BookEntity, Book } from "./Models/book.js";
const app = Fastify();
app.get("/books", async (request, reply) => {
const Books = getRepository(BookEntity);
const data = await Books.find();
return reply.send({ data });
});
app.post("/books", async (request, reply) => {
const Books = getRepository(BookEntity);
const book = new Book();
book.name = request.body.name;
book.description = request.body.description;
book.format = request.body.format;
const data = await Books.save(book);
return reply.send({ data });
});
app.get("/books/:id", async (request, reply) => {
const { id } = request.params;
const Books = getRepository(BookEntity);
const book = await Books.findOne(id);
return reply.send({ book });
});
app.put("/books/:id", async (request, reply) => {
const { id } = request.params;
const Books = getRepository(BookEntity);
await Books.update({ id }, { ...request.body });
const book = await Books.findOne(id);
return reply.send({ book });
});
app.delete("/books/:id", async (request, reply) => {
const { id } = request.params;
const Books = getRepository(BookEntity);
const bookToRemove = await Books.findOne(id);
await Books.remove(bookToRemove);
return reply.send({ book: bookToRemove });
});
export default app;
```
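Once the server is running (started through `src/main.js`, which listens on port 3333), the routes above can be exercised with `curl`. The book field values below are made-up sample data:

```sh
# create a book
curl -X POST http://localhost:3333/books \
  -H "Content-Type: application/json" \
  -d '{"name": "Dune", "description": "Sci-fi classic", "format": "paperback"}'

# list all books
curl http://localhost:3333/books

# fetch, update and delete by id (assuming a book with id 1 exists)
curl http://localhost:3333/books/1
curl -X PUT http://localhost:3333/books/1 \
  -H "Content-Type: application/json" \
  -d '{"format": "hardcover"}'
curl -X DELETE http://localhost:3333/books/1
```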
If you want to see the final result of our application and you want to test locally, just clone the Github repository by accessing [this](https://github.com/FranciscoMendes10866/xul) link.
# Conclusion
As always, I hope I was brief in explaining things and that I didn't confuse you. Have a great day! 👋 🤓 | franciscomendes10866 |
771,780 | Checking if a string of parentheses are balanced in O(n) time and O(1) Space. | First, let's use the stack method. I know what you are thinking, "but using a stack will... | 0 | 2021-07-26T09:44:00 | https://dev.to/k2code/checking-if-a-pair-of-parentheses-are-balanced-in-o-n-time-and-o-1-space-17k8 | # First, let's use the stack method.
I know what you are thinking, "but using a stack will result in `O(n)` space." Yes, but first let's go through the stack method for those who are not familiar with this problem. I'll be implementing this in Python.
### Creating a stack
In Python, we can use a list to implement a stack.
```python
class Stack():
def __init__(self):
self.items = []
def push(self, item):
self.items.append(item)
def pop(self):
return self.items.pop()
def is_empty(self):
return self.items == []
def get_stack(self):
return self.items
def peek(self):
if not self.is_empty():
return self.items[-1]
```
If you are not familiar with stacks, you might find [this helpful](https://www.geeksforgeeks.org/stack-data-structure/#:~:text=Stack%20is%20a%20linear%20data,one%20another%20in%20the%20canteen.).
### Solving the balanced parenthesis problem
First, we start by creating a helper function to check if a pair of parentheses is a match, i.e., `()` is a match, `(}` is not.
```python
def is_match(p1, p2):
if p1 == "(" and p2 == ")":
return True
elif p1 == "[" and p2 == "]":
return True
elif p1 == "{" and p2 == "}":
return True
else:
return False
```
The above function takes a pair of parentheses as parameters `p1` and `p2` and checks if the two match.
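As a side note (a stylistic variant of mine, not part of the original solution): the same check can be written as a dictionary lookup, which also makes it easy to add new bracket pairs later:

```python
# Map each opening bracket to its expected closing bracket.
PAIRS = {"(": ")", "[": "]", "{": "}"}

def is_match_compact(p1, p2):
    # PAIRS.get returns None for unknown openers, so those never match.
    return PAIRS.get(p1) == p2

print(is_match_compact("[", "]"))  # True
print(is_match_compact("(", "}"))  # False
```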
### The main function
```python
def balance_parens(paren_string):
stack = Stack()
index = 0
n = len(paren_string)
is_balanced = True
while index < n and is_balanced:
paren = paren_string[index]
if paren in "([{":
stack.push(paren)
else:
if stack.is_empty():
is_balanced = False
else:
top = stack.pop()
if not is_match(top, paren):
print(top, paren)
is_balanced = False
index += 1
if stack.is_empty() and is_balanced:
return True
else:
return False
```
This function runs in `O(n)` time and uses `O(n)` space.
### *My solution*
My method uses two pointers, one at the beginning and the other at the end of the string. Then the two pointers work their way to the middle of the string, kind of like attacking it from both ends, checking if the brackets are a match.
#### **Cons**
If it encounters a string like `()(([]))`, it will return false even though the string is balanced, because the characters at indices 1 and -2 don't match. Does anyone have an idea of how we could solve that? Leave a comment.
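One partial answer to the question above: when several bracket types are allowed, a single left-to-right pass in constant space is not possible in general, because balanced brackets form a context-free language that is not regular, which is exactly why the stack shows up. For the special case of a single bracket type, though, a running counter really does give `O(n)` time and `O(1)` space:

```python
def balanced_single(parens):
    """O(n) time, O(1) space, valid only for one bracket type."""
    depth = 0
    for ch in parens:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a closer appeared before its opener
                return False
    return depth == 0

print(balanced_single("()(())"))  # True
print(balanced_single(")("))      # False
```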
### *Code*
```python
def b_parens(paren_string):
n = len(paren_string)
if n % 2 != 0:
return False
n = n // 2
for i in range(n):
p1 = paren_string[i]
p2 = paren_string[~i]
if not is_match(p1, p2):
return False
return True
```
Since we loop through (half of) the string once, the time complexity is `O(n)` and the space complexity is `O(1)`. The tilde `~` is the bitwise _NOT_ operator. [This might help](https://www.arduino.cc/reference/en/language/structure/bitwise-operators/bitwisenot/) if you've never heard of it.
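A quick demonstration of Python's `~` operator: for integers, `~i` equals `-(i + 1)`, which is why `paren_string[~i]` picks out the i-th character counted from the end:

```python
s = "abcdef"

for i in range(3):
    # Bitwise NOT on Python ints: ~i == -(i + 1)
    assert ~i == -(i + 1)

# So s[~i] is the i-th character counted from the end of the string.
print(s[~0], s[~1], s[~2])  # f e d
```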
Thank you. | k2code | |
771,796 | Artix Linux: Add Arch Linux repos, extra + community | When you use Artix Linux based on Arch Linux, and some of * Arch * Linux repositories are missing... | 0 | 2021-07-28T12:01:00 | https://dev.to/nabbisen/artix-linux-add-arch-linux-repos-extra-community-35ab | artix, archlinux, pacman, aur | When you use [Artix Linux](https://artixlinux.org/), which is based on [Arch Linux](https://archlinux.org/), and some of the \* Arch \* Linux repositories are missing from [`pacman`](https://wiki.archlinux.org/title/pacman) in your operating system, this post might be useful.
According to the Artix Linux announcement, the [extra] / [community] repositories are disabled by default:
> Artix has reached the stage where it can operate without the help of the Arch repositories, including the preparation of its installation media.
> As such, all new weekly ISO images will ship without [extra], [community] and [multilib] enabled in pacman.conf.
Here is how to enable those repositories. The details are in the [Artix official wiki](https://wiki.artixlinux.org/Main/Repositories#Arch_repositories).
First, activate [Universe repository](https://wiki.artixlinux.org/Main/Repositories#Universe):
```console
# nvim /etc/pacman.conf
```
by appending the lines:
```diff
+ [universe]
+ Server = https://universe.artixlinux.org/$arch
+ Server = https://mirror1.artixlinux.org/universe/$arch
+ Server = https://mirror.pascalpuffke.de/artix-universe/$arch
+ Server = https://artixlinux.qontinuum.space/artixlinux/universe/os/$arch
+ Server = https://mirror1.cl.netactuate.com/artix/universe/$arch
+ Server = https://ftp.crifo.org/artix-universe/$arch
+ Server = https://artix.sakamoto.pl/universe/$arch
```
Then sync:
```console
# pacman -Sy
```
Next, install `artix-archlinux-support`:
```console
# pacman -Syu artix-archlinux-support
```
The output was:
```console
:: Synchronizing package databases...
(...)
universe is up to date
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...
Packages (3) archlinux-keyring-20230320-1 archlinux-mirrorlist-20230226-2
artix-archlinux-support-2-1
Total Download Size: 1.16 MiB
Total Installed Size: 1.65 MiB
:: Proceed with installation? [Y/n] y
:: Retrieving packages...
archlinux-keyrin... 1165.1 KiB 787 KiB/s 00:01 [######################] 100%
artix-archlinux-... 14.6 KiB 71.9 KiB/s 00:00 [######################] 100%
archlinux-mirror... 9.2 KiB 45.5 KiB/s 00:00 [######################] 100%
Total (3/3) 1188.9 KiB 476 KiB/s 00:03 [######################] 100%
(3/3) checking keys in keyring [######################] 100%
(3/3) checking package integrity [######################] 100%
(3/3) loading package files [######################] 100%
(3/3) checking for file conflicts [######################] 100%
(3/3) checking available disk space [######################] 100%
:: Processing package changes...
(1/3) installing archlinux-keyring [######################] 100%
==> Appending keys from archlinux.gpg...
==> Locally signing trusted keys in keyring...
-> Locally signed 5 keys.
==> Importing owner trust values...
gpg: (...)
==> Disabling revoked keys in keyring...
-> Disabled 34 keys.
==> Updating trust database...
gpg: (...)
==> Updating trust database...
gpg: next trustdb check due at 2023-07-03
(2/3) installing archlinux-mirrorlist [######################] 100%
(3/3) installing artix-archlinux-support [######################] 100%
Optional dependencies for artix-archlinux-support
lib32-artix-archlinux-support: archlinux multilib support
:: Running post-transaction hooks...
(1/1) Show archlinux help...
==> Add the arch repos in pacman.conf:
#[testing]
#Include = /etc/pacman.d/mirrorlist-arch
[extra]
Include = /etc/pacman.d/mirrorlist-arch
#[community-testing]
#Include = /etc/pacman.d/mirrorlist-arch
[community]
Include = /etc/pacman.d/mirrorlist-arch
#[multilib-testing]
#Include = /etc/pacman.d/mirrorlist-arch
#[multilib]
#Include = /etc/pacman.d/mirrorlist-arch
==> run: 'pacman-key --populate archlinux'
```
Next, edit `pacman.conf`:
```console
# nvim /etc/pacman.conf
```
to append these lines:
```diff
+ #[testing]
+ #Include = /etc/pacman.d/mirrorlist-arch
+
+
+ [extra]
+ Include = /etc/pacman.d/mirrorlist-arch
+
+
+ #[community-testing]
+ #Include = /etc/pacman.d/mirrorlist-arch
+
+
+ [community]
+ Include = /etc/pacman.d/mirrorlist-arch
+
+
+ #[multilib-testing]
+ #Include = /etc/pacman.d/mirrorlist-arch
+
+
+ #[multilib]
+ #Include = /etc/pacman.d/mirrorlist-arch
```
Then, run the command:
```console
# pacman-key --populate archlinux
```
The output was:
```console
==> Appending keys from archlinux.gpg...
==> Updating trust database...
gpg: next trustdb check due at 2023-07-03
```
That's it.
You can now install packages from the [extra] or [community] repositories, such as [Deno](https://deno.land/) :) | nabbisen |
771,981 | LGMVIP- Internship Experience | Hello Everyone, I am Rutuj Runwal and in this post I will be sharing my experience as a Web... | 0 | 2021-07-27T16:55:55 | https://dev.to/rutujr/lgmvip-internship-experience-5653 | Hello Everyone,
I am <a href="https://www.linkedin.com/in/rutuj-runwal/">Rutuj Runwal</a> and in this post I will be sharing my experience as a Web Developer Intern at Let's Grow More(LGM) through their Virtual Internship Program that I had joined for July 2021.
It was a well-structured internship, and as a Web Developer Intern I was expected to complete at least 2 out of the 3 listed tasks using relevant technologies. The tasks were as follows:
Beginner Level Task: Create a single-page website using HTML, CSS and JavaScript. The website had to follow a predefined design pattern along with a few creative changes.
Intermediate Level Task: Create a web application using ReactJs. The application had to feature an API call to display user data. It also featured a loader to be displayed while the data was still being fetched.
Advanced Level Task: Student Result Management System using HTML, CSS, JavaScript, PHP and MySQL.
I have successfully completed the first two tasks and had learned a lot in this process.
<b>Beginner Level:</b> I always use Bootstrap 5.0 as my go-to library when designing single-page applications. It provides flexibility and creativity for your website while still maintaining responsiveness throughout. It was a good experience to design the webpage according to a predefined design and add extra bits into it to get an immersive experience.
<b>Intermediate Level:</b> I was exploring the wondrous capabilities of ReactJs, and I had already made a few projects with it earlier: <b><a href="https://rutuj-runwal.github.io/easynotes/">Notes App</a></b> and <b><a href="https://molog-media.netlify.app/">MoLog Marketing</a></b>. But I had never pulled external data from Firebase or any API into my sites. This task gave me an insight into using APIs in React, so it was the perfect time for me to understand more about React. I used react-bootstrap to set up a clean UI to display the data. I also added a “Load More” feature to my page to fetch and display more data in my app.
<img src="https://lh4.googleusercontent.com/Tx3Xa99y-O01VoD4QNJXXv5chnuj5C5Rk8NFTDo0hsxIvJ89AMfrgn-K_sjACiUf2xG0tVC6vjkqCq3-wMzC=w1920-h942-rw" alt="Rutuj Runwal's Web App">
<b>Features:</b>
<ul>
<li>Fetch and Display API Data</li>
<li>Responsive Design</li>
<li>Load More button to get more data</li>
</ul>
All the source code is present on my Github: <a href="https://github.com/Rutuj-Runwal/LGMVIP-WebDev">here</a>
Thus, It was a wonderful experience and an opportunity to explore more.
Let's Connect: <a href="https://bit.ly/let_us_connect" style="text-decoration:none;color:red;">Linkedin</a>
Hey Reader, I have also created an **anti-malware app**: <a href="https://dev.to/rutujr/context-menu-malware-scanner-using-python-57j2">Click Here👀</a>
| rutujr | |
772,015 | I completely ignored the front end development scene for 6 months. It was fine | The first year I started coding professionally, most of the work I did was in Adobe Flash. Then Steve... | 0 | 2021-07-26T19:34:53 | https://rachsmith.com/i-completely-ignored-the-front-end-development-scene-for-6-months-it-was-fine/ | development, work | ---
title: I completely ignored the front end development scene for 6 months. It was fine
published: true
date: 2021-07-26 10:37:42 UTC
tags: development,work
canonical_url: https://rachsmith.com/i-completely-ignored-the-front-end-development-scene-for-6-months-it-was-fine/
---
The first year I started coding professionally, most of the work I did was in Adobe Flash. Then Steve Jobs [decided to kill Flash](https://en.wikipedia.org/wiki/Thoughts_on_Flash) and it forced me to learn how to animate with JavaScript, CSS3 and HTML Canvas if I wanted to keep landing contract jobs. In the second year, recruiters were asking me if I could "build Parallax websites" when the first mainstream use of it [appeared only 8 months prior](https://onepagelove.com/nike-better-world).
The hectic pace of needing to learn one thing after the next didn't bother me so much when I was 26, because I was quite happy to spend much of my free time outside of my day job coding. I was really enjoying myself, so the impression that I had to constantly up-skill to maintain my career wasn't a concern. I did wonder, though, how I would ever take enough time off to have a baby, or have other responsibilities that would prevent me from being able to spend so much of my time mastering languages and learning new libraries and frameworks.
Nine years have passed since then. Earlier this year I completed my second stint of 6 months off work in a couple of years. Normally I read blogs, subscribe to several Development related newsletters and stay on top of the latest news regarding JavaScript and the Front End. But for that 6 months I paid no attention to what was happening at work, or in development in general. I completely checked out.
There was a time when I would have thought that taking such a break from Front End wasn’t possible. That I had to be aware of all the new frameworks and language features to keep my career on track. I was misguided in my thinking and was placing importance on the results, rather than the process.
What I’ve learnt through experience is that **the number of languages I’ve learned or the specific frameworks I’ve gained experience with matters very little. What actually matters is my ability to up-skill quickly and effectively**. My success so far has nothing to do with the fact I know React instead of Vue, or have experience with AWS and not Azure. What has contributed to my success is the willingness to learn new tools as the need arises.
My hope in sharing this, is that it might give you permission to stress less about picking the "wrong framework" to learn or feeling like you have to stay aware of every piece of JavaScript news. If you focus on:
- learning how you best learn, and
- practicing effectively communicating the things you've learned
you can't go wrong.
When I returned to work after my 6 month break, [CodePen](https://codepen.io/) was moving their backend code from [Ruby on Rails](https://rubyonrails.org/) to [Go](https://golang.org/), and Front End to [Next.js](https://nextjs.org/). So I am now learning how to program with Go and reading a lot of Next.js docs and resources. If there has been one consistency about my job, it's that the tech is always changing. Although it can be frustrating to go back to being a complete beginner at something over and over, each time I pick up something new I further my expertise in being a lifelong learner. I wouldn't trade that experience for anything.
| rachsmith |
772,089 | Goal Line Software Provides Platform For CFL Fantasy Football | Fantasy football is sometimes jokingly referred to as Dungeons and Dragons for the guys who in high... | 0 | 2021-07-26T14:59:02 | https://dev.to/templatewallet/goal-line-software-provides-platform-for-cfl-fantasy-football-593b | Fantasy football is sometimes jokingly referred to as Dungeons and Dragons for the guys who in high school used to torment the guys who played Dungeons and Dragons. Whatever the case, it is big business in the North American market.
According to the Fantasy Sports and Gaming Association, over 59 million people engaged in fantasy games across the U.S. and Canada. This number is projected to accelerate rapidly in the near future due to the rapid launch of user-friendly fantasy gaming applications and rising internet penetration. With major sporting powerhouses such as Yahoo, ESPN, DraftKings and FanDuel all heavily entrenched in fantasy football, the appetite for these games is only going to grow to be insatiable.
In Canada, the majority of Canadian fantasy football players engage in NFL fantasy football, because that’s the most popular football league in the world and NFL fantasy football platforms are made readily available by software developers. That isn’t the case when it comes to the CFL. While it’s [easier than ever today to watch the CFL in Canada](https://www.canadasportsbetting.ca/news/cfl/how-to-watch-cfl-canada.html), playing CFL fantasy football has always proven to be a challenging pursuit for fans of the three-down game.
##Crossing The Goal Line
The developers at [Goal Line Software](https://football-technology.fifa.com/en/media-tiles/fifa-quality-programme-for-goal-line-technology/) are intent on changing that dynamic. Goal Line Software specializes in customized fantasy football league sites for "non-major" leagues, such as the XFL, CFL, and Indoor Football League, where there is currently no league manager site available.
If there are players out there seeking to organize a CFL fantasy football league, the Goal Line developers are ready to design whatever type of league setup is desired. In order to make it happen, simply give them the rules for the proposed league and they will do everything they can to set up a site that meets those specifications.
They promise to work closely with CFL fantasy football players to make sure the rules of their league are set up properly and that the site functions the way it should. Suggestions for improvements are always welcomed.
The Goal Line Software League Manager will tally scores and help tabulate the standings, providing each league with detailed box scores. It will offer access to schedules for each fantasy league and the professional league that the players are playing with. Each fantasy football franchise owner will be given a password to make lineup changes and free agent requests, and they will be locked from lineup changes at game time for each player. No more posting lineups to a message board that owners can edit after the deadline. In the case of each league, participants can add their own stories to their league home page, or they can use the message board provided for them by the Goal Line Software League Manager. There’s even a prototype of a Fantasy Canadian Football League set up posted on their web page.
Fantasy football only figures to keep growing. Rising internet penetration among the young population, the emergence of next-generation technology (5G), and the quick adoption of smartphones are expected to fuel online fantasy sports gaming. The growing use of fantasy sports gaming apps for brand promotion is also projected to positively impact the market.
The proprietors at Goal Line Software are quick to point out that like the CFL, they aren’t the major players in the football world. They aren’t promoting themselves as the next ESPN Fantasy or Yahoo! Fantasy site. Those major-league sites have a lot of features that Goal Line just can't offer. They are simply providing a site for owners who wish to play a more non-traditional fantasy game, one that the big boys don't support.
##Big Time Players Taking Notice
TSN operated a CFL daily fantasy league during the 2019 season, the last one in which the CFL was active. The league was shut down in 2020 due to the COVID-19 pandemic.
The CFL included weekly CFL fantasy football stories and a CFL fantasy football podcast as part of the coverage on the league’s official website. Both were generated by DailyRoto.com. DailyRoto is a community of DFS players, analysts, and media personalities dedicated to influencing the Daily Fantasy Sports industry. However, these weren’t relatable to the type of season-long CFL fantasy football leagues that Goal Line Software League Manager has developed and are prepared to deliver to CFL fans seeking to operate a season-long CFL fantasy football league. | templatewallet | |
772,233 | TomTom Traffic | In the previous post I introduced the {tomtom} package and showed how it can be used for geographic... | 0 | 2021-09-25T11:10:48 | https://datawookie.dev/blog/2021/07/tomtom-traffic/ | ---
title: TomTom Traffic
published: true
date: 2021-07-25 00:00:00 UTC
tags:
canonical_url: https://datawookie.dev/blog/2021/07/tomtom-traffic/
---

In the [previous post](tomtom-routing) I introduced the `{tomtom}` package and showed how it can be used for geographic routing. Now we’re going to look at the traffic statistics returned by the TomTom API.
As before we’ll need to load the `{tomtom}` package and specify an API key. Then we’re ready to roll.
## Bounding Box
We’ll be retrieving traffic incidents within a bounding box. We’ll define the longitude and latitude extrema of that box (in this case centred on Athens, Greece).
```
left <- 23.4
bottom <- 37.9
right <- 24.0
top <- 38.2
```
## Traffic Incidents
Now call the `incident_details()` function to retrieve information on all incidents within that area.
```
incidents <- incident_details(left, bottom, right, top)
```
The begin and end times of each incident are provided.
```
incidents %>% select(incident, begin, end)
# A tibble: 125 x 3
incident begin end
<int> <chr> <chr>
1 1 2021-07-26T10:44:30Z 2021-07-26T11:02:00Z
2 2 2021-07-26T10:16:38Z 2021-07-26T11:18:00Z
3 3 2021-07-26T10:33:30Z 2021-07-26T11:13:00Z
4 4 2021-07-26T10:46:08Z 2021-07-26T11:12:00Z
5 5 2021-07-26T10:43:00Z 2021-07-26T11:26:00Z
6 6 2021-07-26T08:57:00Z 2021-07-26T11:14:00Z
7 7 2021-07-26T10:30:30Z 2021-07-26T11:13:00Z
8 8 2021-01-04T07:27:00Z 2021-12-31T20:00:00Z
9 9 2021-07-20T09:14:13Z 2021-08-17T19:29:00Z
10 10 2021-01-04T07:27:00Z 2021-12-31T20:00:00Z
# … with 115 more rows
```
You can also get the type of the incident…
```
# A tibble: 3 x 2
incident description
<int> <fct>
1 1 Slow traffic
2 2 Stationary traffic
3 3 Slow traffic
```
… along with a description of where it begins and ends.
```
# A tibble: 3 x 2
incident from
<int> <chr>
1 1 Nea Peramos (Olympia Odos/A8)
2 2 Adrianou Ave (Perifereiaki Aigaleo/A65)
3 3 Γεωργίου Παπανδρέου
# A tibble: 3 x 2
incident to
<int> <chr>
1 1 Stathmos Diodion Eleysinas (Olympia Odos/A8)
2 2 Aspropyrgos-Biomihaniki Periohi (A8) (Perifereiaki Aigaleo/A65)
3 3 Θεοδώρου Βασιλάκη
```
Finally, the `points` field allows you to plot out the locations of each incident.
```
# A tibble: 125 x 8
incident begin end description from to length points
<int> <chr> <chr> <fct> <chr> <chr> <dbl> <list>
1 1 2021-07… 2021-07… Slow traffic Nea Pera… Stathmos Di… 132. <df[,2…
2 2 2021-07… 2021-07… Stationary … Adrianou… Aspropyrgos… 799. <df[,2…
3 3 2021-07… 2021-07… Slow traffic Γεωργίου… Θεοδώρου Βα… 978. <df[,2…
4 4 2021-07… 2021-07… Queuing tra… Κρήτης Αττική Οδός… 1134. <df[,2…
5 5 2021-07… 2021-07… Stationary … Αττική Ο… Κρήτης 866. <df[,2…
6 6 2021-07… 2021-07… Queuing tra… Iera Odo… Aspropyrgos… 3717. <df[,2…
7 7 2021-07… 2021-07… Stationary … Ougko Vi… Labraki Gr.… 204. <df[,2…
8 8 2021-01… 2021-12… Restrictions Mavromih… Kapetan Mat… 81.7 <df[,2…
9 9 2021-07… 2021-08… Closed Koumound… Gitchiou (E… 29.7 <df[,2…
10 10 2021-01… 2021-12… Restrictions Koumound… Mavromihali… 43.0 <df[,2…
# … with 115 more rows
# A tibble: 2,558 x 3
incident lon lat
<int> <dbl> <dbl>
1 1 23.5 38.0
2 1 23.5 38.0
3 1 23.5 38.0
4 1 23.5 38.0
5 1 23.5 38.0
6 1 23.5 38.0
7 1 23.5 38.0
8 2 23.6 38.0
9 2 23.6 38.0
10 2 23.6 38.0
# … with 2,548 more rows
```
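The incident locations in the `points` list-column can be put onto an interactive map. Here's a minimal sketch, assuming the `{tidyr}` and `{leaflet}` packages are installed (the `lon` and `lat` column names are taken from the unnested output above):

```
library(dplyr)
library(tidyr)
library(leaflet)

# Unnest the per-incident coordinate data frames and plot each point.
incidents %>%
  unnest(points) %>%
  leaflet() %>%
  addTiles() %>%
  addCircleMarkers(lng = ~lon, lat = ~lat, radius = 2, stroke = FALSE)
```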
Do yourself a favour and open the above map in a separate tab. The intricate details of the city of Athens are quite fascinating.
The current implementation of the `{tomtom}` package is really only scratching the surface of the TomTom API. If anybody is keen to collaborate on this, get in touch! | datawookie | |
772,942 | My Placements Journey | This article is all about my placements journey. I have cleared interviews and got offers from... | 0 | 2021-07-27T14:09:48 | https://dev.to/manvityagi9/my-placements-journey-4m0g | computerscience, interview, softwaredeveloper, faang | This article is all about my placements journey. I have cleared interviews and got offers from Microsoft, Twitter, Amazon, PayPal, Cisco and a couple more. I am sharing my interview experiences not only for these companies but for all the companies where I was rejected too - Google, Sharechat, Atlan, Postman, Amazon. And the reason for that is - "You can't make all the mistakes yourself to learn from them; a wise person learns from the mistakes of others." I have shared my mistakes and learnings from different interviews; hopefully it will be helpful.
I am a final year student from a Tier-3 college (and it really doesn't matter), but I have to mention it because so many juniors are worried about this fact and I wanna make clear that - "Accept whatever you got and work towards your goal; a Tier-3 college tag won't block your way if you upskill enough".
All the experiences would cover the following details:
- Timeline
- How did I apply
- Interview Process
- My Learning
Let's just start then -
Also, don't get demotivated by my repeated rejections; we will move towards the good things gradually :p

### ZS Associates - SELECTED
- **Role**: Business Technology Analyst
- **How did I apply**: [ZS Campus Beats Challenge](https://www.zs.com/careers/campus-beats)
- **Timeline**: Applied in March, Process in April-May
First interview of 4th year. Though it was not the desired role or package, my confidence was boosted after clearing all the rounds.
It was a lengthy process, and not for an SDE role either, so I am not penning it down here.
### AMAZON - REJECTED
- **Role**: SDE Intern
- **Timeline**: Applied in May and Interviews in June
- **Applied through**: Amaze Wow
- **Process**
1. Online Coding Test
2. F2F Elimination Round 1:
- 2 Questions - 1 each from Trees and DP
- I solved the Trees one completely, struggled with the DP one
- Rejected in this round itself
- **Learning**:
- I didn't manage my time well, spent too much time on 1st question due to which couldn't code the 2nd within the given time.
- I was pretty bad at explaining; interviewing doesn't just mean solving a question in silent mode.
### POSTMAN - REJECTED
- **Role**: SDE Intern
- **Timeline**: Applied and Interviewed in August
- **Applied through**: Careers Site (With referral)
- **Process**
1. Online Coding Test
2. 1st F2F Round: **JavaScript, Computer Networks, Databases, Resume Projects**. All these were asked in great detail, cross-question from each answer, deep dive into all questions. Some questions that I remember: SSL Verification Process, many confusing questions around `this` in JavaScript, How HTTP and HTTPs connections are established, questions around working of NodeJS etc. I was rejected after this round itself. No DSA asked.
3. FYI, 2 more F2F Rounds were expected if I had cleared the previous round.
- **Learning**: Till now, I had majorly been focusing on DSA. With this interview, I realized I needed to thoroughly study all CS subject fundamentals; just reading the Top 50 OS interview questions before the interview isn't gonna work :p
### GOOGLE - REJECTED
- **Role**: Software Developer
- **Timeline**: Applied in August, Interviewed in September
- **Applied through**: Careers Site (With referral)
- **Process**:
1. 5 On-site Rounds(4 tech + 1 Googlyness) were to be scheduled. First 3 on Day 1 and remaining 2 only if the feedback from previous 3 rounds was positive.
2. **Interview Day**: I was already quite nervous. In the first interview, 2 questions were asked. I solved both, one with the expected time and space complexity, but for the other question the interviewer expected a more optimised solution. I sat for the 2nd interview with increased nervousness; only 1 question was asked, which I solved and coded, but again the interviewer pushed for a more optimized approach. By this time, I knew that I had lost this chance and, with no expectations, sat for the 3rd round. This time I solved and coded the solution with the best possible complexity, covered all edge cases etc., and the interviewer seemed happy with my performance.
**Topics of questions - DP, Binary Search, Graph, Hashmap**
3. I got a feedback call within the same week from the recruiter and I was rejected once again.
- **Learning**: After this interview, I could clearly see the areas I needed to work on. The rejection and feedback from Google instead of demotivating me, lifted my spirits. I got 1 YES and 2 NOs from the 3 rounds, but the good thing was that even the 2 interviewers sent this feedback "She was way too near the optimization needed, it was a matter of some more minutes, and She would have solved it." They also told me my strong points along with where I lacked. I clearly knew that I needed to work more on Speed and Problem Solving Skills and I definitely started on it soon after this interview by giving more and more live contests and upsolving them regularly on Codeforces, atcoder, Leetcode, Codechef(short-only), binarysearch.io.
Another big takeaway was giving interviews with a calm mind and confidence. My nervousness really slowed down my brain.
### SHARECHAT - REJECTED
- **Role**: Frontend Intern
- **Timeline**: Applied and Interviewed in September
- **Applied through**: A google form was all around. I just filled it and got the test link.
- **Process**
1. Online Coding Test(3 questions)
2. DSA Round (elimination round): 2 questions (**Topics - Graph and Hashmap**) were asked; complete, optimised, running, clean, bug-free code was expected for both. I performed quite well in this round and the interviewer seemed impressed. After a couple of hours, the recruiter called and informed me that I had my next round the next day.
3. Frontend Dev Round(elimination round): HTML, CSS, JavaScript, ReactJS - This round revolved around these things only.
4. Within the same week, I got the rejection mail.
- **Little more background**: This was the 3rd time in the same year that I had received a test link from Sharechat. First time, for a Backend Intern role (out of 3, I solved 2 questions completely and 1 partially, and didn't get an interview call), 2nd time for an SDE Intern role (solved all 3 questions completely, still didn't get an interview call), 3rd time for a Frontend Intern role (solved all 3 questions completely, and got an interview call this time). Now the sad thing at that time was that I had not been into frontend development at all; Backend Development was where both my skills and interest lay, but I thought that I would practice some frontend before the interviews. A sadder thing was that the interviews were scheduled in just a couple of days -- on the same day, my 3 Google interviews were scheduled :P. As a result, I couldn't prepare anything for the frontend round.
- **Learning**: Well, I was happy after performing well in the DSA round. The bad performance in the frontend dev round didn't affect me because I hadn't prepared for it.
### ATLAN - REJECTED
- **Role**: Backend Intern
- **Timeline**: Applied in July, Interviewed in September
- **Applied through**: Careers Site(without referral), the application form was lengthy and asked about many things including - projects, open source contributions etc.
- **Process**:
1. **Project Submission**: They gave a problem statement, I had to build a solution(Android App or Website). I enjoyed making this project.
2. **1st F2F Round**: No DSA again, Many questions around the project I submitted in the previous round. Some questions revolved around **scalability approaches, system design basics, reliability and failure** in big projects. I answered most of the questions. And the interviewer seemed quite happy. There were some questions like - http multipart request, MySQL master slave replication that I couldn't answer.
3. **Result**: This time I wasn't expecting rejection, but who cares about expectations; I was again rejected, with a message that my past experiences, projects and stack didn't suit the requirement.
- **Learning**: Stop expecting, You can get rejected even after you feel you did good.
### INNOVACCER - PPO
Little good news in September: I received a Pre-Placement Offer from Innovaccer, where I did my summer internship. For the interview experience and process, refer to my other [article](https://manvityagi770.medium.com/innovaccer-interview-experience-2d0808b50b0d).
## A Break that I took from Interviewing
I was tired by this point, for some or the other reason, I was getting rejected again and again. I was working on my weak areas, analysing for each interview, why I couldn't make it and improving on those things. But every interview gave me a new reason of rejection.
The companies that I have listed above are only the ones I got interview calls from, let alone the companies where I applied and didn't get a reply and the companies from where I received a test link but didn't get interview calls (2 cases here - for some, I didn't perform well enough in the tests; for some, I didn't get an interview call even after solving the tests completely).
I stopped applying to any companies at this point and just practiced more for around 1.5 months silently. No LinkedIn, No interviews, only coding and brushing fundamentals again.
### CISCO - SELECTED
- **Role**: SDE Intern
- **Timeline**: Applied in July, interviewed in November. I had totally forgotten that I even applied here.
- **Applied through**: Careers Site(with Referral).
- **Process**:
1. Online Test: 2 Coding Questions and MCQs. Only Java, Python and C were allowed.
2. 1st Technical Round(60 mins): 2 Coding Questions
- 1st question's optimization was based on using a linear String Matching Algo in one part of the algorithm, I implemented KMP.
- 2nd question was to check whether a graph is a tree.
- Write pseudo code for Semaphore Working
- Many questions from Operating Systems - Threads, Processes, Memory Management etc.
3. 2nd Technical Round(45 mins):
- Based on CS Fundamentals and Resume
- OS, DBMS, CN, REST API Design, Questions around my projects and skills that I mentioned in my resume
4. HR Round:
This was more of a formality, They informed about the stipend, duration etc and asked some questions like Why Cisco etc.
In a couple of days, I received the selection mail 🎉
**Note**: All rounds were eliminatory rounds and were conducted on the same day with a gap of couple of hours.
- **Learning** - This was kind of my first success at interviews and I realized that a calm mind without any expectations helped me during the interview. This was also the first time, I was not nervous before the interviews, Why? This time I had tailored my mind with - "कर्म किए जा फल की इच्छा मत कर", in English - "Do your duty without thinking about results". Before the interviews, I just told myself to talk to the guys, solve the questions they ask and chill. And that helped :)
### PAYPAL - SELECTED
- **Role**: SDE Intern
- **Timeline**: I applied around August and got test link in November and had interviews in December.
- **Applied through**: Careers Site(without Referral). University Recruitment was probably the name of this hiring event.
- **Process**:
1. Online Test: 3 Coding Questions. The most interesting questions that I got in any test till now.
2. 1st Technical Round(45 mins):
- Trapping Rain Water problem. It's a LeetCode Hard problem. Complete optimised code was expected.
- Asked me to explain the approach of the 2 questions from the online round and asked if I had any other approaches to solve them.
- 2 Puzzles. It was fun solving them.
3. 2nd Technical Round(30 mins):
- Based on CS Fundamentals and Resume
- The interviewer asked me to introduce myself along with the work that I have done in my previous internships or any projects that I wanted to discuss. She cross-questioned meanwhile.
- The interviewer was clearly impressed with my answers and overall profile.
4. Managerial Round:
- This Round taught me to never be overconfident. I always thought that HR Rounds are a piece of cake for me, so I never really prepared or even thought about them.
- The interviewer asked me many questions about myself - my aspirations, my principles of life, some situation-based questions, my weaknesses, and many more. To be honest, I didn't really feel good after the interview. I thought he was not going to select me because, during the interview, he focused on my weaknesses a lot; most of his questions revolved around them, and it's like I couldn't even mention one of my profile/work highlights.
In a couple of days, I received the selection mail 🎉. I joined PayPal and after the internship received a full-time offer from them.
**Note**: All rounds were eliminatory rounds and were conducted on different days.
- **Learning** - Most interviewers are very supportive and encouraging. Speaking with confidence and putting up a happy face instead of a scared one transmits good vibes across.
### AMAZON - SELECTED
- **Role**: Software Development Engineer
- **Timeline**: Applied in January, Interviewed in March
- **Applied through**: Careers Site(with Referral).
- **Process**:
1. Online Test: 2 Coding Questions and MCQs(Quant, Reasoning, English, Personality). I found it easy as compared to other tests I had given.
2. 1st Technical Round(60 mins): 2 Coding Questions
- Rotten Oranges Variant(a Leetcode Medium Question)
- A question based on Topological Sort
3. 2nd Technical Round(60 mins):
- Graph Question - Used Dijkstra's Algorithm
- DP Question (I don't remember the question)
4. Bar Raiser Round(this was probably the name):
- A mix of everything that is asked in interviews, It took well above 90 mins
- 1 DSA - Binary Search Problem with some tricks and needed optimizations - Good Question
- In-depth discussion of work in my previous internships
- In-depth discussion of one project - He asked to write the code of one of the APIs of my project and asked to do some tweaks in the database calls inside it.
- Discussion about my volunteering and leadership experiences.
- Why Amazon
- 2 Behavioural questions checking Amazon Leadership Principles
In the same week, I received the selection call 🎉
**Note**: All rounds were eliminatory rounds and were conducted in 2 days.
- **Learning** - The interviewer advised me to never stop working on "[Girl Code It](http://girlcodeit.com/)"(it's an [organisation](https://www.linkedin.com/company/girl-code-it/) that I run). He said, everybody works for money, promotions, a better life etc. but only a few have selfless purposes, Don't let it go.
### MICROSOFT - SELECTED
- **Role**: Software Developer
- **Timeline**: Applied in January, Interviewed in February
- **Applied through**: Microsoft Engage Hackathon
- **Process**:
1. Hackathon: 5 Problem Statements were given. My project was shortlisted and I was called for the further interview process.
2. Online Test: 3 Coding Questions and MCQs. 2 coding questions were easy but the 3rd one was a damn tough problem on graphs.
3. 1st Technical Round(45 mins):
- Resume and Projects Grilling
- Hackathon Project Discussed
- Questions around REST APIs, HTTP Verbs, request and response headers, SQL vs NOSQL - Usecases, ACID properties etc.
4. 2nd Technical Round(30 mins):
- 3 Coding Questions
- **Topics: Linked List, Hashmaps, DP**
- Average difficulty
5. AA Round(As appropriate):
- The recruiter told me that interviewer could ask anything in this round - coding question, fundamentals, projects, HR questions etc.
- 1 Coding Question (from arrays; it was a new and tricky question and I don't remember it exactly), and I was asked why I wanted to join Microsoft.
- All of it went well overall. I couldn't tell the optimised approach at first but after some thinking I gave the expected solution soon. The interviewer even made a comment that he liked the way I approached the question from different directions and liked my confidence even after not hitting the right approach in the first go.
In a couple of days, I received the selection call 🎉
**Note**: All rounds were eliminatory rounds and were conducted on different days.
- **Learning** - By this time, so much had changed about the way I interview, I once ruined my Amazon and Google Interviews because of being nervous after telling the brute approach, I have written it before, and will again repeat - Don't think too much, Nervousness can slow down your brain :p
### PALANTIR - REJECTED
- **Role**: Software Developer
- **Timeline**: Applied in December, Interviewed in February
- **Location**: London
- **Applied through**: Careers Site(without referral)
- **Process**:
1. Online Coding Test: 3 Coding Questions. The questions were different and difficult than the usual ones, I solved 2 completely and 1 partially. Also, this was the only company where I got an interview call without hitting a 100% score in coding questions.
2. 1st F2F Coding Round(60 mins):
- 3 Coding Questions
- **Topics**: Strings, Arrays, Trees, Hashmaps
- The questions were easy but running code for all 3 was expected, and the implementation of each of them was lengthy. I coded as fast as I could, completed all three, and explained my solutions to the interviewer.
3. Another Coding Round:
- A tricky Binary Tree problem; later the interviewer asked the same question for a generic tree. It went well.
4. Learning Round:
- Some rules of a new language were shared with me during the interview itself, and I was asked a few questions which I had to solve using that language's syntax. It was very much like SQL. This round went well too.
5. Decomposition Round:
- It was more like a system design round, if I had to label it. A scenario for an app was given and I had to discuss its high-level design. I wasn't prepared for it, but still did my best; the interviewer seemed neutral and I couldn't judge how he felt about my performance.
In a couple of days, I received the rejection and feedback call 🎉.
The recruiter told me that the feedback from all rounds was positive except the Decomposition Round. They liked me but still wouldn't be able to proceed with my application.
**Note**: The interview process and support was one of the best I experienced.
- **Learning** -
- Prepare System Design :P
- Though I wasn't selected, I gave my best and had a great experience.
### TWITTER - SELECTED
**Role** - Software Engineer
**Timeline** - June, Last interview that I gave :p
Twitter asks the candidates to sign an NDA before the interview process, So I won't be able to share any specifics. My friend has penned down the general process, you can refer it [here](https://levelup.gitconnected.com/my-twitter-interview-experience-ba621aa50b87).
### Some more Tips
**Online Test**: Try to solve all the questions. I have faced rejection even after solving all questions a few times, let alone the hopes of getting an interview without solving all questions completely.
Taking part in live or even virtual contests on codeforces.com, codechef.com (short only), atcoder.com, binarysearch.io and leetcode.com will equip you with problem-solving skills, quick implementation ability, effective debugging, and edge-case recognition and resolution.
**Within the actual Interview**: I have a lot to share on this topic, so I will write a separate article altogether.
**Getting calls from companies**: A good resume, an impressive LinkedIn profile, applying to as many companies as possible, and an added referral (I took all the referrals through LinkedIn only) is the only answer.
If you are not getting replies from companies, not getting shortlisted, not getting selected after interviews, Just remember -
"जरूरी थोड़ी है, कि जो पत्थर तुम मारो, उससे आम टूटे ही टूटे, आखिर कुछ कोशिशें तैयारी के लिए भी होती हैं"
English Translation - "It isn't necessary that every stone you throw will hit the target; after all, some efforts are also made for preparation."
**Note**:
There is a possibility that there might be some deviations in the timeline that I have mentioned and the order/number of interviews might also differ for someone else.
Good Luck for your interviews!
Also, please share the article with those who could be helped by it. I hope it gave you a general idea of what SDE interviews at various companies look like.
If you want to know something else, Feel free to comment. | manvityagi9 |
772,278 | Hacker tools Proxybroker | ProxyBroker .. image::... | 0 | 2021-07-26T17:28:45 | https://dev.to/introvertnvm/hacker-tools-proxybroker-3n4e | ProxyBroker
===========
.. image:: https://img.shields.io/pypi/v/proxybroker.svg?style=flat-square
:target: https://pypi.python.org/pypi/proxybroker/
.. image:: https://img.shields.io/travis/constverum/ProxyBroker.svg?style=flat-square
:target: https://travis-ci.org/constverum/ProxyBroker
.. image:: https://img.shields.io/pypi/wheel/proxybroker.svg?style=flat-square
:target: https://pypi.python.org/pypi/proxybroker/
.. image:: https://img.shields.io/pypi/pyversions/proxybroker.svg?style=flat-square
:target: https://pypi.python.org/pypi/proxybroker/
.. image:: https://img.shields.io/pypi/l/proxybroker.svg?style=flat-square
:target: https://pypi.python.org/pypi/proxybroker/
ProxyBroker is an open source tool that asynchronously finds public proxies from multiple sources and concurrently checks them.
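As a minimal usage sketch based on the project's documented ``Broker`` API (it finds ten working HTTP/HTTPS proxies and prints them as they are found):

.. code-block:: python

    import asyncio
    from proxybroker import Broker

    async def show(proxies):
        # Consume proxies from the queue until the broker signals the end.
        while True:
            proxy = await proxies.get()
            if proxy is None:
                break
            print('Found proxy: %s' % proxy)

    proxies = asyncio.Queue()
    broker = Broker(proxies)
    tasks = asyncio.gather(
        broker.find(types=['HTTP', 'HTTPS'], limit=10),
        show(proxies))

    loop = asyncio.get_event_loop()
    loop.run_until_complete(tasks)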
.. image:: https://raw.githubusercontent.com/constverum/ProxyBroker/master/docs/source/_static/index_find_example.gif | introvertnvm | |
772,328 | uollka | kkks | 0 | 2021-07-26T18:37:05 | https://dev.to/faizannehal1/uollka-34a9 | kkks | faizannehal1 | |
772,338 | Python Basics, Python 101! | Python is one of the high-level programming languages in this age. It is both an object-oriented... | 0 | 2021-07-26T18:56:51 | https://dev.to/shazi/python-basics-python-101-47kp | python, programming | Python is one of the high-level programming languages in this age. It is both an object-oriented programming language and a structural language. If you are still in a dilemma of whether or not to learn python, this article can help you reach a better conclusion.
## Python Definition
As mentioned earlier, Python is a high-level general-purpose programming language developed by Guido Van Rossum in the late 1980s. It was later first released in 1991. Since then, python has undergone many developments making it easier for other developers and tech industries to use it.
## Why choose python?
### 1. Easy installation process
Python installation is done in just a few minutes. All you have to do is:
- Browse the python version you want to install.
- Download the chosen python installer.
- Run the downloaded executable file.
- Install the preferred python version while agreeing to the said terms.
- Start your program. The basic python code for a beginner is:
```python
print("Hello world")
```
This simple code lets you know that everything is properly installed and hence you can start writing your programs.
### 2. Easy to learn
One major Python benefit is that its code closely resembles the English language. For example, it's easy to understand what the code written above is used for, that is, to print the phrase 'Hello world'.
Compared to other programming languages, Python programs need fewer lines of code, which makes them easier to understand. Python is also interpreted, so your code runs as soon as you finish writing it, making work easier.
Also, you don't need to learn Python in a physical institution. Many self-taught programmers have used the power of the internet to gain their knowledge.
### 3. It is widely used.
Python is an all-purpose programming language which means it can be used for almost anything. Major companies like Google, Spotify, Netflix, and many more use the language in their applications. Other uses are mentioned below.
## What can python be used for?
### 1. Building calculators
Yes, you heard me right. The mathematical calculators that you use online have been built with Python. Of course, these calculators are built on many complex algorithms that may be hard to understand at the beginner level.
Still, it's possible to build your own basic calculator even at a beginner level using the following code. Just make sure you are familiar with the various data types in Python.
```python
num1 = input("Enter the first number")
num2 = input("Enter the second number")
result = int(num1) + int(num2)
print(result)
```
You can change the operator to any mathematical operator type you want.
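Building on that, here is a small sketch of one way the operator choice itself could be handled, using a dictionary lookup (the `calculate` helper is just an illustration, not part of any library):

```python
import operator

def calculate(num1, op, num2):
    """Apply a basic arithmetic operator to two numbers."""
    operations = {
        "+": operator.add,
        "-": operator.sub,
        "*": operator.mul,
        "/": operator.truediv,
    }
    return operations[op](num1, num2)

print(calculate(3, "+", 4))   # 7
print(calculate(10, "/", 4))  # 2.5
```

You could feed it the values read with `input()` from the snippet above after converting them with `int()` or `float()`.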
### 2. Web applications
Although HTML and JavaScript are the prime languages for web development, python is still widely used in creating web applications.
For instance, python frameworks like Django and Pyramid can be used in building server-side web applications.
Websites like Amazon and Pinterest have applied Python on their platforms.
### 3. Creating mobile apps
Python also makes it easy to create an app that works well, and various widely known apps, like Netflix, Quora and Uber, have been built on Python foundations.
The good news is that you don't need years of experience to build your first app. You can even do it after a month of learning basic Python programs.
### 4. Artificial intelligence and Machine Learning
Artificial intelligence and machine learning have become the face of almost everything globally. Many fields, including data science, robotics, business and many more, use them in their day-to-day activities. It's obvious that you also use these fields daily without knowing it.
The most interesting part is that python is widely applied in these fields. Python plays a major role in building neural networks and making predictions.
Also, several Python libraries, like NumPy and Pandas, help with data cleaning, data analysis and data visualization, making Python an important tool in data science.
For example, the Dropbox desktop client was written largely in Python.
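As a tiny taste of the NumPy library mentioned above, here is a sketch of a basic data-cleaning step (the readings below are made-up values, purely for illustration):

```python
import numpy as np

# Made-up sensor readings with one missing value (NaN), the kind
# of gap that data cleaning removes before analysis.
readings = np.array([4.4, np.nan, 2.5])

# Keep only the valid entries, then summarise them.
clean = readings[~np.isnan(readings)]
print(round(float(clean.mean()), 2))  # 3.45
```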
## Bottom Line
Python is one of the finest programming languages ever developed. If you are thinking of starting to code, Python is a good option for you.
| shazi |
772,345 | Tailwind CSS Simple Table Example | In this tutorial we will create simple tailwind css table, tailwind css table components, table with... | 0 | 2021-07-26T19:18:02 | https://larainfo.com/blogs/tailwind-css-simple-table-example | tailwindcss, webdev, tutorial, css | In this tutorial we will create simple tailwind css table, tailwind css table components, table with icon, table with divider, examples with Tailwind CSS
#### Tool Use
_Tailwind CSS 2.x_
_Heroicons Icons_
👉 [View Demo](https://larainfo.com/blogs/tailwind-css-simple-table-example)
#### Setup Project
Using CDN
```html
<link href="https://unpkg.com/tailwindcss@^2/dist/tailwind.min.css" rel="stylesheet">
```
or
[The Easiest way to install Tailwind CSS with Tailwind CLI](https://larainfo.com/blogs/the-easiest-way-to-install-tailwind-css-with-tailwind-cli)
[How to Install Tailwind CSS with NPM](https://larainfo.com/blogs/how-to-install-tailwind-css-with-npm)
#### Example 1
Simple Table
```html
<div class="container flex justify-center mx-auto">
<div class="flex flex-col">
<div class="w-full">
<div class="border-b border-gray-200 shadow">
<table>
<thead class="bg-gray-50">
<tr>
<th class="px-6 py-2 text-xs text-gray-500">
ID
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Name
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Email
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Created_at
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Edit
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Delete
</th>
</tr>
</thead>
<tbody class="bg-white">
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
1
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-white bg-blue-400 rounded">Edit</a>
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-white bg-red-400 rounded">Delete</a>
</td>
</tr>
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
2
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-white bg-blue-400 rounded">Edit</a>
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-white bg-red-400 rounded">Delete</a>
</td>
</tr>
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
1
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-white bg-blue-400 rounded">Edit</a>
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-white bg-red-400 rounded">Delete</a>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
```

#### Example 2
Table with Divider
```html
<div class="container flex justify-center mx-auto">
<div class="flex flex-col">
<div class="w-full">
<div class="border-b border-gray-200 shadow">
<table class="divide-y divide-gray-300 ">
<thead class="bg-gray-50">
<tr>
<th class="px-6 py-2 text-xs text-gray-500">
ID
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Name
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Email
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Created_at
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Edit
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Delete
</th>
</tr>
</thead>
<tbody class="bg-white divide-y divide-gray-300">
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
1
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-blue-600 bg-blue-200 rounded-full">Edit</a>
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-red-400 bg-red-200 rounded-full">Delete</a>
</td>
</tr>
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
1
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-blue-600 bg-blue-200 rounded-full">Edit</a>
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-red-400 bg-red-200 rounded-full">Delete</a>
</td>
</tr>
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
1
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-blue-600 bg-blue-200 rounded-full">Edit</a>
</td>
<td class="px-6 py-4">
<a href="#" class="px-4 py-1 text-sm text-red-400 bg-red-200 rounded-full">Delete</a>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
```

#### Example 3
Table with Icon
```html
<div class="container flex justify-center mx-auto">
<div class="flex flex-col">
<div class="w-full">
<div class="border-b border-gray-200 shadow">
<table class="divide-y divide-gray-300 ">
<thead class="bg-gray-50">
<tr>
<th class="px-6 py-2 text-xs text-gray-500">
ID
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Name
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Email
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Created_at
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Edit
</th>
<th class="px-6 py-2 text-xs text-gray-500">
Delete
</th>
</tr>
</thead>
<tbody class="bg-white divide-y divide-gray-300">
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
1
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#">
<svg xmlns="http://www.w3.org/2000/svg" class="w-6 h-6 text-blue-400" fill="none"
viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
d="M11 5H6a2 2 0 00-2 2v11a2 2 0 002 2h11a2 2 0 002-2v-5m-1.414-9.414a2 2 0 112.828 2.828L11.828 15H9v-2.828l8.586-8.586z" />
</svg>
</a>
</td>
<td class="px-6 py-4">
<a href="#">
<svg xmlns="http://www.w3.org/2000/svg" class="w-6 h-6 text-red-400" fill="none"
viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
d="M19 7l-.867 12.142A2 2 0 0116.138 21H7.862a2 2 0 01-1.995-1.858L5 7m5 4v6m4-6v6m1-10V4a1 1 0 00-1-1h-4a1 1 0 00-1 1v3M4 7h16" />
</svg>
</a>
</td>
</tr>
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
1
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#">
<svg xmlns="http://www.w3.org/2000/svg" class="w-6 h-6 text-blue-400" fill="none"
viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
d="M11 5H6a2 2 0 00-2 2v11a2 2 0 002 2h11a2 2 0 002-2v-5m-1.414-9.414a2 2 0 112.828 2.828L11.828 15H9v-2.828l8.586-8.586z" />
</svg>
</a>
</td>
<td class="px-6 py-4">
<a href="#">
<svg xmlns="http://www.w3.org/2000/svg" class="w-6 h-6 text-red-400" fill="none"
viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
d="M19 7l-.867 12.142A2 2 0 0116.138 21H7.862a2 2 0 01-1.995-1.858L5 7m5 4v6m4-6v6m1-10V4a1 1 0 00-1-1h-4a1 1 0 00-1 1v3M4 7h16" />
</svg>
</a>
</td>
</tr>
<tr class="whitespace-nowrap">
<td class="px-6 py-4 text-sm text-gray-500">
1
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-900">
Jon doe
</div>
</td>
<td class="px-6 py-4">
<div class="text-sm text-gray-500">jhondoe@example.com</div>
</td>
<td class="px-6 py-4 text-sm text-gray-500">
2021-1-12
</td>
<td class="px-6 py-4">
<a href="#">
<svg xmlns="http://www.w3.org/2000/svg" class="w-6 h-6 text-blue-400" fill="none"
viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
d="M11 5H6a2 2 0 00-2 2v11a2 2 0 002 2h11a2 2 0 002-2v-5m-1.414-9.414a2 2 0 112.828 2.828L11.828 15H9v-2.828l8.586-8.586z" />
</svg>
</a>
</td>
<td class="px-6 py-4">
<a href="#">
<svg xmlns="http://www.w3.org/2000/svg" class="w-6 h-6 text-red-400" fill="none"
viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
d="M19 7l-.867 12.142A2 2 0 0116.138 21H7.862a2 2 0 01-1.995-1.858L5 7m5 4v6m4-6v6m1-10V4a1 1 0 00-1-1h-4a1 1 0 00-1 1v3M4 7h16" />
</svg>
</a>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
```

### See Also 👇
[Tailwind CSS Simple Button Examples](https://larainfo.com/blogs/tailwind-css-simple-button-examples)
[Tailwind CSS Simple Responsive Image Gallery with Grid](https://larainfo.com/blogs/tailwind-css-simple-responsive-image-gallery-with-grid)
[Tailwind CSS Simple Alert Components Examples](https://larainfo.com/blogs/tailwind-css-simple-alert-components-examples)
[Tailwind CSS Simple Card Examples](https://larainfo.com/blogs/tailwind-css-simple-card-examples)
[Tailwind CSS Badge Examples](https://larainfo.com/blogs/tailwind-css-badge-examples)
[Tailwind CSS Simple Modals Examples](https://larainfo.com/blogs/tailwind-css-simple-modals-examples)
[Tailwind CSS Simple Avatar Examples](https://larainfo.com/blogs/tailwind-css-simple-avatar-examples)
| saim_ansari |
772,461 | Setting Up and Configuring WSL | Linux is by far the best operating system for developers, but, people still use Windows and other... | 0 | 2021-07-28T20:51:25 | https://dev.to/quantalabs/setting-up-and-configuring-wsl-392c | windows, linux, wsl, bash | Linux is by far the best operating system for developers, but people still use Windows and other operating systems, or have another personal computer that for some reason runs Windows, and so don't use Linux. Of course, you could install Linux onto your computer, but if you can't, or just want to keep Windows for accessibility, enter WSL - the Windows Subsystem for Linux. WSL allows you to run a Linux virtual machine on your Windows machine to run Linux commands and install `.deb` or `.rpm` packages, depending on the distribution you choose.
## Our steps
- Install WSL
- Install a Linux Distribution
- Configure our Linux Distribution
- Install VSCode, Python, and other development tools
Let's get started with installing WSL.
## 1. Install WSL
First things first, we need to install WSL. Open your settings app on Windows, and click "Apps."

Once at the Apps menu, the right sidebar has a "Related Settings" section that includes "Programs and Features", which you need to go to. From there, there should be a sidebar where you should click "Turn Windows features on or off."

From this, you should have a screen like this:

Check the "Window Subsystem for Linux" box, and select Ok. After that, you should have a Restart now button, and click that.
## 2. Install a Linux Distribution
Now, we can install our Linux Distribution. You can install whichever distribution you want, but I'll be showing you the Ubuntu distribution. Go to [aka.ms/wslstore](https://aka.ms/wslstore), which will open up the Microsoft Store, showing you Linux Distributions. In the store, click "Ubuntu" and then "Install." Once installed, open up your command prompt, and run:
```
Ubuntu
```
If that doesn't work, you can open up the Ubuntu app, which should output something like this:

From here, it will prompt you to enter a username and password. These can be anything; they don't have to match your Windows username and password, though they can. Make sure you remember the password, because we'll be using `sudo` to install packages, which requires you to enter it. If you do lose it, open a Windows command prompt, run `ubuntu config --default-user root` to temporarily make root the default user, relaunch Ubuntu, and then run:
```bash
passwd <username>
```
which will reset the password. Afterwards, switch the default user back with `ubuntu config --default-user <username>`.
## 3. Configure our Linux Distribution
The next thing we need to do is configure our distribution. Before we get into installing VS Code, Python, or anything else, we first need to configure our `.bashrc` with some useful aliases and other important scripts. Before any of this, create a new directory called `Coding/` or `Projects/`, and clone all your repos there.
Next, we need to configure our `.bashrc`. First, run:
```sh
curl https://getmic.ro | bash
sudo mv micro /usr/bin
```
This will install micro and move it into the `/usr/bin` directory. For those of you who don't know, micro is a command line editor useful for editing small files from the terminal, like `.bashrc`, with keyboard shortcuts and other useful things. Now that we have it installed, you can run:
```sh
micro ~/.bashrc
```
to open `.bashrc`, and start editing it. You can add:
```sh
alias activate-<PROJECT>="cd <PROJECT_DIR>"
```
To quickly switch between projects. From here, you can also add other aliases or any other command you want to run on startup.
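For instance, here are a few generally handy aliases (purely examples; pick whatever fits your own workflow):

```sh
# ~/.bashrc
alias update="sudo apt update && sudo apt upgrade -y"
alias gs="git status"
alias gl="git log --oneline --graph --decorate"
```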
Now, we can start to install the tools we want to run. If you have Visual Studio Code, then normally you'd run:
```bash
code
```
to open up VS Code, but we need to do some extra configuration first. Launch VS Code (from Windows, not from your Linux distribution) and install the **Visual Studio Code Remote - WSL** extension. This will allow you to manage your Linux and Windows projects independently. Now, we can run:
```sh
code
```
to launch VS Code. On the first run there might be some output while the VS Code server components for your Linux distribution are downloaded and installed.
### Anaconda
If you're like me and use anaconda, then you can use:
```bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
```
Now, `whereis python` should give you an output like `/home/user/anaconda3/bin/python`. If it doesn't, then anaconda isn't on path, so you can add to your `.bashrc`:
```bash
# ~/.bashrc
# Add to end of your bashrc.
export PATH=/home/user/anaconda3/bin:$PATH
```
This installs miniconda, but if you'd like all the built-in-packages, there are detailed instructions [here](https://gist.github.com/kauffmanes/5e74916617f9993bc3479f401dfec7da).
### Other great tools
I created [this small gist](https://gist.github.com/Quantalabs/8640dfca495248daf997951a77d69b4c) which installs some basic stuff to get you going, including:
- ZSH
- Git
and others.
{% gist https://gist.github.com/Quantalabs/8640dfca495248daf997951a77d69b4c %}
However, there's also [starship](https://starship.rs), which is an amazing cross-shell prompt. I had a bit of trouble with the installation, so I'll spare you the trouble and show you how I installed it.
#### Starship

First, we need to install the binary:
```bash
sh -c "$(curl -fsSL https://starship.rs/install.sh)"
```
The next step depends on which shell you're using. The previous gist installed ZSH, and if you'd like to use ZSH, then add the following line to `.zshrc`:
```bash
# ~/.zshrc
eval "$(starship init zsh)"
```
For bash users:
```bash
# ~/.bashrc
eval "$(starship init bash)"
```
And if you plan on using fish, you can add this to your fish config:
```bash
# ~/.config/fish/config.fish
starship init fish | source
```
There's a whole lot of other configurations out there, and you can view them [here](https://starship.rs/guide/#%F0%9F%9A%80-installation).
And you have starship!
But... if you want to make your starship stand out, like the one below, things get a little harder.

We first need to configure starship. Run:
```bash
mkdir -p ~/.config && touch ~/.config/starship.toml
```
to create the config file, and open it up however you want:
```bash
code ~/.config/starship.toml # With VSC
micro ~/.config/starship.toml # With micro
```
Now, before we edit the config file, you need to download any Nerd Font; I chose FiraCode. You can download one from [here](https://nerdfonts.com/font-downloads), and move the zip file into `\\wsl$\Ubuntu\home\{USERNAME}\`. Extract the files, and then run:
```bash
mkdir ~/.fonts/
mv *.ttf ~/.fonts/
fc-cache -f -v
```
Now, extract the same file on your Windows system. In File Explorer, select all the TrueType Font files (`.ttf`), right-click them, and select "Install", which should install the font. Then, in Visual Studio Code, change the `terminal.integrated.fontFamily` setting to `"FiraCode Nerd Font", Monaco`. Finally, use the config from [here](https://starship.rs/presets/#configuration) and paste it into your starship config file. After that, launch your terminal in any project, and it'll show you something like this:

That's WSL! Comment any other cool things you added to starship, zsh, bash, or anything else related to the post that you think I might have missed! | quantalabs |
1,319,186 | Git Partial Clones | I recently was introduced to Sparse Directories in SVN. In SVN, you can initially clone a repository... | 0 | 2023-01-05T22:25:33 | https://brandonrozek.com/blog/git-partial-clones/ | git | ---
title: Git Partial Clones
published: true
date: 2022-02-07 22:07:08 UTC
tags: Git
canonical_url: https://brandonrozek.com/blog/git-partial-clones/
---
I recently was introduced to [Sparse Directories in SVN](https://svnbook.red-bean.com/en/1.8/svn.advanced.sparsedirs.html). In SVN, you can initially clone a repository and have it be empty until you checkout the specific files needed. I wondered if I can do the same with `git`. For the _tl;dr_ skip to the conclusion section.
As a benchmark, we’re going to reference the size of a cryptographic library I helped author. As a baseline, let’s see how big the repository is before adding any flags.
```
git clone https://github.com/symcollab/cryptosolve
du -sh cryptosolve
90M cryptosolve
```
## Using the `--filter` flag
With the `filter` flag, blobs that fall under a specified criteria do not get automatically downloaded during a clone. The blobs do, however, get downloaded whenever its associated files get checked out. By setting the flag to `blob:none`, we are telling git to not download any files initially. Though since the main branch gets checked out by default during a clone, git will still download the blobs associated with the main branch.
```
git clone --filter=blob:none https://github.com/symcollab/cryptosolve
du -sh cryptosolve
2.1M cryptosolve
```
## Using the `--no-checkout` flag
We can then improve the last command by adding the `no-checkout` flag. This flag will not construct any of the files in the current branch. If you don't include the filter flag from before, then there really isn't much of a space savings, since all the information is still stored in the git database.
```
git clone --no-checkout https://github.com/symcollab/cryptosolve
du -sh cryptosolve
89M cryptosolve
```
You can see that there are no files checked out with a `ls -a`.
```
. .. .git
```
Though with the filter flag, we can see the space savings!
```
git clone \
--no-checkout \
--filter=blob:none \
https://github.com/symcollab/cryptosolve
du -sh cryptosolve
508K cryptosolve
```
## Using the `--sparse` flag
The sparse flag makes it so that when we checkout a reference, only the immediate files in the root directory are constructed. Additional commands then need to be issued in order to checkout other directories. With the sparse flag by itself, there isn’t much savings since all the information is still downloaded and stored in the git database.
```
git clone --sparse https://github.com/symcollab/cryptosolve
du -sh cryptosolve
89M cryptosolve
```
## Conclusion
The power comes from when we combine all these flags together.
```
git clone \
--filter=blob:none \
--sparse \
--no-checkout \
https://github.com/symcollab/cryptosolve
du -sh cryptosolve
516K cryptosolve
```
It’s not different from the `--filter=blob:none --no-checkout` command initially, but when we checkout a branch we can see that not all the blobs get downloaded.
```
cd cryptosolve
git checkout main
cd ..
du -sh cryptosolve
644K cryptosolve
```
You can fetch folders as you please with the following command:
```
git sparse-checkout add FOLDERNAME
```
You can even set it so that a specific folder is shown at the root of your directory.
```
git sparse-checkout set FOLDERNAME
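# Related subcommands worth knowing (available in Git 2.25+):
# list the folders currently checked out, or undo sparse mode entirely
git sparse-checkout list
git sparse-checkout disable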
``` | brandonrozek |
772,513 | Developing a cost effective alternative to Amazon RDS automatic backups | Amazon RDS is great. It does some truly incredible things with almost 0 things to worry about for the... | 21,365 | 2021-08-11T12:22:38 | https://dev.to/ashiqursuperfly/cost-effective-alternative-to-amazon-rds-database-backups-1ll5 | aws, mysql, database, devops | Amazon RDS is great. It does some truly incredible things with almost 0 things to worry about for the developer. However, like most good things in life :) RDS is not very cheap. Also, there are a number of other good reasons to set up your own database inside a compute instance (like EC2) instead of using RDS. Yes, if you use RDS, AWS takes full responsibility for the administration, availability, scalability and backups of your database, but you do lose some manual control over your database. If you are the kind of person that prefers manual control over everything and wants to explore the idea of manually setting up your own database, the first important issue you need to deal with is making sure your data survives any potential disasters :). In this article, we will first set up our own database backups and then automate the process using bash and python scripting. We will be using a MySQL docker container for our database, but the process is generic and you should be able to set it up for any database you prefer.
**Prerequisites**
```
- docker installed on system
- docker-compose installed on system
- python3 installed on system
```
### Steps
#### **1.** Setup MySQL docker container
If we have docker and docker-compose installed in the system, we can quickly spin up a MySQL container using the following **docker-compose.yml** file.
**Docker-Compose**
```yml
version: '3.7'
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    restart: always
    volumes:
      - mysql_data_volume:/var/lib/mysql
    env_file:
      - .env
volumes:
  mysql_data_volume:
```
Now, to start the container:
```sh
docker-compose up --build
```
Now, note down the container name from:
```
sudo docker ps
```
In my case the command outputs:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2f5b2941c93 mysql:5.7 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp db-backup_db_1
```
So, our container name is db-backup_db_1. This follows the convention `{folder-name}_{docker-compose-service-name}_{count-of-containers-of-this-service}`.
Now, our database is ready. We assume this database is connected to some application that generates data, and our job is to periodically make backups of that data. Then, if necessary, we can simply restore the database with the data from a specific point in time.
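For completeness, restoring from a dump is essentially the reverse of the backup (a sketch, using the container name and database from the example setup above; double-check against your own setup before running):

```
sudo docker cp dump.sql db-backup_db_1:/dump.sql
sudo docker exec db-backup_db_1 sh -c 'mysql -u root -p${MYSQL_ROOT_PASSWORD} ${MYSQL_DATABASE} < /dump.sql'
```

Since restoring overwrites the matching tables, it's a good idea to try it against a scratch database first.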
Notice we have an environment variables file called .env in our docker-compose; we will get to that soon.
#### **2.** Setting up S3 bucket
We cannot just keep our generated data dumps lying in our machine's file storage, because if the machine goes down we would lose all our backups. So, we need to store the backups on a persistent file storage like Amazon S3. S3 is widely considered to be one of the best file storage services out there, and it's very cheap. In this article, we won't go through the process of creating S3 buckets, but in case you don't already know, it's very easy and can be done from the AWS console in just a couple of clicks. You can also get an access_key_id and secret_access_key by setting up programmatic access from the IAM console.
Now, we keep our secrets on the .env file like so,
```
AWS_ACCESS_KEY_ID=*********
AWS_SECRET_ACCESS_KEY=******************
AWS_S3_REGION_NAME=*********
AWS_STORAGE_BUCKET_NAME=*********
MYSQL_DATABASE=*********
MYSQL_ROOT_PASSWORD=*********
MYSQL_USER=*********
MYSQL_PASSWORD=*********
```
Secrets include AWS secrets and the database secrets.
#### **3.** Generating Backups/Dumps
In order to generate mysql data dumps we have to first connect into our database container then run the **mysqldump** command.
We can do this using the following one liner:
```
sudo docker exec db-backup_db_1 sh -c 'mysqldump -u root -p${MYSQL_ROOT_PASSWORD} ${MYSQL_DATABASE} > dump.sql'
```
This will create a data dump called 'dump.sql' inside the database container. Now, we have to copy the dump from inside the container.
```
sudo docker cp db-backup_db_1:dump.sql .
```
Now, we just have to upload the file to our S3 bucket. We will do this using the boto3 python package.
#### **4.** Uploading generated dumps to S3 Bucket
We create a python script called upload_to_s3.py like so,
**upload_to_s3.py**
```py
import sys
from botocore.exceptions import ClientError
import boto3
import os
from datetime import datetime

S3_FOLDER = 'dumps'


def upload_s3(local_file_path, s3_key):
    s3 = boto3.client(
        's3',
        aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
        aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY")
    )
    bucket_name = os.getenv("AWS_STORAGE_BUCKET_NAME")
    try:
        s3.upload_file(local_file_path, bucket_name, s3_key)
    except ClientError as e:
        print(f"failed uploading to s3 {e}")
        return False
    return True


def main():
    # sys.argv always contains the script name, so check for < 2,
    # not == 0, to catch a missing file-name argument
    if len(sys.argv) < 2:
        print("Error: No File Name Specified !")
        return
    if not os.getenv("AWS_ACCESS_KEY_ID") or not os.getenv("AWS_SECRET_ACCESS_KEY") or not os.getenv("AWS_STORAGE_BUCKET_NAME"):
        print("Error: Could not Find AWS S3 Secrets in Environment")
        return
    upload_s3(sys.argv[1] + ".sql", S3_FOLDER + "/" + sys.argv[1] + "-" + str(datetime.now()) + ".sql")


if __name__ == '__main__':
    main()
```
To run the script,
```
# make sure you have boto3 installed in your python venv
pip install boto3
```
and then,
```
python3 upload_to_s3.py dump
```
This script expects a command line argument with the name of the dump file (without the '.sql' extension) and the AWS secrets in the system environment variables. It then uploads the dump file to the S3 bucket under a folder called 'dumps'.
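One caveat worth noting: `str(datetime.now())` produces keys containing spaces and colons (e.g. `dumps/dump-2021-08-11 12:22:38.123456.sql`), which some S3 tools and shells handle awkwardly. If you prefer cleaner, lexicographically sortable keys, here is a small sketch (the `make_dump_key` helper is just for illustration, not part of the script above):

```py
from datetime import datetime

def make_dump_key(base_name, folder="dumps"):
    # strftime keeps the key free of spaces and colons, and this
    # timestamp format sorts lexicographically by date
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return f"{folder}/{base_name}-{stamp}.sql"
```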
#### Final Bash Script
**backup_db.sh**
```
while read -r l; do export "$(sed 's/=.*$//' <<<$l)"="$(sed -E 's/^[^=]+=//' <<<$l)"; done < <(grep -E -v '^\s*(#|$)' $1)
sudo docker exec db-backup_db_1 sh -c 'mysqldump -u root -p${MYSQL_ROOT_PASSWORD} ${MYSQL_DATABASE} > dump.sql'
sudo docker cp db-backup_db_1:dump.sql .
python3 upload_to_s3.py dump
sudo rm dump.sql
```
The bash script expects the name of the .env file as command line argument.
The first line is a handy little one-liner that parses the .env file and exports the environment vars into the system. (P.S: I didn't come up with it myself, obviously o.O)
Then, it generates the dump and uploads it to the S3 bucket as we discussed. Finally, we remove the local copy of the dump, since we don't need it anymore.
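If the `sed` one-liner looks cryptic, this is roughly what it does, expressed in Python (purely for illustration; the bash script itself does not use this):

```py
import os

def load_env(text):
    # skip blank lines and comments; split each line at the FIRST '='
    # (so values containing '=' survive intact), then export the pair
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        os.environ[key] = value
```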
Now each time we run the script,
```
bash backup_db.sh .env
```
We would see a new data dump in our s3 bucket,

#### **5.** Doing it periodically
We can easily do it periodically using a cron job. We can set any period we want using the following syntax,
```
sudo crontab -e
1 2 3 4 5 /path/to/script # add this line to crontab file
```
where,
```
1: Minutes (0-59)
2: Hours (0-23)
3: Days (1-31)
4: Month (1-12)
5: Day of the week(1-7)
/path/to/script - path to our script
```
e.g., we can generate a data dump every Sunday at 8:05 am using the following:
```
5 8 * * Sun /path/to/script
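# or, with output captured to a log file for troubleshooting:
5 8 * * Sun /path/to/script >> /var/log/db_backup.log 2>&1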
``` | ashiqursuperfly |
772,642 | Addressing Key Challenges of Oracle EBS to Oracle Cloud Migration with Continuous Testing | Undoubtedly, there are numerous benefits of migration from Oracle EBS to Oracle Cloud such as low... | 0 | 2021-07-27T03:12:12 | https://dev.to/opkey_ssts/addressing-key-challenges-of-oracle-ebs-to-oracle-cloud-migration-with-continuous-testing-1abj | Undoubtedly, there are numerous benefits of migration from Oracle EBS to Oracle Cloud such as low cost of ownership, scalability, flexibility, and continuous innovation. However, for business users and owners, business continuity is of great importance since inaccurate Oracle Cloud migration can lead to business disruption and risks. Oracle Cloud migration can be a very painful experience and cost you revenue as well as a reputation if performed incorrectly. To ensure that applications are stable, functional, and secure in the new Oracle Cloud environment, you need to bring in a robust testing strategy. Testing ensures that data flow is constant, backend processes are working fine with all compliance and security requirements in place.
In this article, we will address key concerns of managers and business owners who are looking to migrate to Oracle Cloud Apps from Oracle EBS. We will also discuss how a <B>“Risk-based Test”</B> focused Oracle Cloud migration strategy can help you keep business risks at bay.
<B>Key points need to be considered while designing Oracle Cloud migration testing strategy</B>
<B>Existing Integrations –</B> Many of the enterprises have integrated productivity tools such as Google Docs, payroll services, 3Pl, or Microsoft Office 365 to EBS. Key managers need to plan the testing across process integration, adapter related, runtime issues, & file-server related.
<B>Existing Customizations –</B> Customers have changed/evolved the EBS flows. However, Oracle Cloud offers only configurations and limited customizations. So, enterprises may need to adapt some of the business flows available in Oracle Cloud. Regress testing can only ensure that whether or not new workflows support day-to-day operations.
<B>Workflows and Approvals –</B> In EBS, workflows can be customized. However, in Oracle Cloud, the approvals are configured using the Oracle Business Process Management (BPM) cloud service. From data validation and verification to regulatory compliance and security roles, everything needs to be validated. This can only be achieved with security testing.
<B>Test Data Management –</B> Enterprises often face a shortage of realistic information to execute successful testing. In the case of Oracle Cloud Apps, transactions contain sensitive information. At times, testing teams do not have access to transactional data or the tools to extract it from data sources. So, enterprises need to provision high-quality data in the right quantity and format so that successful testing can be executed.
<B>Thus, your testing strategy should be woven around</B>
<B>Configuration Testing</B> – One of the key focus areas of testing should be around configurations and their testing. Few examples are listed below.
- Ledger configuration
- Sub-ledger accounting rules
- Tax geographies
- Organization geographies
<B>Business Process Testing</B> – Critical business processes and workflows need to be validated. Cross platform compatibility needs to be tested and it should be validated that objects are working as expected.
<B>Integration Testing</B> – Integration testing ensures that the interactions between different units such as third-party applications, productivity tools, and legacy software across Oracle Cloud is completed smoothly without any complication.
<B>Security Testing</B> – Undoubtedly, Oracle Cloud offers good security standards, but enterprises still need to implement additional controls around security and visibility. These data security policies, and secure configurations need to be validated to ensure compliance requirements.
<B>Problem with conventional testing approaches</B>
Traditional testing methodology involves testing at the end of development activity, which means you only learn about bugs at a very late stage. If critical bugs are discovered late, they may lead to a lot of rework, redesign, and rethinking of strategy, which in turn adversely impacts time to market. The ideal approach is to start testing as soon as possible. To transform your enterprise, you need to transform your testing strategy.
<B>Introduce Continuous Testing into your development</B>
Continuous testing is all about testing early and often. In this, unit tests are executed continuously to check the build and feedback is provided continuously to the Dev team. Continuous testing helps managers to make informed decisions to optimize the value of a release. Continuous testing offers several benefits some of which are listed below.
- Improved efficiency and time savings
- Early identification of defects
- Faster responsiveness to changing business demands
- Improved accuracy in identifying defects
- Quicker time to market
- Superior customer experience
However, continuous testing cannot be implemented without test automation. To introduce continuous testing in development, executives and managers have to introduce automation and leverage tools available for environment provisioning to test continuously at the developer-machine level. With the right test automation tool, you can have testing running in parallel with the development. All this will save time and money, while helping you to stick to your timelines.
<B>The proposed test automation solution should be </B>
<B>Zero/ Low Code –</B> Zero/low-code test automation platforms save a lot of time in test script creation, as a simple playback recorder can be used to create test scripts. Everyone in your organization can use such a tool to test the application.
<B>Testing smartly with AI –</B> Since AI-based testing tools can understand the intended use of an application, they can create test cases that deliver stable results. AI-powered UI testing offers more accurate visual validation.
<B>Ease of maintenance & Reusability –</B> There is a huge cost involved in recreating test scripts. By bringing in test automation tools that automatically heal test scripts, you can save a lot of time and money. Tools that leverage machine learning and AI technologies for managing, healing, and maintaining application objects and elements in test suites can help minimize test creation and execution efforts.
<B>Stay ahead of the curve with OpKey</B>
OpKey is a <a href="https://www.opkey.com/oracle/"> Continuous Oracle Cloud Testing Platform</a> that enables Oracle Cloud customers to leverage test automation to take full advantage of continuous innovation. As a globally recognized zero code test automation platform for Oracle Cloud Apps, OpKey dramatically reduces testing time and efforts while offering better test coverage to reduce defects.
| opkey_ssts | |
772,727 | Feel like ExpressJs while using Python Flask | Introduction Do you love to write back-end code using ExpressJs? Do you like the auto... | 0 | 2021-07-27T06:38:11 | https://dev.to/marktennyson/feel-like-expressjs-while-using-python-flask-1ldk | flask, python, express, node | [](https://pepy.tech/project/flaske) [](https://pepy.tech/project/flaske/month) [](https://pepy.tech/project/flaske/week)
<img src="https://raw.githubusercontent.com/marktennyson/flaske/main/logos/flaske-logo.png">
### Introduction
Do you love to write back-end code using `ExpressJs`? Do you like the auto-completion features of `VSCode` when using a typing-based language or framework? Do you want all of the above-mentioned features in a Python-based framework called `Flask`?
<br/>
I have created a new python module called [Flaske](https://github.com/marktennyson/flaske), to provide these features.
### How Flaske provides you features like ExpressJs
Flaske basically provides you the request and response objects as the parameters of the view function, very similar to the view functions of ExpressJs. The inbuilt properties and methods of the request and response objects will give you an interactive feel like ExpressJs. We are using the `munch` module to provide attribute-style access very similar to JavaScript. Below I have tried to mention some examples to demonstrate the features of Flaske better.
### Installation
Install from the official PyPI:
```bash
python3 -m pip install flaske
```
Or it can be installed from the source code:
```bash
git clone https://github.com/marktennyson/flaske.git && cd flaske/
python3 setup.py install
```
### Important Links
[PYPI link](https://pypi.org/project/flaske)
[Github link](https://github.com/marktennyson/flaske)
[Documentation link](https://flaske.vercel.app)
### Examples
#### A Basic example:
```python
from flaske import Flask
app = Flask(__name__)
@app.get("/")
def index(req, res):
return res.json(req.header)
```
#### Flask 2.0 supports asynchronous view functions. You can implement this with Flaske too.
```python
from flaske import Flask
app = Flask(__name__)
@app.get("/")
async def index(req, res):
return res.json(req.header)
```
#### You can use Python typing for better code readability and auto-completion.
```python
from flaske import Flask
from flaske.typing import Request, Response
app = Flask(__name__)
@app.get("/")
def index(req:Request, res:Response):
return res.json(req.header)
```
### Basic Documentation
The official and full documentation for this project is available at: https://flaske.vercel.app.
Here I have tried to provide some of the basic features of this project.
#### Request class:
N.B: all of the properties of the Request class will return an instance of Munch.
This will give you the feel of a JavaScript object.
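To make the attribute-style access concrete, here is a plain-Python approximation of what `munch` does under the hood (an illustration only, not Flaske's actual implementation):

```python
class AttrDict(dict):
    """Minimal stand-in for munch.Munch: a dict with attribute access."""
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

data = AttrDict({"name": "aniket", "email": "a@example.com"})
print(data.name)       # attribute-style access, like req.json.name
print(data["email"])   # normal dict access still works
```

This is why `req.json.name` and `req.json["name"]` are interchangeable in the examples below.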
##### property - json
So if your app receives data in JSON format, you can use the `json` property of the request class to access the data.
It's internally using the `get_json` method to provide the data.
For example:
```python
@app.post("/send-json")
def send_json(req, res):
name = req.json.name
email = req.json.email
return res.json(name=name, email=email)
```
##### property - query
This object provides you the URL query parameters.
It's internally using the `args` property to provide the data.
For example:
```python
@app.get("/get-query")
def get_query(req, res):
name=req.query.name
email = req.query.email
return res.send(dict(name=name, email=email))
```
##### property - body
This object provides you all the parameters from the form.
It's internally using the `form` property to provide the data.
For example:
```python
@app.get("/get-form-data")
def get_form_data(req, res):
name=req.body.name
email = req.body.email
return res.send(dict(name=name, email=email))
```
##### property - header
This object provides you all the parameters of the request header.
It's internally using the `headers` property to provide the data.
For example:
```python
@app.get("/get-form-data")
def get_form_data(req, res):
return res.send(req.header)
```
#### Response class
The default response class and the methods or functions of the response class are the following.
##### function - set_status
This is used to set the response header status.
for example:
```python
@app.route("/set-status")
def set_statuser(req, res):
return res.set_status(404).send("your requested page is not found.")
```
##### function - flash
To flash a message at the UI.
for example:
```python
@app.route('/flash')
def flasher(req, res):
return res.flash("this is the flash message").end()
```
##### function - send
It sends the HTTP response.
for example:
```python
@app.route("/send")
def sender(req, res):
return res.send("hello world")
#or
return res.send("<h1>hello world</h1>")
#or
return res.set_status(404).send("not found")
```
##### function - json
To return a JSON-serialized response.
for example:
```python
@app.route("/json")
def jsoner(req, res):
return res.json(name="aniket sarkar")
#or
return res.json({'name': 'aniket sarkar'})
#or
return res.json([1,2,3,4])
```
##### function - end
To end the current response process.
for example:
```python
@app.route("/end")
def ender(req, res):
return res.end()
#or
return res.end(404) # to raise a 404 error.
```
##### function - render
Renders an HTML template and sends the rendered HTML string to the client.
for example:
```python
@app.route('/render')
def renderer(req, res):
context=dict(name="Aniket Sarkar", planet="Pluto")
return res.render("index.html", context)
#or
return res.render("index.html", name="Aniket Sarkar", planet="Pluto")
```
##### function - redirect
Redirects to the specified route.
for example:
```python
@app.post("/login")
def login(req, res):
#if login success
return res.redirect("/dashboard")
```
##### function - get
Get the header information by the given key.
for example:
```python
@app.route("/get")
def getter(req, res):
print (res.get("Content-Type"))
return res.end()
```
##### function - set
Set the header information.
for example:
```python
@app.route("/header-seter")
def header_setter(req, res):
res.set('Content-Type', 'application/json')
#or
res.set({'Content-Type':'application/json'})
return res.end()
```
##### function - type
Sets the Content-Type HTTP header to the MIME type as determined by the specified type.
for example:
```python
@app.route("/set-mime")
def mimer(req, res):
res.type('application/json')
#or
res.type(".html")
#or
res.type("json")
```
##### function - attachment
Send attachments by using this method.
The default attachment folder name is `attachments`.
You can always change it by changing the config parameter.
the config parameter is `ATTACHMENTS_FOLDER`.
for example:
```python
@app.route('/attachments')
def attach(req, res):
filename = req.query.filename
    return res.attachment(filename)
```
##### function - send_file
Send the contents of a file to the client. It's internally using the `send_file` method from Werkzeug.
##### function - clear_cookie
Clear a cookie. Fails silently if key doesn't exist.
##### function - set_cookie
Sets a cookie.
##### function - make_response
Make an HTTP response. It's the same as `flask.wrappers.Response`.
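The cookie helpers can be sketched in the same style as the examples above, assuming Flaske mirrors Flask's usual `set_cookie`/`clear_cookie` signatures (the routes and cookie names here are made up for illustration):

```python
@app.get("/remember-me")
def remember_me(req, res):
    # max_age is assumed to work as in Flask's set_cookie (seconds)
    res.set_cookie("username", req.query.name, max_age=3600)
    return res.send("cookie stored")

@app.get("/forget-me")
def forget_me(req, res):
    res.clear_cookie("username")  # fails silently if the key doesn't exist
    return res.end()
```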
### Development
#### Contribution procedure.
1. Fork and clone this repository.
2. Make some changes as required.
3. Write unit tests to showcase the functionality.
4. Submit a pull request against the `development` branch.
#### Run this project on your local machine.
1. create a virtual environment on the project root directory.
2. install all the required dependencies from requirements.txt file.
3. make any changes in your local code.
4. then install the module on your virtual environment using `python setup.py install` command.
5. The above command will install the `flaske` module on your virtual environment.
6. Now create a separate project inside the example folder and start testing for your code changes.
7. If you face any difficulties performing the above steps, then please contact me at: `aniketsarkar@yahoo.com`. | marktennyson |
772,747 | How To Get High-Quality Medical Translations? | When it comes to translation accuracy and industry knowledge, nothing beats the significance held by... | 0 | 2021-07-27T07:25:42 | https://dev.to/mars_translation/how-to-get-high-quality-medical-translations-3lej | translation, professional, medical, company | When it comes to translation accuracy and industry knowledge, nothing beats the significance held by a properly delivered <b> <a href="https://www.marstranslation.com/industry/healthcare-translation-services">medical translation service</a> </b>. Beyond the language, expertise is a place where a medical translators’ skill set finds its abode.
Focusing on the intended audience of any medical document, being sensitive to project timelines, and having a strong understanding of medical terminology are some important factors that medical translation agencies must take into account.
If pharmaceutical information, medical software documentation, or medical instructions contain even one small error, it can have dire consequences on not only the medical procedures of a hospital but also on the lives of the patients.
There are multiple examples of patients who were adversely affected by translation errors made on imported medical device instructions and inserts. Therefore, the priority in selecting a medical document translation provider should not be whether it's the quickest, cheapest, or easiest, but whether it's the most accurate, error-free, and quality-oriented.
Product instructions, studies, researches, certifications, clinical trial paperwork are the kinds of content that medical companies need to translate if they want to sell their product outside of their country. But in this case, as well, the most important thing is to ensure that the translations are accurate and error-free.
In this article, we’ll discuss some of the important tenets that must be adhered to get high-quality medical translations;
<h2>Researching For The Best Options Available In The Market</h2>
The service providers that can best serve your needs should be actively sought out. You can look at reviews or rankings of the best LSPs, ask for recommendations from friends and acquaintances, or surf the internet.
The following factor should be taken into consideration when you are shortlisting;
● The most preferable companies are the ones with the most experience, but do not immediately discard new players. Feedback or comments from previous clients can be taken into account when you analyze your shortlist.
● It’s imperative to be aware of the entire process that takes place when you are availing the service of a translation agency. Do not shy away from asking for explanations and details of how the translation is being carried out.
● As long as the quality of the output is not compromised, faster is always better: even though such translations are not usually urgent tasks, it is advantageous to get the output early so it can be reviewed and corrected.
<h2>Importance of Translation Technology</h2>
Computer-aided technology can be applied to make the work of human translators more efficient, although raw machine translation is not suitable for medical devices. Quality and cost-effectiveness improve if you create and leverage translation memory and build unique glossaries. Handling the translation of similar products and managing updates can be effectively accomplished by utilizing translation memory. Submitting translation requests and generating reports will be easier if you use advanced translation technology with integrated tools.
<h2>Putting Emphasis on Specialized Medical Translators</h2>
Let's just cut to the chase and discuss the most important tenet of getting a high-quality translation service for medical data. The most essential thing for getting accurate translations is the use of professional, in-country translators with a medical background and years of experience translating medical content.
You will get a lot of errors if you do not hire a specialized medical translator, because there is so much terminology across so many different medical specialties.
Moreover, an ideal translation organization must have an experienced team including a proofreader, multilingual QA testers, IT and web specialists, medical industry experts, localization and internationalization engineers, and a project manager.
<h2>A Dedicated Project Manager Makes the Difference</h2>
You don't want to compete with other clients for the attention of your account's project manager. Instead of having revolving translators and project managers who keep becoming unavailable as they attend to the concerns of other clients, the team assigned to you should focus on completing your project. Moreover, you have to ensure that they will promptly respond to concerns and will only be working on your project.
Contextual accuracy and a high level of precision are required for Healthcare translations. By choosing a cheap service that does not provide the attention and responsiveness you should be getting, don't compromise the quality of your final output.
<h2>Quality Process of Translation Firms</h2>
As we have mentioned, accuracy is critical for medical diagnostic translations. A healthcare translation organization must have monitoring and continuous-improvement processes, service levels, procedures, and a traceable quality process with defined standards. Quality can be difficult to measure, since certain elements may be important to one company but not significant to another.
Furthermore;
● Something may not be “wrong”, so people can let their subjectivity cloud their judgment.
● Not having enough reference information or having a rushed timeline are factors outside of the translation process that can significantly impact the work.
● Differences between you and the translation firm may arise pertaining to work expectations, perceptions, and quality.
A quality plan that takes your company’s needs into account must be defined by working with a <b> <a href="https://www.marstranslation.com/">translation company</a> </b>. Therefore, to measure quality, the process should abide by an accurate and objective methodology.
<h2>Conclusion</h2>
Therefore, do not choose a translation agency haphazardly and without doing any research. Give careful thought and spend some time to find the best possible option for your organization. Trust us, this time will prove to be very beneficial in the long term. You have to ensure that the translations you use don't end up creating problems for your company later on, hence pursuing quality of service should be your topmost priority.
Moreover, if you want to select the best medical translation agency, make sure that they maintain translation memories for all translated documents, because this will ensure fast turnaround times and cost-effective translation of updates.
Conclusively, high-quality translations require an organization that specializes in healthcare translations that focuses on a “must-have approach” rather than a “nice-to-have approach”, because people's lives are at stake.
| mars_translation |
772,876 | Positioning elements with Grid | In this post, I will show you the Grid basics and how I use it to place the content in common... | 6,105 | 2021-07-27T09:37:25 | https://www.dawntraoz.com/blog/positioning-elements-with-grid/ | css, grid, basics, usecases | In this post, I will show you the Grid basics and how I use it to place the content in common situations.
{% tweet 1419954287355043842 %}
## What is Grid?
Grid ([CSS Grid Layout](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout)) is defined as a tool to divide a page into main regions or to define the relationship in terms of size, position and layer.
> The Grid layout is compatible with the vast majority of browsers, some like Opera Mini and IE do not support it, in [Can I use](https://caniuse.com/?search=grid) you can see what properties are supported by which browsers.
It is also known as a two-dimensional layout system that provides us with the best alternative to the tables we used in the past, and it has taken our user interfaces to the next level.
The grid layout allows us to align the elements in columns and rows, space the elements from the container element, position the child elements by overlapping them or forming layers, among other features.
This facilitates the creation of dynamic and responsive layouts, as we will see throughout the article.
## Where to learn more?
Although in this article we will see the initial concepts and a couple of use cases, tips and tricks, I recommend that if you are interested in knowing the ins and outs of Grid, you should take a look at these resources.
In these courses and articles they explain Grid, show you different use cases and the little details they have learned from experience.
- [Understanding CSS Grid](https://www.smashingmagazine.com/2020/01/understanding-css-grid-container/) Series by Rachel Andrew at Smashing Magazine.
- [Learn CSS Grid](https://cssgrid.io/) course by Wes Bos (free).
- [A Complete Guide to Grid](https://css-tricks.com/snippets/css/complete-guide-grid/) by Chris House at CSS-Tricks.
## Grid properties
Now that you know the Grid concept, we are going to see the **properties** we need to shape our layout, distinguishing between the properties of the *parent* or *child* element.
Being the ***parent*** element, the one that contains one or more ***child*** elements that will be sized, aligned, layered and redistributed by the available space. Let's see an example:
```html
<div class="parent grid">
<p class="child grid-item"></p>
<p class="child grid-item"></p>
<span class="child grid-item"></span>
</div>
```
> The parent will be in charge of defining the grid and the children will be in charge of positioning or aligning themselves in specific places, if necessary.
**Parent** (grid *container*) properties:
- **display** - specifies the type of rendering box of an element. With the value *grid*, every direct child will be in a grid context.
```css
display: grid;
```
- **grid-template** - a shorthand property for defining grid columns, rows, and areas at once.
```css
grid-template: none|grid-template-rows / grid-template-columns|grid-template-areas grid-template-rows / grid-template-columns
```
But let's take a look at each property individually to see how to define them.
- **grid-template-rows** - specifies the line names (optional) and track size of the grid rows (horizontal tracks).
```css
grid-template-rows: none|[line-name] track-size [line-name-2];
```

- **grid-template-columns** - specifies the line names (optional) and track size of the grid columns (vertical tracks).
```css
grid-template-columns: none|[line-name] track-size [line-name-2];
```
> A **grid track** is the space between any two lines on the grid. As we can see in the image below, between linename and linename2 we have defined a column track of 1fr in size.

To give value to the size of each row/column track, we must specify the values separating them by spaces and using different units:
- **By common units**: length (px, rem, ...), percentage (%) and fr.
> The **fr** unit represents a fraction of the available space in the grid container.
- **By grid items content**: *min-content,* minimum size of the content, *max-content*, maximum size of the content and *auto*, similar to minmax(min-content, max-content).
- **By functions**:
- *minmax(min-size, max-size)* - It defines a size range, greater than or equal to *min*, and less than or equal to *max*.
- *repeat(n-columns, track-size)* - It allows defining numerous columns that exhibit a recurring pattern in a more compact form.
- *fit-content(track-size)* - It uses the space available, but not less than the *min-content* and not more than the *max-content* of the children.
```css
/* Common units */
grid-template-columns: [linename] 1fr [linename2] 2fr [linename3] 1fr [linename4];
grid-template-rows: [linename] 1fr [linename2] 1fr [linename3] auto [linename4];
/* Grid items content */
grid-template-columns: 1fr max-content 2fr;
grid-template-rows: 1fr min-content max-content;
/* Functions (& combined) */
grid-template-columns: repeat(3, minmax(0, 1fr));
grid-template-rows: minmax(60px, 1fr) fit-content(75%);
```
- **grid-template-areas** - specifies named grid areas by setting the grid cells and assigning names to them.
No grid item is associated with these areas by default, but any child element can reference any area with the grid placement properties (grid-row/grid-column, grid-area), which we will see below.
```css
grid-template-areas: none|.|area-strings;
```
The best way to understand this property is to exemplify it, and you will see it clearly.
```css
grid-template-columns: 300px repeat(3, 1fr);
grid-template-rows: minmax(60px, 1fr) 4fr minmax(60px, 1fr);
grid-template-areas: "sidebar header header header"
"sidebar content content content"
"sidebar footer footer footer";
```
Each defined **string** in grid-template-areas is a **row track**, and the number of times a name is repeated is the number of **columns** that area occupies in that row. As you can see in the image below, the header occupies 3 columns and 1 row:

- **grid-auto-columns** - specifies the **size** of an **implicitly** created (auto-created) grid **column track**.
> By definition, if we haven't explicitly defined in grid-template-columns the size of a column track where a grid item has been positioned, implicit grid tracks are created to contain it.
```css
grid-auto-columns: implicit-tracks-size;
```
The size unit can be any of the ones we have used for grid-template-columns and grid-template-rows.
```css
grid-template-columns: 300px;
grid-auto-columns: 1fr;
```

- **grid-auto-rows** - specifies the **size** of an **implicitly-created** grid **row track**.
```css
grid-auto-rows: implicit-tracks-size;
```
As *grid-auto-columns*, we will use any size unit for the implicit row tracks:
```css
grid-auto-rows: 6rem;
```
- **grid-auto-flow** - controls how the auto-placement algorithm works, specifying exactly how auto-placed items get flowed into the grid (*we can think of it as the flex-direction of grid*).
```css
grid-auto-flow: row|column|dense|[row|column] dense;
```
While **row** and **column**, we can guess where the children of the grid will be placed:
- ***row*** places the items in the **available horizontal track**; if the current row becomes full, a new row will be started.

- ***column*** places them in the **available vertical track**; once the items have filled all the rows in that particular column, a new one will be created.

Instead, **dense** tries to **fill in the gaps** earlier in the grid. In the image below, you can see how the third element (blue) is placed in the space left between the first and second element.

> WARNING - **grid-auto-flow: dense;** can cause items to appear visually out of order, causing accessibility problems.
- **gap** - sets the gaps (gutters) between rows and columns. It is a shorthand for row-gap and column-gap.
```css
gap: row-col-gap|row-gap column-gap;
```
> (row-col-gap) As in *padding,* if we specify a single value it will be set for both row and column.
- **row-gap** - sets the size of the gap in y-axis, between row tracks.
```css
row-gap: 20px;
```
- **column-gap** - sets the size of the gap in x-axis, between column tracks.
```css
column-gap: 5%;
```
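Combining `gap` with `repeat()` and `minmax()` gives one of the most useful Grid patterns: a card grid that adds or removes columns as the viewport resizes, with no media queries. Note that `auto-fill` is a `repeat()` keyword not covered above, and the 250px minimum is an arbitrary choice:

```css
.cards {
  display: grid;
  /* as many 250px-minimum columns as fit; each shares the leftover space */
  grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
  gap: 1.5rem;
}
```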
**Child** (grid *item*) properties:
- **grid-row** - specifies where a child is placed within the grid row tracks and how much it occupies.
It can be a **line** (placed in 1 row), a **span** (placed in more than 1 row), or **auto** (original placement), thereby specifying the inline-start and inline-end edge of the grid area it occupies.
```css
grid-row: grid-line|grid-row-start / grid-row-end;
```
Let's see what possibilities we have to define the start or end of the child item with the *-start and *-end properties.
- **grid-row-start** - specifies a grid item’s start position within the grid row.
```css
grid-row-start: [number|name]|span [number|name]|auto;
```
**line**: *number* to refer to a numbered grid line, or *name* to refer to a named grid line.
**span [number|name]**: the child item will occupy the number of grid rows provided, or until it reaches the line with the specified name.
**auto**: automatic placement on the original position, with a span of 1.
```css
grid-row-start: span 2; /* The item occupies 2 rows */
```
- **grid-row-end**: specifies a grid item’s end position within the grid row. The values that can be specified are the same as for grid-row-start.
```css
grid-row-end: number|name|span [number|name]|auto;
```
Combining both would look like:
```css
grid-row: span 2 / 5; /* Fit 2 rows and end in the numbered line 5 */
```

- **grid-column** - specifies where a child is placed within the grid column tracks and how much it occupies. Same as *grid-row* but this time with the vertical tracks.
```css
grid-column: grid-line|grid-column-start / grid-column-end;
```
A shorthand for the CSS properties:
- **grid-column-start** - specifies a grid item’s start position within the grid column by specifying a line, a span, or auto.
```css
grid-column-start: [number|name]|span [number|name]|auto;
```
- **grid-column-end** - specifies a grid item’s end position within the grid column, also by specifying a line, a span, or auto.
```css
grid-column-end: [number|name]|span [number|name]|auto;
```
- **grid-area** - specifies where a child is placed within the grid and its size.
Its value can be expressed as a shorthand of grid-row-start / grid-column-start / grid-row-end / grid-column-end or with the name of the area created in grid-template-areas.
```css
grid-area: named-area|grid-row-start / grid-column-start / grid-row-end / grid-column-end;
```
Let's see a simple example for each case:
```css
/* Given grid-template-areas: "content content sidebar"; */
grid-area: content;
/* Specifying line numbers:
grid-row-start: 1
grid-column-start: 1
grid-row-end: auto
grid-column-end: 3
*/
grid-area: 1 / 1 / auto / 3; /* OR grid-area: 1 / 1 / auto / span 2; */
```
And these would be the properties that will help us to define the structure of our grid and position its elements individually. Now let's see how to align the elements in the same way, globally or from themselves.
## Alignment properties
Whenever we want to align our layout elements, which are located in cells, we will have to use different properties from the parent but sometimes also from the children, depending on their behavior.
> A **grid cell** is the smallest unit on a grid. Once a grid is defined as a parent, the child items will lay themselves out in one cell each of the defined grid.

Let's see the possibilities we have to align our content by the container or child item.
**Parent** (grid *container*) properties:
- **place-items** - allows you to align child items along the block (y-axis/column) and inline (x-axis/row) directions at once. A shorthand for align-items and justify-items.
```css
place-items: align-items justify-items;
```
To see the values that can be defined, let's take a look at each one of the properties:
- **align-items** - specifies the alignment of grid items on the block direction (y-axis). It sets the *align-self* value on all child items as a group (we will see this property in the child properties section).
```css
align-items: normal|center|start|end|stretch;
```
- **center** causes the elements to be aligned to the center of themselves (grid item cell).
- **start** & **end** causes the elements to be aligned at the beginning of themselves (cell top) or end (cell bottom).
- **stretch** causes the grid items to have the same height as their cell, filling the whole space vertically. The value by default (*normal*).

- **justify-items** - specifies the alignment of grid items on the inline direction (x-axis). It sets the *justify-self* value on all child items as a group.
```css
justify-items: normal|center|start|end|stretch;
```
- **center** causes the elements to be aligned to the center of themselves horizontally.
- **start** & **end** causes the elements to be aligned at the beginning or the end of themselves in the x-axis.
- **stretch** causes the grid items to have the same width as their cell, filling the whole space horizontally. The value by default (*normal*).

- **place-content** - allows you to align the content along the block (y-axis/column) and inline (x-axis/row) directions at once. A shorthand for align-content and justify-content.
```css
place-content: align-content justify-content;
```
Let's take a closer look at the possible values:
- **align-content** - specifies the distribution of space between and around content items in the block direction (y-axis).
```css
align-content: normal|center|start|end|space-around|space-between|space-evenly|stretch;
```
- **center** causes the elements to be aligned at the center of the grid with respect to the y-axis.
- **start** & **end** causes the elements to be aligned at the beginning (top) or end (bottom) of the grid with respect to the y-axis.
- **space-between** causes the grid items to be distributed evenly, being the first item at the start of the grid, and the last at the end.
- **space-around** causes the grid items to be distributed evenly, having half-size space on top/bottom.
- **space-evenly** causes the grid items to be distributed evenly, having equal space around them.
- **stretch** causes the grid auto-sized items to have their size increased equally so that the combined size exactly fills the alignment container.

- **justify-content** - specifies the distribution of space between and around content items in the inline direction (x-axis).
```css
justify-content: normal|center|start|end|space-around|space-between|space-evenly|stretch;
```
- **center** causes the elements to be aligned at the center of the grid with respect to the x-axis.
- **start** & **end** causes the elements to be aligned at the beginning (left) or end (right) of the grid with respect to the x-axis.
- **space-between** causes the grid items to be distributed evenly, being the first item at the left of the grid, and the last at the right.
- **space-around** causes the grid items to be distributed evenly, having half-size space on left/right.
- **space-evenly** causes the grid items to be distributed evenly, having equal space around them.
- **stretch** causes the grid auto-sized items to have their size increased equally so that the combined size exactly fills the alignment container.

**Child** (grid *item*) properties:
- **place-self** - allows you to align an individual child item along the block (y-axis/column) and inline (x-axis/row) directions at once. A shorthand for align-self and justify-self.
```css
place-self: align-self justify-self;
```
- **align-self** - overrides the align-items value and aligns the item inside the grid area or cell along the block direction (y-axis).
```css
align-self: center|start|end|stretch;
```
- **center** aligns the content to the center of the grid cell in the y-axis.
- **start & end** aligns the content to the start/end of the grid cell in the y-axis.
- **stretch** fills the grid cell in the y-axis (normal value).

- **justify-self** - overrides the justify-items value and aligns the item inside the grid area or cell along the inline direction (x-axis).
```css
justify-self: center|start|end|stretch;
```
- **center** aligns the content to the center of the grid cell in the x-axis.
- **start & end** aligns the content to the start/end of the grid cell in the x-axis.
- **stretch** fills the grid cell in the x-axis (normal value).

> **BONUS** - [Masonry Layout](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout/Masonry_Layout) is an experimental feature, a layout method where 1 axis uses common values (usually columns), and the other the masonry value. In the axis of the masonry (usually the row), instead of leaving gaps after the small elements, the elements of the next row move up to fill the gaps. (AWESOME, isn't it?)
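Since masonry is still experimental, here is a minimal sketch of the syntax as described in the spec draft (in Firefox it currently sits behind the `layout.css.grid-template-masonry-value.enabled` flag), so treat it as illustrative rather than production-ready:

```css
/* Illustrative sketch: masonry support is experimental and may need a browser flag */
.masonry {
  display: grid;
  grid-template-columns: repeat(4, minmax(0, 1fr)); /* common values on one axis */
  grid-template-rows: masonry; /* the masonry value on the other axis */
  gap: 1rem;
}
```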
## Real use cases
Now, I am going to show you how I usually solve, with the CSS Grid layout, different situations that I have found in websites I've developed:
### Situation 1 - Grid sidebar layout
Imagine you are developing a layout that includes a sidebar on the left, which follows you as you scroll, and the rest of the website, header, content and footer, on the right.
Here I show you an example of what I mean and how it could be distributed:

To do this, we will first define the grid container and the cells or areas we will need to place our elements, and then the position of the child items in the grid:
```html
<div class="grid grid-cols-[300px,minmax(0,1fr)] grid-rows-[60px,1fr] min-h-screen">
<aside id="sidebar" class="sticky top-0 h-screen">Sidebar</aside>
<header class="row-span-1 col-start-2">Header</header>
<section role="main" class="col-start-2">Content</section>
<footer class="col-start-2">Footer</footer>
</div>
```
- **CSS Parent**
```css
.grid {
display: grid;
}
.grid-cols-[300px,minmax(0,1fr)] {
/* 2 column tracks, 1st 300px, 2nd max the space available */
grid-template-columns: 300px minmax(0, 1fr);
}
.grid-rows-[60px,1fr] {
/* 2 row tracks, 1st 60px, 2nd the space available */
grid-template-rows: 60px 1fr;
}
.min-h-screen {
min-height: 100vh; /* Fill the full screen */
}
```
> **Why is it important to define minmax(0, 1fr) instead of 1fr?** Because minmax allows the track to shrink below 1fr, so the content does not overflow. This way, elements like a slider can be added to that responsive column without any problem.
- **CSS Children**
```css
/**
* Aside - Sidebar
*/
.sticky {
position: sticky; /* Follow you when scrolling */
}
.top-0 {
top: 0px;
}
.h-screen {
height: 100vh; /* Fill the screen vertically */
}
/**
* Header
*/
.row-span-1 {
grid-row: span 1 / span 1; /* Position: first row track */
}
.col-start-2 {
grid-column-start: 2; /* Position: second column track */
}
/**
* Content & Footer
*/
.col-start-2 {
grid-column-start: 2; /* Position: second column track (original position for the row) */
}
```
> Note: The examples will be styled with TailwindCSS using JIT mode, but I'm still going to add the generated CSS for those who don't use it.
### Situation 2 - Grid post thumbnail layout
In this situation, our goal is to represent the thumbnail of our article, showing the image, the title, the description, the date and a like button to save it in our list of interests.
Let's imagine that the image below is the outline of how we want to represent it:

The first thing is to define our elements in semantic HTML and set up the grid on the parent. In this case, the children will not need to specify their position, because they will occupy the cells that correspond to them by their order in the HTML.
```html
<article class="grid grid-cols-[120px,minmax(0,1fr),64px] grid-rows-1 gap-4">
<!-- Figure: By default will be added in the first column of 120px -->
<figure>
<img src="image_url" alt="image_alt" width="120" height="120" />
</figure>
<!-- Header (Title, excerpt and date): By default will be added in the second (available space) column -->
<header class="py-4">
<h2>Heading</h2>
<p class="pb-8">Description or excerpt</p>
<p>Date</p>
</header>
<!-- Like button: By default will be added in the last column of 64px -->
<button class="w-8 h-8 p-4">
<svg width="32" height="32">Like Icon</svg>
</button>
</article>
```
As before, let's differentiate between the style applied to the parent and the one applied to the children:
- **CSS Parent**
```css
.grid {
display: grid;
}
.grid-cols-[120px,minmax(0,1fr),64px] {
/* 3 column tracks, 1st fixed of 120px, 2nd responsive from 0 to 1fr, and 3rd fixed of 64px */
grid-template-columns: 120px minmax(0, 1fr) 64px;
}
.grid-rows-1 {
/* 1 row track responsive from 0 to 1fr */
grid-template-rows: repeat(1, minmax(0, 1fr));
}
.gap-4 {
gap: 1rem; /* gap between row and column tracks of 1rem (16px) */
}
```
- **CSS Children**
```css
/**
* Header: Internal padding
*/
.py-4 {
padding-top: 1rem;
padding-bottom: 1rem;
}
/**
* Button: Size and internal padding
*/
.w-8 {
width: 2rem; /* (32px) */
}
.h-8 {
height: 2rem; /* (32px) */
}
.p-4 {
padding: 1rem;
}
```
### Situation 3 - Grid responsive layout
Another quite common situation I find is a grid that is responsive by nature: as the screen grows, more elements are added to the previous rows, with the columns defined automatically.
This way, when an element of x pixels fits in the previous row, it moves up and the next element takes its place.
If the width of the grid in the image below were larger, item 5 would be part of the first row.

In this situation it is even easier to define the grid container: with just *grid-template-columns* we will be able to get that result, but in order to have space between the items we will add the gap property as well:
- **HTML**
```html
<div class="grid grid-cols-[repeat(auto-fit, 150px)] gap-8">
<article>Item 1</article>
<article>Item 2</article>
<article>Item 3</article>
<article>Item 4</article>
<article>Item 5</article>
</div>
```
- **CSS**
```css
.grid {
display: grid;
}
.grid-cols-[repeat(auto-fit, 150px)] {
grid-template-columns: repeat(auto-fit, 150px);
}
.gap-8 {
gap: 2rem; /* (32px) */
}
```
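One related design choice worth noting: `repeat()` also accepts `auto-fill`, which, unlike `auto-fit`, keeps the empty tracks instead of collapsing them. A minimal comparison (class names here are illustrative, not Tailwind utilities):

```css
/* auto-fit collapses empty tracks, so alignment properties act
   over the space the items actually occupy */
.grid-fit {
  grid-template-columns: repeat(auto-fit, 150px);
}
/* auto-fill keeps the empty 150px tracks, reserving their space */
.grid-fill {
  grid-template-columns: repeat(auto-fill, 150px);
}
```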
Basically, in this article I intended to list what I will need to review in the future when defining a new grid, plus a couple of use cases to keep it fresh.
I hope you find it as useful as I did and have a wonderful week full of grids and layout! | dawntraoz |
782,571 | 🔥Latest Animation & Incredible Spinner🔥 | Hey lovey dear! Today I wanna going share some Latest and incredible Animation created with just CSS... | 0 | 2021-08-05T13:01:03 | https://dev.to/wowrakibul02/latest-animation-incredible-spinner-5aj9 | animation, spinner, css, html | Hey lovey dear!
Today I want to share some of the latest and incredible animations created with just CSS and HTML by **__Wow Rakibul__**. It's really amazing, so let's see:
{% codepen https://codepen.io/wowrakibul02/pen/xxdaowz %}
<iframe height="300" style="width: 100%;" scrolling="no" title="" src="https://codepen.io/wowrakibul02/embed/preview/xxdaowz?default-tab=css%2Cresult" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
See the Pen <a href="https://codepen.io/wowrakibul02/pen/xxdaowz">
</a> by Wow Rakibul (<a href="https://codepen.io/wowrakibul02">@wowrakibul02</a>)
on <a href="https://codepen.io">CodePen</a>.
</iframe> | wowrakibul02 |
782,580 | My road to Front-End Dev | Hello beautiful people. Let me introduce myself: My name is Federico, 27, Argentinian, IT worker for... | 0 | 2021-08-05T13:10:09 | https://dev.to/federhico/my-road-to-front-end-dev-5dc9 | Hello beautiful people. Let me introduce myself: My name is Federico, 27, Argentinian, IT worker for the last seven years more or less.
The intention behind this post is to start writing/blogging in English. I was struggling a lot choosing the topic for it. Despite having enough knowledge to write something more technical, my final decision was to write about how I evolved from that kid who wanted to hack NASA into who I am 10 years later.
This story begins when I was a kid/teenager. I had some experience playing games, browsing the web, and that kind of stuff, but I needed something more. The thing that mainly caught my attention was "cracking" games: installing them and copy-pasting some .exe files to the program files folder to make them work without buying them was something I needed to understand. I started googling "what's a crack?" and "how do cracks work?", figured out that cracks were modified software and crackers were a kind of hacker, and discovered a whole new world.
I decided that I wanted to crack/hack things, so, naturally, I needed to understand how software works. After hours of googling stuff and scrolling through long forum threads, I figured out that I needed to learn to code.
That was the start of a long hate-love relationship. I started writing some .bat files that shut down the computer, then more complex scripts that behaved like a Q-A mini-game, until I wrote my first "masterpiece", a contact book fully written in batch (hopefully I will find the post where I shared that code).
After that, the next goal was to do something more user-friendly; I was tired of seeing the MS-DOS console. So, at 14, I learned HTML. Looking back, it's funny that the tutorial had a table-based version and a "newer version" with divs. It was the year 2007, and JavaScript for me was something "weird" that allowed making snow fall on my page.
A few years after that, when I finished high school, I was completely sure that I wanted to study something related to software development. I started college and followed the path to "Software Developer": a lot of C#, Windows Forms, PHP, and JAVASCRIPT. The year was 2013 and Angular.js was becoming a thing; I got an internship job with Angular.js, and from that moment I forgot all about my hacking/cracking intentions. My new goal was to make cute dynamic websites. I had that job for almost a year, and it was the kick-off of my career.
My second job was at an ERP company, doing help-desk and writing some Windows Forms apps that worked as aux-software for the ERP, mainly importing Excel files and hitting the database to insert the required data. One year in that company trying to convince the head of developers (some 50-year-old guy who only liked Visual Basic and didn't trust web-based stuff) that we could do better if we made a web platform with all the needed features and stopped carrying .exe files on a pen drive to the clients' servers and installing them there. New job, please.
I had an interview with another ERP company that was developing a web platform full of aux apps in Angular 2. I didn't have to think twice: I said goodbye and went to work there. One year working on the research and development team (me and my boss) on an Angular + Node project. First time using Trello, writing user stories, using Git, and all the good stuff that I had learned in college but hadn't had the opportunity to apply yet.
Again, after one year working in that company, that experience allowed me to get into my current company, where I started working with USA-based clients in distributed, multi-cultural teams with a full SCRUM work schema. Today I'm still working there, and after 2 years I got a second job doing some freelance stuff for another company, also as an Angular dev.
That's my story, my path; maybe it's similar (or not) to yours, but I wanted to share it in order to, like I said at the beginning of the post, "write something in English". I apologize for the bad grammar and other writing issues that you, as a native or more proficient English user, can find. Hoping to hear your story in the comments. | federhico | |
782,615 | How to Track DevOps Events with AWS Kinesis Data Streams | To run a cloud platform in production, your team needs to know how things are running. There are... | 0 | 2021-08-05T14:05:23 | https://dev.to/jwallace/how-to-track-devops-events-with-aws-kinesis-data-streams-3jod | devops, aws, kinesis, serverless | To run a cloud platform in production, your team needs to know how things are running. There are seemingly endless metrics, measurements, and logs to analyze to ensure the platform is running as it should. Keeping clients satisfied so they continue using your platform is the goal of any cloud company.
Along with the significant amount of data you can collect from your system comes just as many tools you may use to collect them. On AWS alone, you might make use of Lambda, CloudTrail, CloudWatch, and XRay. Each of these tools also has a subset of tools that can be useful for tracking your information. However, the most interesting is not the individual data points but the analysis of that data. To properly analyze, data needs to be accessible by the same analytical tools. AWS Kinesis Data Streams can provide a method to amalgamate data quickly and efficiently for this analysis.
# Features of Kinesis Data Streams
Kinesis data streams have many features that allow them to be used in a wide breadth of use cases. In this article, we will highlight features especially critical for analyzing platform health.
## Real-Time Streaming Performance
Kinesis Data Streams allow data to flow through the queue at very high speeds. Each shard can consume 1MB/s of input and provide 2MB/s of output. AWS also limits input by the number of writes (1,000 PUT requests per second). If you require more throughput, add new shards; each shard added directly increases the capacity of the stream.
With scaling speeds, streaming from a Kinesis Data Stream to a real-time analytics process can provide fast results. For DevOps security, notifications can be sent to users efficiently, so teams may address problems earlier, even while they are occurring. This speed can significantly shorten the downtime of your platform.
## Easily Scale Capacity
Your platform may require different capacity settings based on predicted or spontaneous usage spikes. Kinesis Data Streams can dynamically scale with capacity ranging from the megabytes available with a single shard up to terabytes. The number of PUT requests can also scale up to millions of records per second. This scaling capacity means Kinesis can grow as your platform gains users and requires more throughput. You can stick with the same tool as your business grows and not need to rebuild infrastructure as you scale.
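To make the capacity arithmetic concrete, here is a small illustrative helper (not part of any AWS SDK) that estimates a shard count from the per-shard limits mentioned above (1MB/s and 1,000 PUT records/s of input):

```javascript
// Illustrative only: estimate how many shards a stream needs,
// given the per-shard input limits (1 MB/s and 1,000 PUT records/s).
function shardsNeeded(inputMBps, recordsPerSec) {
  const byThroughput = Math.ceil(inputMBps);         // 1 MB/s written per shard
  const byRecords = Math.ceil(recordsPerSec / 1000); // 1,000 PUTs/s per shard
  return Math.max(1, byThroughput, byRecords);
}

// e.g. 5 MB/s of events arriving as 12,000 small records per second:
console.log(shardsNeeded(5, 12000)); // 12: the record rate is the bottleneck
```

As the example shows, whichever limit you hit first (bytes or record count) determines the shard count, which is why bursty, small-record DevOps event streams often need more shards than their raw byte volume suggests.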
## Resource-Linked Cost
Like many AWS services, with Kinesis Data Streams, you pay for what you use. For each shard created, AWS charges for shard hours. The actual cost is dependent on the AWS region used, ranging from $0.03/shard hour in Sao Paulo to $0.015/shard hour in North Virginia. Users are also charged per million PUT payload units, again with a cost dependent on the region and similar to the shard hour cost. AWS charges for optional features like encryption and data retention separately.
## Security and Encryption
AWS encrypts data in transit by default. Users can also optionally encrypt data at rest, choosing between managing their own encryption keys or having AWS apply encryption using AWS KMS. For streaming security data, encryption at rest may be necessary: data from AWS CloudTrail or private user information should be encrypted to limit what attackers could extract from it.
# Kinesis Data Streams Versus SQS
Data Streams are AWS-managed infrastructure. When setting up this service, you do not need to consider storage, provisioning, or deployment of the stream. Both Data Streams and SQS are AWS-managed queue services. Each can be useful for different requirements and flows through your cloud platform. Here, we are discussing analyzing DevOps data to detect security, scalability, and bugs in your cloud platform. Features of Kinesis Data Streams make it the better choice for this end.
Kinesis can provide ordering of records, which is not available with standard SQS queues. A value called the sequence number is assigned to each record, guaranteed unique per partition key per shard. Using this value, data is guaranteed to arrive at the consumer in the correct order.
Kinesis also can read and re-read data in the same order by the same or new consumers. Data is stored in the queue after reading for a predetermined amount of time. This differs from SQS, which will hold data only until a consumer processes it. Both Kinesis and SQS offer retries to read data.
SQS does not give the ability to have multiple consumers listen to the same queue. SQS provides load balancing if multiple consumers are reading from a queue. Kinesis, however, will provide the same data to all consumers. Throughput is calculated for each consumer. If you need real-time speed and have a significant amount of data, consider using the available enhanced fan-out setting on Kinesis Data Streams. This setting will enable each consumer to have its own throughput capacity without affecting other connected consumers.
# Writing to Kinesis Data Streams
AWS Kinesis data streams can collect data from many sources in AWS. Kinesis can then forward data to different analytics tools like the [Coralogix log analytics](https://coralogix.com/platform/log-analytics/) platform.
## AWS Lambda and Kinesis Data Streams
AWS Lambda is a serverless compute system managed by AWS. These functions are commonly used in cloud computing to run the features of your system. Alternatively, developers may choose to run Fargate tasks, or EC2 compute functions. Each of these can interface to Kinesis using a similar methodology.
Compute functions can send data to Kinesis Data Streams for further analysis. This data may include interactions with APIs, data from outside sources, or results from Lambda itself. To write to your Data Stream from Lambda, use the AWS SDK. Developers can add various valuable data to the Kinesis data stream using the function laid out below.
```javascript
// Assumes the AWS SDK for JavaScript v2, which is bundled in the Lambda runtime
const AWS = require('aws-sdk');

const kinesis = new AWS.Kinesis();
kinesis.putRecord({
  Data: 'STRING_VALUE',                      // payload: a string or Buffer
  PartitionKey: 'STRING_VALUE',              // determines which shard receives the record
  StreamName: 'STRING_VALUE',
  ExplicitHashKey: 'STRING_VALUE',           // optional: override the partition-key hash
  SequenceNumberForOrdering: 'STRING_VALUE'  // optional: enforce per-key ordering
}).promise();
```
## AWS CloudWatch to Kinesis Data Streams
CloudWatch allows users to configure subscriptions. These subscriptions will automatically send data to different AWS services, including Kinesis Data Streams. Subscriptions include filter configurations that allow developers to limit what data is sent to Kinesis.
Developers can also use these filters to send data to different Data Streams, allowing for different processing to occur based on the data’s content. For example, data needed to process DevOps logs may go to a single stream bound for an analytics engine, while user data may go to a different stream bound for long-term storage.
Use the AWS CLI to set up the subscription to a Kinesis Data Stream using the following commands. You must create the stream before assigning a subscription to it. You will also need to create an IAM role for your subscription to write to your stream. For a complete description of the steps to create a CloudWatch subscription, see the [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/streaming-cloudwatch-logs/).
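As a sketch of what those commands typically look like (the stream name, log group, account ID, and role here are placeholders you would replace with your own resources):

```shell
# Create the destination stream first (it must exist before the subscription)
aws kinesis create-stream --stream-name my-devops-stream --shard-count 1

# Subscribe a CloudWatch log group to the stream; the filter pattern limits
# which events are forwarded (an empty string forwards everything)
aws logs put-subscription-filter \
  --log-group-name "my-app-logs" \
  --filter-name "ToKinesis" \
  --filter-pattern "" \
  --destination-arn "arn:aws:kinesis:us-east-1:123456789012:stream/my-devops-stream" \
  --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
```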
## AWS CloudTrail to Kinesis Data Streams
CloudTrail can be configured to send data directly to AWS S3 or AWS CloudWatch, but not to AWS Kinesis. Since CloudTrail can write directly to AWS CloudWatch, we can use the above configuration linking CloudWatch to Kinesis Data Streams to collect CloudTrail data.
If you are creating your CloudTrail from the console, an option to configure a CloudWatch linkage is available. Once turned on, CloudWatch pricing applies to your integration.
# Consuming Kinesis Data Streams
Kinesis Data Streams can send data to different consumers for analysis or storage. Consumers available include AWS Lambda, Fargate, and EC2 compute functions. You can also configure data streams to send directly to another Kinesis product like Kinesis Analytics. Using AWS compute functions and stored data, you can calculate metrics and store information. However, doing this requires significant manual work and foreknowledge of what to look for in data. Kinesis Analytics makes computation easier by applying user-built SQL or Apache apps to process your data.
Developers can also configure Kinesis to send data to third-party tools that remove the need to set up analytics. Coralogix provides several tools that can analyze different data to produce essential metrics and notifications for your platform. The [security platform](https://coralogix.com/platform/security/) can analyze AWS information [streaming from Kinesis](https://coralogix.com/integrations/aws-kinesis-with-lambda-function/) to give insights about breaches and retrospective analysis of your security weak points. Coralogix [log analytics](https://coralogix.com/platform/log-analytics/) system can take CloudWatch data and notify your team when your platform is not performing optimally.
# Summary
Cloud platforms use real-time analytics to ensure your system is functioning optimally and securely. AWS services can stream their data to AWS Kinesis, a real-time queue with an extensive range of capabilities that can be set to accommodate your platform's needs. Kinesis can be set up to send data to multiple endpoints if different analysis is needed on the same data. Consumers of the Kinesis stream will perform the analytics. These may be solutions built by your platform team on AWS compute functions like Lambda or Fargate, or semi-manual functions built using Kinesis Analytics tools. The most efficient way to perform analytics is to use a third-party tool designed for your needs, like Coralogix's [security](https://coralogix.com/platform/security/) or [log analytics](https://coralogix.com/platform/log-analytics/) platforms.
| jwallace |
782,658 | The 4 Best Color Sites on the Internet for web developers | If you are a designer/web developer/etc, then you must know the pain of choosing... | 13,969 | 2021-08-05T15:35:01 | https://dev.to/khazifire/the-4-best-color-sites-on-the-internet-for-web-developers-5nk | webdev, productivity, uiweekly, design | #### If you are a designer/web developer/etc, then you must know the pain of choosing colors.

>
## Color is a power which directly influences the soul. - Wassily Kandinsky
Many people, including myself, find it hard to choose colors. With endless options and variations, picking the perfect color to use in your design or presentation is at times overwhelming and may even feel impossible.

### But why should you struggle when you can use these 4 free online tools.
#### 1. [Coolors](https://coolors.co):
Coolors is my favorite; it allows you to create your own color palette or get inspired by thousands of stunning color schemes. It's just perfect.

#### 2. [My Color](https://mycolor.space):
We all have colors that we like; for example, I love dark tones. This tool generates color palettes based on your favorite color, so if you like blue, it will generate colors that go well alongside blue.

#### 3. [Color Hunt](https://colorhunt.co):
Choosing colors is like a hunt: either you find the color you like or you return empty-handed. This tool helps you discover the perfect palette for your next project and also inspires you with new color palettes each day.

#### 4. [ColorMind](http://colormind.io):
This tool uses deep learning to generate color schemes. For example, you can even upload any picture of your choice, and it will generate color palettes from that picture.

__________________________________________________________________
## Summary:
Choosing colors is hard, but it's not impossible; by using the tools listed above, you will be able to find amazing colors to use in your projects, presentations, designs, or anything that involves color.
Best of luck utilizing these free assets on your future visual communication projects or presentations.
If you enjoyed this post, follow me on [Twitter](https://twitter.com/khazifire/) for similar content. | khazifire |
782,663 | Top 5 Benefits of Outsourcing Software Development Ukraine | The article was originally published on Ascendix Tech's blog For the last 10 years, Ukraine has... | 0 | 2021-08-05T15:47:18 | https://dev.to/ascendixtech/top-5-benefits-of-outsourcing-software-development-ukraine-3jf | laerning, todayilearned, technology, startup | *The article was originally published on [Ascendix Tech's blog](https://ascendixtech.com/outsourcing-software-development-ukraine-benefits/)*
For the last 10 years, Ukraine has become one of the leading outsourcing software development centers for dozens of countries around the world.
What’s more, Ascendix has outsourced IT services to India, Mexico, Argentina, and China having different experiences with both positive and negative outcomes.
After a while, we have received a message from a Ukraine-based IT services provider inviting us to give them a try and consider Ukraine as a great offshore dev location.
This way, we had settled a new office in Ukraine 5 years ago and since then we’ve been growing our team with over 150 employees in 2021.
Due to the fact that this topic was vital for our business and it’s still essential for lots of startups and companies around the world, we want to share 5 objective reasons and benefits of outsourcing software development Ukraine without using rose-tinted glasses.
Hope you’ll enjoy reading this post and share your feedback in the comments.
# IT market in Ukraine: 2021 Overview
We want to NOTE that most statistics and analysis data was taken from an in-depth report [‘Ukraine: The Home of Great Devs. 2021 Tech Market Report’ prepared by Beetroot](https://beetroot.co/wp-content/uploads/sites/2/2021/03/Ukraine-the-Home-Of-Great-Devs-2021-_-Ebook-v3-2.0-3.pdf) and other sources.
Let’s start with the quick 5 facts about Ukraine as a prosperous IT services market cluster.
​
**Ukraine** **IT Services Market: 5 Quick Facts**
1. More than 200,000 active tech professionals, with over 36,000 new tech graduates, were recorded in 2021.
2. Over 35% of the active IT labor force have a 5-10 year technical background, while 14.2% of tech professionals have 10 years of working experience.
3. According to a PwC forecast, the Ukrainian IT services market volume is projected to reach $8.4 bln by 2025, along with 243,000 IT professionals.
4. The United States (>50%), Great Britain (\~30%), and Western European countries are among the top clients of the Ukrainian IT services market.
5. Over 85% of Ukrainian tech specialists have at least an intermediate English level, while >30% of them show an Advanced (C1) English proficiency level.

​
**Ukraine as a progressive R&D** **center in Europe**
The investment-attractive nature of the Ukrainian tech cluster has drawn attention and motivated many entrepreneurs, startups, and companies to give it a try and establish new R&D centers in Ukraine.
Let’s the best known and popular companies having their R&D offices in Ukraine:
* Gameloft
* Huawei
* Upwork
* Oracle
* Google
* Plarium
* Grammarly
* Siemens
* Samsung
* eBay
* And others.

​
**Top Investment-Intensive Tech Startups from Ukraine**
Aside from outsourcing, Ukraine has also become a great and competitive IT product-based center with dozens of startups and companies that have greatly succeeded in the world arena with millions of investments.
Among the most prosperous companies, we should mention the following ones:
* GitLab with $268M raised investments in 2019
* Grammarly with over $90M round of funding in 2019
* People.ai with $60M of raised funds
* Restream (a collaboration between Vinnytsya, Ukraine, and Austin, USA) with $50M of investments raised during the first 2020 round
* Reface raising $5.5M in 2020.

​
**How many hours ahead is Ukraine?**
One of the core reasons Ukraine has become an investment-attractive location for outsourcing software development is its convenient time difference with most European countries, and even the Americas, compared to Ukraine's competitors.
In terms of the Western European countries, they mostly have from 1 to 4 hours difference which allows collaborating efficiently for 4-7 business hours.
Considering North American countries, companies usually have a 6h time zone difference, which still leaves about 2 hours on average for performing strategic discussions, project progress updates, and even cross-development team interactions.
This means that Ukraine app developers work on projects while the US clients fall asleep and still stay active while Americans wake up in the morning.
​
**Competitive Ukraine outsourcing rates in 2021**
According to Beetroot’s report, the average Middle Software Developer's monthly salary in Ukraine is about $3770.
As a comparison, US middle technical specialists get paid $86,523/year which means $7210 per month.
So, if you hire software developers in Ukraine, you save about $3,440 per month on average compared to nearshoring resources in the US.
# Top 5 Benefits of Outsourcing Software Development Ukraine
**#1 Large talent pool of >200,000 tech specialists**
As mentioned earlier, the number of tech specialists has exceeded 200,000 in 2021 along with 36,000 tech graduates per year.
In turn, Ukraine had only 130,000 tech specialists in 2017, with only 23,000 IT graduates that year.
Below you can see the results of StackOverflow Developer Survey 2021 with the ranking showing the top programming languages and tools preferred by Ukrainian professional developers.

Below you can see the detailed percent-based chart of technologies mostly used by Ukraine tech community.

What does this all mean to your business?
Put simply, the current Ukrainian IT market growth rates suggest there are no significant risks in the nearest future, even considering COVID-19 and an unstable political environment.
​
**#2 The quality meets the price**
The fact that Ukraine provides a growing number of IT specialists at reasonable developer rates translates into a high price-quality ratio. It allows Ukrainian software development companies to compete with many European and US technology providers.
So, the average Ukraine outsourcing rates are $25-$50 while US companies charge $55-$85 providing the same technical background.
Moreover, Ukraine offers more cost-efficient rates compared to Poland, Belarus, and Czech Republic that make the list of the top East European IT services providers.
According to PayScale, the average monthly salary of a software developer in Poland is approximately $4,540. In turn, the same tech specialist in Ukraine gets paid $2,260 per month based on Indeed’s data.

These numbers mean that outsourcing software development Ukraine is a great opportunity to save funds thus accessing a large talent pool with a solid technical background and high final quality of delivery.
​
**#3 Strong English proficiency level**
Efficient and roadblock-free communication is among the top priorities for software development providers and companies to deliver fast, high-quality, and result-oriented technology solutions.
If you experience miscommunication issues and other challenges while performing calls and meetings with a technology provider, it will most likely negatively influence the project deadlines and final product quality.
In terms of software development outsourcing Ukraine, you can be sure of being roughly on the same page with 85% of development teams according to [The Portrait of IT Specialist 2020 report by Dou.ua](https://dou.ua/lenta/articles/portrait-2020/).
Product and Project Managers, Business Analysts, Solution Architects, and Team/Technical Leads are among the key tech positions that perform advanced English proficiency levels.
Senior PMs are the best communicators as the majority of them have Advanced+ English level.

This benefit means that most Ukrainian app developers will understand your project requirements correctly and can propose alternative technology solutions (if needed) without wasting your time and nerves.
​
**#4 Suitable geolocation and time zone difference**
As we’ve already stated, Ukraine is 1-3 hours ahead of most European countries, which makes it possible to share at least 7 business hours for collaboration.
For example, a company from London shares only 2 business hours with an outsourcing software development team based in Vietnam and only 1 hour with one in the Philippines.
In contrast, Ukraine provides 7 out of 9 shared business hours for the UK and 8 out of 9 for most other European companies.
Interestingly, the Ukrainian geographical location allows us to be the #1 nearshoring option for CEE countries.
What’s more, Ukrainian airlines offer (depending on COVID-19 restrictions) daily direct flights to most European destinations; a flight from Kyiv to Lisbon, for example, takes only 2-3 hours.
For example, you can save considerably on travel by booking Ryanair and Wizz Air low-cost flights, which offer round-trip direct routes from London, Berlin, Vienna, and other cities for $20-$60 per ticket.
Besides, citizens of 81 European and American countries don’t need a visa to visit Ukraine, and citizens of roughly 50 other countries only need to submit their passport details online to visit Ukraine for a short period of time.
All this means that a convenient location and a suitable time zone difference are among the core reasons many startups and companies outsource development to Ukraine.
​
**#5 Rapid Annual Growth of the Ukrainian IT Industry**
We’ve shared the current Ukrainian IT services market growth rates, but now we want to discuss more in-depth statistics, so you get a better understanding.
So, during the last 5 years, Ukraine has significantly grown and invested in increasing the volume of IT services export.
Let’s review these 5 years in greater detail:
​
**5-Year Ukrainian IT Services Market Overview**

|Year|Number of tech specialists|Market value, billions|
|:-|:-|:-|
|2016|153,000|$1.98|
|2017|172,000|$2.49|
|2018|184,000|$3.2|
|2019|192,000|$4.17|
|2020|205,000|$5.03|
|2025 (estimated)|242,000|$8.4|
​
​
**Has COVID-19 impacted Ukrainian IT services export?**
The pandemic has affected the IT industry in Ukraine only positively, as software development offers flexible remote work opportunities.
To illustrate this, we want to show the Ukrainian IT job market dynamics from Jan 2020 to Jan 2021 according to the DOU report (one of the top IT blogs and communities in Ukraine).
**NOTE:** below you can see the accumulated number of tech vacancies by month.

As you can see, the Ukrainian IT jobs market showed a solid decrease from Jan to Apr 2020 due to multiple factors, such as the winter holidays and the emerging pandemic.
However, the market returned to its previous level in July 2020, and since then we can see only a significant increase, with a short pause for the winter holidays in Dec 2020.
Remarkably, Ukraine set an all-time record for the number of tech-related vacancies in Jan 2021, with over 8,132 open positions.
Moreover, recent years have been the most investment-intensive period for the Ukrainian IT market.
Specifically, GitLab, Restream, Reface, Grammarly, People.ai, and other tech startups have raised over $500 million in total since 2019.
**Ultimately, why is outsourcing software development to Ukraine a progressive and profitable choice for so many companies?**
So, why outsource to Ukraine:
* Decrease labor costs through cost-efficient rates
* Approach an enormous talent pool with over 200,000 tech specialists
* Get high-quality delivery with strong technical background
* Dodge plenty of legal restrictions
* Work within a convenient time zone
* Benefit from minimal cultural differences.

# Conclusion
We hope our brief list of reasons and benefits of outsourcing software development to Ukraine will come in handy when you consider IT outsourcing and choose an ideal offshore dev location.
Always study the pros and cons carefully to make a well-considered decision as it will influence the final product quality, development costs, and future success of your business idea.
If you’re interested in getting more detailed information on outsourcing software development Ukraine with infographics, numbers, charts, and tables, check the full blog article **[Top 5 Benefits of Outsourcing Software Development Ukraine](https://ascendixtech.com/outsourcing-software-development-ukraine-benefits/)**. | ascendixtech |
782,673 | File upload in angular 12 | e Avni Tech | Check my latest blog on File upload in angular 12... | 0 | 2021-08-05T16:13:28 | https://dev.to/eavnitech/file-upload-in-angular-12-e-avni-tech-3inj | angular | Check my latest blog on File upload in angular 12 https://eavnitech.com/blog/file-upload-in-angular-12/
| eavnitech |
782,710 | Why is everyone so excited about PolyWork? First impressions aren't good! | Hi, just wondered what I was missing (as I am sure there is something!). Just signed up to PolyWork... | 0 | 2021-08-05T17:22:14 | https://dev.to/grahamthedev/why-is-everyone-so-excited-about-polywork-first-impressions-aren-t-good-37d1 | discuss, healthydebate, webdev | Hi, just wondered what I was missing (as I am sure there is something!).
Just signed up to [PolyWork](https://www.polywork.com/) with a VIP code and I just don't get it?
I spent ages scrolling through the never-ending list of tags to add to my profile (which had repeated tags all over the place). Weird, but I get the concept.
Anyway got it all set up and...now what? I added a link to a post (the editor is not great...and you can't add `alt` descriptions so I had to improvise...so that immediately puts a big nail in the coffin for me anyway) - great, now what?
Headed over to the "multiverse" - nothing really makes me want to follow anyone on there as it is just a list of names, went to "space station"...sure maybe I will contact a couple of investors but yet again it seems very basic and limited.
I have yet to see a feed with articles or anything like that to give me ideas of who I want to follow.
The whole thing is slow and clunky, having to press a couple of times to get pages to load.
So please, what am I missing...is it all marketing hype with the "invite codes" rubbish and the pretty graphics, or do they actually have something special and I am just not getting it (and if that is the case, could someone tell me how I am *meant* to be using it!).
Thanks in advance.
| grahamthedev |
782,897 | 🧩 Spring Cloud Messaging With AWS and LocalStack | 📚 Learn how to simulate #AWS services locally using LocalStack with a #Spring Boot... | 0 | 2021-08-05T18:04:49 | https://dev.to/robertinoc_dev/spring-cloud-messaging-with-aws-and-localstack-270c | spring, java, aws | ### 📚 Learn how to simulate #AWS services locally using LocalStack with a #Spring Boot application.
### <br>
**TL;DR:** This article demonstrates the use of LocalStack and how it can simulate many of the AWS services locally. We will use Spring Cloud Messaging to create a publisher-subscriber sample application. We will use Amazon SNS and SQS to do that.
The sample app can be found [here](https://github.com/maniish-jaiin/spring-cloud-messaging-sample).
## Spring Cloud Messaging With AWS and LocalStack
### Introduction
With an ever-growing demand for cloud services, Spring provides amazing support to integrate with Cloud providers and relevant services. [Spring Cloud for Amazon Web Services](https://spring.io/projects/spring-cloud-aws) is one such project that makes it easy to integrate with AWS services using familiar Spring APIs.
In this article, we will look into a simple application that acts as a message producer and a consumer using Amazon SNS and SQS. On top of that, we will not create an AWS account or use AWS services directly from AWS. We will instead use LocalStack, which will allow us to create AWS resources locally.
The sample app can be found [here](https://github.com/maniish-jaiin/spring-cloud-messaging-sample).
### Pre-requisites:
1. Basic knowledge of AWS, [AWS CLI](https://github.com/aws/aws-cli), and related services like Amazon SQS.
2. Basic knowledge of [Java 11](https://docs.oracle.com/en/java/javase/11/docs/api/index.html) and Spring Boot `2.4.7`.
3. [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) for the setup.
[Read more...](https://auth0.com/blog/spring-cloud-messaging-with-aws-and-localstack/?utm_source=content_synd&utm_medium=sc&utm_campaign=aws) | robertinoc_dev |
783,210 | JS Array Methods ! 🐱🏍 | WHAT IS A JS ARRAY ? The JavaScript Array class is a global object that is used in the... | 0 | 2021-08-09T07:08:06 | https://dev.to/mayank0508/js-array-methods-1pk1 | javascript, webdev, codenewbie, programming | ##WHAT IS A JS ARRAY ?
The JavaScript Array class is a global object that is used in the construction of arrays, which are high-level, list-like objects.
### *Arrays provide a lot of methods to make things easier.*
## We will be talking about 4 array methods:
### *1. map*
### *2. filter*
### *3. sort*
### *4. reduce*

# 1) Array.prototype.map()
The basic reason to use the map() method is to transform given data. The map() method creates a new array populated with the results of calling a provided function on every element of the calling array. It returns the same amount of data as was passed in, but in a transformed form.
```javascript
const inventors = [
{ first: 'Albert', last: 'Einstein', year: 1879, passed: 1955 },
{ first: 'Isaac', last: 'Newton', year: 1643, passed: 1727 },
{ first: 'Galileo', last: 'Galilei', year: 1564, passed: 1642 },
{ first: 'Marie', last: 'Curie', year: 1867, passed: 1934 },
{ first: 'Johannes', last: 'Kepler', year: 1571, passed: 1630 },
  { first: 'Nicolaus', last: 'Copernicus', year: 1473, passed: 1543 }
];
```
```javascript
const fullName = inventors.map(
inventor => `${inventor.first} ${inventor.last}`
);
console.log(fullName); // it returns the full name of the inventors using the map method
```
# 2) Array.prototype.filter()
The basic reason to use the filter() method is to filter given data. The filter() method creates a new array with all elements that pass the test implemented by the provided function.
It returns a filtered array, which might not include every element you passed in.
```javascript
const inventors = [
{ first: 'Albert', last: 'Einstein', year: 1879, passed: 1955 },
{ first: 'Isaac', last: 'Newton', year: 1643, passed: 1727 },
{ first: 'Galileo', last: 'Galilei', year: 1564, passed: 1642 },
{ first: 'Marie', last: 'Curie', year: 1867, passed: 1934 },
{ first: 'Johannes', last: 'Kepler', year: 1571, passed: 1630 },
  { first: 'Nicolaus', last: 'Copernicus', year: 1473, passed: 1543 }
];
```
```javascript
const filter = inventors.filter(
inventor => inventor.year >= 1500 && inventor.year <= 1599
);
console.table(filter); // filter helps us here to filter out the list of inventors year dates
```
# 3) Array.prototype.sort()
The basic reason to use the sort() method is to order given data. The sort() method sorts the elements of an array in place and returns the sorted array. The default sort order is ascending. It returns the same amount of data that was passed in!
```javascript
const inventors = [
{ first: 'Albert', last: 'Einstein', year: 1879, passed: 1955 },
{ first: 'Isaac', last: 'Newton', year: 1643, passed: 1727 },
{ first: 'Galileo', last: 'Galilei', year: 1564, passed: 1642 },
{ first: 'Marie', last: 'Curie', year: 1867, passed: 1934 },
{ first: 'Johannes', last: 'Kepler', year: 1571, passed: 1630 },
  { first: 'Nicolaus', last: 'Copernicus', year: 1473, passed: 1543 }
];
```
```javascript
const sorted = inventors.sort((a, b) => (a.passed > b.passed ? 1 : -1));
console.table(sorted); // this method helps with the sorting of the results/arrays
```
# 4) Array.prototype.reduce()
The basic reason to use the reduce() method is to condense given data into a single value. The reduce() method executes a reducer function (that you provide) on each element of the array, resulting in a single output value.
```javascript
const inventors = [
{ first: 'Albert', last: 'Einstein', year: 1879, passed: 1955 },
{ first: 'Isaac', last: 'Newton', year: 1643, passed: 1727 },
{ first: 'Galileo', last: 'Galilei', year: 1564, passed: 1642 },
{ first: 'Marie', last: 'Curie', year: 1867, passed: 1934 },
{ first: 'Johannes', last: 'Kepler', year: 1571, passed: 1630 },
  { first: 'Nicolaus', last: 'Copernicus', year: 1473, passed: 1543 }
];
```
```javascript
const total = inventors.reduce((total, inventor) => {
return total + (inventor.passed - inventor.year);
}, 0); // this method helps us to calculate the total number of years that were lived by the inventors using the reduce method
console.log(total);
```
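Since each of these methods returns a new array (or a single value), they chain together nicely. As a quick sketch combining the three on a small inventors array, assuming we want the total years lived by inventors born in the 1500s:

```javascript
// Chaining filter, map, and reduce: total years lived by 1500s-born inventors
const inventors = [
  { first: 'Galileo', last: 'Galilei', year: 1564, passed: 1642 },
  { first: 'Johannes', last: 'Kepler', year: 1571, passed: 1630 },
  { first: 'Albert', last: 'Einstein', year: 1879, passed: 1955 }
];

const totalYears = inventors
  .filter(inventor => inventor.year >= 1500 && inventor.year < 1600) // keep 1500s births
  .map(inventor => inventor.passed - inventor.year)                  // years lived
  .reduce((total, years) => total + years, 0);                       // sum them up

console.log(totalYears); // 78 + 59 = 137
```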
*Some more JS array methods are:*

# That's it!

*This blog was inspired by Wes Bos's JavaScript30 course*
# BONUS MEME

# HAPPY CODING 🚀
| mayank0508 |
783,238 | Why you should NOT HATE Java! | Everyone hates Java, even I did (once). The memes, the videos, I mean literally everywhere there is... | 0 | 2021-08-06T06:40:17 | https://dev.to/byteslash/why-you-should-not-hate-java-186n | computerscience, programming, java, coding | Everyone hates Java, even I did (once).
The memes, the videos, I mean literally everywhere there is spread one thing - **java's popularity is declining...**
I wondered why people hated it so much, to a point where people are even ready to buy merch out of it :P
So I decided on **trying out java myself...** <3

## So why do people hate Java?

Java has been popular for decades. There's no such proper reason to hate Java, rather there are some things that prick many developers.
1. **Java is VERBOSE.** <br/>
Verbosity is both a good and a not-so-good feature. It makes debugging easier and more reliable, but at the same time you have to write a lot of code. Many beginner developers complain about this: why do we write 1 line in Python but 7 lines in Java just to print `Hello, World!` on screen? But they don't consider what each language is designed for!
2. **Forced OOP** <br/>
Java is a pure object-oriented programming language and was designed that way from the very start. Even if you want to execute a small program, you must wrap it in class-object form, which doesn't make sense to many beginners or devs coming from other languages.
3. **Memory Hog Language** <br/>
By its very design, Java is a memory hog. You simply cannot make a memory-efficient program that handles enormous data while still preserving good object-oriented abstraction in your program. This extra memory consumption doesn't matter if you are making a small-scale application. But imagine making a video editing application that has to handle gigabytes of data in real-time! That's just insane...
## My journey with Java

When I first encountered Java, I found the language ***so cringe to learn***. I too had that mentality of "why write long code, just use Python to make life simpler". But I was wrong; I didn't even know the differences between those languages and was simply comparing them.
Languages are simply tools that help you craft your desired application. So it doesn't matter which language you're using, as long as you build super-efficient apps that bring you more users at the end of the day :D
I learned Java and saw its enormous applications! We can literally build highly scalable, enterprise-level apps using Java. The Netflix you watch uses Java to serve content to you ASAP.
## Instead of hating Java, try it out once
I'm sure you won't regret learning it once you build some cool projects with it. You can build an Android app or a desktop app for your Windows PC. Are you a web geek? Try out Spring Boot and make scalable backend apps with it!
Just have an end goal in mind and start your development journey! You will soon stop comparing languages and use them wisely :D

> I hope you have got some value from this article and if so, don't forget to share it with all your friends and colleagues. Because sharing is caring :P | sahilpabale |
783,472 | What's the difference between a Developer and a CTO? | How would you explain the difference to your mom? | 0 | 2021-08-06T10:17:29 | https://dev.to/vsaw/what-s-the-difference-between-a-developer-and-a-cto-pe | discuss, management, career | ---
title: What's the difference between a Developer and a CTO?
published: true
description: How would you explain the difference to your mom?
tags: discuss, management, career
---
In my opinion it can be summarised like this:
Dev: Technology is key!
CTO: Technology is a tool!
Don't get me wrong, it's important to be good at your craft, but in the end you will not be measured by how clean your code is, but by whether you achieve your company's goals!
Or how would you describe the difference? | vsaw |
783,481 | Yet another Tetris clone with React | One more thing I wanted to add to the title was “and HTML elements” or “without Canvas” but I did not... | 0 | 2021-08-08T05:56:41 | https://dev.to/efearas/yet-another-tetris-clone-with-react-ejn | react, gamedev | One more thing I wanted to add to the title was “and HTML elements” or “without Canvas” but I did not as It would make the title longer than the introduction. Before I started this small fun project I expected that using HTML elements would be the thing but it turned out that event handlers and react state was the thing.
This will be an article of tips, and maybe tricks, for the seasoned React developer who wants to build a simple game while staying in React territory. This is not a React gaming tutorial; if it were, the only thing I would say is “don’t! don’t develop a game with React!”.
On the other hand, developing a game in React definitely made me a better React developer and I strongly advise you to do it to improve your React skills if you have been a forms/lists/fetch developer since you started React development.
Before going over the tips, I would like to mention that all the code is at https://github.com/efearas/yet-another-tetris-clone; feel free to use it in whatever way you want. And if you want to give it a try: https://tetris-clone.s3.us-west-2.amazonaws.com/index.html
### Tip 1: Game timer
While playing, you may think that you are in control because you are holding the controller, but you are not; it is the game timer that is in charge of the whole game and paints the next scene you are about to experience.
The problem with the timer (setInterval, setTimeout), which is really just an event (other event handlers have the same problem), is that it does not have access to the latest state. The state it sees is whatever state was present when the handler was declared.
To overcome, or at least work around, this problem I created a state variable called timer and a useEffect function that watches this state variable and triggers a setTimeout to create a game loop.
```javascript
const [timer, setTimer] = useState(0);
useEffect(
() => {
setTimer(1)
}, []
)
useEffect(
() => {
if (timer > 0 && gameRunning) {
tick();
setTimeout(() => {
setTimer(timer + 1);
}, GAME_INTERVAL_MS);
}
}, [timer]
)
```
### Tip 2: Handling key and swipe events
If you are updating state while handling an event, things get tricky. Event handlers normally use the state from when they were first declared, not from when they are executed. Thankfully, there is an alternative version of the "setState" function that takes a function as a parameter and feeds that function the current state. Please see the useKeyDown hook for details.
```javascript
const handleKeyDown = (e) => {
setShapes(
shapes => {
let movingBlock = Object.assign(Object.create(Object.getPrototypeOf(shapes.movingBlock)), shapes.movingBlock)
switch (e.keyCode) {
case 39://right
movingBlock.moveRight(shapes.frontierAndStoppedBlocks);
break;
case 37://left
movingBlock.moveLeft(shapes.frontierAndStoppedBlocks);
break;
case 40://down
movingBlock.moveAllWayDown(shapes.frontierAndStoppedBlocks);
break;
case 38://up
movingBlock.rotate(shapes.frontierAndStoppedBlocks);
break;
}
let currentShapes = { ...shapes }
currentShapes.movingBlock = movingBlock;
return currentShapes;
}
)
}
```
To handle swipe events on mobile, I created the useSwipeEvents hook, which simply triggers the keydown events already implemented in useKeyDown.
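Mapping a swipe gesture to a key code boils down to comparing the horizontal and vertical deltas of the touch. A minimal sketch of that idea, reduced to a hypothetical pure helper (not the repo's exact code), might look like this:

```javascript
// Hypothetical helper: map a swipe delta to the keyCode that useKeyDown expects
function swipeToKeyCode(deltaX, deltaY) {
  // The dominant axis decides the direction of the simulated key press
  if (Math.abs(deltaX) > Math.abs(deltaY)) {
    return deltaX > 0 ? 39 /* right */ : 37 /* left */;
  }
  return deltaY > 0 ? 40 /* down */ : 38 /* up */;
}

// A mostly-horizontal swipe to the right becomes an arrow-right key press
console.log(swipeToKeyCode(50, 10)); // 39
```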
### Tip 3: Drawing shapes
All Tetris shapes consist of 4 squares positioned differently, so what I did was position 4 divs based on the shape type. There is a base class called Shape, and the real shapes are derived from this class.
The points property of the Shape class stores the points as an array of x and y values.
### Tip 4: Moving shapes gracefully
I just applied the transition and transform CSS properties and the browser took it from there.
Do not worry about the calc and min CSS functions, as they handle the responsive layout. If you are targeting desktop or mobile only, you will probably not need them.
```javascript
const ShapeRender = ({ x, y, color, marginTop, transitionDuration }) => {
return (
<div style={{
backgroundColor: color,
width: 'min(10vw,50px)',
height: 'min(10vw,50px)',
position: 'fixed',
transition: transitionDuration ? transitionDuration : null,
zIndex: 1,
transform: `translate(min(calc(${x}*10vw),${x * 50}px), min(calc(${y}*10vw + ${marginTop}), calc(${y * 50}px + ${marginTop})))`,
}} ></div>
)
}
```
### Tip 5: Flashing animation
When a row of blocks without a gap collapses (the aim of the game), a flashing animation plays on the collapsing rows. I used keyframes and styled-components to mimic lightning.
```javascript
const Animation = keyframes`
0% { opacity: 0; }
30% { background-color: yellow; }
50% { background-color: orange; }
70% { opacity: 0.7; }
100% { opacity: 0; }
`;
```
### Tip 6: Rotating shapes
There are many different approaches involving matrices. Please refer to https://stackoverflow.com/questions/233850/tetris-piece-rotation-algorithm for a thorough discussion. I chose Ferit’s approach: first transpose the matrix representing the shape, then reverse the order of the columns to rotate the shape clockwise.
The relevant code is in the rotate method of the Shape base class. Since the square does not need to be rotated, the rotate method is overridden in the derived Square class.
```javascript
rotate(frontier) {
this.rotationMatrix = reverseColumnsOfAMatrix(transpose(this.rotationMatrix));
let leftMostX = Math.min(...this.points.map(([pointX, pointY]) => pointX))
let topMostY = Math.min(...this.points.map(([pointX, pointY]) => pointY))
let newPointsArray = [];
this.rotationMatrix.map(
(row, rowIndex) =>
row.map(
(col, colIndex) => {
if (col === 1) {
newPointsArray.push([leftMostX + colIndex, topMostY + rowIndex])
}
}
)
);
if (this.isPointsInsideTheFrontier(newPointsArray, frontier))
return this;
this.points = newPointsArray;
return this;
}
```
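The transpose and reverseColumnsOfAMatrix helpers are not shown above. A possible implementation (my own sketch, not necessarily the repo's exact code) is:

```javascript
// Possible implementations of the matrix helpers used by rotate()
const transpose = matrix =>
  matrix[0].map((_, colIndex) => matrix.map(row => row[colIndex]));

// Reversing each row reverses the order of the columns
const reverseColumnsOfAMatrix = matrix =>
  matrix.map(row => [...row].reverse());

// Rotating an L-shaped piece clockwise: transpose, then reverse the columns
const shape = [
  [1, 0],
  [1, 0],
  [1, 1]
];
const rotated = reverseColumnsOfAMatrix(transpose(shape));
console.log(rotated); // [ [ 1, 1, 1 ], [ 1, 0, 0 ] ]
```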
### Closing Notes
As Kent C. Dodds says, "I think too many people go from "passing props" -> "context" too quickly." (https://kentcdodds.com/blog/application-state-management-with-react), so I stayed away from using Context as much as I could, and most of the application state is at the component level or passed via props. Avoid over-engineering and enjoy simplicity!
| efearas |
783,493 | BECOMING A DEV | Those who have a 'why' to live, can bear with almost any 'how'.” Everyday routine in my mind. | 0 | 2021-08-06T11:01:13 | https://dev.to/rainbowwzrd/becoming-a-dev-487d | javascript, webdev, blockchain | Those who have a 'why' to live, can bear with almost any 'how'.”
Everyday routine in my mind. | rainbowwzrd |
783,524 | How to See Which Branch Your Teammate is on in Webstorm | “What branch are you on?” is often the first question you ask when a teammate says “The tests aren't... | 0 | 2021-08-09T12:44:29 | https://dev.to/gitlive/how-to-see-which-branch-your-teammate-is-on-in-webstorm-32c8 | webdev, productivity, programming, tutorial | *“What branch are you on?”* is often the first question you ask when a teammate says *“The tests aren't passing”* or *“The build failed”*. Here’s how you can get an answer to that question without even needing to ask.
In Webstorm, open up the Team Window by clicking GitLive on the bottom tool window bar.

This will show a list of your teammates, if they are online (or away) and the issue they are currently working on. Locate the teammate you are interested in and click the arrow on the left-hand side of their avatar.

Now you will see all the repositories they have cloned including the name of their current branch!

**NOTE**: If an issue has been connected to the branch, you’ll see the issue name instead. If that's the case, just hover your cursor over the issue name and a tooltip will appear showing you the branch name (and even the commit hash)

Don't have the GitLive extension installed yet? You can find it [here](https://plugins.jetbrains.com/plugin/11955-gitlive) and, if you need help setting it up, follow the instructions in [GitLive docs](https://docs.git.live/docs/installation/). | fredgl |
783,698 | Abstraction in java | မင်္ဂလာပါဗျ။ ကျနော်ဒီနေ့ပြောပြပေးသွားမှာက abstraction အကြောင်းပါ။ သူကတော့ အခြား class တွေလိုမျိုး... | 0 | 2021-08-06T14:14:19 | https://dev.to/ucsysg/abstraction-in-java-1ab1 | java | မင်္ဂလာပါဗျ။ ကျနော်ဒီနေ့ပြောပြပေးသွားမှာက abstraction အကြောင်းပါ။ သူကတော့ အခြား class တွေလိုမျိုး object တခုအနေနဲ့အသုံးပြုလို့ရမှာ မဟုတ်ပါဘူး။ ဒါပေမယ့် သူ့ကို အခြားသော class တွေကနေ extends လုပ်ပြီးအသုံးပြုနိုင်မှာပါ။ example code ကိုကြည့်လိုက်ရအောင်။
```java
abstract class Animal { //create abstract class
}
class Cat extends Animal{ //create a class with Animal extended
}
public class App {
public static void main(String[] args) throws Exception {
System.out.println("hello java");
}
}
```
In the code above, the regular class **Cat** extends the abstract class **Animal**, and we will use the **Animal** class through it.
```java
abstract class Animal { //create abstract class
public void sleep(){
System.out.println("zzz");
}
}
class Cat extends Animal{ //create a class with Animal extended
}
public class App {
public static void main(String[] args) throws Exception {
Cat arKyel = new Cat();
arKyel.sleep();
}
}
```
အပေါ်က code မှာဆိုလို့ရှိရင် **Animal** class ထဲမှာ ရေးထားတဲ့ sleep ဆိုတဲ့ function ကို **Cat** ဆိုတဲ့ class ကိုသုံးကာ object တခုဆောက်ပြီးပြန်ခေါ်လို့ရပါတယ်။ အဲ့ဒီမှာ မေးခွန်းတခုရှိမယ် ထင်ပါတယ်။ Animal ကိုသုံးပြီး object ဘာလို့မဆောက်တာလဲပေါ့နော်။ ကျနော်တို့ အပေါ်မှာ ပြောထားသလိုမျိုး abstraction class က ပုံမှန် class တွေလိုမျိုး `new Animal();` ဆိုပြီး object create လုပ်လို့မရပါဘူး။ အဲ့အတွက်ကြောင့် Cat class ကနေတဆင့် **arKyel** (sry it's my cat name :)))ဆိုတဲ့ object ကို create ပြီး **sleep** functionကိုခေါ်သုံးတာပါ။ function မှမဟုတ်ပါဘူး အခြားသော abstraction class ထဲမှာရှိတဲ့ ဟာတွေအကုန်လုံးကို **Cat** class ကနေတဆင့် ခေါ်သုံးလို့ရမှာပါ။
abstraction class အကြောင်းပြီးပြီဆိုတော့ abstraction function အကြောင်းသွားရအောင်။ abstraction function ဆိုတာက function ရှေ့မှာ abstract ထည့်လိုက်ရုံပါပဲ။
```java
abstract class Animal { //create abstract class
public void sleep(){
System.out.println("zzz");
}
abstract public void makeSound();
}
class Cat extends Animal{
}
public class App {
public static void main(String[] args) throws Exception {
Cat arKyel = new Cat();
arKyel.sleep();
}
}
```
As in the code above, putting abstract in front of the **makeSound()** function makes it an abstract function. But if you run this code, you will get an error. That's because the abstract **makeSound()** function doesn't know what to do when it is called. So we still have to write the code for the **makeSound()** function. But unlike other functions, we can't just write `abstract public void makeSound(){\\codes}`; we have to implement it in the classes that extend the abstract class.
```java
abstract class Animal { //create abstract class
abstract public void makeSound();
}
class Cat extends Animal{
@Override
public void makeSound() {
System.out.println("meow meow");
}
}
public class App {
public static void main(String[] args) throws Exception {
Cat arKyel = new Cat();
arKyel.makeSound(); //call makeSound function through a object
}
}
```
Looking at the code above, you'll see that the **makeSound** function is implemented again in the **Cat** class. Because of that, it can be called through the object. But you might have a question here: the result would be the same if we didn't declare **makeSound** as abstract in the **Animal** class and instead created it directly as a normal function in the **Cat** class. So why use an abstract function instead of a normal one? Take a look at the **Animal** class we created. Every animal makes a sound, but depending on the type (eg: cat, dog), one animal won't sound the same as another. The truth that every animal makes a sound is what we express as the abstract function. To make this clearer, let's create a dog.
```java
abstract class Animal { //create abstract class
abstract public void makeSound();
}
class Cat extends Animal{
@Override
public void makeSound() {
System.out.println("meow meow");
}
}
class Dog extends Animal{
@Override
public void makeSound() {
System.out.println("woof woof");
}
}
public class App {
public static void main(String[] args) throws Exception {
Cat arKyel = new Cat();
arKyel.makeSound(); //call makeSound function through a object
Dog waTote = new Dog();
waTote.makeSound(); //call makeSound function through a object
}
}
```
As usual, we create the **Dog** class with `extends Animal` and implement the abstract **makeSound** function inside the class. Then we build an object using the **Dog** class and can call the abstract **makeSound** function through it.
If there's anything unclear in what I've explained, or anything you'd like to add, feel free to write it in the discussion. If anything is wrong, you can tell me too. arigatou :)).
.zuki
| ucsysg |
789,855 | AWS Lambda using CDK | Infrastructure as code has become a go-to process to automatically provision and manage cloud... | 0 | 2021-08-12T17:20:38 | https://dev.to/rahulmlokurte/aws-lambda-using-cdk-173e | aws, cdk, lambda, serverless |
Infrastructure as code has become a go-to process to automatically provision and manage cloud resources. AWS provides two options for infrastructure as code.
1. AWS CloudFormation
2. AWS Cloud Development Kit
With CloudFormation, we have to write a lot of YAML templates or JSON files. As AWS adds more services, we have to add more files, and it becomes difficult to work with so many of them. YAML and JSON are data serialization formats, not actual programming languages. The AWS CDK overcomes these limitations of CloudFormation by enabling code reuse and proper testing.
AWS CDK is a framework that allows developers to use familiar programming languages to define and provision AWS cloud infrastructure. The CDK provides **_constructs_**, cloud components that cover many AWS services and features and help us define our application infrastructure at a high level.
We will create a Lambda function and the infrastructure around it using the AWS CDK.
Create a new directory on your system.
```sh
mkdir cdk-greetapp && cd cdk-greetapp
```
We will use cdk init to create a new Javascript CDK project:
```sh
cdk init --language javascript
```
The `cdk init` command creates a number of files and folders inside the **_cdk-greetapp_** directory to help us organize the source code for our AWS CDK app.
We can list the stacks in our app by running the command below. It will show `CdkGreetappStack`.
```sh
$ cdk ls
CdkGreetappStack
```
Let us install the AWS Lambda construct library.
```sh
npm install @aws-cdk/aws-lambda
```
Edit the file `lib/cdk-greetapp-stack.js` to create an AWS Lambda resource as shown below.
```javascript
const cdk = require("@aws-cdk/core");
const lambda = require("@aws-cdk/aws-lambda");
class CdkGreetappStack extends cdk.Stack {
/**
*
* @param {cdk.Construct} scope
* @param {string} id
* @param {cdk.StackProps=} props
*/
constructor(scope, id, props) {
super(scope, id, props);
// defines an AWS Lambda resource
const greet = new lambda.Function(this, "GreetHandler", {
runtime: lambda.Runtime.NODEJS_14_X,
code: lambda.Code.fromAsset("lambda"),
handler: "greet.handler",
});
}
}
module.exports = { CdkGreetappStack };
```
- The Lambda function uses the Node.js 14.x runtime.
- The handler code is loaded from the directory named **_lambda_**, where we will add the Lambda code.
- The name of the handler is `greet.handler`, where **_greet_** is the name of the file and **_handler_** is the exported function name.
Let's create a directory named **_lambda_** in the root folder and add a file **_greet.js_**.
```sh
mkdir lambda
cd lambda
touch greet.js
```
Add the lambda code to **_greet.js_**
```javascript
exports.handler = async function (event) {
console.log("request:", JSON.stringify(event, undefined, 2));
let response = {
statusCode: 200,
body: `Hello ${event.path}. Welcome to CDK!`,
};
return response;
};
```
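Before deploying, we can smoke-test the handler logic with a plain Node script. The snippet below is just a sketch (the file name `smoke-test.js` and the sample event are made up), with the handler inlined so it runs standalone, mirroring **_greet.js_**.

```javascript
// smoke-test.js: hypothetical local check of the handler logic
// (handler body inlined here so the snippet runs standalone)
const handler = async (event) => ({
  statusCode: 200,
  body: `Hello ${event.path}. Welcome to CDK!`,
});

// Invoke with a minimal API-Gateway-style event
handler({ path: "/greet" }).then((res) => {
  console.log(res.statusCode, res.body);
});
```

Running `node smoke-test.js` prints the status code and greeting without deploying anything.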
Before deploying the AWS resources, we can preview what will be created using the command below.
```sh
cdk diff
```

**NOTE**: If we have multiple profiles set up on our system, we need to tell CDK which profile to use. This can be done by adding the key-value pair below to **_cdk.json_**, which was generated when we created the CDK project.
```json
"profile": "<YOUR_PROFILE_NAME>"
```
Now, once we are happy with the resources to be created, we can deploy them using the command below.
```sh
cdk deploy
```
Let us open the AWS Lambda console

Select Amazon API Gateway AWS Proxy from the Event template list.

Click on **Test**, and we can see that we get the proper response, as shown below.

## Conclusion
We saw how to create a Lambda function and its surrounding infrastructure using the AWS Cloud Development Kit, along with the CDK commands for initializing projects and deploying resources to AWS. The code repository link is [here](https://github.com/rahulmlokurte/aws-usage/tree/main/aws-cdk/cdk-greetapp)
If you want hands-on practice, I have created a YouTube video. You can check it out [here](https://www.youtube.com/watch?v=kALoDWqFIX4&t=3s)
| rahulmlokurte |
783,715 | Deploy Strapi on AWS/GCP/Digital Ocean using Porter | Intro Porter is a Platform as a Service (PaaS) that runs in your own cloud provider. It... | 0 | 2021-08-06T16:39:42 | https://blog.porter.run/deploy-strapi-on-aws-gcp-digital-ocean-using-porter/ | javascript, cms, strapi, node | # Intro
[Porter](https://porter.run) is a Platform as a Service (PaaS) that runs in your own cloud provider. It brings the convenience of platforms like Heroku, Netlify, and Vercel into a cloud provider of your choice. Under the hood, Porter runs on top of a Kubernetes cluster but abstracts away its complexity to the extent that you don't even have to know that it's running on Kubernetes.

This is a quick guide on how to deploy [Strapi](https://strapi.io) to a Kubernetes cluster in AWS/GCP/DO using Porter. This guide uses PostgresDB by default - to customize your database settings, modify the files in `/app/config/env/production` in the [example repository](https://github.com/porter-dev/strapi).
# Quick Deploy
1. Create an account on [Porter](https://dashboard.getporter.dev).
2. [One-click provision a Kubernetes cluster](https://docs.porter.run/docs/getting-started-with-porter-on-aws) in a cloud provider of your choice, or [connect an existing cluster](https://docs.porter.run/docs/cli-documentation#connecting-to-an-existing-cluster) if you have one already.
3. Fork [this repository](https://github.com/porter-dev/strapi).
4. From the [Launch tab](https://dashboard.getporter.dev/launch), navigate to **Web Service > Deploy from Git Repository**. Then select the forked repository and `Dockerfile` in the root directory.
5. Configure the port to `1337` and set the environment variable `NODE_ENV=production`. Depending on your database settings, you might want to add more environment variables. More on this in the section below.
6. Set the assigned resources to Strapi's [recommended settings](https://strapi.io/documentation/developer-docs/latest/setup-deployment-guides/deployment.html#general-guidelines) (i.e. 2048Mi RAM, 1000 CPU), then hit deploy!

## Deploying PostgresDB
1. A Strapi instance deployed through Porter connects to a PostgresDB by default. You can connect it to any external database, but it is also possible to connect to a database deployed on Porter. Follow [this guide to deploy a PostgresDB instance to your cluster in one click](https://docs.getporter.dev/docs/postgresdb).
2. After the database has been deployed, navigate to the **Environment Variables** tab of your deployed Strapi instance. Configure the following environment variables:
```
NODE_ENV=production
DATABASE_HOST=
DATABASE_PORT=5432
DATABASE_NAME=
DATABASE_USERNAME=
DATABASE_PASSWORD=
```
To determine what the correct environment variables are in order to connect to the deployed database, [see this guide](https://docs.getporter.dev/docs/postgresdb#connecting-to-the-database).
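For reference, a production database config that consumes these variables might look like the sketch below (modeled on Strapi v3's bookshelf/Postgres setup; check `app/config/env/production` in the example repository for the exact file):

```javascript
// app/config/env/production/database.js (sketch, Strapi v3 style)
module.exports = ({ env }) => ({
  defaultConnection: "default",
  connections: {
    default: {
      connector: "bookshelf",
      settings: {
        client: "postgres",
        host: env("DATABASE_HOST"),
        port: env.int("DATABASE_PORT", 5432),
        database: env("DATABASE_NAME"),
        username: env("DATABASE_USERNAME"),
        password: env("DATABASE_PASSWORD"),
      },
      options: {},
    },
  },
});
```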
# Development
To develop, clone the [example repository](https://github.com/porter-dev/strapi) to your local environment and run `npm install && npm run develop;` from the `app` directory. Porter will automatically handle CI/CD and propagate your changes to production on every push to the repository.
# Questions?
Join the [Porter Discord community](https://discord.gg/FaaFjb6DXA) if you have any questions or need help.
| shimtrevor |
784,088 | Another 5 Helpful Python String Methods | Hello, I'm Aya Bouchiha, this is the part 2 of 5 useful python string methods. part 1 ... | 13,790 | 2021-08-07T00:19:12 | https://dev.to/ayabouchiha/another-5-helpful-python-string-methods-jd | python, beginners, codenewbie, programming | Hello, I'm [Aya Bouchiha](developer.aya.b@gmail.com),
this is the **part 2 of 5 useful python string methods**.
+ [part 1](https://dev.to/ayabouchiha/5-useful-python-string-methods-4pe7)
## capitalize()
**capitalize()**: returns a copy of the string with its first character converted to uppercase and the rest converted to lowercase.
```python
first_name = "aya"
print(first_name.capitalize()) # Aya
```
## isalpha()
**isalpha()**: checks whether all of the string's characters are alphabetic letters.
```python
print('AyaBouchiha'.isalpha()) # True
print('Aya Bouchiha'.isalpha()) # False
print('Aya-Bouchiha'.isalpha()) # False
print('AyaBouchiha 1'.isalpha()) # False
print('Aya Bouchiha!'.isalpha()) # False
```
## isdigit()
**isdigit()**: checks whether all of the string's characters are digits.
```python
print('100'.isdigit()) # True
print('12a'.isdigit()) # False
print('100 000'.isdigit()) # False
print('+212-6000-00000'.isdigit()) # False
print('3\u00B2') # 3²
print('3\u00B2'.isdigit()) # True
```
## isalnum()
**isalnum()**: checks whether all of the string's characters are alphanumeric (letters or digits).
```python
print('2021'.isalnum()) # True
print('30kviews'.isalnum()) # True
print('+212600000000'.isalnum()) # False
print('dev.to'.isalnum()) # False
print('developer.aya.b@gmail.com'.isalnum()) # False
print('Aya Bouchiha'.isalnum()) # False
```
## strip()
**strip(characters (optional))**: returns a copy of the string with the given characters (whitespace by default) removed from both the start and the end. Note that the argument is treated as a set of characters to remove, not as a substring.
```python
print(' Aya Bouchiha '.strip()) # Aya Bouchiha
print(' +212612345678 '.strip(' +')) # 212612345678
print('Hi, I\'m Aya Bouchiha'.strip('Hi, ')) # I'm Aya Bouchiha
```
### Summary
+ **capitalize()**: uppercases the first character of the string and lowercases the rest.
+ **isalpha()**: checks whether all of the string's characters are alphabetic letters.
+ **isdigit()**: checks whether all of the string's characters are digits.
+ **isalnum()**: checks whether all of the string's characters are alphanumeric.
+ **strip()**: removes the given characters (whitespace by default) from the start and the end of the string.
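Putting a few of these methods together: a hypothetical `clean_username` helper (not from the post) that strips padding, rejects non-alphanumeric input, and normalizes the case.

```python
def clean_username(raw):
    """Sketch: validate and normalize a username with the methods above."""
    name = raw.strip()         # drop surrounding whitespace
    if not name.isalnum():     # allow only letters and digits
        return None
    return name.capitalize()   # first character upper, the rest lower

print(clean_username('  aya123  '))    # Aya123
print(clean_username('aya bouchiha'))  # None
```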
### Suggested Posts
+ [All You Need To Know About Python JSON Module](https://dev.to/ayabouchiha/all-you-need-to-know-about-python-json-module-5ef0)
+ [Youtube Courses, Projects To Learn Javascript](https://dev.to/ayabouchiha/youtube-courses-projects-to-master-javascript-3lhc)
+ [part-1: 5 Useful Python String Methods](https://dev.to/ayabouchiha/5-useful-python-string-methods-4pe7)
+ [5 Helpful Python Math Module Methods](https://dev.to/ayabouchiha/5-helpful-python-math-module-methods-44gf)
To Contact Me:
email: developer.aya.b@gmail.com
telegram: [Aya Bouchiha](https://t.me/AyaBouchiha)
Have a great day! | ayabouchiha |
784,120 | Joining..! | Hey I just joined now :) cout << "hello world"; print("hello world") console.log("hello... | 0 | 2021-08-07T04:08:49 | https://dev.to/reza_sadeghzadeh/joining-4mck | cpp, python, javascript | Hey I just joined now :)
```
cout << "hello world";
print("hello world")
console.log("hello world");
```
784,268 | Why you should opt interior expert to make workplace more productive? | Let us all admit that work nowadays has become the most essential part of our lives. Unlike, earlier... | 0 | 2021-08-07T08:35:37 | https://dev.to/kanikapanchal03/why-you-should-opt-interior-expert-to-make-workplace-more-productive-28md | Let us all admit that work nowadays has become the most essential part of our lives. Unlike, earlier times when work used to be just done for the sake of earning bread for the family. It most certainly has changed entirely for us and our coming generations.
Nowadays it's more like the driving force in our lives, the purpose of our lives and the satisfaction that drives most of us for wanting to do more. Imagine spending major part of our lives in a space that is boring and dull? Kills the enthusiasm, doesn't it? Now on the contrary imagine a beautiful workspace, especially designed, keeping in mind your needs, requirements, necessity and purpose, immediately becomes thrilling, doesn't it? This is most certainly a paradise for all the workaholics.
Understanding the need for a perfect workspace turns out to be the first step for a major elevation in our career graphs. Why not? Being on the top of the game is the major need of the hour for all of us. To bring in all the success, work satisfaction and mental peace ( most important ) an organized workspace is the first step of the ladder. Since we are all so caught up all the time, we most certainly don't have any spare time to sit and design our workspaces too, even if we want to, that's where an interior expert steps in.
We all know what to do at work and so do they. Understanding our needs, requirements, taste and our work vibe an interior ensures that he creates an environment for us wherein we can be most productive and happy in.
And we must only depend on the best for our future. Speaking of the best <b>commercial interior design company in UAE</b>, we must count on only one that promises to deliver- utmost satisfaction, innovative designs- to match with our taste and class, works as fast as possible, promises in house assistance, and fits in our budget too. Considering all that we talked about right now, we cannot, rather must not risk our future with just anyone. Having a clear picture in mind and wanting something unique that represents "you" should be not a task when asked to deliver from <b>fitouthubme</b>.
It is indeed a <b>leading interior design company in UAE</b>, that promises customer satisfaction and more than anything it promises post delivery service too. Doesn't it all sound like a dream come true? Someone taking care of a space where we spend most of our time in and literally adding a hundred stars to it!! Sounds absolutely thrilling.
Now that we put our trust in a brand, we must know what it really is. Fitouthubme is indeed a <a target="_blank" rel="nofollow" href="https://www.fitouthubme.com/">leading interior design company in UAE</a> that promises to deliver whatever you want from it. They have been in this business for 9 years now, which makes them one of the <b>top interior designing companies in UAE</b>. The brand has also successfully delivered 1024 projects, grabbed 28 awards, and is spread globally across 7 branches.
When trusting a brand that claims to be a <b>top interior designer in UAE</b>, all safety questions must be answered too, and they are: the company has uploaded its certificates, ensuring all safety protocols are covered.
And last but not the least, in fact most essential part is to see what kind of work they deliver, and thankfully they have also uploaded their testimonials.
This brings in the ultimate satisfaction of being able to trust them and get our workspace amplified.
Why wait? Let us all take work from home or work from office to another level of personalized touch and satisfaction.
| kanikapanchal03 | |
784,444 | My ever growing neovim init.lua | What I have learning about neovim init.lua | 0 | 2021-08-07T11:08:03 | https://dev.to/voyeg3r/my-ever-growing-neovim-init-lua-h0p | neovim, lua | ---
title: My ever growing neovim init.lua
published: true
description: What I have learning about neovim init.lua
tags: neovim, lua
//cover_image: https://direct_url_to_image.jpg
---
## Intro
Since I now have Neovim ~~5~~ 0.9.4 installed, and knowing it can start faster with an init.lua, I gradually started annotating and copying pieces from other [neo]vimmers' configs to reproduce my old environment.
The full configuration resides now at:
https://bitbucket.org/sergio/mylazy-nvim/src/master/
I will improve this article gradually; for now, each file's path appears at the top of that file.
## This article will be rewritten
because now I am using the lazy.nvim plugin manager and a bunch of things have changed since I published the first version of this article.
## init.lua
``` lua
-- Filename: /home/sergio/.config/nvim/init.lua
-- Last Change: Mon, 06 Nov 2023 - 15:08
require('core.options')
require('core.filetype')
require('core.keymaps')
require('core.autocommands')
require('core.bootstrap')
require('core.commands')
require('core.files')
require('core.theme')
```
## options
```lua
-- Filename: options.lua
-- Last Change: Tue, 11 Jul 2023 - 16:23
-- Compile lua to bytecode if the nvim version supports it.
-- https://github.com/armyers/NormalNvim
-- if vim.loader and vim.fn.has "nvim-0.9.1" == 1 then vim.loader.enable() end
local opt = vim.opt
local g = vim.g
-- IMPROVE NEOVIM STARTUP
-- https://github.com/editorconfig/editorconfig-vim/issues/50
-- vim.g.loaded_python_provider = 0
-- vim.g.loaded_python3_provider = 1
-- vim.g.python_host_skip_check = 1
-- vim.g.python_host_prog='/bin/python2'
-- vim.g.python3_host_skip_check = 1
-- vim.g.python3_host_prog='/bin/python3'
-- vim.opt.pyxversion=3
-- vim.g.EditorConfig_core_mode = 'external_command'
-- https://vi.stackexchange.com/a/5318/7339
vim.g.matchparen_timeout = 20
vim.g.matchparen_insert_timeout = 20
vim.g.python3_host_prog = vim.loop.os_homedir() .. "/.virtualenvs/neovim/bin/python3"
local options = {
-- keywordprg = ':help',
winbar = '%=%m %F',
virtualedit = "block",
modelines = 5,
modelineexpr = false,
modeline = true,
cursorline = false,
cursorcolumn = false,
splitright = true,
splitbelow = true,
smartcase = true,
hlsearch = false,
ignorecase = true,
incsearch = true,
inccommand = "nosplit",
hidden = true,
autoindent = true,
termguicolors = true,
showmode = false,
showmatch = true,
matchtime = 2,
wildmode = "longest:full,full",
number = true,
linebreak = true,
joinspaces = false,
timeoutlen = 500,
ttimeoutlen = 10, -- https://vi.stackexchange.com/a/4471/7339
path = vim.opt.path + "**",
isfname = vim.opt.isfname:append("@-@"),
autochdir = false,
relativenumber = true,
numberwidth = 2,
shada = "!,'50,<50,s10,h,r/tmp",
expandtab = true,
smarttab = true,
smartindent = true,
shiftround = true,
shiftwidth = 2,
tabstop = 2,
foldenable = false,
foldlevel = 99,
foldlevelstart = 99,
foldcolumn = '1',
foldmethod = "expr",
foldexpr = "nvim_treesitter#foldexpr()",
undofile = true,
showtabline = 0,
mouse = 'a',
scrolloff = 3,
sidescrolloff = 3,
wrap = true,
list = true,
listchars = { leadmultispace = "│ ", multispace = "│ ", tab = "│ ", },
--lazyredraw = true,
updatetime = 250,
laststatus = 3,
confirm = false,
conceallevel = 3,
cmdheight = 0,
-- filetype = 'on', -- handled by the filetype.lua plugin
}
for k, v in pairs(options) do
vim.opt[k] = v
end
if vim.fn.has("nvim-0.10") == 1 then
opt.smoothscroll = true
end
-- disable builtins plugins
local disabled_built_ins = {
"2html_plugin",
"getscript",
"getscriptPlugin",
"gzip",
"logipat",
"matchit",
"netrw",
"netrwFileHandlers",
"loaded_remote_plugins",
"loaded_tutor_mode_plugin",
"netrwPlugin",
"netrwSettings",
"rrhelper",
"spellfile_plugin",
"tar",
"tarPlugin",
"vimball",
"vimballPlugin",
"zip",
"zipPlugin",
"matchparen", -- matchparen.nvim disables the default
}
for _, plugin in pairs(disabled_built_ins) do
vim.g["loaded_" .. plugin] = 1
end
if vim.fn.executable("rg") then
-- if ripgrep installed, use that as a grepper
vim.opt.grepprg = "rg --vimgrep --no-heading --smart-case"
vim.opt.grepformat = "%f:%l:%c:%m,%f:%l:%m"
end
--lua require("notify")("install ripgrep!")
if vim.fn.executable("prettier") then
opt.formatprg = "prettier --stdin-filepath=%"
end
--lua require("notify")("Install prettier formater!")
opt.formatoptions = "l"
opt.formatoptions = opt.formatoptions
- "a" -- Auto formatting is BAD.
- "t" -- Don't auto format my code. I got linters for that.
+ "c" -- In general, I like it when comments respect textwidth
- "o" -- O and o, don't continue comments
+ "r" -- But do continue when pressing enter.
+ "n" -- Indent past the formatlistpat, not underneath it.
+ "j" -- Auto-remove comments if possible.
- "2" -- I'm not in gradeschool anymore
opt.guicursor = {
"n-v:block",
"i-c-ci-ve:ver25",
"r-cr:hor20",
"o:hor50",
"i:blinkwait700-blinkoff400-blinkon250-Cursor/lCursor",
"sm:block-blinkwait175-blinkoff150-blinkon175",
}
-- window-local options
local window_options = {
numberwidth = 2,
number = true,
relativenumber = true,
linebreak = true,
cursorline = true,
foldenable = false,
}
for k, v in pairs(window_options) do
vim.wo[k] = v
end
```
## filetype
```lua
-- Filename: filetype.lua
-- Last Change: Wed, 25 Oct 2023 - 05:49
-- lua file detection feature:
-- https://github.com/neovim/neovim/pull/16600#issuecomment-990409210
-- filetype.lua is sourced before filetype.vim so any filetypes defined in
-- filetype.lua will take precedence.
-- on my init.lua i make a require to this file, so then I can place
-- it on my ~/.config/nvim/lua/core/ folder
vim.g.do_filetype_lua = 1
--vim.g.did_load_filetypes = 0
vim.filetype.add({
extension = {
conf = "conf",
config = "conf",
md = "markdown",
lua = "lua",
sh = "sh",
zsh = "sh",
h = function(path, bufnr)
if vim.fn.search("\\C^#include <[^>.]\\+>$", "nw") ~= 0 then
return "cpp"
end
return "c"
end,
},
pattern = {
-- Lua patterns do not support (?:...) groups or alternation,
-- so each zsh dotfile gets its own entry
["%.zshrc$"] = "sh",
["%.zshenv$"] = "sh",
["%.zsh$"] = "sh",
},
filename = {
["TODO"] = "markdown",
[".git/config"] = "gitconfig",
-- ["~/.dotfiles/zsh/.zshrc"] = "sh",
-- ["~/.zshrc"] = "sh",
-- [ "~/.config/mutt/muttrc"] = "muttrc",
["README$"] = function(path, bufnr)
if string.find("#", vim.api.nvim_buf_get_lines(bufnr, 0, 1, true)) then
return "markdown"
end
-- no return means the filetype won't be set and to try the next method
end,
},
})
```
## keymaps
```lua
-- Filename: keymaps.lua
-- Last Change: Tue, 11 Jul 2023 - 16:00
local map = require('core.utils').map
-- Set space as my leader key
map('', '<Space>', '<Nop>')
vim.g.mapleader = ' '
vim.g.maplocalleader = ' '
vim.keymap.set('n', '<c-k>',
function()
local word = vim.fn.expand("<cword>")
vim.cmd('help ' .. word)
end,
{ desc = 'help for current word' }
)
--{ silent = false, noremap = true, desc = 'toggles diagnostic'})
map('n', '<leader>l', '<cmd>Lazy<cr>', { desc = 'Lazy' })
map('n', '<Delete>', '<cmd>:update!<CR>', { desc = 'Save file if it has changed' })
map('n', '<c-s>', '<cmd>:update!<CR>', { desc = 'Save file if it has changed' })
map('n', '<F9>', '<cmd>:update!<CR>', { desc = 'Save file if it has changed' })
-- discard buffer
-- fixing a temporary issue:
-- https://github.com/dstein64/nvim-scrollview/issues/10
map('n', '<leader>x', ':wsh | up | sil! bd<cr>', { silent = true })
map('n', '<leader>w', ':bw!<cr>', { silent = true })
map('n', 'gl', '`.', { desc = 'Jump to the last change point' })
-- map('n', '>', '>>', { desc = "faster indent unindent" } )
-- map('n', '<', '<<', { desc = "faster indent unindent" } )
map('v', '<', '<gv', { desc = 'Reselect after < on visual mode' })
map('v', '>', '>gv', { desc = 'Reselect after > on visual mode' })
map(
'n',
'<leader>d',
'<cmd>lua require("core.utils").toggle_diagnostics()<cr>',
{ desc = 'enable/disable diagnostcs', silent = true }
)
-- quickfix mappings
map('n', '[q', ':cprevious<CR>')
map('n', ']q', ':cnext<CR>')
map('n', ']Q', ':clast<CR>')
map('n', '[Q', ':cfirst<CR>')
map('n', '[b', ':bprevious<CR>')
map('n', ']b', ':bnext<CR>')
-- " Move to previous/next
-- nnoremap <silent> <A-,> <Cmd>BufferPrevious<CR>
-- nnoremap <silent> <A-.> <Cmd>BufferNext<CR>
map('n', '<A-,>', '<Cmd>BufferPrevious<CR>')
map('n', '<A-.>', '<Cmd>BufferNext<CR>')
-- Navigate buffers
map('n', '<M-,>', ':bnext<CR>')
map('n', '<M-.>', ':bprevious<CR>')
map('n', 'Y', 'yg_', { desc = 'Copy until the end of line' })
-- Make the dot command work as expected in visual mode (via
-- https://www.reddit.com/r/vim/comments/3y2mgt/
map('v', '.', ':norm .<cr>', { desc = 'Repeat normal command in visual' })
-- Line bubbling
-- Move selected line / block of text in visual mode
map('x', 'K', ":move '<-2<CR>gv-gv", { noremap = true, silent = true })
map('x', 'J', ":move '>+1<CR>gv-gv", { noremap = true, silent = true })
map('n', "'", '`')
map('n', '`', "'")
vim.keymap.set('n', '<A-d>', '<cmd>Delblank<cr>', { desc = 'remove consecutive blank lines' })
-- vim.keymap.set('n', '<cr>', function()
-- if vim.fs.isfile(vim.fn.expand('<cfile>')) == true then
-- vim.api.nvim_feedkeys('gf', 'n', false)
-- else
-- vim.api.nvim_feedkeys('<cr>', 'n', false)
-- end
-- end, {})
-- vim.keymap.set("n", "<cr>", function()
-- local path = vim.fn.expand("<cfile>")
-- local buf = vim.api.nvim_get_current_buf()
-- local cwd = vim.api.nvim_buf_get_name(buf):match("(.*/)")
-- local handler = io.open(cwd .. path)
-- if handler == nil then
-- return "<cr>"
-- end
-- handler:close()
-- return "gf"
-- end, {})
-- empty lines go to black hole register
vim.keymap.set('n', 'dd', function()
if vim.api.nvim_get_current_line():match('^%s*$') then
vim.api.nvim_feedkeys('"_dd', 'n', false)
else
vim.api.nvim_feedkeys('dd', 'n', false)
end
end)
vim.keymap.set('n', '<M-5>', function()
-- https://stackoverflow.com/a/47074633
-- https://codereview.stackexchange.com/a/282183
local results = {}
local buffers = vim.api.nvim_list_bufs()
for _, buffer in ipairs(buffers) do
if vim.api.nvim_buf_is_loaded(buffer) then
local filename = vim.api.nvim_buf_get_name(buffer)
if filename ~= '' then
table.insert(results, filename)
end
end
end
curr_buf = vim.api.nvim_buf_get_name(0)
if #results > 1 or curr_buf == '' then
vim.cmd('bd')
else
vim.cmd('quit')
end
end, { silent = false, desc = 'bd or quit' })
-- alternate file mapping (add silent true)
map(
'n',
'<bs>',
[[:<c-u>exe v:count ? v:count . 'b' : 'b' . (bufloaded(0) ? '#' : 'n')<cr>]],
{ silent = true, noremap = true }
)
map('n', '<M-,>', '<cmd>BufferLineCyclePrev<cr>', { desc = 'Prev buffer' })
map('n', '<M-.>', '<cmd>BufferLineCycleNext<cr>', { desc = 'Next buffer' })
-- toggle number/relative number
map('n', '<M-n>', '<cmd>let [&nu, &rnu] = [!&rnu, &nu+&rnu==1]<cr>')
-- line text-objects
-- https://vimrcfu.com/snippet/269
map('o', 'al', [[v:count==0 ? ":<c-u>normal! 0V$h<cr>" : ":<c-u>normal! V" . (v:count) . "jk<cr>" ]], { expr = true })
map('v', 'al', [[v:count==0 ? ":<c-u>normal! 0V$h<cr>" : ":<c-u>normal! V" . (v:count) . "jk<cr>" ]], { expr = true })
map('o', 'il', [[v:count==0 ? ":<c-u>normal! ^vg_<cr>" : ":<c-u>normal! ^v" . (v:count) . "jkg_<cr>"]], { expr = true })
map('v', 'il', [[v:count==0 ? ":<c-u>normal! ^vg_<cr>" : ":<c-u>normal! ^v" . (v:count) . "jkg_<cr>"]], { expr = true })
-- other interesting text objects
-- reference: https://www.reddit.com/r/vim/comments/adsqnx/comment/edjw792
-- TODO: detect if we are over the first char and jump to the right
local chars = { '_', '-', '.', ':', ',', ';', '<bar>', '/', '<bslash>', '*', '+', '%', '#', '`' }
for k, v in ipairs(chars) do
map('x', 'i' .. v, ':<C-u>norm! T' .. v .. 'vt' .. v .. '<CR>')
map('x', 'a' .. v, ':<C-u>norm! F' .. v .. 'vf' .. v .. '<CR>')
map('o', 'a' .. v, ':normal! va' .. v .. '<CR>')
map('o', 'i' .. v, ':normal! vi' .. v .. '<CR>')
end
-- cnoremap <expr> <c-n> wildmenumode() ? "\<c-n>" : "\<down>"
-- cnoremap <expr> <c-p> wildmenumode() ? "\<c-p>" : "\<up>"
map('c', '<c-n>', [[wildmenumode() ? "\<c-n>" : "\<down>"]], { expr = true })
map('c', '<c-p>', [[wildmenumode() ? "\<c-p>" : "\<up>"]], { expr = true })
-- It adds motions like 25j and 30k to the jump list, so you can cycle
-- through them with control-o and control-i.
-- source: https://www.vi-improved.org/vim-tips/
map('n', 'j', [[v:count ? (v:count > 5 ? "m'" . v:count : '') . 'j' : 'gj']], { expr = true })
map('n', 'k', [[v:count ? (v:count > 5 ? "m'" . v:count : '') . 'k' : 'gk']], { expr = true })
map('n', '<leader>ci', '<cmd> lua vim.diagnostic.open_float()<cr>')
-- nvim file
map('n', '<Leader>n', "<cmd>lua require('core.files').nvim_files()<CR>")
map('n', '<F6>', "<cmd>lua require('core.colors').choose_colors()<cr>", { desc = 'Choose colorScheme' })
map('n', 'c*', '*<c-o>cgn')
map('n', 'c#', '#<c-o>cgn')
map('n', 'gl', '`.')
map('n', '<leader>=', '<cmd>Reindent<cr>', { desc = 'Reindent current file' })
local myvimrc = vim.env.MYVIMRC
map('n', '<Leader>v', '<cmd>drop ' .. myvimrc .. '<CR>')
map('n', '<Leader>z', '<cmd>drop ~/.zshrc<CR>')
map('i', '<C-r>+', '<C-r><C-o>+', { desc = 'Insert clipboard keeping indentation' })
map('i', '<C-r>*', '<C-r><C-o>*', { desc = 'Insert primary clipboard keeping indentation' })
map('i', '<S-Insert>', '<C-r><C-o>*', { desc = 'Insert primary clipboard keeping indentation' })
map('i', '<leader>+', '<C-r><C-o>+', { desc = 'Insert clipboard keeping indentation' })
map('i', '<S-Insert>', '<C-r><C-o>*', { desc = 'Insert clipboard keeping indentation' })
vim.keymap.set("n", "sx", "<cmd>lua require('substitute.exchange').operator({motion = 'iw'})<cr>", { noremap = true })
vim.keymap.set("n", "sxx", "<cmd>lua require('substitute.exchange').line<cr>", { noremap = true })
vim.keymap.set("x", "X", "<cmd>lua require('substitute.exchange').visual<cr>", { noremap = true })
vim.keymap.set("n", "sxc", "<cmd>lua require('substitute.exchange').cancel<cr>", { noremap = true })
map('n', '<M-p>', '"+gP')
map('v', '<M-p>', '"+gP')
map('i', '<M-p>', '<C-r><C-o>+')
map('v', '<leader>y', '"+y')
map('n', '<c-c>', '<cmd>%y+<cr>', { desc = 'copy file to clipboard', silent = true })
map('n', '<c-m-f>', '<cmd>Format<cr>', { desc = 'Reformat keeping cursor position' })
map('i', '.', '.<c-g>u')
map('i', '!', '!<c-g>u')
map('i', '?', '?<c-g>u')
map('i', ';', ';<c-g>u')
map('i', ':', ':<c-g>u')
map('i', ']', ']<c-g>u')
map('i', '}', '}<c-g>u')
-- map('n', 'n', 'n:lua require("core.utils").flash_cursorline()<CR><CR>')
-- map('n', 'N', 'N:lua require("core.utils").flash_cursorline()<CR><CR>')
-- map('n', 'n', 'n:lua require("neoscroll").zz(300)<cr>', { desc = "middle of screen"})
-- map('n', 'N', 'N:lua require("neoscroll").zz(300)<cr>', { desc = "middle of screen"})
-- map('n', '*', '*:lua require("core.utils").flash_cursorline()<CR><CR>')
-- map('n', '#', '#:lua require("core.utils").flash_cursorline()<CR><CR>')
-- map(
-- 'n',
-- 'n',
-- "n<cmd>lua require('core.utils').flash_cursorline()<cr><cr>",
-- { desc = 'center the next match', silent = true }
-- )
--
-- map(
-- 'n',
-- 'N',
-- "N<cmd>:lua require('core.utils').flash_cursorline()<cr><cr>",
-- { desc = 'center the next match', silent = true }
-- )
map('n', 'J', 'mzJ`z')
map(
'n',
'<C-l>',
[[ (&hls && v:hlsearch ? ':nohls' : ':set hls')."\n" <BAR> redraw<CR>]],
{ silent = true, expr = true }
)
map('c', '<C-a>', '<home>')
map('c', '<C-e>', '<end>')
map('n', '<F8>', [[<cmd>lua require("core.files").xdg_config()<cr>]], { silent = true })
-- Quick write
-- map('n', '<leader>d', '<cmd>bd<CR>')
-- map("n", "<C-M-o>", ':lua require("telescope.builtin").oldfiles()<cr>') -- already mapped on which-key
map('n', '<C-M-o>', ':lua require("core.files").search_oldfiles()<cr>', { desc = 'Serach oldfiles' }) -- already mapped on which-key
-- map("n", "<c-p>", [[<cmd>lua require("telescope.builtin").find_files{cwd = "~/.dotfiles/wiki"}<cr>]], { silent = true })
map(
'n',
'<c-p>',
[[<cmd>lua require("telescope.builtin").find_files({search_dirs = {"~/.config", "~/.dotfiles"}})<cr>]],
{ silent = true }
)
--map("n", "<leader>b", [[<cmd>lua require('telescope.builtin').buffers()<cr>]], { silent = true })
-- map("n", "<leader>b", ":buffers<CR>:buffer<Space>", {desc = "Swich buffers"})
-- nnoremap <F5> :buffers<CR>:buffer<Space>
vim.keymap.set(
'n',
'<C-d>',
[[<Cmd>lua vim.cmd('normal! <C-d>'); MiniAnimate.execute_after('scroll', 'normal! zvzz')<CR>]]
)
vim.keymap.set(
'n',
'<C-u>',
[[<Cmd>lua vim.cmd('normal! <C-u>'); MiniAnimate.execute_after('scroll', 'normal! zvzz')<CR>]]
)
-- Resize splits with arrow keys
map('n', '<C-Up>', '<cmd>resize +2<CR>')
map('n', '<C-Down>', '<cmd>resize -2<CR>')
map('n', '<C-Left>', '<cmd>vertical resize -2<CR>')
map('n', '<C-Right>', '<cmd>vertical resize +2<CR>')
local open_command = 'xdg-open'
if vim.fn.has('mac') == 1 then
open_command = 'open'
end
local function url_repo()
local cursorword = vim.fn.expand('<cfile>')
if string.find(cursorword, '^[a-zA-Z0-9-_.]*/[a-zA-Z0-9-_.]*$') then
cursorword = 'https://github.com/' .. cursorword
end
return cursorword or ''
end
vim.keymap.set('n', 'gx', function()
vim.fn.jobstart({ open_command, url_repo() }, { detach = true })
end, { silent = true })
vim.keymap.set('n', '0', function()
if vim.fn.reg_recording() ~= "" then
vim.api.nvim_feedkeys('0', 'n', true)
else
local pos = vim.fn.col('.')
if pos == 1 then
vim.api.nvim_feedkeys('^', 'n', true)
elseif pos == vim.fn.col('$') - 1 then
vim.api.nvim_feedkeys('0', 'n', true)
else
vim.api.nvim_feedkeys('$', 'n', true)
end
end
end, { desc = 'smart zero movement' })
map('n', '-', '<cmd>split<cr>')
map('n', '|', '<cmd>vsplit<cr>')
map(
'n',
'<leader>L',
'<cmd>lua require("luasnip.loaders.from_lua").load({paths = "~/.config/nvim/luasnip/"})<cr>',
{ desc = 'reload snippets', silent = false }
)
-- -- Insert 'n' lines below current line staying in normal mode (e.g. use 5<leader>o)
-- vim.keymap.set("n", "<m-o>", function()
-- return "m`" .. vim.v.count .. "o<Esc>``"
-- end, { expr = true })
-- -- Insert 'n' lines above current line staying in normal mode (e.g. use 5<leader>O)
-- vim.keymap.set("n", "<m-O>", function()
-- return "m`" .. vim.v.count .. "O<Esc>``"
-- end, { expr = true })
map("n", "<m-o>", "<cmd>Telescope oldfiles<cr>", { desc = "telescope oldfiles" })
map('n', '<leader>s', '<cmd>LuaSnipEdit<cr>', { desc = 'Edit snippets' })
map('n', '<Leader><CR>', '<cmd>drop ~/.config/nvim/luasnip/snippets.lua<cr>', { silent = true, noremap = true })
map('i', 'jk', '<Esc>', { desc = 'Exit insert mode' })
map('i', 'kj', '<Esc>', { desc = 'Exit insert mode' })
vim.keymap.set('n', '<leader>ll', function()
return require('lazy').home()
end)
vim.keymap.set('n', '<leader>lu', function()
return require('lazy').update()
end)
vim.keymap.set('n', '<leader>ls', function()
return require('lazy').sync()
end)
vim.keymap.set('n', '<leader>lL', function()
return require('lazy').log()
end)
vim.keymap.set('n', '<leader>lc', function()
return require('lazy').clean()
end)
vim.keymap.set('n', '<leader>lp', function()
return require('lazy').profile()
end)
```
## autocommands
```lua
-- ~/.config/nvim/lua/hore/autocommands.lua
-- Mon, 10 Jul 2023 07:13:37
-- Define local variables
local augroup = vim.api.nvim_create_augroup
local autocmd = vim.api.nvim_create_autocmd
local user_cmd = vim.api.nvim_create_user_command
-- the formatonsave does the job
-- augroup('write_pre', { clear = true })
-- autocmd('BufWritePre', {
-- callback = function()
-- local cursor_pos = vim.fn.getpos('.')
-- vim.cmd('%s/\\s\\+$//e')
-- vim.fn.setpos('.', cursor_pos)
-- end,
-- group = 'write_pre',
-- desc = 'Remove trailing whitespaces',
-- })
augroup('buf_empty', { clear = true })
autocmd({ "BufEnter" }, {
group = 'buf_empty',
pattern = { "" },
callback = function()
local buf_ft = vim.bo.filetype
if buf_ft == "" or buf_ft == nil then
vim.cmd [[
nnoremap <silent> <buffer> q :close<CR>
set nobuflisted
]]
end
end,
})
-- Highlight text on yank
augroup('YankHighlight', { clear = true })
autocmd('TextYankPost', {
group = 'YankHighlight',
callback = function()
vim.highlight.on_yank { higroup = 'IncSearch', timeout = '700' }
end,
desc = 'Highlight yanked text',
})
augroup('lsp_disable_diagnostic', { clear = true })
autocmd('BufEnter', {
group = 'lsp_disable_diagnostic',
pattern = '*',
command = 'lua vim.diagnostic.disable()',
desc = 'Disable diagnostic for a while',
})
augroup('formatonsave', { clear = true })
autocmd('BufWritePost', {
group = 'formatonsave',
desc = 'Trigger format on save',
pattern = { '*.lua', '*.py', '*.rb', '*.rs', '*.ts', '*.tsx', '*.sh', '*.md' },
callback = function()
vim.lsp.buf.format()
vim.cmd('normal zz')
end,
})
augroup('MatchRedundantSpaces', { clear = true })
autocmd('InsertLeave', {
pattern = '*',
callback = function()
vim.cmd([[highlight RedundantSpaces ctermbg=red guibg=red]])
vim.cmd([[match RedundantSpaces /\s\+$/]])
end,
desc = 'Higlight extra spaces',
})
augroup('clear_matches', { clear = true })
autocmd('InsertEnter', {
pattern = '*',
callback = function()
vim.cmd([[call clearmatches()]])
vim.cmd([[highlight RedundantSpaces ctermbg=red guibg=red]])
vim.cmd([[set nohls]])
end,
desc = 'Do not show extra spaces during typing',
})
augroup('yankpost', { clear = true })
autocmd({ 'VimEnter', 'CursorMoved' }, {
group = 'yankpost',
pattern = '*',
callback = function()
cursor_pos = vim.fn.getpos('.')
end,
desc = 'Stores cursor position',
})
autocmd('TextYankPost', {
pattern = '*',
group = 'yankpost',
callback = function()
if vim.v.event.operator == 'y' then
vim.fn.setpos('.', cursor_pos)
end
end,
})
augroup('equalwindow', { clear = true })
autocmd({ 'FileType' }, {
pattern = 'help',
group = 'equalwindow',
callback = function ()
vim.schedule(function ()
vim.cmd('tabdo wincmd =')
end)
end
})
augroup('start_luasnip', { clear = true })
autocmd({ "CursorHold" }, {
group = 'start_luasnip',
callback = function()
local status_ok, luasnip = pcall(require, "luasnip")
if not status_ok then
return
end
if luasnip.expand_or_jumpable() then
-- ask maintainer for option to make this silent
-- luasnip.unlink_current()
vim.cmd [[silent! lua require("luasnip").unlink_current()]]
end
end,
})
-- Close man and help with just <q>
autocmd('FileType', {
pattern = {
'help',
'man',
'lspinfo',
'checkhealth',
},
callback = function(event)
vim.bo[event.buf].buflisted = false
vim.keymap.set('n', 'q', '<cmd>q<cr>', { buffer = event.buf, silent = true })
end,
})
-- Auto create dir when saving a file where some intermediate directory does not exist
augroup('write_pre', { clear = true })
autocmd('BufWritePre', {
group = 'write_pre',
callback = function(event)
if event.match:match('^%w%w+://') then
return
end
local file = vim.loop.fs_realpath(event.match) or event.match
vim.fn.mkdir(vim.fn.fnamemodify(file, ':p:h'), 'p')
end,
})
-- Check for spelling in text filetypes and enable wrapping, and set gj and gk keymaps
autocmd('FileType', {
pattern = {
'gitcommit',
'markdown',
'text',
},
callback = function()
local opts = { noremap = true, silent = true }
vim.opt_local.spell = true
vim.opt_local.wrap = true
vim.api.nvim_buf_set_keymap(0, 'n', 'j', 'gj', opts)
vim.api.nvim_buf_set_keymap(0, 'n', 'k', 'gk', opts)
end,
})
augroup('bufcheck', { clear = true })
autocmd('BufReadPost', {
group = 'bufcheck',
callback = function()
local exclude = { 'gitcommit' }
local buf = vim.api.nvim_get_current_buf()
if vim.tbl_contains(exclude, vim.bo[buf].filetype) then
return
end
local mark = vim.api.nvim_buf_get_mark(0, '"')
local lcount = vim.api.nvim_buf_line_count(0)
if mark[1] > 0 and mark[1] <= lcount then
pcall(vim.api.nvim_win_set_cursor, 0, mark)
vim.api.nvim_feedkeys('zz', 'n', true)
end
end,
desc = 'Go to the last loc when opening a buffer',
})
-- Check if the file needs to be reloaded when it's changed
augroup('userconf', { clear = true })
autocmd({ 'FocusGained', 'TermClose', 'TermLeave' }, {
command = 'checktime',
group = 'userconf',
})
-- Toggle relative numbers based on certain events
augroup('GainFocus', { clear = true })
autocmd({ 'BufEnter', 'FocusGained', 'InsertLeave', 'CmdlineLeave', 'WinEnter' }, {
pattern = '*',
group = 'GainFocus',
callback = function()
if vim.o.nu and vim.api.nvim_get_mode().mode ~= 'i' then
vim.opt.relativenumber = true
end
end,
})
autocmd({ 'BufLeave', 'FocusLost', 'InsertEnter', 'CmdlineEnter', 'WinLeave' }, {
pattern = '*',
group = 'GainFocus',
callback = function()
if vim.o.nu then
vim.opt.relativenumber = false
vim.cmd('redraw')
end
end,
})
-- Set cmdheight to 1 when recording, and put it back to normal when it stops
augroup('record_action', { clear = true })
autocmd('RecordingEnter', {
callback = function()
vim.opt_local.cmdheight = 1
end,
group = 'record_action',
})
autocmd('RecordingLeave', {
callback = function()
vim.opt_local.cmdheight = 0
end,
group = 'record_action',
})
augroup('make-executable', { clear = true })
autocmd('BufWritePost', {
group = 'make-executable',
pattern = { '*.sh', '*.py' },
command = [[!chmod +x %]],
desc = 'Make files ended with *.sh, *.py executable',
})
```
## bootstrap
```lua
-- Install lazy.nvim automatically
-- Last Change: Sat, 09 Sep 2023 - 15:24:19
local lazypath = vim.fn.stdpath('data') .. '/lazy/lazy.nvim'
if not vim.loop.fs_stat(lazypath) then
vim.fn.system {
'git',
'clone',
'--filter=blob:none',
'https://github.com/folke/lazy.nvim.git',
'--branch=stable', -- latest stable release
lazypath,
}
end
vim.opt.rtp:prepend(lazypath)
local opts = {
git = { log = { '--since=3 days ago' } },
ui = { custom_keys = { false } },
install = { colorscheme = { 'tokyonight' } },
performance = {
rtp = {
disabled_plugins = {
'gzip',
'netrwPlugin',
'tarPlugin',
'tohtml',
'tutor',
'zipPlugin',
'rplugin',
'editorconfig',
'matchparen',
'matchit',
},
},
},
checker = { enabled = false },
}
-- Load the plugins and options
require('lazy').setup('plugins', opts)
```
## commands
```lua
-- Filename: /home/sergio/.config/nvim/lua/core/commands.lua
-- Last Change: Fri, 09 Dec 2022 - 19:18
-- vim:set ft=lua nolist softtabstop=2 shiftwidth=2 tabstop=2 expandtab:
local user_cmd = vim.api.nvim_create_user_command
user_cmd('LspSignature', 'lua vim.lsp.buf.signature_help()', { nargs = '+' })
user_cmd('LspHover', 'lua vim.lsp.buf.hover()', { nargs = "+" })
-- commands and abbreviations
user_cmd('ClearBuffer', 'enew | bd! #', { nargs = 0, bang = true })
user_cmd('CopyUrl', 'let @+=expand("<cfile>")', { nargs = 0, bang = true })
user_cmd('HarponDel', ':lua require("harpoon.mark").rm_file()', { nargs = 0, bang = true })
user_cmd('BlockwiseZero', ':lua require("core.utils").blockwise_register("0")<CR>', { nargs = '?', bang = false })
user_cmd('BlockwisePlus', ':lua require("core.utils").blockwise_register("+")<CR>', { nargs = '?', bang = false })
user_cmd('BlockwisePrimary', ':lua require("core.utils").blockwise_register("*")<CR>', { nargs = '?', bang = false })
vim.cmd([[cnoreab Bz BlockwiseZero]])
vim.cmd([[cnoreab B+ BlockwisePlus]])
vim.cmd([[cnoreab B* BlockwisePrimary]])
user_cmd('Dos2unix', ':lua require("core.utils").dosToUnix()<CR>', { nargs = 0, bang = true })
user_cmd('ToggleBackground', 'lua require("core.utils").toggle_background()<cr>', { nargs = 0 })
user_cmd('ToggleSpell', 'lua require("core.utils").toggle_spell()<cr>', { nargs = 0 })
user_cmd('Wiki', 'lua require("core.files").wiki()<CR>', { desc = 'Search on my wiki'})
user_cmd("LuaSnipEdit", function()
require("core.utils").edit_snippets()
end, {})
user_cmd("Cor", "lua print(vim.g.colors_name)", { desc = "show current colorscheme"})
user_cmd('SnipList', function()
pcall(function()
require('core.utils').list_snips()
end)
end, {})
vim.cmd([[cnoreab Cb ClearBuffer]])
vim.cmd([[cabbrev vb vert sb]]) --vertical split buffer :vb <buffer>
vim.cmd([[cnoreab cls Cls]])
vim.cmd([[command! Cls lua require("core.utils").preserve('%s/\\s\\+$//ge')]])
vim.cmd([[command! Reindent lua require('core.utils').preserve("sil keepj normal! gg=G")]])
vim.cmd([[command! Format lua require('core.utils').preserve("lua vim.lsp.buf.format()")]])
vim.cmd([[highlight MinhasNotas ctermbg=Yellow ctermfg=red guibg=Yellow guifg=red]])
vim.cmd([[match MinhasNotas /NOTE:/]])
-- vim.cmd([[command! BufOnly lua require('core.utils').preserve("silent! %bd|e#|bd#")]])
user_cmd('BufOnly', function()
pcall(function()
-- vim.fn.Preserve("exec '%bd|e#|bd#'")
require('core.utils').preserve('silent! up|%bd|e#|bd#')
end)
end, {})
vim.cmd([[cnoreab Bo BufOnly]])
vim.cmd([[cnoreab W w]])
vim.cmd([[cnoreab W! w!]])
vim.cmd([[command! CloneBuffer new | 0put =getbufline('#',1,'$')]])
user_cmd('CloneBuffer', "new | 0put =getbufline('#',1,'$')", { nargs = 0, bang = true })
-- vim.cmd([[command! Mappings drop ~/.config/nvim/lua/user/mappings.lua]])
vim.cmd([[command! Scratch new | setlocal bt=nofile bh=wipe nobl noswapfile nu]])
vim.cmd([[syntax sync minlines=64]]) -- faster syntax hl
user_cmd('Delblank', function()
require('core.utils').squeeze_blank_lines()
end, { desc = 'Squeeze blank lines', nargs = 0, bang = true })
user_cmd('ToggleDiagnostics', function()
require('core.utils').toggle_diagnostics()
end, { desc = 'Toggle lsp diagnostic', nargs = 0, bang = true })
user_cmd('Transparency', function()
require('core.utils').toggle_transparency()
end, { desc = 'Toggle transparency', nargs = 0, bang = true })
user_cmd('MiniStarter', function()
MiniStarter.open()
end, { desc = 'Fire MiniStarter' })
user_cmd('Old', 'Telescope oldfiles', { desc = 'List oldfiles (open)' })
user_cmd('Blockwise', function()
require('core.utils').blockwise_clipboard()
end, { desc = 'Make + register blockwise', nargs = 0, bang = true })
vim.cmd([[cnoreab Bw Blockwise]])
-- Use ':Grep' or ':LGrep' to grep into quickfix|loclist
-- without output or jumping to first match
-- Use ':Grep <pattern> %' to search only current file
-- Use ':Grep <pattern> %:h' to search the current file dir
vim.cmd('command! -nargs=+ -complete=file Grep noautocmd grep! <args> | redraw! | copen')
vim.cmd('command! -nargs=+ -complete=file LGrep noautocmd lgrep! <args> | redraw! | lopen')
-- save as root, in my case I use the command 'doas'
vim.cmd([[cmap w!! w !doas tee % >/dev/null]])
vim.cmd([[command! SaveAsRoot w !doas tee %]])
-- vim.cmd([[hi ActiveWindow ctermbg=16 | hi InactiveWindow ctermbg=233]])
-- vim.cmd([[set winhighlight=Normal:ActiveWindow,NormalNC:InactiveWindow]])
-- vim.cmd('command! ReloadConfig lua require("utils").ReloadConfig()')
vim.cmd('command! ReloadConfig lua require("core.utils").ReloadConfig()')
-- inserts filename and Last Change: date
-- vim.cmd([[inoreab lc -- File: <c-r>=expand("%:p")<cr><cr>-- Last Change: <c-r>=strftime("%b %d %Y - %H:%M")<cr><cr>]])
vim.cmd('inoreabbrev Fname <c-r>=expand("%:p")<cr>')
vim.cmd('inoreabbrev Iname <c-r>=expand("%:p")<cr>')
vim.cmd('inoreabbrev fname <c-r>=expand("%:t")<cr>')
vim.cmd('inoreabbrev iname <c-r>=expand("%:t")<cr>')
vim.cmd('inoreabbrev idate <c-r>=strftime("%a, %d %b %Y %T")<cr>')
vim.cmd([[cnoreab cls Cls]])
vim.api.nvim_create_user_command('BiPolar', function(_)
local moods_table = {
['true'] = 'false',
['false'] = 'true',
['on'] = 'off',
['off'] = 'on',
['Up'] = 'Down',
['Down'] = 'Up',
['up'] = 'down',
['down'] = 'up',
['enable'] = 'disable',
['disable'] = 'enable',
['no'] = 'yes',
['yes'] = 'no',
}
local cursor_word = vim.api.nvim_eval("expand('<cword>')")
if moods_table[cursor_word] then
vim.cmd('normal ciw' .. moods_table[cursor_word] .. '')
end
end, { desc = 'Switch Moody Words', force = true })
```
## files
```lua
-- File: /home/sergio/.config/nvim/lua/files.lua
-- Last Change: Thu, 17 Mar 2022 15:05
-- https://github.com/nvim-telescope/telescope.nvim/wiki/Configuration-Recipes
-- https://youtu.be/Ua8FkgTL-94
local status_ok, telescope = pcall(require, "telescope")
if not status_ok then
return
end
local M = {}
-- copied from https://github.com/nvim-telescope/telescope.nvim/wiki/Gallery
-- :Telescope find_files previewer=false theme=get_dropdown
local dropdown_theme = require('telescope.themes').get_dropdown({
results_height = 20,
-- winblend = 20;
width = 0.6,
prompt_title = '',
prompt_prefix = 'Files> ',
previewer = false,
borderchars = {
{ '─', '│', '─', '│', '╭', '╮', '╯', '╰' },
preview = { '─', '│', '─', '│', '╭', '╮', '╯', '╰' },
},
})
-- searches files on ~/.config
M.xdg_config = function()
require("telescope.builtin").find_files({
prompt_title = "XDG-CONFIG",
previewer = false,
find_command={'fd','--no-ignore-vcs'},
sorting_strategy = "ascending",
file_ignore_patterns = { "lua-language-server", "chromium" },
layout_config = { width = 0.7 },
cwd = "~/.config",
-- width = 0.6,
layout_config = { height = 0.3 },
layout_config = { width = 0.5 },
results_height = 20,
hidden = true,
previewer = false,
borderchars = {
{ '─', '│', '─', '│', '╭', '╮', '╯', '╰' },
preview = { '─', '│', '─', '│', '╭', '╮', '╯', '╰' },
},
})
end
-- mapped to F8
-- searches files on ~/.config
M.wiki = function()
require("telescope.builtin").find_files({
prompt_title = "Search wiki",
previewer = false,
find_command={'fd','--no-ignore-vcs'},
sorting_strategy = "ascending",
-- file_ignore_patterns = { "lua-language-server", "chromium" },
layout_config = { width = 0.7 },
cwd = "~/.dotfiles/wiki/",
-- width = 0.6,
layout_config = { height = 0.3 },
layout_config = { width = 0.5 },
results_height = 20,
hidden = true,
previewer = false,
borderchars = {
{ '─', '│', '─', '│', '╭', '╮', '╯', '╰' },
preview = { '─', '│', '─', '│', '╭', '╮', '╯', '╰' },
},
})
end
-- mapped to F8
-- searches opened buffers
M.buffers = function()
require("telescope.builtin").buffers({
prompt_title = "BUFFERS",
sorting_strategy = "ascending",
file_ignore_patterns = { "lua-language-server", "chromium" },
-- cwd = "~/.config",
previewer = false,
layout_config = { height = 0.3 },
layout_config = { width = 0.5 },
hidden = true,
})
end
-- mapped to <leader>b
M.nvim_files = function()
require("telescope.builtin").find_files({
prompt_title = "NVIM FILES",
previewer = false,
find_command={'fd','--no-ignore-vcs'},
sorting_strategy = "ascending",
file_ignore_patterns = { ".git" },
cwd = "~/.config/nvim",
hidden = true,
})
end
-- mapped to <leader>n
-- searches on ~/.dotfiles
M.search_dotfiles = function()
require("telescope.builtin").find_files({
prompt_title = "DOTFILES",
find_command={'fd','--no-ignore-vcs'},
shorten_path = true,
sorting_strategy = "ascending",
cwd = vim.env.DOTFILES,
hidden = true,
previewer = false,
layout_config = { height = 0.3 },
layout_config = { width = 0.5 },
})
end
-- mapped to Ctrl-p
-- searches on ~/.dotfiles
M.search_oldfiles = function()
require("telescope.builtin").oldfiles({
prompt_title = "OLDFILES",
previewer = false,
shorten_path = true,
sorting_strategy = "ascending",
-- cwd = vim.env.DOTFILES,
hidden = true,
layout_config = { height = 0.3 },
layout_config = { width = 0.5 },
})
end
-- mapped to Ctrl-Alt-o
-- searches on ~/.dotfiles
M.grep_dotfiles = function()
require("telescope.builtin").live_grep({
prompt_title = "GREP DOTFILES",
shorten_path = true,
sorting_strategy = "ascending",
cwd = vim.env.DOTFILES,
hidden = true,
})
end
-- mapped to
M.grep_wiki = function()
local opts = {}
opts.hidden = true
opts.search_dirs = {
"~/.dotfiles/wiki",
}
opts.prompt_prefix = ">"
opts.prompt_title = "Grep Wiki"
opts.path_display = { "smart" }
require("telescope.builtin").live_grep(opts)
end
return M
```
## theme
```lua
-- File: /home/sergio/.config/nvim/lua/core/theme.lua
-- Last Change: Wed, 2023/11/22 - 12:03:46
function isDayOrNight()
local currentTime = os.date("*t")
local currentHour = currentTime.hour
local startDaytimeHour = 6
local endDaytimeHour = 18
if currentHour >= startDaytimeHour and currentHour < endDaytimeHour then
-- return "day"
return "dawnfox"
else
-- return "night"
return "tokyonight"
end
end
local colorscheme = isDayOrNight()
local status_ok, _ = pcall(vim.cmd, "colorscheme " .. colorscheme)
if not status_ok then
vim.notify("colorscheme " .. colorscheme .. " not found!")
return
end
```
| voyeg3r |
786,038 | Mixing Clean Architecture | TL;DR This article gently introduces the CleanMixer tool, which is helpful for visualization and... | 0 | 2021-08-10T13:29:43 | https://dev.to/miros/mixing-clean-architecture-428c | elixir, erlang, architecture, codequality | **TL;DR** This article gently introduces the [CleanMixer](https://github.com/miros/clean_mixer) tool, which is helpful for visualization and control of elixir project architecture. Throughout this article, CleanMixer is used as a backbone theme for introducing architecture principles, best practices, and their reasons.
**Disclaimer** Most of the theoretical material covered here comes from Robert Martin's excellent book "Clean Architecture".
### Introduction
First of all, let's introduce a definition of what a component is. Any set of source files can be a component. It is something physical, for example, a namespace (in Elixir, that is a set of files in a separate folder with the same module name prefix for it). It could be an umbrella app; it could be a hex package.
If we are talking about a component from a logical point of view, a component is an abstraction of some functionality, in other words, some [DDD Bounded Context](https://martinfowler.com/bliki/BoundedContext.html), some functional area or business capability. Or it could be a purely technical abstraction — for example, an adapter for accessing Kafka. The relationship between components is physical. The source file of one component should mention the module's name or the source file of another component. It should not be confused with logical coupling, where one component discovers its dependencies only at runtime using Dependency Injection.
Let's use the [CleanMixer](https://github.com/funbox/clean_mixer) tool to visualize the architecture of some imaginary project and see what the picture looks like when the principles are violated.
```bash
mix clean_mixer.plantuml -v
```

What are the problems here?
Firstly, it is unclear which component is the domain or the core of the application. Which component contains the most critical business logic of our application? For what purpose was this service created?
Secondly, the layers are poorly visible. It is not clear where the Domain Core is, but on the other hand, it is also not clear where all the adapters and infrastructure components are. For example, the Queues component is used by many other components. Is it an important domain object, or is it some small detail of interaction with the outside world? It is unclear.
As a result, the whole picture looks tangled. Everything depends on everything else, and it is not clear where the essential parts are. The main reason for this is cyclic dependencies, and cycles in dependencies are a strong indicator of a Stable Dependency Principle violation. We will talk about it very soon. In the picture there are two kinds of arrows: red ones, where the principle is violated, and black ones, where it is not.
Another problem is that all the components are concrete, while it is better to depend on abstractions. If one component depends on an abstraction, a clear boundary is drawn between the components, and thanks to this boundary these components can change vastly and independently of each other without an avalanche-like propagation of changes through the system. The Abstractness metric here is minimal for most components, meaning they contain no interface definitions (Elixir behaviours).
Now, on to the principles.
### Principles of cohesion
How are the components formed? What are they created from? We will cover two principles here: the Common Closure Principle and the Common Reuse Principle.
### Common Closure Principle
This principle says that files that change for the same reason must reside in the same component. And naturally, the opposite: files that change for different reasons and at different rates should be located in different components.
These architectural principles are very similar to the more widely known SOLID principles, only more generalized. The SOLID counterpart of the Common Closure Principle is, of course, the mighty Single Responsibility Principle.
This principle follows from the fact that maintainability, that is, the convenience of maintaining a project, is more critical than reusability. When some functionality is in a single place, i.e., in one component, it is naturally more convenient to change it. On the other hand, if functionality is not split into a bunch of small independent pieces, it is harder to reuse, and a client should not depend on what it does not use (the Interface Segregation Principle). Maintainability is especially important in the early stages of a project, when producing a lot of code, and preferably good, understandable code, matters more than trying to package this code for reuse in other hypothetical projects.
### Common Reuse Principle
The next one is the Common Reuse Principle: files that are used together must be in the same component. Sounds very familiar, and indeed, this architectural principle is a generalization of the SOLID Interface Segregation Principle. It follows from the fact that if a component has good cohesion and implements some logically integral piece of functionality, its files should have many dependencies among themselves. It is difficult for clients to depend on one file of such a component without depending on the others, because they need all the functionality this component implements. At the same time, clients don't want to use a small piece of a large multipurpose component that has everything in the world: you should not have to depend on what you don't need. Files that are loosely related to the others should not be in that component, since its clients might not want to depend on them.
### Principles of coupling
Now let's talk about coupling principles: how the components are related. Very often, arbitrary relations between components indicate that something is wrong with cohesion as well. A relationship between components in this context means a source-file dependency (code in a source file of one component refers to a module name of another component).
### Acyclic Dependency Principle
The first and most critical principle is the Acyclic Dependency Principle. It's straightforward: there should be no cycles in the component dependency graph, and you can always get rid of them. Dependency cycles make components interdependent and make independent work on different parts of the system more difficult.
For example, let's say our components are hex packages. Each hex package has a version. Suppose we made changes to package B that forced us to increment its major version. Now we need to update all clients of this package, so we need to update package A. And since we changed something in package A, we need to update its version in package C. Because of the circular connection of the three components, we also need to update the version of package C in package B. So if we make changes to one component, we need to change every component in its cycle.
All three components physically form one monolithic system. If these components were microservices, it would mean a nasty lockstep deployment of all of them at once. In the world of microservices, that is the nightmarish beast known as a distributed monolith.

### Breaking Cycles
How do we break the cycles? There are two approaches.
#### Breaking Cycles with a new component
One way is to move the common code into a new component. Imagine we have two components: Happy Doge and Good Doge. But, as we know, all Doges are good, happy doggos. Therefore, Happy Doge uses the docility of Good Doge, and Good Doge uses the happiness of Happy Doge. We can extract this shared functionality into a new base Doge component and use it from both of them.

It's a great feeling when, while trying to avoid a circular dependency, we suddenly realize that our system was missing some business-related component (a bounded context in DDD).
#### Breaking Cycles with DI
The second way to break a cycle is to reverse the direction of one dependency. If you have a cyclic graph, it is enough to point one dependency in the opposite direction to break it. How? Let's assume we have a domain component that uses some functionality from the authorization component. But we do not want the domain to depend on anything; we want it to be the core of the system. What should we do? Within the domain component, we define an abstract interface and implement it inside the authorization component. The domain no longer depends on any functionality in the authorization component. Instead, the authorization component must implement the abstract interface that the domain requires of it, and the concrete implementation is injected into the domain component at runtime. That is good old-fashioned dependency injection, which, in my opinion, is highly underused in Elixir.

The most straightforward way to define an interface in Elixir is a behaviour. Every time I say interface, you can think behaviour.
```elixir
defmodule Domain.UseCases.CreateUser do
  alias Domain.User

  @spec create_user(String.t(), Domain.UserRegistry.t()) :: {:ok, User.t()} | {:error, term}
  def create_user(username, user_registry) do
    case user_registry.exists?(username) do
      {:ok, true} ->
        # ...

      _other ->
        # ...
    end
  end
end

defmodule Domain.UserRegistry do
  @type t :: module
  @callback exists?(String.t()) :: {:ok, boolean} | {:error, term}
end

defmodule Auth.LDAP do
  alias Domain.UserRegistry

  @behaviour UserRegistry

  @impl UserRegistry
  def exists?(_username) do
    {:ok, true}
  end
end
```
### Stable Dependency Principle
Do you remember that odd red arrow that was on the opening diagram? It is time to make sense of it. That arrow was red because it violated the Stable Dependency Principle - dependencies should point in the direction of stability.
But what is stability? It is important to note that stability is not the opposite of volatility. It is not how often the source files of that component change. Stability is a definition in terms of dependencies. It determines how hard it is to change a component without breaking other components in your system. It measures how much work it takes to change it.

There are _stable_ components and _unstable_ components. Component A is a stable one. Three components use it. This means that it is responsible for them since they depend on it. If it wants to change its interface or internal functionality, these components may also need to change. Therefore, it makes the component stable. It's harder to change it.
Component B is unstable. It has no dependencies, and nobody uses it. Therefore, it can change as often as it wants.
Some components, by their very nature, must be volatile. We just want it. A good architectural principle is to divide components that change frequently and those that change rarely. Components that change frequently should be unstable. Components that rarely change can be stable.
Unstable components should depend on components that are more difficult to change and not vice versa. Because of their volatility, they may change frequently, and we don't want to modify a bunch of other components' code every time we change it (code smell known as the Shotgun Surgery).
Conversely, a volatile component should not be a dependency of a component that is difficult to change.
From this, we draw a simple conclusion. Dependencies should point in the direction of stability. Thus, if you go through the dependency graph, each next component in the system should be more stable than the previous ones.
To assess this more quantitatively, we need to introduce simple IN and OUT metrics. IN is the number of inbound dependencies, and OUT is the number of outbound dependencies. Those connections are source file-based.

For example, we have a Cc component. Inside it, there are two public files. They are used by two files from the Ca component, one from the Cb component and one from the Cd component. This means that its IN metric is equal to four.
### Instability metric

**I = OUT / (IN + OUT)**
Based on the IN and OUT metrics, you can calculate the Instability metric. The Instability of a component is the share of its outgoing connections among all its connections. If the Instability of a component is equal to one, then no one depends on the component; it has no reason not to change, so it can be a volatile component. If Instability is zero, then the component is very stable. Other components depend on it, so it is difficult for it to change; on the other hand, since it does not depend on other components, it will change only for important reasons: either an unavoidable change in business logic or deliberate refactoring decisions that hopefully won't change its interface.
Let's consider the case of violation of the Stable Dependency Principle. I think it is a bit easier to use the Stability metric. It is simply the inverted Instability.
**S = 1 − I = IN / (IN + OUT)**
Suppose initially we had a stable component in the system. We designed it this way, and it contains the core of the system's functionality, one that we expect not to change very often. But at some point in time, while adding some new functionality to the core of the system, one of the developers saw that the initially volatile component Flexible had the code he needed, and he just jumped to reuse it.

A new connection was created between a stable component and a volatile component. This connection violates the Stable Dependency Principle. The stable component has a Stability metric of 2/3: it has two incoming connections among a total of three. The flexible component has a Stability metric of 1/3: it has one incoming connection and two outgoing ones. We remember that under the Stable Dependency Principle, stability should increase in the direction of connections. Ideally, the Stability of the component that is lower in the picture should be greater than the Stability of the component that is higher in the picture, but here it is just the opposite. This connection is broken, and so is our original desire for a component that rarely changes to be stable.
A component that changes frequently, an unstable component, should have few or no incoming connections, and yet they have appeared.
Solution? Once again, it is the Dependency Inversion. Component C defines the interface it needs. Component D implements this interface, and the necessary functionality is injected into component C.

### Stable Abstraction Principle
The next principle is the Stable Abstraction Principle. Stable components should be abstract. Indeed, we said that some components in the system must be stable, and many incoming connections to them can not be avoided. But it means that when this component changes, much other code in the system can break. This can be prevented by depending on abstractions. Stable components should be abstract, thereby loosening coupling by defining explicit interfaces. Unstable components need to be concrete because they implement some specific functionality, and that is why they have value. Dependencies must point in the direction of stability, and therefore dependencies must point in the direction of increasing abstractness.
You can take a simple metric A, the share of abstract files among all the files in our component.
**A = AbstractFiles / TotalFiles**
### The Main Sequence
We can visualize these metrics of Abstractness and Instability. This plot has two zones to avoid.

The first is the Zone of pain. It contains very concrete components. And they have very high stability. These are difficult to change because if you try to change such components, you may need to change a bunch of others.
On the other hand, there is the Zone of Uselessness. The components here are abstract but very unstable. It is unclear why a component that changes all the time, and that no one depends on, defines abstractions. This is most likely some kind of rubbish, like unnecessary interfaces.
We can then assume that the most valuable and problem-free components will be the ones as far away as possible from these two extreme points. These components form what Bob Martin called the Main Sequence. Distance from this main sequence can be a good metric: the further a component is from the main sequence, the more suspicious it is, so it is prudent to take a close look at it. Why did such an outlier appear in the system, and what is wrong with it? Maybe its connections are wrong and a Dependency Inversion is needed somewhere. Or maybe you need to repartition the responsibilities between components or introduce some new ones.
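Bob Martin's usual formula for that distance is D = |A + I - 1|, where 0 means the component sits right on the main sequence and 1 means it sits in a corner of one of the two zones. A minimal sketch:

```python
def distance_from_main_sequence(abstractness: float, instability: float) -> float:
    """D = |A + I - 1|: 0 lies on the main sequence,
    1 sits in a corner of the Zone of Pain or the Zone of Uselessness."""
    return abs(abstractness + instability - 1)

print(distance_from_main_sequence(0.0, 0.0))  # 1.0 -> Zone of Pain
print(distance_from_main_sequence(1.0, 1.0))  # 1.0 -> Zone of Uselessness
print(distance_from_main_sequence(0.5, 0.5))  # 0.0 -> on the main sequence
```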
There is a particular case, and it is essential to mention it. These are nearly immutable components. Although they are in the pain zone, they are not dangerous because they rarely change. The classic example is the standard libraries of the language. All of our code is permeated with it, but this is not a big deal because the developers of these libraries take on a solid obligation to define stable interfaces and functionality.
### Clean and Hexagonal Architectures
Let's try to bring these principles together in the frameworks of Clean and Hexagonal Architectures. I will talk about them together as if they were the same thing. Because indeed, they have the same basic principles.
The first principle is that the domain is always at the center of all dependencies. Domain neither depends on nor directly uses any source files from other components. Only other components have physical connections to a domain because they implement the interfaces that are defined in it.
While Clean Architecture considers the domain to be central, Hexagonal architecture emphasizes that the inner parts of the architecture define the interfaces implemented by the outer parts.
Inside the system, in its core is the Domain, along the edges, there are all sorts of adapters to implement API, query the database, access Kafka, etc... But, fundamentally, these two architectures are very similar in their values.
To reiterate, the Domain is the most stable part of the application. Therefore, all other components depend on it. But these dependencies should be to data structures that are fundamental entities of a specific domain. Or they must be abstract interfaces that are defined by the Domain.
Both architectures have many things in common in their layering as well. In the center is the Domain Entities Layer. These are pure data structures and entities with basic behavior that is stable in the Domain. If, for example, you want to create a new application with new functionality in the same domain, then, theoretically, you could take these structures out into a new application and build some new functionality based on them with no or few changes. This is the most stable part of the application - the Domain, the part that changes relatively rarely and only for important reasons: changes in your knowledge about the domain.
Around the inner core is the Domain Services Layer, that is, use cases. These are those entities that implement business processes using the core Domain structures. They, by definition, are application-specific. Therefore, they are less stable and need to change more often.
Next is the Application Services Layer. This would include technical but important concerns such as transaction management for the database.
Along the edges of the system are Adapters. This is the lowest level logic - i/o. Those can be HTTP API controllers or clients, Kafka, queues, etc. When moving from the center of the system to the edges, stability and reusability decrease.
**Reusability: Domain entities > Domain Services > Application Services > Adapters**
**Stability: Domain entities > Domain Services > Application Services > Adapters**
### Clean Mixer Example
Let's try to analyze a specific example. Look at the picture. First, we see where the core of our system is. It consists of `im` (Instant Messaging) and `im/business_chats`.
It is important to note that all dependencies for components inside our Core Domain are only incoming. Therefore, these components are very stable.
At the top, we have the app_server component, and it is very unstable because it has many outgoing dependencies. It contains all the dirty initialization: building supervisor trees, configuring dependency injection, etc.
We also can see the `app_services` components — it's the Application Services Layer, which lies between the very stable application core and the very unstable component app_server.
Red arrows point to Stable Dependency Principle violations. But the difference in Stability metrics is not that big, so these violations do not seem to bring any problems worth addressing at this specific moment.
Also, note that components within the Core Domain are more abstract than other components (the A metric).
It is also quite interesting that two very stable components have many incoming connections: the `xmpp` and `xep` components. Those implement low-level protocols. This is the kind of functionality that someday soon we might like to move to a separate hex package and use in other applications that work with these protocols. Therefore, we want these components to be stable to be more conveniently reused in other applications.
It is also worth mentioning the `data_io/interface` component. This is an entirely abstract component that contains only the most basic data structures and interface definitions. It was created to break cyclic dependencies.
### Automated architecture tests
But fixing the current state of the system is only the first step. How do we automatically check for architectural regressions? This can be done reliably only by automated tests. This is especially important for a monolith: without rigorous automated control, it can quickly regress into a Big Ball of Mud.
Some examples of tools built just for that are ArchUnit for Java and NetArchTest for C#. And now Elixir has Clean Mixer.
`test/arch_test.exs`
```elixir
ExUnit.start(capture_log: true, trace: true)

defmodule MessagingServer.Architecture.ArchTest do
  use ExUnit.Case

  alias CleanMixer.Workspace

  @domain [
    "im",
    "im/business_chats"
  ]

  @core_infrastructure [
    "xmpp",
    "xep"
  ]

  @application_services [
    "jabber/app_services",
    "im/app_services",
    "data_io/app_services"
  ]

  @adapters [
    "chatbot_platform_api",
    "app_server",
    "network"
  ]

  @interface_components [
    "data_io/interface"
  ]

  setup_all do
    workspace = CleanMixer.project(include_hex: true) |> Workspace.new(timeout_ms: 30_000)
    %{ws: workspace}
  end

  defp dependencies_of(workspace, name) do
    Workspace.dependencies_of(workspace, name)
    |> Workspace.reject_hex_packs()
    |> Enum.map(& &1.target)
  end

  defp dependency?(workspace, source_name, target_name) do
    Workspace.dependency?(workspace, source_name, target_name)
  end

  defp component(workspace, name) do
    Workspace.component(workspace, name)
  end

  defp format_cycle(cycle) do
    cycle |> Enum.map(& &1.name) |> Enum.join(" -> ")
  end
```
The rules are described in the form of regular ExUnit tests. I split the components in the system into several layers, which are similar to the layers in the Clean Architecture. In each subsequent layer, stability decreases, and instability increases. The most stable layer is the Core Domain, then goes the Core Infrastructure - components that implement protocols. The next level is Application Services. And then the Adapters.
There are two exceptional cases. The first is the interface components, which should not depend on anything themselves. The second is the protocol components, which should be as stable as possible, since we want them to have a minimum of outgoing dependencies for easy subsequent reuse.
In the first test, we don't want our system to have circular dependencies between components.
```elixir
test "there shall be no circular dependencies between components", %{ws: ws} do
  assert Workspace.component_cycles(ws) |> Enum.map(&format_cycle/1) == []
end
```
In the next test, we don't want Core Domain components to depend on application services.
```elixir
for comp <- @domain, app_service <- @application_services do
  test "domain component #{comp} shall not depend on application service #{app_service}", %{ws: ws} do
    refute dependency?(ws, unquote(comp), unquote(app_service))
  end
end
```
We don't want our Domain Components to depend on Adapters:
```elixir
for comp <- @domain, adapter <- @adapters do
  test "domain component #{comp} shall not depend on adapter #{adapter}", %{ws: ws} do
    refute dependency?(ws, unquote(comp), unquote(adapter))
  end
end
```
We check that the Core Infrastructure is independent of app_services:
```elixir
for comp <- @core_infrastructure, app_service <- @application_services do
  test "core infrastructure component #{comp} shall not depend on application service #{app_service}", %{ws: ws} do
    refute dependency?(ws, unquote(comp), unquote(app_service))
  end
end
```
We check that the utils component is independent of any other component. Naturally, we want it to be stable and reusable.
```elixir
test "`utils` shall have no dependencies", %{ws: ws} do
  assert dependencies_of(ws, "utils") == []
end
```
All in all, we check that each next, more unstable layer depends only on more stable components. In fact, we are simply checking the Stable Dependencies Principle.
We can run these tests with the following command.
```bash
> mix run --no-start test/arch_test.exs
```
Running these tests on each commit in CI gives me a great deal of assurance that our architecture remains what I expect.
#### To summarize:
* Prohibit cyclical dependency graphs
* Separate domain and infrastructure. The domain should be at the center of your application, infrastructure at the edges.
* Explicitly define interfaces and control dependency directions using Dependency Inversion; prefer to depend on interfaces, not implementations.
* Visualize the current state and automate checks for architectural regressions | miros |
784,507 | Website builders that let you design a website using HTML and CSS | Are there any website builders that let you design a website from scratch? I mean website builders... | 0 | 2021-08-07T15:15:38 | https://dev.to/riyanswat/website-builders-that-let-you-design-a-website-using-html-and-css-5c1n | help, webdev | Are there any website builders that let you design a website from scratch? I mean website builders that let you use HTML and CSS etc. Like W3schools spaces. | riyanswat |
784,524 | Nextjs vs vuejs - which is better | Next.js is a framework built on top of React, while Vue.js is a separate framework similar to React and often described as combining ideas... | 0 | 2021-08-07T18:09:21 | https://dev.to/omkarchari619/nextjs-vs-vuejs-which-is-better-4jmj | | Next.js is a framework built on top of React, while Vue.js is a separate framework similar to React; it is often described as combining ideas from React and Angular (React + Angular = Vue). | omkarchari619 |
784,629 | Laravel for Beginners : a Quick Guide - 1 | Laravel PHP’s MVC framework, say a popular one too It is developed by Taylor Otwell, thank him for... | 14,020 | 2021-08-08T16:03:03 | https://dev.to/kartikbhat/laravel-for-beginners-a-quick-guide-47mh | php, mvc, laravel8 | **[Laravel](https://laravel.com/)**
Laravel is PHP's MVC framework, and a popular one too.
It was created by [Taylor Otwell](https://twitter.com/taylorotwell) (thank him for making our web development somewhat easier); its current version is 8.x.
### What do you need before diving into it?
Cool :)
> basics of HTML, CSS, JS and PHP are enough (I hope you know these at least, or learn them first)
I am sure developing a web application using Laravel is easy (I hope so) and interesting too :)
> ## Laravel is a PHP Framework
### What is a Framework?
Simply put, a framework is a set of pre-developed files that makes our development easier and faster; we code our ideas on top of these files...
### Why do we need a Framework?
It reduces development time, saves us from building even the simple components ourselves, and finally relieves us from scratching our heads over managing data flow between multiple files (but of course we frequently still need to scratch our heads for logical reasons :| )
If you feel **you are creative**, **you have patience**, and you are **eager to explore**, then Laravel will surely thrill you...
Lets Begin,
> **Laravel follows an MVC architecture** ...
**What is MVC?**
* MVC is an architecture where Database Operations, Business Logic and the View (visual representation) are kept separate.
* It makes code readability easier
* also makes our work easier too :)
> **MVC - Model - View - Controller**
* Model - Place to write all database related code
* View - Where we write UI related code
* Controller - Holds the actual logic, acts as a mediator between Model and View
> **System Requirements**
* Apache Server - install either [WAMP](https://www.wampserver.com/en/) (easier) /[XAMPP](https://www.apachefriends.org/download.html)
* [Composer](https://getcomposer.org/)
* Browser
* IDE - I prefer [VSCode](https://code.visualstudio.com/download)
I hope you were done with installation :)
then to run server
> ### If you are using WAMP
just double click on 
you can observe the transition status of WAMP server
Starting/Error - 
Partially Completed - 
Started Successfully - 
> ### If you are using XAMPP
just double click on 
then within its interface **start** Apache and MySQL

> ### Then open the browser and hit **http://localhost**
If a page related to WAMP/XAMPP loads properly, then it is clear that the server started correctly; otherwise there might be some problem with your system.
> ### Open **cmd** (a command prompt) and enter **composer**
if it shows like this

it confirms that Composer was installed successfully
Got Tired after these steps :(
Don't worry you are just one step away from **Laravel** installation...
come lets do that one too :)
* Open a command prompt at your desired location in your system
* Now install Laravel using Composer
* Enter the command
`composer create-project laravel/laravel my-app`
wait for a while to install all necessary dependencies
* then, Bingo
> LARAVEL got installed, then you can see these messages

> **Then how do we run this application?**
That too is still simple:
open a command prompt inside the created Laravel project

then enter
`php artisan serve`
You will get an URL, where our Laravel app is running

Ok... the last step :)
last one...
open the browser, then hit the URL provided in the cmd prompt...
Hurray !!!!!
Our Laravel Application is running

Enough for now right!!!... Ok
I hope you gained some insight into **What is Laravel?** and **How to install it?**
Let's meet in the next topic on the Laravel application structure and how MVC is embodied in it...
Bye :)
| kartikbhat |
784,661 | 🔥 Announcing the "Angular Cookbook" | *Taking a deep breath in relief.... Sigh! I'm really, really happy today on becoming a worldwide... | 0 | 2021-08-07T17:37:06 | https://dev.to/codewithahsan/announcing-the-angular-cookbook-6d6 | angular, webdev, books, typescript | *Taking a deep breath in relief.... Sigh!
I'm really, really happy today on becoming a worldwide published Author. The 🅰️ngular Cook🅱️ook has been released today to both the [Packt store](https://www.packtpub.com/product/angular-cookbook/9781838989439?utm_source=github&utm_medium=repository&utm_campaign=9781838989439) and on the [Amazon store](https://www.amazon.com/Angular-Cookbook-recipes-enterprise-scale-development-dp-1838989439/dp/1838989439/?) as well.

## Cookbook What?
If you are an 🅰️ngular developer and are always wondering what the best
practices are for this and that in Angular, this is a book that I've written **JUST
FOR YOU**! I've been working with Angular for more than 8 years, to the point
where **Google** acknowledged me as a
[Google Developers Expert in Angular](https://ahsanayaz.com/gde). And I've
compiled over 80 recipes that **every** Angular developer should know. Be it
**Component Communication, Performance Optimization, Routing Strategies,
Animations,** or even **Progressive Web Apps with Angular**, I've got you
covered with more than **600 pages** of 🅰️ngular goodness! 😎
## A sneak peek at the content
Open the [following link](https://www.amazon.com/Angular-Cookbook-recipes-enterprise-scale-development-dp-1838989439/dp/1838989439/?asin=B08VTWYJ7H&revisionId=&format=2&depth=1) to see a **sample** of the Kindle version (PDF).
## What will you learn through this book?
✔️ Gain a better understanding of how components, services, and directives work
in Angular
✔️ Understand how to create Progressive Web Apps using Angular from scratch
✔️ Build rich animations and add them to your Angular apps
✔️ Manage your app's data reactivity using RxJS
✔️ Implement state management for your Angular apps with NgRx
✔️ Optimize the performance of your new and existing web apps
✔️ Write fail-safe unit tests and end-to-end tests for your web apps using Jest
and Cypress
✔️ Get familiar with Angular CDK components for designing effective Angular
components
## Who this book is for
The book is for intermediate-level 🅰️ngular web developers looking for
actionable solutions to common problems in Angular enterprise development.
Mobile developers using Angular technologies will also find this book useful.
Working experience with JavaScript and TypeScript is necessary to understand the
topics covered in this book more effectively.
## Should you buy a copy?
YEAH, of course. How'll I get rich otherwise?

Just kidding 😄. I have spent a lot of time learning lots of techniques with
Angular. For the most part, it's easy peasy. But some things you always learn
**the hard way**. And I'd never want anyone to unnecessarily spend extensive
time learning those techniques. The purpose of this book is to make the
process easier for you to become an **Expert** in developing Enterprise
applications with Angular.
## Grab your copy today
[Get it from Amazon](https://www.amazon.com/Angular-Cookbook-recipes-enterprise-scale-development-dp-1838989439/dp/1838989439/?)
## Connect with me
[](https://twitch.tv/codewithahsan)[](https://www.github.com/ahsanayaz)[](https://www.linkedin.com/in/ahsanayaz)[](https://twitter.com/muhd_ahsanayaz)[](https://instagram.com/muhd.ahsanayaz)[](https://facebook.com/muhd.ahsanayaz)[](https://www.tiktok.com/@muhd.ahsanayaz)[](https://discord.gg/rEBSSh926k)
| codewithahsan |
784,695 | How to write a bit safer types in TypeScript | In this article you will learn: how to make your typescript functions even more safer some tricky... | 0 | 2021-08-07T20:32:33 | https://dev.to/captainyossarian/how-to-write-a-bit-safer-types-in-typescript-49ge | typescript | In this article you will learn:
- how to make your typescript functions even more safer
- some tricky differences between types and interfaces
- how to make types for your data structure more safe
**Part 1** - safer functions
Consider this example which is stolen from TypeScript docs:
```typescript
type Animal = { tag: 'animal' }
type Dog = Animal & { bark: true }
type Cat = Animal & { meow: true }
declare let animal: (x: Animal) => void;
declare let dog: (x: Dog) => void;
declare let cat: (x: Cat) => void;
animal = dog; // ok without strictFunctionTypes and error with
dog = animal; // should be ok
dog = cat; // should be error
```
Very simple code, nothing complicated.
`Animal` is a supertype for `Dog` and `Cat`.
There are a lot of TypeScript projects in the wild without `strict` flags. If you have the `strictFunctionTypes` flag active, please disable it.
After disabling it, you will see that `animal = dog` does not produce an error, despite the fact that it isn't `provably` sound.
*This is unsound because a caller might end up being given a function that takes a more specialized type, but invokes the function with a less specialized type. In practice, this sort of error is rare, and allowing this enables many common JavaScript patterns*
Of course, if you have a big project, you can't just turn on `strictFunctionTypes` and fix all the errors. This is not always possible. So how do we live without this flag?
Answer is simple. Just use generics.
```typescript
type Animal = { tag: 'animal' }
type Dog = Animal & { bark: true }
// generic is here
declare let animal: <T extends Animal>(x: T) => void;
declare let dog: (x: Dog) => void;
animal = dog; // error even without strictFunctionTypes
```
Almost forgot, prefer arrow function notation inside interfaces rather than method notation:
```typescript
// unsafe
interface Bivariant<T> {
call(x: T): void
}
// safe
interface Contravariant<T> {
call: (x: T) => void
}
```
**Part 2** - tricky differences between types and interfaces
Imagine you have untyped `handleRecord` function:
```typescript
interface Animal {
tag: 'animal',
name: 'some animal'
}
declare var animal: Animal;
const handleRecord = (obj:any) => { }
const result = handleRecord(animal)
```
You know that this function expects an object. You can replace `any` with the `object` type, but eslint will not be happy about this change and will suggest you use `Record<string, unknown>` instead.
```typescript
interface Animal {
tag: 'animal',
name: 'some animal'
}
declare var animal: Animal;
const handleRecord = (obj:Record<string, unknown>) => { }
const result = handleRecord(animal) // error
```
As you might have noticed, it still does not work, because interfaces do not get an implicit index signature by default, so we still can't pass the `Animal` object to `handleRecord`. So, what can we do to fix our type without breaking dependent types?
Just use `type` instead of `interface`.
```typescript
type Animal= {
tag: 'animal',
name: 'some animal'
}
declare var animal: Animal;
const handleRecord = (obj:Record<string, unknown>) => { }
const result = handleRecord(animal) // ok
```
**Part 3** - safer data structure
Consider this example:
```typescript
interface Animals {
dog: 'Sharky',
cat: 'Meout'
}
type AnimalEvent<T extends keyof Animals> = {
name: T
call: (name: Animals[T]) => void
}
```
Seems that `AnimalEvent` constructor type is perfectly fine. Yea, why not? Let's use it as a function argument or array element:
```typescript
const handleEvent = <T extends keyof Animals>(event: AnimalEvent<T>) => { }
// we would expect an error but it compiles
const arrayOfEvents: AnimalEvent<keyof Animals>[] = [{
name: 'dog',
call: (name: 'Meout') => { }
}]
// should be error but it compiles
handleEvent<keyof Animals>({
name: 'dog',
call: (name: 'Meout') => { }
})
```
There is definitely something wrong with our type because it allows us to represent invalid state. We all know that invalid state should not be representable if we use TypeScript.
Let's refactor it a bit:
```typescript
interface Animals {
dog: 'Sharky',
cat: 'Meout'
}
type EventConstructor<T extends keyof Animals> = {
name: T
call: (name: Animals[T]) => void
}
/**
* Retrieves a union of all possible values
*/
type Values<T> = T[keyof T]
// "Sharky" | "Meout"
type Test = Values<Animals>
// EventConstructor<"dog"> | EventConstructor<"cat">
type AnimalEvent = Values<{
[Prop in keyof Animals]: EventConstructor<Prop>
}>
const handleEvent = (event: AnimalEvent) => { }
// error
const arrayOfEvents: AnimalEvent[] = [{
name: 'dog',
call: (name: 'Meout') => { }
}]
// error
handleEvent({
name: 'dog',
call: (name: 'Meout') => { }
})
```
Instead of using a generic for the animal name, we have created a union of all possible `AnimalEvent` representations. Hence, illegal state is unrepresentable.
These techniques are easy to use and simple to understand.
| captainyossarian |
784,769 | How to Run a Minecraft Server on AWS For Less Than 3 US$ a Month | During the first weeks of the COVID-19 pandemic, back in april 2020 my son ask me to build a... | 0 | 2021-08-07T21:58:49 | https://sidoine.org/how-to-run-a-minecraft-server-on-aws-for-less-than-3-usd-a-month | aws, serverless, minecraft, tutorial |
During the first weeks of the COVID-19 pandemic, back in April 2020, my son asked me to build a Minecraft server so he could play in the same world as his school friend. After checking [some available services](https://clovux.net/mc/) (yeah, not so expensive after all), I chose to build a server on an EC2 instance. This article will explain how to optimize the cost 😜, based on the usage!
## Some Tools Used in the Article
### AWS
I want to rely only on AWS services, as I want to increase my knowledge of this big cloud offering. There is always one service you don't know! In this particular example I will use the following services:
* [EC2](https://aws.amazon.com/ec2/) (virtual servers in the cloud)
* [Lambda](https://aws.amazon.com/lambda/) (serverless functions)
* [Simple Email Service](https://aws.amazon.com/ses/) (Email Sending and Receiving Service)
### Minecraft
[Minecraft](https://www.minecraft.net/) is a popular sandbox video game. In this case I will focus on the Minecraft *Java Edition*, because the server version runs well on a Linux server, and my son runs a Debian laptop.
## Global Architecture of the Solution
During the first month operating the server, I noticed that my son was using it a couple of hours each day, and then the server was idle. It's built on an EC2 `t2.small` with an 8 GB disk, so I had a monthly cost of about **18 US$**. Not a lot, but I thought there was room for improvement! The main part of the cost is the EC2 compute cost (~17 US$), and I knew the server was not used 100% of the time. The general idea is to **start the server only when my son is using it**, but he doesn't have access to my AWS Console, so I needed to find a sweet solution!
Here are the various blocks used:
* an **EC2 instance**, the Minecraft server
* use **SES** (Simple Email Service) to receive e-mail, and trigger a Lambda function
* one **Lambda** function to start the server
* one **Lambda** function to stop the server
And that's it. My son uses it this way:
* he sends an e-mail to a specific and secret e-mail address, which starts the instance
* after 8h the instance is shut down by the Lambda function (I estimate that my son should not play Minecraft for more than 8h straight 😅)
## Let's Build it Together
### Build the EC2 Instance
This is the initial part: you must create a new EC2 instance. From the EC2 dashboard, click on `Launch Instance` and choose the *Amazon Linux 2 AMI* with the *x86* option.

Next you must choose the *Instance Type*. I recommend the `t2.small` for Minecraft. You will be able to change it after the creation.

Click on `Next: Configure Instance Details` to continue the configuration. Keep the default settings, and the default size for the disk (8 GB), as it's enough.
On the tag screen I generally provide a `Name` (it's then displayed in the EC2 instance list) and a `costcenter` (which I use for cost management later).

As for the Security Group, it is the equivalent of a firewall on EC2, and you must configure which ports will be accessible from the internet on your server. I add the SSH port and the Minecraft port (25565), as you can see on the following screen:

Then, to start the instance, you must select or create a key pair. It's mandatory and allows you to connect remotely to your EC2 instance later. In my case I am using an existing key pair, but if you create a new key don't forget to download the *private key file* to your laptop.

*Yes my key is named caroline. Why not?*
Then you must connect to your instance via SSH; I recommend this [guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) if you need help. Basically you must run this kind of command:
```
ssh -i my_private_key.pem ec2-user@public-ipv4
```
The `public-ipv4` is available in the instance list:

You first need Java. As newer builds of Minecraft (since 1.17) run only on Java 17, I recommend using Corretto (the Amazon Java distribution):
```
sudo rpm --import https://yum.corretto.aws/corretto.key
sudo curl -L -o /etc/yum.repos.d/corretto.repo https://yum.corretto.aws/corretto.repo
sudo yum install -y java-17-amazon-corretto-devel.x86_64
java --version
```
You should see something like:
```
openjdk 17.0.1 2021-10-19 LTS
OpenJDK Runtime Environment Corretto-17.0.1.12.1 (build 17.0.1+12-LTS)
OpenJDK 64-Bit Server VM Corretto-17.0.1.12.1 (build 17.0.1+12-LTS, mixed mode, sharing)
```
Thanks @mudhen459 for the research on this java issue ;)
And I want a dedicated user:
```
sudo adduser minecraft
```
To install Minecraft you can rely on the Minecraft server page [here](https://www.minecraft.net/en-us/download/server).
For example for the version `1.17.1` I can run the following:
```
sudo su
mkdir /opt/minecraft/
mkdir /opt/minecraft/server/
cd /opt/minecraft/server
wget https://launcher.mojang.com/v1/objects/a16d67e5807f57fc4e550299cf20226194497dc2/server.jar
sudo chown -R minecraft:minecraft /opt/minecraft/
```
> **⚠️ Warning regarding the Java version:**
> It seems that starting with Minecraft 1.17, a Java JRE 16 is now required (instead of Java JRE 8). Otherwise you will get an error like the one below.
> [This site](https://mcversions.net/download/1.16.5) gives you links to download older Minecraft versions if needed.
```
Exception in thread "main" java.lang.UnsupportedClassVersionError: net/minecraft/server/Main has been compiled by a more recent version of the Java Runtime (class file version 60.0), this version of the Java Runtime only recognizes class file versions up to 52.0
```
I have created a little service to avoid starting the server manually. I want the Minecraft process to start as soon as I start the server.
To do that I have created a file under `/etc/systemd/system/minecraft.service` with the following content:
```
[Unit]
Description=Minecraft Server
After=network.target
[Service]
User=minecraft
Nice=5
KillMode=none
SuccessExitStatus=0 1
InaccessibleDirectories=/root /sys /srv /media -/lost+found
NoNewPrivileges=true
WorkingDirectory=/opt/minecraft/server
ReadWriteDirectories=/opt/minecraft/server
ExecStart=/usr/bin/java -Xmx1024M -Xms1024M -jar server.jar nogui
ExecStop=/opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p strong-password stop
[Install]
WantedBy=multi-user.target
```
Then make systemd aware of the new service and enable it at boot:
```
chmod 664 /etc/systemd/system/minecraft.service
systemctl daemon-reload
systemctl enable minecraft.service
```
More information on systemd [here](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/chap-managing_services_with_systemd#sect-Managing_Services_with_systemd-Unit_Files).
> Now if you restart the EC2 instance, a Minecraft server should be available! You can check ✅ this first step!
> Note that the public IPv4 is dynamic by default. I recommend setting up a static `Elastic IP` for this server ([here!](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-associate-static-public-ip/)) in order to get a fixed IP.
### Build the Start Scenario
Let's first create our Lambda function. Go into **Lambda**, and click on `Create function` to build a new one. Name it `mc_start` and use a `Node.js 14.x` or later runtime.
You should then see this type of screen:

Replace the content of `index.js` file with the following:
```
const AWS = require("aws-sdk");
var ec2 = new AWS.EC2();
exports.handler = async (event) => {
try {
var result;
var params = {
InstanceIds: [process.env.INSTANCE_ID],
};
var data = await ec2.startInstances(params).promise();
result = "instance started"
const response = {
statusCode: 200,
body: result,
};
return response;
} catch (error) {
console.error(error);
const response = {
statusCode: 500,
body: "error during script",
};
return response;
}
};
```
In Configuration, set the following:
* add an environment variable named `INSTANCE_ID` whose value is the instance ID of your Minecraft server (something like `i-031fdf9c3bafd7a34`).
* the role permissions must include the right to start our EC2 instance, like this:

In Simple Email Service, it's time to create a new *Rule Set* in the `Email Receiving` section:

Click on `Create rule` inside `default-rule-set`. Take note that the Email Receiving feature is only available today in 3 regions: `us-east-1`, `us-west-2` and `eu-west-1` (source [here](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email.html)).
If SES receives an email for this particular identity:

It invokes a Lambda function:

> You must [add the domain](https://docs.aws.amazon.com/ses/latest/dg/verify-addresses-and-domains.html) to the `Verified identities` to make this work. It's also necessary to publish an MX entry in order to declare SES as the email receiver for a specific domain or subdomain (more info [here](https://docs.aws.amazon.com/ses/latest/dg/receiving-email-setting-up.html)).
### Build the Stop Scenario
This time we want to stop the instance after 8h, using a simple Lambda function.
Let's create it: go into **Lambda**, and click on `Create function` to build a new one. Name it `mc_shutdown` and use a `Node.js 14.x` or later runtime.
Replace the content of `index.js` file with the following:
```
const AWS = require("aws-sdk");
var ec2 = new AWS.EC2();
exports.handler = async (event) => {
try {
var result;
var params = {
InstanceIds: [process.env.INSTANCE_ID],
};
var data = await ec2.describeInstances(params).promise();
var instance = data.Reservations[0].Instances[0];
if (instance.State.Name !== "stopped") {
var launch_time = new Date(instance.LaunchTime);
var today = new Date();
result = "instance running";
if ((today - launch_time) / 3600000 > process.env.MAX_HOURS) {
console.log("stopping the instance...");
var stop_data = await ec2.stopInstances(params).promise();
result = "instance stopped";
}
} else {
result = "instance not running";
}
const response = {
statusCode: 200,
body: result,
};
return response;
} catch (error) {
console.error(error);
const response = {
statusCode: 500,
body: "error during script",
};
return response;
}
};
```
In Configuration, set the following:
* add an environment variable named `INSTANCE_ID` whose value is the instance ID of your Minecraft server (something like `i-031fdf9c3bafd7a34`).
* add an environment variable named `MAX_HOURS` whose value is the number of hours allowed after startup (like `8` for 8 hours).
* the role permissions must include the rights to describe and stop our EC2 instance, like this:

We add a trigger to fire the task every 20 minutes:

Hurray, the configuration is done!
## Conclusion
This setup is working nicely here; my son is happy because he can start the instance himself when he needs it. I am happy because it reduces the cost of this service **a lot**. Over the last 3 months I can see that the EC2 compute cost for this server was less than 1 US$ 😅 (around 17 US$ before the optimization), so it is **95% less expensive**!
Currently the configuration is made manually in the console; I would love to spend some time changing that one day, using for example the [CDK toolkit](https://docs.aws.amazon.com/cdk/latest/guide/home.html).
It's also probably possible to store the Minecraft world on S3 instead of the instance's EBS disk (some $$ to save there, but not a lot).
It was a very fun project to build using multiple AWS services! Do you see other usages of dynamically boot EC2 instances using Lambda functions? Let me know in the comments! | julbrs |
784,788 | My First Post on Dev.to | Hey dear! This is my first post on Dev.to 😀 Let ME just introduce myself Name - Oluwakeji... | 0 | 2021-08-07T23:26:21 | https://dev.to/onabajooluwakeji/my-first-dev-to-post-4jod | firstpost, welcome, programming | Hey dear!
This is my first post on Dev.to 😀
Let me just introduce myself:
Name - Oluwakeji Onabajo
Country - Nigeria (I love ❤ traveling)
Occupation - Student
Languages I know as of now: PHP, JavaScript, HTML and CSS
Currently learning new programming languages and technologies.
I've read a lot of cool articles on here and finally decided to sign up for an account on Dev.to. Looking forward to creating lovely content here.
Show some love 💕 | onabajooluwakeji |
784,885 | File renaming script | https://github.com/picardcharlie/python101/blob/master/projects/file_name_change.py A script to... | 0 | 2021-08-08T02:36:42 | https://dev.to/picardcharlie/file-renaming-script-1pil | https://github.com/picardcharlie/python101/blob/master/projects/file_name_change.py
A script to rename all the files in a directory.
Will ignore all other directories within.
Gives a warning before running to stop accidents.
Did not test it, do not trust myself. | picardcharlie | |
784,893 | There Will Always Be More Work | Software engineering is an interesting field because the work never really ends. You may be working... | 0 | 2021-08-11T13:49:49 | https://levelup.gitconnected.com/there-will-always-be-more-work-36a7ded93c75 | career, leadership, healthydebate, devjournal | Software engineering is an interesting field because the work never really ends. You may be working to finish a feature now, but after that there will be more features to build. More bugs to fix. More tech debt to pay down.
You could spend your entire life working and never really "finish." I suspect the same is true of most other professions.
And yet, I often feel a sense of urgency, that I must work later or longer to get more work done.
To be clear, this isn't a result of poor time management –– I get plenty of work done throughout the day, more than is expected of me. I'm not trying to catch up because I've fallen behind. What I'm describing is an urge to continue working longer than necessary to get more work done simply because there is more work to do.
Logically, I realize that this is a mistake. Working longer hours does not always lead to greater productivity. In the long run, it leads to burnout.
So why do I feel this way? It may be because I enjoy my job. It may be because I feel a sense of ownership over the work. It may be because in the back of my mind it *feels* like I'm accomplishing something that can ultimately be finished.
But the truth is, none of these reasons are justifiable excuses for working longer hours, later nights, or during the weekend.
So this is my reminder to myself –– and to you –– that there will always be more work. Take care of yourself.
Don't burn yourself out. It's not worth it. | thawkin3 |
784,905 | Memory Leaks | In this post: The Problem What's a Memory Leak? The Solution Hooks are (is?) still pretty new to... | 0 | 2021-08-08T05:12:25 | https://dev.to/zbretz/memory-leaks-3f1l | In this post:
1. [The Problem](#the-problem)
2. [What's a Memory Leak?](#what's-a-memory-leak?)
3. [The Solution](#the-solution)
Hooks are (is?) still pretty new to me. But since it seems like an important feature of React to understand, I've stretched to use it in my projects and start to gain some practical knowledge.
The most creative implementation that I've attempted so far is wrapping the `useEffect` hook inside of a custom hook that fetches a blog post from my Blog Model.
Here's what it looked like:
```javascript
const usePostData = (user, postId) => {
const [post, setPost] = useState([]);
useEffect(()=>{
httpHandler.getPost(user, postId, (err, data)=>{
setPost(data)
})
},[])
return post
}
```
I used this function to populate a Post component with data:
```javascript
const post = usePostData(user, post_id)
```
## The Problem
React didn't like this, and it threw the following error:
` Warning: Can't perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application. To fix, cancel all subscriptions and asynchronous tasks in a useEffect cleanup function.`
I did a little reading and found some great resources that explained the issue and provided solution for fixing the memory leak:
- *This one's my favorite.* https://www.benmvp.com/blog/handling-async-react-component-effects-after-unmount/
- https://medium.com/wesionary-team/how-to-fix-memory-leak-issue-in-react-js-using-hook-a5ecbf9becf8
- https://dev.to/nans/an-elegant-solution-for-memory-leaks-in-react-1hol
## What's a Memory Leak?
Put simply, a memory leak is created when data is requested, but has nowhere to go.
In React, this can occur when the feature that is requesting the data unmounts before the data is delivered. This isn't uncommon when making asynchronous calls like over http.
I have some work to do to determine what unmounts in my Post component code, and why. Does it unmount and then remount on render when the data actually reaches the component? No - my understanding is that it mounts on the first render. So when does it unmount? That will remain a mystery until tomorrow.
## The Solution
In the meantime, here's the code after applying the most straightforward solution that all three resources provided:
```javascript
const usePostData = (user, postId) => {
const [post, setPost] = useState();
useEffect(()=>{
let componentMounted = true;
httpHandler.getPost(user, postId, (err, data)=>{
if (componentMounted){
setPost(data)
}
})
return () => {
componentMounted = false
}
},[])
return post
}
``` | zbretz | |
785,321 | Soi cầu MB 9/8 Dự đoán xổ số miền Bắc hôm nay 9 tháng 8 | Soi cầu MB 9/8 chuẩn xác nhất tại THABET GG. Soi cầu chốt số dự đoán xổ số miền Bắc XSMB miễn phí vip... | 0 | 2021-08-08T15:40:40 | https://dev.to/thabetgg/soi-c-u-mb-9-8-d-doan-x-s-mi-n-b-c-hom-nay-9-thang-8-4b2g | Soi cầu MB 9/8 chuẩn xác nhất tại THABET GG. Soi cầu chốt số dự đoán xổ số miền Bắc XSMB miễn phí vip nội gian. Soi cầu XSMB 9 tháng 8 2021 những con số may mắn, lô đẹp hôm nay về tỷ lệ chính xác cực cao. Dự đoán xổ số miền Bắc hôm nay cho anh em về bờ.
https://thabet.gg/soi-cau-mb-9-8-du-doan-xo-so-mien-bac-hom-nay-9-thang-8/
| thabetgg | |
785,480 | Website for FRONTEND challenges! | Are you a beginner in FRONTEND development than you must try this challenges. 1 100 Days... | 0 | 2021-08-08T17:49:18 | https://dev.to/er_saifullah/website-for-frontend-challenges-213i | webdev, frontend, css, challenge | Are you a beginner in FRONTEND development than you must try this challenges.
#1 100 Days CSS#

This challenge help you to become CSS expert in 100 days. Completely free no registration.
#2 Codier.io#

Worth to try FRONTEND coding challenges in CODIER.
#3 CODEPEN#

Have fun and level up your skills by building things.
#4 FRONTEND MENTOR#

Solve real HTML, CSS and JAVASCRIPT challenges here.
That's it. So go ahead smashed the challenges.
Thank you for reading!! | er_saifullah |
785,512 | GOING FROM LOCAL STATE TO A REDUX STORE IN A REACT APP | When choosing between local state and a redux store there is one main factor you want to take into... | 0 | 2021-08-08T20:25:03 | https://kak79.github.io/mywebsite/blog/blog10.html | tutorial, codenewbie, react, redux | When choosing between local state and a redux store there is one main factor you want to take into account: how many components need access to the state. Just one, use local state. More than one, you might want to use redux. What is redux? Well in this post I'm going to define the basic components of redux and thunk while explaining how to change from locally defined state to using a redux store.
In the following image I have a fetch set up as local state.

Redux is a JavaScript library that stores all of our data in a global state so that we can access it from across all components in our application.
In order to use Redux you need to run either

```
npm install redux
npm install react-redux
npm install redux-thunk
```

or

```
yarn add redux
yarn add react-redux
yarn add redux-thunk
```

in your terminal.
NOTE: The redux library is not exclusive to react - it can be used with other JS frameworks.
Next you need to set up your `index.js` file like this:

You are importing a Provider, a reducer, createStore, thunk and middleware. Then you are making a store with thunk as middleware, and line 13 allows the use of the browser's Redux devtools. Lines 18 and 22 wrap App, which makes it so that Redux is used for state. The Provider on line 18 connects our React app to the Redux store.
The redux store is an object that stores the global state in our app.
Next you want to make a redux folder in your src folder in which to store all of your redux files. Inside the redux folder you want to add an actions folder and a reducers folder and make a reducers file.


A reducer is just a function that returns state. I am using a combined reducer here. It turns out that redux lets us combine multiple reducers into one that can be passed into createStore by using a helper function named combineReducers. This way I can put more than one reducer in my `blogReducer.js` file.

Examining the file `reducer/blogReducer.js` reveals that the state is set to an empty array and that there is something called an `action.type` and an `action.payload`. An action is a plain JavaScript object (with thunk, it can also be a function). Redux documentation states that 'you can think of an action as an event that describes something that happened in the application.' An action has a type and a payload. `action.type` is a string and should be in all caps. `action.payload` holds other fields with additional information.
```const add1Action = { type: 'ADD_ONE', payload: 1 }```

Due to the asynchronous nature of React apps, if your action is a function, then you need thunk. Thunk allows an asynchronous action to be dispatched in the form of a function that receives `dispatch` (lines 3 and 4 in the image above). Otherwise, a plain action object is dispatched.
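To make the action/thunk distinction concrete, here is a minimal, dependency-free sketch. The `dispatched` array, the toy `dispatch`, and the hard-coded post data are my own illustration (not part of this app's code); they only mimic what the thunk middleware does:

```javascript
// A plain action: just an object with a type and a payload.
const addBlogAction = { type: 'ADD_BLOG', payload: { id: 1, title: 'Hello' } };

// A thunk: a function that receives dispatch, so it can do async
// work first and dispatch a plain action once the data is ready.
const fetchBlogThunk = (postId) => (dispatch) => {
  const data = { id: postId, title: 'Fetched post' }; // stand-in for an HTTP response
  dispatch({ type: 'ADD_BLOG', payload: data });
};

// A toy dispatch mimicking the thunk middleware: functions are
// invoked with dispatch, plain objects are recorded as-is.
const dispatched = [];
const dispatch = (action) => {
  if (typeof action === 'function') {
    action(dispatch);
  } else {
    dispatched.push(action);
  }
};

dispatch(addBlogAction);
dispatch(fetchBlogThunk(2));
console.log(dispatched.map((a) => a.payload.id)); // logs [ 1, 2 ]
```

The real thunk middleware does essentially this check: if the dispatched value is a function, call it with `dispatch`; otherwise pass it on to the reducers.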
If you follow the logic in the image below, first `componentDidMount()` fires, which you can see because of `console.log('A')`. Then `fetchBlogs()` fires, which you can see because of `console.log('B')`. Then `console.log('C')` fires before `console.log('D')` does. This is because `console.log('D')` cannot fire until the second promise is returned in the fetch.

In order to connect our components to the store, we need to import connect and use `mapStateToProps` and `mapDispatchToProps`. Destructuring is the process of breaking up a structure. In the context of programming, the structures are data structures, and destructuring them means unpacking individual values from the data structure. On the left side, in `BlogContainer.js`, the connect statement has `mapStateToProps` and `mapDispatchToProps` destructured. They appear in a longer format on the right side to show the other way they can be written.

I defined the major terms associated with redux and explained the basics about how to set up a redux store. Obviously the time when you want redux is in a significantly larger app than this, but this gives the basic idea. Good luck and happy coding. | kak79 |
785,543 | My time as an undergraduate student | I spent three years study for a bachelor degree in chemistry. During that period, the biggest dilemma... | 0 | 2021-08-09T02:40:09 | https://victorleungtw.medium.com/my-time-as-an-undergraduate-student-705cea37694b | culture, cuhk, chemistry, university | ---
title: My time as an undergraduate student
published: true
date: 2021-08-08 19:04:38 UTC
tags: culture,cuhk,chemistry,university
canonical_url: https://victorleungtw.medium.com/my-time-as-an-undergraduate-student-705cea37694b
---
I spent three years studying for a bachelor's degree in chemistry. During that period, the biggest dilemma was between the ideal and reality. In an ideal world, I could spend the time immersing myself in the pursuit of science. I got interested in quantum mechanics, which was the most complicated subject of my life. The idea of Schrödinger's cat was counterintuitive in the microscopic world. And the equations were so complex: multi-dimensional, multi-variable differential equations.
On the other hand, I had to worry about the reality that I may not be able to find a good job in Hong Kong. Hong Kong was an international financial centre with a narrow job market and industry. All the best students in public examinations went to business school for better job prospects. Therefore the morale of students in the faculty of chemistry was so low. The study materials were so complex, while the professors may not be interested in teaching as they are more interested in doing research. There are some exceptions, though, and my favourite professor was Chu Ming Chung in the physics department. He was a classmate of my favourite chemistry teacher in high school. He was also a student of my most admire American physicist, Richard Feynman, in Caltech. This Nobel prize winner seemed unreachable until I felt the connection via professor Chu. He widened my horizon about Albert Einstein’s general relativity theory and made me fascinated about the universe. A great teacher inspires me to learn, and I pondered the law of physics and enjoyed it.
Besides academic study, I learn a lot outside of the classroom by contributing to the student union. My student union in high school mainly was about student farewells, such as stationary discounts and entertainment events. However, the student union was at a different level at university; we care about political issues in the society, are involved in politics and take social responsibility. Most of the active students studied in public administration, while I was unique to join as a science student with rational, logical thinking. There was a lot of philosophical discussion in the student union, which raised my interest in the newspaper issues, and we debated which ideology is right and which one is wrong. We talked about John Rawl’s theory of justice and thought about the right and fair way to distribute our limited resources. Why should the government exist? Why should the student union exist? Why should we take democracy for voting? Why do we give up part of our freedom to let the government rule us? If the government was not functioning, would we still be paying tax? All these ideas were open for discussion, and we had an excellent culture to welcome all sorts of ideas as a student exploring different ideologies. The freedom of speech and express ourselves was something the Chinese communist party would be most scared of. Meanwhile, we understand the opportunity cost to openly support sensitive issues, such as memorials of 4th June Tiananmen Square protests in Victoria Park in Causeway Bay in Hong Kong. It’s not allowed these days, and all the student unions were under pressure.
I was proud to study at the Chinese University of Hong Kong, which was unique as its constituents by four colleges at that time. I was happy to enter New Asia College due to my love of Chinese philosophy. It was founded by the most outstanding scholars in new Confucians, such as Qian Mu and Tang Chun I. They escaped from mainland China to Hong Kong, and their whole life was anti-communism. They thought the regime was destroying our Chinese culture and philosophy. Thus they inherited the knowledge and taught the students in Hong Kong. There were many successful alumni, and more importantly, it was humanity and the moral standard they taught us. Our New Asia College spirit influenced generations of students. Our song reminds us of the difficult times when the school first opened and how young people overcame obstacles and took responsibility. Whenever I felt frustrated, our song came up in my mind, and it motivated us to keep going forward. Our school was not only about knowledge but also about how to be a good human being, with honesty, moral standards, and also being conscientious.
During the summer holiday as an undergraduate, I took a couple of months to go working holiday. I went to the United States. It was a fun experience working in Kansas City, Missouri, in the middle of nowhere. My job was selling souvenirs inside a theme park with minimum wages. That life experience was eye-opening as it was my first time living in the U.S. out of Hong Kong. There was a complete culture shock. For example, I lived in an apartment, and the nearest supermarket was just downstairs in H.K. However, in the U.S., the nearest supermarket required me to walk for one hour. I was only a student without any car nor a driving licence. It was a completely different world to take public transport. In H.K., there are bus and train schedules which are always running. In the U.S, there were only three schedules a day to go to the city. I went to the shopping mall one time, came out at 5 pm, but already missed the last bus at 3 pm. Luckily I called a local friend with a car to save me. Otherwise, it would be a big problem.
Another cultural shock for me was the people openness towards sex. In H.K., it was taboo to talk about it and too shy to do it. In the U.S, my roommates at that time were Jamaican and Columbians. They openly welcome different women to have sex right next to my bed. It was noisy to hear the spring of the bed goes up and down, disturbing my sleep. But overall, the American people I met in the job were friendly. Every day, we had a competition to see who sold the most souvenir on the street. At first, I was too shy to approach visitors to sell them toy lightsabers. But later on, I realise it’s a matter of the volume of people I come, assuming the probability of buyer buying is constant. Without the difference in selling skills, the more people I approach, the more could I sell. And I also learn the secret of selling is to be happy and be fun to approach. Overall, they are visitors coming to the theme Park to have fun. I was surprised they spent the money happily to buy a toy made in China, which probably has a low cost to make. It was also fun to meet different people in the theme park. Some Americans have a Chinese tattoo, which was funny to me. Because what they wrote on their body did not make sense. For example, power is a Chinese word composed of two characters, force and volume. It means power to have total force volume, but it’s ridiculous to have a tattoo only containing the last word “volume” without the “force”. There are many memorable things in this trip to the U.S. that I could not write everything down. I met a Chinese girlfriend from Xian during the journey, which was why I could speak mandarin fluently today. We saved our money at work to travel along the east coast to New York, Washington D.C., and Chicago. I miss all the good times in the U.S.
Unfortunately, coming back to Hong Kong, I separated from my girlfriend, and my long-distance relationship could not last. Before this relationship started, I had a girlfriend in high school. However, we went to different universities. She studied accounting in business school. And the further distance between us was that we started to have different paths about life. She only wants to graduate with a stable job in life, buy a house and start a family. But I was very idealistic, pursuing science and contributing to the student union. We had a completely different ideology, and she somehow became materialistic with the influence of her classmates. My best friend was at the same university as her. One day, he told me that he saw my girlfriend was holding hands with another guy in the classroom.
I was shocked and irritated. I called my girlfriend to clarify, and she was on the bus when she answered the phone. I asked her if my best friend was telling the truth, and turn out he was right, and her new boyfriend was sitting next to her on the same bus. She cheated on me. In Chinese slang, it’s called giving me a green hat to wear. I can’t believe it and accept it. After I calmed down, the problem was there a long time before the third person came in. Even without the third person, we were separated by different universities and different ideologies towards life. We choose different paths, and I wish her the best with that guy.
Meanwhile, the happy university life passed so fast in a blink of an eye. Pretty soon, I had to worry about the job hunt before graduation as the last day was approaching. I feel depressed. An excellent job opening such as a management trainee was reserved for the business school students with relevant experience. My score in chemistry was not the best, and it would be impossible for me to stay in the field to do research. I knew I needed more work experience to find my first full-time job, so I took a part-time job to work in Uniqlo, the Japanese clothes shop. I read a book about the founder, who became the richest person in Japan. I was motivated to work there, but frankly speaking, it was harsh. I worked in the busiest store in Tsim Sha Tsui in Hong Kong, it opened very early, so I had to take the earliest train in the morning. Then, it was a whole day unpacking clothes, folding and tidying up the shelf and greeting customers. At the end of long hours, I can feel my feet were so sore. The job was like a punishment in the greek story that every time you push the rock near the top of the mountain, it rolls back and falls again to the beginning.
Similarly, every time you fold all the clothes and are nearly perfect for tidying up the shelves, some customers would come in and make it a mess like the beginning. It was a job that makes me question the meaning of life and why did I work hard to study chemistry and learn useless knowledge about the black hole? Luckily after working in a part-time job for a few months, I got a full-time job as a test engineer in a laboratory for testing the chemical safety of food utensils. At least, it was a profession relevant a bit to my study in Chemistry, And I took a graduate trip to Australia before the job, which changed my life, and it is a story in the next chapter.
_Originally published at_ [_http://victorleungtw.com_](https://victorleungtw.com/2021/08/09/my-time-as-an-undergraduate-student/) _on August 8, 2021._ | victorleungtw |
785,653 | From awk to a Dockerized Ruby Script | For a programming project I wanted to easily get a list of the printable ASCII characters such that I... | 0 | 2021-08-11T05:13:41 | https://www.rakeroutes.com/from-awk-to-a-dockerized-ruby-script | ---
title: From awk to a Dockerized Ruby Script
published: true
date: 2021-08-08 04:00:00 UTC
tags:
canonical_url: https://www.rakeroutes.com/from-awk-to-a-dockerized-ruby-script
---
For a programming project I wanted a list of the printable ASCII characters, such that I could easily loop through them and do some work with each individual character.
That seems simple enough! But I didn’t quite find anything that did what I wanted.
I found the very cool `ascii` program available via Homebrew, but as awesome as it is, it does not have an output that is simply a list of the printable ASCII characters.
I found printable ASCII lists online of course, but I didn’t want to simply cut and paste into a text file.
So I turned to scripting out something and AWK is a great one for simple scripts.
```
$ awk 'BEGIN { for(i=32;i<=126;i++) printf "%c\n", i; }'
!
"
#
$
%
&
'
(
)
*
+
,
-
.
/
0
1
-- etc --
```
That’s nice and all but I didn’t want to keep making myself go to ctrl-r to use that data output.
So throw that into `$HOME/bin/printable-ascii` and done?
Well no. As I wrote that into a script I figured, since I’m making this an actual script I might as well take the rare opportunity these days to write some Ruby!
```
$ ruby -e "32.upto(126) { |n| puts n.chr }"
!
"
#
$
%
&
'
(
)
*
+
,
-
.
/
0
1
-- etc --
```
There, nice and expressive as we all expect Ruby to be.
## A Real Ruby Script
But hold on. Since I’m bothering to put this into a script via Ruby, I could output other interesting data as well. Like, it would be cool to be able to run a script and see not only the ASCII but the decimal, octal, hexadecimal, and binary representations of the characters.
It’s all been a minute since I wrote a nice command line tool in Ruby and [OptionParser](https://ruby-doc.org/stdlib-3.0.2/libdoc/optparse/rdoc/OptionParser.html) is such a fantastic library in the Ruby standard library. It’d be a really refreshing change of programming pace compared to my daily programming work to be able to write something small and useful using only a good language with a good standard library.
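For a flavour of that, here is a tiny sketch using OptionParser. This is not the actual `printable-ascii` source, just an illustration of the approach (the option-handling logic here is invented for the example):

```ruby
require "optparse"

# Toy sketch of an OptionParser-based CLI -- not the real printable-ascii.
options = { decimal: false }
OptionParser.new do |opts|
  opts.banner = "Usage: printable-ascii [options]"
  opts.on("--decimal", "Also show decimal codes") { options[:decimal] = true }
end.parse!(["--decimal"])

# Build the printable range, optionally annotated with the decimal code.
lines = (32..126).map do |n|
  options[:decimal] ? "#{n.chr} #{n}" : n.chr
end
puts lines.first(3).join("\n")
```

The real script grew from roughly this shape: parse flags first, then decorate each character with whatever representations were requested.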
A bit of scripting later and tada! `printable-ascii` 0.0.1. That version didn’t even hit the repo because I was still writing it in my dotfiles ~/bin directory.
But it was starting to be really cool. ASCII is such a neat slice of data and I’d never really parsed through it directly that much.
When I added the JSON output I just had to take this cool little script to its own repo! Fiat [sdball/printable-ascii](https://github.com/sdball/printable-ascii) and [printable-ascii v1.0.0](https://github.com/sdball/printable-ascii/releases/tag/v1.0.0)
Did I stop there? I did not.
## Homebrew
I thought it would be fun to see if I could get this script into homebrew. Can a Ruby script even be added to homebrew? Turns out yes and it’s pretty easy thanks to GitHub providing and hosting tar.gz files. ❤️GitHub!
```
class PrintableAscii < Formula
desc "Output all printable ASCII characters in various representations and formats"
homepage "https://github.com/sdball/printable-ascii"
url "https://github.com/sdball/printable-ascii/archive/refs/tags/v2.1.0.tar.gz"
sha256 "cf0b2dfa7c1e0eb851be312c3e53d4f67ad68d46c58d8983f61afeb56588b061"
license "MIT"
def install
bin.install "bin/printable-ascii"
end
test do
ascii_json = [
{ "character" => " " },
{ "character" => "!" },
{ "character" => "\"" },
{ "character" => "#" },
{ "character" => "$" },
{ "character" => "%" },
{ "character" => "&" },
{ "character" => "'" },
{ "character" => "(" },
{ "character" => ")" },
{ "character" => "*" },
{ "character" => "+" },
{ "character" => "," },
{ "character" => "-" },
{ "character" => "." },
{ "character" => "/" },
{ "character" => "0" },
{ "character" => "1" },
{ "character" => "2" },
{ "character" => "3" },
{ "character" => "4" },
{ "character" => "5" },
{ "character" => "6" },
{ "character" => "7" },
{ "character" => "8" },
{ "character" => "9" },
{ "character" => ":" },
{ "character" => ";" },
{ "character" => "<" },
{ "character" => "=" },
{ "character" => ">" },
{ "character" => "?" },
{ "character" => "@" },
{ "character" => "A" },
{ "character" => "B" },
{ "character" => "C" },
{ "character" => "D" },
{ "character" => "E" },
{ "character" => "F" },
{ "character" => "G" },
{ "character" => "H" },
{ "character" => "I" },
{ "character" => "J" },
{ "character" => "K" },
{ "character" => "L" },
{ "character" => "M" },
{ "character" => "N" },
{ "character" => "O" },
{ "character" => "P" },
{ "character" => "Q" },
{ "character" => "R" },
{ "character" => "S" },
{ "character" => "T" },
{ "character" => "U" },
{ "character" => "V" },
{ "character" => "W" },
{ "character" => "X" },
{ "character" => "Y" },
{ "character" => "Z" },
{ "character" => "[" },
{ "character" => "\\" },
{ "character" => "]" },
{ "character" => "^" },
{ "character" => "_" },
{ "character" => "`" },
{ "character" => "a" },
{ "character" => "b" },
{ "character" => "c" },
{ "character" => "d" },
{ "character" => "e" },
{ "character" => "f" },
{ "character" => "g" },
{ "character" => "h" },
{ "character" => "i" },
{ "character" => "j" },
{ "character" => "k" },
{ "character" => "l" },
{ "character" => "m" },
{ "character" => "n" },
{ "character" => "o" },
{ "character" => "p" },
{ "character" => "q" },
{ "character" => "r" },
{ "character" => "s" },
{ "character" => "t" },
{ "character" => "u" },
{ "character" => "v" },
{ "character" => "w" },
{ "character" => "x" },
{ "character" => "y" },
{ "character" => "z" },
{ "character" => "{" },
{ "character" => "|" },
{ "character" => "}" },
{ "character" => "~" },
]
assert_equal ascii_json, JSON.parse(shell_output("#{bin}/printable-ascii --json"))
end
end
```
The real core of the magic is `bin.install "bin/printable-ascii"`. That installs the `printable-ascii` script into homebrew’s bin as an executable.
```
$ brew install --formula ./Formula/printable-ascii.rb
==> Downloading https://github.com/sdball/printable-ascii/archive/refs/tags/v2.1.0.tar.gz
Already downloaded: /Users/sdball/Library/Caches/Homebrew/downloads/1f5ded4652929fb1c8ca5ffdb1a733cdfa3e65e6bf447893ef59803f7f6919b9--printable-ascii-2.1.0.tar.gz
🍺 /opt/homebrew/Cellar/printable-ascii/2.1.0: 5 files, 22KB, built in 1 second
Removing: /Users/sdball/Library/Caches/Homebrew/printable-ascii--2.0.0.tar.gz... (6.3KB)
```
Right on! Maybe someday it’ll really be in Homebrew but for now it’s easy enough to install directly with the formula.
## Docker
Since Homebrew would require cloning the repo and then manually installing from the formula file I should be able to pretty easily wrap up this script into a Docker image! Then anyone could easily run this silly script as long as they have Docker installed. Who doesn’t love running arbitrary scripts from the Internet via Docker?
Since I’ve been helping out as part of the DevOps team at work lately I had some hot loaded Docker knowledge ready to roll. I just need an image with Ruby, copy in the script somewhere, and set the script as the ENTRYPOINT.
Then with `docker run` any arguments passed to the docker run command will be passed to the script itself. ✨Docker!
I quickly found the [official Ruby Docker image](https://hub.docker.com/_/ruby/) and just grabbed the first image I saw referenced in their README.
```
FROM ruby:2.5
# throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
CMD ["./your-daemon-or-script.rb"]
```
I slimmed that down a bit since I don’t have any need for bundler or Gemfiles. Here’s what produced the 1.0.0 Docker image for printable-ascii
```
FROM ruby:2.5
WORKDIR /usr/src/app
COPY bin/printable-ascii ./
ENTRYPOINT ["./printable-ascii"]
```
Easy peasy and it worked great!
## GitHub Actions
Build a Docker image from my own laptop and publish to Docker Hub? That’s so archaic! What kind of DevOps newbie am I if I let myself live with that kind of publishing story?
GitHub Actions to the rescue! I whipped up a GitHub Actions workflow to publish to both Docker Hub and GitHub’s own container registry whenever I publish a new version of the script. It even checks that the script’s declared version matches the release that’s going to be published.
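A workflow in that spirit might look roughly like the sketch below. This is a simplified, hypothetical version, not the repo’s actual workflow: the version-check command, secret names, and tags are all assumptions, and it pushes only to Docker Hub.

```yaml
name: publish
on:
  release:
    types: [published]

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Hypothetical check: fail fast if the script's declared version
      # doesn't match the release tag being published.
      - name: Check version matches tag
        run: test "v$(./bin/printable-ascii --version)" = "${GITHUB_REF#refs/tags/}"

      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin

      - name: Build and push
        run: |
          docker build -t sdball/printable-ascii:${GITHUB_REF#refs/tags/} .
          docker push sdball/printable-ascii:${GITHUB_REF#refs/tags/}
```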
It’s a _bit_ weird in that it means that the Docker image OF the script has the same version as the script itself. If I rev the script itself then it all makes sense because a new Docker image will contain the new script. But if I only update the Docker image then I don’t have a reasonable way to only change its version.
Thankfully this script and Dockerfile are very simple so I can simply keep them in sync and only update the Docker image when there’s a new version of the script to release.
But in practice for more complex relationships between the utility being provided by the Docker image and the Docker image itself it seems like there’s an opportunity for more metadata. Like a dual versioning of the image and its contents represented in a good, consistent way on Docker Hub.
## That Docker image is way too huge
After I set up the GitHub Action I went back to look at my Docker Hub stats and WOW, over 300MB for this script’s image! That’s no good at all!
I figured there had to be a slimmer Ruby image to work with. The default I was using probably has all kinds of superfluous (for me) utilities and libraries to support all kinds of projects.
After a bit of reading in Ruby’s official Docker image docs I found there’s both a `-slim` and an `-alpine` version of the Ruby image to use. Since my script is really just Ruby and its standard library, I was certain the `-alpine` image would work great.
Alpine is a special Linux distribution designed to create small Docker images.
Success! Using the Alpine version brought the image down to ~30MB! Roughly 10% of the original size!
```
FROM ruby:3.0-alpine
WORKDIR /usr/src/app
COPY bin/printable-ascii ./
ENTRYPOINT ["./printable-ascii"]
```
It works wonderfully! Anyone with Docker can get printable ASCII anytime:
```
$ docker run sdball/printable-ascii --json --decimal --binary --hexadecimal --octal --range A-C | jq '.'
[
{
"character": "A",
"decimal": "65",
"binary": "1000001",
"hexadecimal": "41",
"octal": "101"
},
{
"character": "B",
"decimal": "66",
"binary": "1000010",
"hexadecimal": "42",
"octal": "102"
},
{
"character": "C",
"decimal": "67",
"binary": "1000011",
"hexadecimal": "43",
"octal": "103"
}
]
```
## Features on features
Working on this simple toy script has been an absolute joy so I’ve continued to add more features that no one needs. Not even me! I just think they’re neat!
- A `--range` option to allow supplying one or more ranges of printable ASCII characters
- A `--random NUMBER` option to pick `NUMBER` of random printable ASCII characters
And I’ve got plans for more silly options ha ha! Printable ASCII for all!
Maybe someday I’ll actually get back to the project where I needed a list of printable ASCII in the first place. | sdball | |
785,760 | Introduction to Object Types in TypeScript Pt2 | TypeScript introduces a new type called "object type". This type is created specifically for objects... | 0 | 2021-08-09T06:11:56 | https://blog.alexdevero.com/object-types-in-typescript-pt2/ | typescript, webdev, tutorial, beginners | TypeScript introduces a new type called "object type". This type is created specifically for objects and makes it working with them easier. In this tutorial, you will learn about how to extend object types and how to use intersection types. You will also learn how to create object types using generics.
## Extending object types
As you work with objects, you will often have to handle types that are different, but only slightly. For example, you might create one object that contains a number of properties. So, you will create an object type for it. Then, you will create another object that is slightly different.
Since this second object is slightly different, you can't use the previous object type. Now, you have two options. The first option is to create a new object type for this second object. This is okay, but since those two objects are similar, part of this type will duplicate code from the first type.
The second option is also about creating new object type. However, this time, you will build it on top of the previous. This means that the second object type will inherit properties defined in the first object type. Then, it will add some additional, those you need just for the second object.
This process of building on top of existing interfaces is called extending. It is very similar to extending [JavaScript classes]. In this case, you are extending object types. The result is a combination of the existing interface you are extending and the new one you are defining.
You can extend object types, or interfaces, with the `extends` keyword. This keyword follows after the object type name. It is then followed by the existing object type you want to extend the new type with. The rest is like defining new object type with an interface. You add curly braces and add additional properties inside the body.
```TypeScript
// Create some default object type/interface:
interface Guest {
username: string;
email: string;
password: string;
}
// Create new object type by extending Guest:
interface User extends Guest {
canPost: boolean;
}
// Translates to:
// interface User {
// username: string;
// email: string;
// password: string;
// canPost: boolean;
// }
// Create another object type by extending User:
interface Admin extends User {
role: 'admin';
securityClearance: 'low' | 'medium' | 'high';
}
// Translates to:
// interface Admin {
// username: string;
// email: string;
// password: string;
// canPost: boolean;
// role: 'admin';
// securityClearance: 'low' | 'medium' | 'high';
// }
// Create new object of type Guest:
const guest: Guest = {
username: 'joe0001',
email: 'joe@joe.co',
password: 'joe_joe_12345'
}
// Create new object of type User:
const user: User = {
username: 'tom0001',
email: 'tom@tom.eu',
password: 'tom_tom_12345',
canPost: true
}
// Create new object of type Admin:
const admin: Admin = {
username: 'dick0001',
email: 'dick@dick.bz',
password: 'dick_dick_12345',
canPost: true,
role: 'admin',
securityClearance: 'medium'
}
```
One important thing to mention. You can extend object types defined with interfaces only when you define them. You can't extend them when you use them to annotate some object. This will not work.
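To make that concrete, here is a small sketch (the names are made up for the example). `extends` is only valid in an interface declaration; at an annotation site you can reach for an intersection type instead:

```typescript
interface Guest {
  username: string;
}

// Valid: extending happens at declaration time.
interface User extends Guest {
  canPost: boolean;
}

// Invalid: there is no syntax like
// `const u: Guest extends { canPost: boolean } = { ... }`.
// At the annotation site, an intersection type does the job:
const moderator: Guest & { canPost: boolean } = {
  username: "meg0001",
  canPost: true,
};
```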
## Combining type aliases and interfaces, or intersection types
In the previous part, you've learned about [type aliases]. You've learned what they are and how to use them to define object types, as an alternative to interfaces. One thing you should know about type aliases is that you can combine them. You can take one alias, combine it with another, or multiple, and create a new type alias.
This also works with interfaces. This means that you are not limited to extending interfaces. You don't have to create one by building on top of another. You can also combine two or more existing interfaces. This can be useful when you have two existing object types and need properties that match their intersection or combination.
You can combine type aliases or interfaces by using the `&` operator. The result of this is a TypeScript construct called an "intersection type". It is a new type that combines all the type aliases or interfaces you've specified. When you define a new intersection type, you have to declare it starting with the `type` keyword, similarly to [aliases].
```TypeScript
// Create first interface:
interface BasicSet {
monitorModel: string;
keyboardModel: string;
mouseModel: string;
}
// Create second interface:
interface AdvancedSet {
headphonesModel: string;
gameControllerModel: string;
}
// Create intersection type based on Square and Cube:
type GamerSet = BasicSet & AdvancedSet
// Translates to:
// type GamerSet = {
// monitorModel: string;
// keyboardModel: string;
// mouseModel: string;
// headphonesModel: string;
// gameControllerModel: string;
// }
```
Intersection types have an advantage over extending. You can use them when you define new object types as well as when you annotate object with existing types. The syntax in both cases is the same.
```TypeScript
// Create first interface:
interface BasicObject {
width: number;
height: number;
depth: number;
}
// Create second interface:
interface Book {
title: string;
author: string;
numberOfPages: number;
}
// Annotate object with intersection type
// of BasicObject and Book:
const hardcover: BasicObject & Book = {
width: 7.5,
height: 9.3,
depth: 0.9,
title: 'The Four Steps to the Epiphany',
author: 'Steve Blank',
numberOfPages: 384
}
```
Intersection types also allow you to combine interfaces with type aliases and vice versa.
```TypeScript
// Create an interface:
interface BasicBook {
sizeInMB: number;
numberOfPages: number;
}
// Create a type alias:
type Book = {
title: string;
author: string;
}
// Annotate object with intersection type
// of BasicObject interface and Book type:
const ebook: BasicBook & Book = {
sizeInMB: 5,
numberOfPages: 400,
title: 'Good to Great',
author: 'Jim Collins'
}
```
## Object types and generics
Basic type aliases and interfaces work well when you know what type this or that property will be. This may not apply to all situations. It can happen that at the time you define an object type you will know only that some property will be expected. However, you will not know what type this property will be, not at that moment.
You could solve this problem by using multiple possible types. You could also use `any` or `unknown`, but neither of these will be really helpful. TypeScript gives you a third option you can use in situations like these. When you define a new object type, you can also specify any parameters it accepts.
When you specify a parameter for an object type, you can then reference it inside the body of the object type. You can think about this basically as about a function. If you define a function and specify a parameter you can then work with that parameter inside the function itself.
What's more, when you define a parameter, you don't have to know its value at that very moment. You will know it when you call the function and pass the value as an argument. Object types allow you to do the same thing. In case of object types, interfaces, type aliases and types in general, these constructs are called "generic".
This name may sound strange and cryptic. Stripped to the essentials, generics are basically just types that work with parameters. You can define a parameter for interface or type aliases using angle brackets notation. These angle brackets follow after the name of the type and contain all parameters the type accepts.
If you have multiple parameters, you separate them by commas. The important thing comes when you want to use that type. When you have defined a parameter for a type and you want to use that type, you have to specify some value for the parameter and pass it as an argument. This allows you to specify types at the moment of annotation, not declaration.
```TypeScript
// Example no.1: Interfaces
// Create a generic interface with one parameter called "Type":
interface Robot<Type> {
name: Type; // Referencing the "Type" parameter
}
// Specify "Type" parameter to be type of number:
const factoryBot: Robot<number> = {
name: 26598
}
// The interface for "factoryBot" will be:
// interface Robot<number> {
// name: number;
// }
// Specify "Type" parameter to be type of string:
const humanoid: Robot<string> = {
name: 'Joe'
}
// The interface for "humanoid" will be:
// interface Robot<string> {
// name: string;
// }
// Example no.2: Type aliases
type Book<BookType> = {
title: string;
author: string;
  bookType: BookType;
}
// Specify "BookType" parameter to be type of 'electronic':
const ebook: Book<'electronic'> = {
title: 'A Game of Thrones',
author: 'George R. R. Martin',
bookType: 'electronic'
}
// The type for "ebook" will be:
// type Book<'electronic'> = {
// title: string;
// author: string;
// bookType: 'electronic';
// }
```
### Generics beyond simple type parameters
When you work with generics, you can also use parameters to pass other types or interfaces. This can be useful when you have complex objects where some parts will change. For example, different types of payload. One type of payload might be an object of one shape. Another type of payload might be an object of a different shape.
You could solve this by using multiple variants of types or interfaces. This would lead to duplicate code, even if you used extending. Another solution is to create one generic type or interface and use the parameter to specify the payload type.
```TypeScript
// Create a generic interface with "Payload" parameter:
interface ServerResponse<Payload> {
code: string;
message: string;
payload: Payload;
}
// Create interface for the first type of payload:
interface PayloadUser {
name: string;
email: string;
}
// Create interface for the second type of payload:
interface PayloadPost {
title: string;
author: string;
slug: string;
date: string;
}
// Create interface for the third type of payload:
interface PayloadImage {
title: string;
size: number;
fileType: string;
}
// Create object for response with payload of type user:
const serverResponseWithUser: ServerResponse<PayloadUser> = {
code: '200',
message: 'Success',
payload: {
name: 'Sandy Jones',
email: 'sandy@jones.com',
}
}
// Create object for response with payload of type post:
const serverResponseWithPost: ServerResponse<PayloadPost> = {
code: '200',
message: 'Success',
payload: {
title: 'Hello world',
author: 'Anonymous',
slug: 'Hello world',
date: 'January 1, 1970 00:00:00'
}
}
// Create object for response with payload of type image:
const serverResponseWithImage: ServerResponse<PayloadImage> = {
code: '200',
message: 'Success',
payload: {
title: 'cat',
size: 1,
fileType: 'jpg'
}
}
```
## Conclusion: Introduction to object types in TypeScript pt2
With the help of extending interfaces and intersection types, work with even complex objects becomes easier. When you add generics, even temporarily unknown types are no longer a problem. It is my hope that this tutorial helped you understand all these concepts, how they work and how to use them.
[javascript classes]: https://blog.alexdevero.com/javascript-classes-pt1/#class-inheritance-extends
[type aliases]: https://blog.alexdevero.com/introduction-to-object-types-in-typescript/#named-object-types
[aliases]: https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#type-aliases
| alexdevero |
785,942 | Background jobs with Symfony messenger component | Let’s say your client comes up with this request: whenever an image is uploaded, do something with... | 0 | 2021-08-09T11:47:50 | https://dev.to/bornfightcompany/background-jobs-with-symfony-messenger-component-p63 | engineeringmonday, php, symfony | Let’s say your client comes up with this request: whenever an image is uploaded, do something with it. Change background color, convert it to some other format, add watermark… Whatever it is, image processing is often a slow process that will add significant time to your ‘upload image’ action.
### Do it later ###
But, if you were to think about it, there is no reason why it should be executed during the request. The user doesn’t really care about that. He will upload the image, continue with his day and image processing will be done somewhere.. in the background.
The flow of any background job looks like this: **create job** + **store necessary data somewhere so it can be executed later** + **execute it**.
**Symfony messenger** component, introduced in version 4.1, handles this for you!
### Create job ###
Think about the data necessary for job execution. In this case, we should remember some kind of identifier that will let us fetch the image later on. Depending on your requirements, it can be id of an entity stored in the database, path to the image or something else.
In the Message folder of your project, create a class that will encapsulate that data (and do nothing else).
``` php
namespace App\Message;
class SetBackgroundColorToBlack
{
/**
* @var string
*/
private $imageIdentifier;
public function __construct(string $imageIdentifier)
{
$this->imageIdentifier = $imageIdentifier;
}
public function getImageIdentifier(): string
{
return $this->imageIdentifier;
}
}
```
Inject MessageBusInterface into a service that handles image uploading. After the image is uploaded, create your message and dispatch it.
``` php
public function __construct(MessageBusInterface $messageBus)
{
$this->messageBus = $messageBus;
}
// Later, after the image is uploaded:
$message = new SetBackgroundColorToBlack($image->getIdentifier());
$this->messageBus->dispatch($message);
```
Congrats! You've just told Symfony... well, nothing much.
### Where do all the messages go to? ###
You've created your message and dispatched it. What now? Store it somewhere so the class that knows how to handle it can pick it up.
In this case, that somewhere is called a transport. Symfony provides multiple transport options as well as retry strategies, so read the docs and decide what's best for your case. I used the Doctrine transport, which won't be suitable for large projects.
Two transports are defined: async, which handles messages of the SetBackgroundColorToBlack type, and the failed transport.
``` yaml
framework:
messenger:
failure_transport: failed
transports:
async:
dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
failed:
dsn: 'doctrine://default?queue_name=failed'
routing:
'App\Message\SetBackgroundColorToBlack': async
```
It's best to define your transports as env variables.
```
###> symfony/messenger ###
MESSENGER_TRANSPORT_DSN=doctrine://default
###< symfony/messenger ###
```
### Do the job ###
Someone has to do the job. In the MessageHandler directory of your project, create the class SetBackgroundColorToBlackHandler.
This class **must** implement **MessageHandlerInterface**.
``` php
class SetBackgroundColorToBlackHandler implements MessageHandlerInterface
{
/**
* @var ImageProcessingService
*/
private $imageProcessingService;
/**
* @var ImageRepository
*/
private $imageRepository;
public function __construct(ImageProcessingService $imageProcessingService,
ImageRepository $imageRepository)
{
$this->imageProcessingService = $imageProcessingService;
$this->imageRepository = $imageRepository;
}
public function __invoke(SetBackgroundColorToBlack $setBackgroundColorToBlack): void
{
$image = $this->imageRepository
->get($setBackgroundColorToBlack->getImageIdentifier());
if ($image === null) {
return;
}
$this->imageProcessingService->setBackgroundColorToBlack($image);
    }
}
```
Notes:
* Symfony is smart enough to connect message to the handler. It's enough to type hint variable of __invoke() method signature in the handler class.
* Handlers are services which means that you can inject other services.
* Maybe the image can't be fetched because someone deleted it in the meantime. Depending on your domain, maybe the image must exist and you want to throw an exception in that case.
* Maybe imageProcessingService throws an exception (it probably does). Remember we defined the failure transport? By default, all messages will be retried 3 times before they end up in the failure transport.
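That retry behaviour can also be made explicit per transport. The snippet below only illustrates the relevant options; the values shown mirror the defaults, so you don't strictly need it:

```yaml
framework:
    messenger:
        transports:
            async:
                dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
                retry_strategy:
                    max_retries: 3
                    delay: 1000      # ms before the first retry
                    multiplier: 2    # each retry waits twice as long
```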
### Work, work, work, work, work ###
Define worker that is gonna consume those messages.
`php bin/console messenger:consume async`
**Install supervisor on production that will watch out for your workers.**
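A minimal supervisor program definition for this worker might look like the sketch below (the paths and process count are assumptions; adjust them for your setup):

```ini
[program:messenger-consume]
command=php /var/www/app/bin/console messenger:consume async --time-limit=3600
numprocs=2
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
```

The `--time-limit` makes each worker restart itself periodically, which helps avoid memory leaks in long-running PHP processes; supervisor then respawns it.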
How do you like symfony messenger component? What do you use to handle background jobs? Do you think they are necessary in the first place?
| linajelincic |
785,946 | CodePen Challenge: Fibonacci Sequence | CodePen Challenge: https://codepen.io/challenges/2021/july/4 DEV TIME: 2.5 hrs FEATURES: Flexbox... | 0 | 2021-08-09T09:08:55 | https://dev.to/drawcard/codepen-challenge-fibonacci-sequence-536 | html, css, codepen | CodePen Challenge:
https://codepen.io/challenges/2021/july/4
DEV TIME: 2.5 hrs
FEATURES:
- Flexbox layout utilising a golden mean ratio, that is derived from the Fibonacci sequence
- Earthy colour palette, to match the imagery and suit a herbarium theme
- Added mobile responsiveness (<768px)
- Nice hover effect on buttons
- Thanks to Wikipedia for dummy content
(View full screen)
{% codepen https://codepen.io/drawcard/pen/wvdXEVV %} | drawcard |
785,959 | Considerations in Building Enclaves for Multiparty Computation (Part 2) | Getting Your Code On Now that you’ve pinned down the ideal functionality of the enclave,... | 13,569 | 2021-08-09T09:21:31 | https://oblivious.ai/blog/consideration-in-building-enclaves-for-multiparty-computation-part-2/ | security, datascience, privacy, cloud | ## Getting Your Code On
Now that you’ve pinned down the ideal functionality of the enclave, and assuming you are comfortable coding up a server to handle requests from each party, we can talk about some of the aspects you probably want to keep in mind.
In AWS, you can treat a Nitro Enclave as a self-contained VM, with an Enclave Image File built from a Docker image running inside. Communication in and out of the enclave is via virtual sockets (vsock), and only to the parent instance (that is, the instance that created the enclave). The parent instance acts as an intermediary between the enclave and the outside world, with the sole exception of the KMS Proxy, which speaks directly with AWS Key Management Service.
To start, the purpose of using a trusted execution environment is that the parties don’t trust each other, so we have to assume the users of the enclave are adversarial by nature. This means there is an onus on you, the developer, to develop a secure application, which is often easier said than done. A good starting point is to create strict input and output validators. If the IO is forced to conform to a predefined JSON Schema or OpenAPI definition, you should at least be able to validate it while checking for any malicious characters and so on.
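As a concrete flavour of that, here is a small hand-rolled sketch. The schema and field names are invented for the example; in practice you would likely use a proper JSON Schema validator rather than this minimal whitelist approach.

```python
import json
import re

# Illustrative whitelist: expected keys and their required types.
EXPECTED = {"party_id": str, "payload_b64": str}
SAFE_B64 = re.compile(r"^[A-Za-z0-9+/=]*$")

def validate_request(raw: bytes) -> dict:
    """Parse JSON, reject unexpected keys, wrong types, or unsafe characters."""
    data = json.loads(raw)
    if set(data) != set(EXPECTED):
        raise ValueError("unexpected or missing keys")
    for key, typ in EXPECTED.items():
        if not isinstance(data[key], typ):
            raise ValueError(f"bad type for {key}")
    if not SAFE_B64.match(data["payload_b64"]):
        raise ValueError("payload is not base64")
    return data

ok = validate_request(b'{"party_id": "alice", "payload_b64": "aGVsbG8="}')
```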
You should endeavor to be particularly careful when handling bytes as an input, especially in Python. The YouTuber PwnFunction has a nice introductory tutorial on a known exploit in Python’s Pickle library which you can find here.
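The core of that exploit class is easy to demonstrate safely: `pickle` will happily call an arbitrary callable during deserialization. The snippet below is a deliberately benign illustration; imagine `os.system` in place of `print`.

```python
import pickle

class Sneaky:
    def __reduce__(self):
        # On load, pickle calls the returned callable with these args.
        return (print, ("code ran during unpickling!",))

payload = pickle.dumps(Sneaky())
result = pickle.loads(payload)  # prints the message as a side effect
```

This is why you should never unpickle bytes received from an untrusted party at the enclave boundary; prefer a data-only format such as JSON.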
There is, of course, an ever-persisting battle between usability and security. Some would strongly argue that enclave source code should be written in C or Rust, while others feel comfortable using languages such as Python in order to take advantage of a particular tool or framework within the enclave itself. Irrespective of your decision, the code can be made more secure by performing rigorous testing, running static analysis such as SonarCloud, and using internal firewalls to lock down any unexpected communication channels.
Once you’ve hardened the IO of the enclave, you may also want to consider your authentication model within the enclave. This can be as simple as passing the key management services the enclave can speak to as a build argument (discussed in the next section) or using pre-shared keys with TLS-PSK, for example. OAuth-based approaches may cause some additional considerations if the enclave cannot directly communicate with a trusted key authority for public-key cryptographic approaches. That’s not to say it’s impossible, but one should certainly think through the challenges and potential risks involved if a parent server is acting as a man-in-the-middle.
## Encoding Guarantees: Build Arguments and Environment Variables
Previously we established that the enclave image (converted from the Docker image) running inside the Nitro Enclave is what is attested when requesting key access from one or more of the parties. Well, we can use this fact to develop reusable enclave images.

For example, assume that we have two parties, Alice and Bob. They would like to be authorised based on some hash of their respective KMS ARNs and would also like to limit the number of function calls made by Alice to 3 and Bob to 2. Now imagine another scenario where Alice and Charlie would like to run the same interaction, but this time Alice can only make 2 function calls and Charlie 4.
In such a scenario you do not want to hard-code this information each time. Instead, you can leverage Docker build arguments and set them as environment variables within the enclave. This changes the Docker image, and with it the attestation hashes used to verify the container to the key management services. It can also be highly efficient: depending on how you structure your Docker image, build caching can make rebuilding the image with new arguments a painless process.
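As a sketch of the pattern: a Dockerfile `ARG` can be surfaced as an `ENV` variable (which changes the image and hence the attestation hashes), and the enclave parses it at startup. The `CALL_LIMITS` name and `party:count` format here are illustrative assumptions:

```python
# Sketch of reading per-party limits baked into the image via Docker build
# arguments (ARG CALL_LIMITS -> ENV CALL_LIMITS=$CALL_LIMITS). The variable
# name and format are illustrative assumptions.
import os

def load_call_limits(env=os.environ) -> dict:
    """Parse e.g. CALL_LIMITS='alice:3,bob:2' into {'alice': 3, 'bob': 2}."""
    raw = env.get("CALL_LIMITS", "")
    limits = {}
    for entry in filter(None, raw.split(",")):
        party, _, count = entry.partition(":")
        limits[party.strip()] = int(count)
    return limits
```

Rebuilding with `--build-arg CALL_LIMITS=alice:2,charlie:4` would then produce a different image, and a correspondingly different attestation, for the Alice/Charlie scenario.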
## Persistent Storage, PCI Devices & Resource Requirements
Nitro Enclaves endeavor to ensure security by locking down a virtual machine to a very limited set of functionalities. An enclave operates purely in RAM with dedicated CPUs, so many functionalities you may expect may not be available.
For example, one would typically expect to have persistent storage on a volume on an EC2 instance. However, the volume is of course outside of the enclave, so you have to encrypt data using a KMS before releasing it to the parent instance. In the context of multiparty computation, this poses an interesting question: whose keys do you use? To answer this, you may wish to consider who manages the parent instance and how many of the parties would need to collaborate to decrypt the data if the encrypted payload were ever leaked. One approach is to encrypt the packet with each party's KMS in turn and reverse the process when the payload is returned to the enclave at a later point. There is nothing obviously wrong with this approach, but as we know, every encryption and decryption adds to the overall latency of the enclave.
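The "encrypt with each party's key in turn, then reverse the process" idea can be sketched as a layered wrap/unwrap. The XOR keystream below is deliberately a toy and is not secure; a real enclave would call each party's KMS at each layer instead:

```python
# Toy illustration of layered ("onion") encryption: wrap the payload with
# each party's key in turn, then unwrap in reverse order. The SHA-256-based
# XOR keystream is NOT secure and only demonstrates the ordering; a real
# enclave would call each party's KMS here.
import hashlib
from itertools import count

def _keystream(key: bytes, n: int) -> bytes:
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def xor_layer(data: bytes, key: bytes) -> bytes:
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def wrap(data: bytes, party_keys: list) -> bytes:
    for key in party_keys:            # encrypt with each party's key in turn
        data = xor_layer(data, key)
    return data

def unwrap(data: bytes, party_keys: list) -> bytes:
    for key in reversed(party_keys):  # reverse the process to decrypt
        data = xor_layer(data, key)
    return data
```

With real ciphers, each layer means no single party can decrypt a leaked payload alone, at the cost of one extra KMS round trip per party.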
When it comes to data, we’ve found it’s best to treat the enclave as transactional and to save and reload persistent data for better confidence. That is not to say you shouldn’t save anything locally, but it’s best to consider local storage a cache. That way, if the enclave were to halt and be restored, you have the safety net of being able to reload its state.
A second challenge is that no PCI devices are available to the enclave, so if you were hoping to crunch some data on a GPU or equivalent, you may want to think again. Your only obvious option in such a scenario is to pay for an instance with more powerful CPUs and/or to allocate more of them to the enclave to facilitate threading and multiprocessing.
Finally, one must remember that the entire enclave lives within the resources permitted at launch time. This means the enclave must have enough RAM for the enclave image, all RAM required internally, and all RAM that will back the file system and so forth. This is worth keeping in mind as you develop your enclave; adopting a resource-efficient design can save you significantly on your monthly AWS bills.
## Timing Attacks
Importantly, enclaves give you a guarantee that the code you agreed on gets run, not that it is safe to run. This is a big difference, and the onus is on the developer to make sure that side channels, such as execution time, do not leak sensitive information that was unforeseen when signing off on the ideal functionality.

Let’s have a look at how the timing of execution may inadvertently change the probability of guessing a party's secret input. Suppose Alice inputs a decision tree that Bob wishes to use to classify some data. If the decision tree is not balanced, i.e., if a different number of comparisons is required depending on the path taken through the branches of the tree, then simply timing the execution may reveal what the output of the classification was.
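A minimal illustration of the leak, using a comparison counter as a stand-in for execution time (the thresholds and class labels below are made up):

```python
# Toy unbalanced decision tree: the number of comparisons (and hence the
# execution time) depends on the path taken. Thresholds/labels are made up.
def classify(x: float, counter: list) -> str:
    """Classify x, counting comparisons in counter[0] as a proxy for time."""
    counter[0] += 1
    if x < 10:          # shallow branch: 1 comparison
        return "low"
    counter[0] += 1
    if x < 100:         # deeper branch: 2 comparisons
        return "mid"
    counter[0] += 1
    return "high" if x < 1000 else "very high"  # deepest: 3 comparisons
```

Because a "low" input returns after one comparison while a "very high" input takes three, an observer who can time the call learns something about the result without ever seeing it; a fixed-time variant would evaluate every comparison on every call.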
While not always trivial to achieve, you should endeavor to create fixed-time programs for enclaves if timing is likely to reveal unintended information to one or more of the parties involved.
## Summary
While not delving into actual code, we’ve tried to outline some pointers toward building your first few enclaves. Even the pros can make a mistake with a poorly defined ideal functionality or an insecure implementation. Take your time, give it a shot and be comfortable making some mistakes initially. Enclave technology is still very new, and there is great reward in being an early pioneer in developing and leveraging enclave applications.
## Why Oblivious?
At Oblivious, we’ve built the first full-service enclave management system for multiparty applications. It’s called Ignite and it allows data scientists and machine learning practitioners to take advantage of prebuilt enclaves for data exploration, analysis, training, and inference. If you are interested in the technology, reach out to us to get started today! | jkf_oblivious |
785,965 | Get a 100k Flux grant for deploying your app on the Flux network | Flux has opened up applications for developers to get a 100,000 Flux (~$10,000 at time of writing)... | 0 | 2021-08-09T09:54:04 | https://dev.to/cryptovium/get-a-flux-grant-for-deploying-your-app-on-the-flux-network-2nbh | 
Flux has opened up applications for developers to get a **100,000 Flux** (~$10,000 at time of writing) grant for creating and deploying something awesome on the Flux network.
> **UPDATE 15 AUG 2021:** 100,000 Flux now worth ~$14,000
> **UPDATE 22 AUG 2021:** 100,000 Flux now worth ~$19,000
If you've got the ideas, skills and motivation to build something truly unique, then Flux's decentralized cloud network is the place to build it.
## What is Flux?
Flux is a powerful global decentralized computing network currently made up of over **1650 community-run, decentralized nodes** powered by the revolutionary second layer operating system: FluxOS.
{% youtube VCqHOBRuSmI %}
Each node is capable of hosting dockerized applications, with FluxOS ensuring that a set number of containers run across the network. This guarantees true decentralization and that your app is always available.
The Flux Network currently has close to 10,000 vCores, 30TB RAM and 1,000TB of SSD storage available for apps to consume.
Powering the ecosystem is the Flux blockchain, a Proof of Work chain that is used for rewarding miners and node operators, payment for compute resources and much more.
## Apps already on the Flux Network
The Flux network already hosts dozens of apps and aspects of blockchain infrastructure for other projects. Flux can run any hardened dockerized image, so the possibilities of what you can build are endless.
They recently completed a 20 dapps in 20 days challenge which saw 20 new apps added to the network, including numerous games. In fact, Flux is perfect for hosting online games and even has some Minecraft servers currently running on the network.
As mentioned above, Flux also hosts parts of other crypto projects' infrastructure, the best example of which is Kadena. There are hundreds of Kadena nodes currently being run on the Flux Network.
To see a complete list of apps running on Flux, check out:
https://www.runonflux.io/fluxos#dapps-section
## 100k Flux grants
To keep the ball rolling after the 20 dapps in 20 days challenge, Flux is looking for the next killer decentralized application and is inviting developers to submit their ideas to be in with a chance of getting one of those 100k $FLUX grants.
It can be a single app, a platform/system consisting of a combination of apps or something else entirely as long as it harnesses the strengths of a decentralized cloud and helps to grow the network.
Grants are given for development proposals that are greenlighted by the community DAO and are paid in phases at key project milestones, with the final amount paid on the final production deployment.
## How to apply
For more details on how to apply and what winning the grant will mean, please check out the following post from the Flux team:
https://fluxofficial.medium.com/call-for-developers-innovate-on-flux-and-get-a-100k-grant-e9ae90735b63
## Further reading
If you'd like to learn more about the Flux project as a whole, then check out the [Flux website](https://runonflux.io), drop by the [Flux discord](https://discord.io/runonflux) or give them a follow on [Twitter](https://twitter.com/RunOnFlux).
I also cover Flux and other projects extensively on my own [Twitter account](https://twitter.com/Cryptovium), including weekly updates on the earning potential of running a Flux node.
{% twitter 1416081604812869634 %}
Give me a follow! | cryptovium | |
786,110 | Does your "dream tech" solve your actual problems? | Last night I dreamt that I had received an offer for a round of angel investment funds for Tiny... | 0 | 2021-08-13T07:43:56 | https://jhall.io/archive/2021/08/09/does-your-dream-tech-solve-your-actual-problems/ | tinydevops, scale | ---
title: Does your "dream tech" solve your actual problems?
published: true
date: 2021-08-09 00:00:00 UTC
tags: TinyDevOps,scale
canonical_url: https://jhall.io/archive/2021/08/09/does-your-dream-tech-solve-your-actual-problems/
---
Last night I dreamt that I had received an offer for a round of angel investment funds for Tiny DevOps.
Disregarding the fact that angel investment makes no sense for my type of business, after waking I thought about it a bit.
On the one hand, a pile of cash would be nice. It would allow me to focus on improving the production quality of my [podcast](https://podcast.jhall.io/) and [YouTube channel](https://www.youtube.com/channel/UC5UfX0EgUWlcdQ2RDsq_fcA), for example, or buying some paid advertising, or probably improving lead generation in a dozen other ways.
But on the other hand, accepting investment of any kind would completely undermine the purpose of my business, which is to help staff-strapped and cash-strapped teams do better DevOps. If I suddenly had the cash to hire a team of 15, I would be in a completely different game, and no longer able to relate to the people I’m trying to help.
I see a similar trend in DevOps at large. Most of the DevOps stories we hear talked about come from big tech. This means the technical solutions we’re most often exposed to are for problems that big companies have. Scaling to thousands of nodes, or geo-redundancy, or even something as simple as blue/green deployments. These are all good tools, just as investment capital can be a good tool—but these tools are only useful if they solve **your problem**.
Most of us don’t need to scale to thousands of nodes. Most of us don’t need geo-redundancy. Most of us don’t even need blue/green deployments.
What problems are you facing? How do you know if you're choosing a solution that fits the problem, and your scale?
* * *
_If you enjoyed this message, [subscribe](https://jhall.io/daily) to <u>The Daily Commit</u> to get future messages to your inbox._ | jhall |
786,229 | Python Boot Camp Experience!!! | Friday 6th August, 2021, made the end of week the of the python boot camp at lux tech academy. For so... | 0 | 2021-08-10T14:13:02 | https://dev.to/priscillahnalubega/python-boot-camp-experience-4d5h | python, datascience, beginners, machinelearning | <p>Friday 6th August, 2021, marked the end of the week of the Python boot camp at Lux Tech Academy. For so long I have been trying to learn Python on my own, and most of the time it ends fatally. The constant bugs and indentation errors almost made me give up learning Python entirely, but that's all part of the learning process. In my recent quest to master programming, a friend of mine referred me to the Lux Tech Academy boot camp, where I have learnt and familiarized myself with a lot of new stuff, thus being a beginner for the nth time now.
</p>
<p>
Though my experience cannot be limited to what I am going to list down, that doesn’t stop me from sharing a few lessons.
</p>
<h1>INTRODUCTION</h1>
<P> Week one commenced with a variety of talks, from how to be a badass Python developer to setting up the environment and writing clean code. I learnt the professional way of writing Python code, that is, using the <a href ="https://cheatography.com/jmds/cheat-sheets/python-pep8-style-guide/"> <b>PEP 8</b></a> standard. To make my learning easier I used the <a href="https://www.pythontutorial.net/">Python Tutorial website</a> for reference. <a href="https://scrimba.com/learn/python">Scrimba</a> is also a nice place to start on your journey of becoming a Python developer. I also learnt how to write and publish technical articles.
</P>
<h2>Hello world!</h2>
<p>
As our mentor during the program always quotes, "if you can write “hello world” you can change the world", we commenced with learning the basics of the Python language, which included the syntax, data types, keywords and styles of writing Python code. At the end of the week I was able to build a <a href="https://github.com/priscillahnalubega/password-genarator">random password generator</a>.
</p>
<h2>Happy coding</h2>
<p>
The happy phrase that sends off all programmers into the deep waters of writing code. During the second week of the boot camp I learnt: <ul>
<li>lists, tuples, dictionaries and sets</li>
<li>Functions</li>
<li>mutable and immutable</li>
<li>common methods in python</li>
<li>Data structures and algorithms</li>
</ul>
With that I was able to build a <a href="https://github.com/priscillahnalubega/YouTube-downloader">YouTube downloader</a>, though I faced a lot of challenges with the pytube package.
</p>
<h2>Eat, code, sleep</h2>
<p>
In week 3 we were demystifying the Flask application factory pattern, at the end of which I was able to build a <a href="https://github.com/priscillahnalubega/flask-application">simple Flask application.</a>
</p>
<h2>Conclusion</h2>
<p> I believe that by the end of the boot camp my confidence as a software developer will have been boosted. The things I have learnt in this boot camp can't be limited to what I have shared, but this is all I can say for now.
Thanks for reading, till next time chawwww!!!
| priscillahnalubega |
786,243 | The Zen 3 eBay/StockX Market - Now Just A Normal Market | Source Code for Data Scraping: https://github.com/driscoll42/ebayScraper It has been over six months... | 0 | 2021-08-09T13:52:23 | https://dev.to/driscoll42/the-zen-3-ebay-stockx-market-now-just-a-normal-market-5365 | python, datascience, analytics | Source Code for Data Scraping: https://github.com/driscoll42/ebayScraper
It has been over six months since my last series of articles and it's time for an update on the eBay/StockX market for Zen 3, Big Navi, Ampere, and the PS5/Xbox. I'll be posting an article each day with how the markets have changed.
* CPUs: Zen 3
* Consoles: [Xbox Series S/X & PS5](https://dev.to/driscoll42/ps5-xbox-ebay-stockx-scalping-market-over-half-a-million-sold-for-104-million-in-scalper-profits-bbo)
* GPUs: RTX 30 Series & Big Navi
* Will be next week to allow for additional analysis
Highlights:
* 30,254 Zen 3 CPUs sold on eBay and StockX
* $18.29 million in sales on eBay/StockX, with $410k in profits for sellers and $1.74 million in profits for eBay/StockX/PayPal
* Prices are now at or below MSRP
The Zen 3 market on eBay and StockX is now almost boring, which is a wonderful thing. Gone are the days of last year when it was impossible to get a Zen 3 CPU for less than 1.4-2.4x its retail price. Now you can buy one at MSRP easily, and a used one for less than MSRP. At time of writing, the 5800X sells for $366, 82% of its MSRP, and even the 5950X is selling for 98% of its MSRP at $780. Further, if I go to Newegg, one can easily buy any of the CPUs, even on sale. The exception to this are the new Zen 3 APUs, the 5300G, 5600G, and 5700G, but even they are only going for 3-15% over MSRP, and I expect they will drop to normal as soon as they are easily available.
The story of the price drops on eBay is primarily the story of Antonline's pricing. On March 17, Antonline sold over 1000 5600X's in less than two weeks at $321, and has steadily been dropping prices every few weeks: in late May to $297, selling another 500, and more recently in July to $270, less than MSRP, presumably to clear stock. A similar story plays out for the 5900X, most dramatically on May 6, when Antonline made it available for $585 and sold over 600 CPUs; they also made the 5950X available for $879 and sold over 700 CPUs in a week. Antonline's pricing effectively sets a ceiling on eBay, and by now it's unprofitable to scalp Zen 3 CPUs.

* [Zen 3 Median Pricing in Raw Dollars](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/Zen%203%20Median%20Pricing%20-%20%24.png)
* [Zen 3 Median Pricing - % MSRP - 7 Day Rolling Average](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/Zen%203%20Median%20Pricing%207%20Day%20Rolling%20Average%20-%20%25%20MSRP.png)
* [Zen 3 Median Pricing in Raw Dollars - 7 Day Rolling Average](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/Zen%203%20Median%20Pricing%207%20Day%20Rolling%20Average%20-%20%24.png)
CPU Graph | MSRP| Total Sold | Median Price | Past Week Median Price | Casual Scalper Break Even | Sophisticated Scalper Break Even | Total Sales | Estimated Scalper Profits | Estimated eBay/PayPal Profits
---|---|---|---|---|---|---|---|---|---
[5300G](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5300G.png)|$150|7|$304|No Sales|$153 |$188 |$2,168 |$740 |$281
[5600G](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5600G.png)|$259 |72 |$379 |$300 |$263 |$321 |$26,571 |$3,974 |$2,427
[5700G](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5700G.png)|$359 |71 |$515 |$370 |$365 |$448 |$37,083 |$4,876 |$4,415
[5600X](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5600X.png)|$299 |9,386 |$1,034 |$265 |$304 |$375 |$3,280,583 |-$178,286|$295,115
[5800X](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5800X.png)|$449 |7,000 |$730 |$366 |$457 |$558 |$3,229,732 |-$474,869|$299,452
[5900X](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5900X.png)|$549 |6,533 |$323 |$530 |$558 |$681 |$4,824,243 |$487,347 |$451,065
[5950X](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5950X.png)|$799 |4,942 |$422 |$780 |$812 |$987 |$5,093,480 |$364,251 |$475,499
Total|N/A |28,011 |N/A |N/A |N/A |N/A | $16,493,861 |$208,033 |$1,528,254
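For reference, a break-even figure like those in the table can be estimated by requiring net proceeds after marketplace fees to cover the purchase price plus shipping. The 12.55% fee rate and $5 shipping below are assumptions for illustration, not the exact inputs behind the columns above:

```python
# Rough sketch of estimating a seller's break-even sale price from an
# assumed marketplace fee rate and shipping cost. The 12.55% fee and $5
# shipping are illustrative assumptions, not the exact figures used for
# the tables in this article.
def break_even(purchase_price: float, fee_rate: float = 0.1255,
               shipping: float = 5.0) -> float:
    """Sale price at which net proceeds equal what the seller paid."""
    return round((purchase_price + shipping) / (1 - fee_rate), 2)
```

For example, under these assumptions a $299 CPU would need to sell for roughly $348 before the seller made anything at all.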
StockX's market trends largely mirror eBay's, just vastly smaller: only 8% of the quantity sold and 10% of the revenue, and slightly cheaper, ~3% cheaper in the past week except for the 5600X, which is 8% cheaper. By this point it is also unprofitable to scalp on StockX, unless one is trying to resell an old CPU to upgrade.
CPU|Total Sold|Average Sales Price|Last Week Average Price| Total Sales Volume | Estimated StockX Profits | Estimated Scalper Profits
---|---|---|---|---|---|---
5600X|98|$307|$245|$30,086|$3,610|-$2,826
5800X|115|$393|$354|$45,195|$5,423|-$11,863
5900X|1291|$750|$539|$968,250|$116,190|$143,301
5950X|739|$1,022|$759|$755,258|$90,631|$74,166
Total|2,243|N/A|N/A|$1,798,789|$215,855|$202,777

* [5600X Cumulative Sales/Profits](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5600X%20Cumulative%20Plots.png)
* [5800X Cumulative Sales/Profits](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5800X%20Cumulative%20Plots.png)
* [5900X Cumulative Sales/Profits](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5900X%20Cumulative%20Plots.png)
* [5950X Cumulative Sales/Profits](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5950X%20Cumulative%20Plots.png)
* [5300G Cumulative Sales/Profits](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5300G%20Cumulative%20Plots.png)
* [5600G Cumulative Sales/Profits](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5600G%20Cumulative%20Plots.png)
* [5700G Cumulative Sales/Profits](https://raw.githubusercontent.com/driscoll42/ebayMarketAnalyzer/master/2108-Images/5700G%20Cumulative%20Plots.png)
Plotting the sales dollars over time, it is easy to see whenever there was a massive drop of stock on eBay, as total sales spike high; also, whenever the 5800X dropped below some threshold (e.g. $500, $450, etc.), that triggered a number of buyers. The real standout is the 5950X, where Antonline saw $300,000 in revenue in a single day from making several hundred 5950X's available for sale at once.

Taking a look again at who is selling the CPUs on eBay, by now it is clearly a few dominant players, mostly Antonline and Best Buy, who own the market, with 52% of CPUs sold by sellers who have sold over 50 CPUs. 31% of CPUs are still sold by sellers with <5 sales, but as the market normalizes, it'll be mostly the big retailers.

I have mostly filtered out the suspicious sellers now entirely, so very few have almost no feedback, but as before the plot is dominated by the two stores at the far end, though some other stores with less feedback even out the curve some.
I have opted not to get pricing on various other CPUs as I did last time primarily due to the increase in manual labor to scrape eBay now. | driscoll42 |
786,291 | Moving on Data structures | Lists Lists are immutable. Lists let you hold any number of items of the same type. let... | 14,021 | 2021-08-10T14:40:58 | https://dev.to/frolovdev/moving-on-data-structures-1nc8 | ocaml, newbie, howto, tutorial | ### Lists
Lists are immutable. Lists let you hold any number of items of the same type.
```ocaml
let l = ["i"; "am"; "list"];;
```
### Tuples
A tuple is an ordered collection of values that can each be of a different type. You can create a tuple by joining values together with a comma.
```ocaml
let a_tuple = (3,"three") ;;
val a_tuple : int * string = (3, "three")
let another_tuple = (3,"four",5.) ;;
val another_tuple : int * string * float = (3, "four", 5.)
```
Destructuring of a tuple
```ocaml
let (x,y) = a_tuple ;;
(**val x : int = 3
val y : string = "three" *)
```
Adding elements in front of list
```ocaml
"French" :: "Spanish" :: languages ;;
(** - : string list = ["French"; "Spanish"; "OCaml"; "Perl"; "C"] *)
```
Concatenation of two lists
```ocaml
[1;2;3] @ [4;5;6] ;;
(** - : int list = [1; 2; 3; 4; 5; 6] *)
```
### Records
```ocaml
type point2d = { x : float; y : float }
```
Destructuring of the record
```ocaml
let magnitude { x = x_pos; y = y_pos } =
Float.sqrt (x_pos **. 2. +. y_pos **. 2.)
;;
(* val magnitude : point2d -> float = <fun> *)
```
Composing records
```ocaml
type circle_desc = { center: point2d; radius: float }
type rect_desc = { lower_left: point2d; width: float; height: float }
type segment_desc = { endpoint1: point2d; endpoint2: point2d }
```
Variant types
```ocaml
type scene_element =
| Circle of circle_desc
| Rect of rect_desc
| Segment of segment_desc
```
### Arrays
The mutable guy.
```ocaml
let numbers = [| 1; 2; 3; 4 |] ;;
(* val numbers : int array = [|1; 2; 3; 4|] *)
numbers.(2) <- 4 ;;
(* - : unit = () *)
numbers ;;
(* - : int array = [|1; 2; 4; 4|] *)
```
The .(i) syntax is used to refer to an element of an array, and the <- syntax is for modification. Because the elements of the array are counted starting at zero, element .(2) is the third element.
What is unit? Unit is just a placeholder, similar to void in other, more mainstream languages.
### Mutable records
Records, which are immutable by default, can have some of their fields explicitly declared as mutable. Here’s an example of a mutable data structure for storing a running statistical summary of a collection of numbers.
```ocaml
type running_sum =
{ mutable sum: float;
mutable sum_sq: float; (* sum of squares *)
mutable samples: int;
}
```
Let's write some imperative stuff
```ocaml
let mean rsum = rsum.sum /. Float.of_int rsum.samples
let stdev rsum =
Float.sqrt (rsum.sum_sq /. Float.of_int rsum.samples
-. (rsum.sum /. Float.of_int rsum.samples) **. 2.)
;;
(* val mean : running_sum -> float = <fun> *)
(* val stdev : running_sum -> float = <fun> *)
let create () = { sum = 0.; sum_sq = 0.; samples = 0 }
let update rsum x =
rsum.samples <- rsum.samples + 1;
rsum.sum <- rsum.sum +. x;
rsum.sum_sq <- rsum.sum_sq +. x *. x
;;
(* val create : unit -> running_sum = <fun> *)
(* val update : running_sum -> float -> unit = <fun> *)
```
Take a look at the semicolons used to sequence operations. When you are working purely functionally, these weren't necessary, but you start needing them when you write imperative code.
```ocaml
let rsum = create () ;;
(* val rsum : running_sum = {sum = 0.; sum_sq = 0.; samples = 0} *)
List.iter [1.;3.;2.;-7.;4.;5.] ~f:(fun x -> update rsum x) ;;
(* - : unit = () *)
mean rsum ;;
(* - : float = 1.33333333333333326 *)
stdev rsum ;;
(* - : float = 3.94405318873307698 *)
```
### Refs
Ref is a single mutable variable (hi ma react boys)
```ocaml
let x = { contents = 0 } ;;
(* val x : int ref = {contents = 0} *)
x.contents <- x.contents + 1 ;;
(* - : unit = () *)
x ;;
(* - : int ref = {contents = 1} *)
```
Some syntactic sugar for ref
```ocaml
let x = ref 0 (* create a ref, i.e., { contents = 0 } *) ;;
(* val x : int ref = {Base.Ref.contents = 0} *)
!x (* get the contents of a ref, i.e., x.contents *) ;;
(* - : int = 0 *)
x := !x + 1 (* assignment, i.e., x.contents <- ... *) ;;
(* - : unit = () *)
!x ;;
(* - : int = 1 *)
```
Nothing magic here, let's implement the ref by ourselves
```ocaml
type 'a ref = { mutable contents : 'a } ;;
type 'a ref = { mutable contents : 'a; }
let ref x = { contents = x } ;;
(* val ref : 'a -> 'a ref = <fun> *)
let (!) r = r.contents ;;
(* val ( ! ) : 'a ref -> 'a = <fun> *)
let (:=) r x = r.contents <- x ;;
(* val ( := ) : 'a ref -> 'a -> unit = <fun> *)
``` | frolovdev |
786,349 | Cross-Platform app with C# and Uno Platform - Part 1 | If you are developing an app, you want to reach as many users as you can and the easiest and the most... | 14,025 | 2021-08-09T17:57:02 | https://dev.to/parallelthreads/writing-a-cross-platform-app-with-c-and-uno-platform-lop | csharp, mobile, dotnet |
If you are developing an app, you want to reach as many users as you can, and the easiest and most obvious target is a web application. It is easily accessible irrespective of the platform users are on. However, not all applications\scenarios can be delivered through a browser, and this is why we have desktop, wearable and mobile apps. So now you have to write software that supports these platforms as well. Wouldn't life be easier if you could write code just once that would work on all these platforms?
### So, desktops\laptops and mobile - that's 2 categories?
*Only* 2 platforms (or form factors) - desktops\laptops can be Windows, Mac or Linux. Phones are either Android or iOS. This is not even considering the huge market of Tablets and Wearables. It is *also* not considering the various versions of those devices running various versions of the operating system on them.
<figure>

<figcaption><p><a href="https://giphy.com/gifs/reaction-BmmfETghGOPrW">via GIPHY</a></p></figcaption>
</figure>
In this article, we will take a look at [Uno Platform](https://platform.uno/) to develop our app for Windows, Mac, Linux and Android.
**Note:** We will discuss a bit more about cross-platform development, challenges and alternatives to the Uno Platform in part 2 of this article.
### What is Uno Platform?
Uno Platform is an [open source](https://github.com/unoplatform) UI platform to build apps with a single codebase that can target a wide variety of platforms. You write code in C# and XAML to target all these platforms. It builds on [existing capabilities provided by each platform](https://platform.uno/how-it-works/) internally, so that you can focus on developing your app. It does allow you to add\modify the app's behaviour based on where your app is running.
<figure>

<figcaption><p>Source:<a href="https://github.com/unoplatform/uno/blob/master/doc/articles/what-is-uno.md">Uno Platform</a></p></figcaption>
</figure>
[GitHub docs](https://github.com/unoplatform/uno/blob/master/doc/articles/how-uno-works.md) has a good summary on how Uno works for build, runtime and rendering across platforms.
### Simple app with Uno
We will look at creating a simplified version of the Portfolio Tracker web app that I [previously wrote about](https://dev.to/parallelthreads/building-a-web-app-with-asp-net-core-blazor-webassembly-13kj). The constraint I have imposed is, I shouldn't have to modify my service or database. It has to work with the existing set of APIs.
My requirement boils down to this -
* Use this current available data and write an app that runs on Windows, Linux, Mac and Android.
* Reuse as much code as possible from the previous Web Assembly project that I've created.
**Note:** Uno supports more platforms than the ones I'm targeting. It supports iOS, Tizen and Web Assembly as well. I cannot try out iOS and Tizen so I've excluded those and since I already tried out Web Assembly using Blazor I didn't include that as well.
I have used Visual Studio 2019 on Windows and Visual Studio 2019 for Mac. Uno Platform getting started guide for both platforms are available [here](https://platform.uno/docs/articles/get-started-vs.html).
### Project structure
Here is the Visual Studio project structure for this app -
<figure>

<figcaption>Uno Platform Visual Studio Project Structure (in Windows)</figcaption>
</figure>
* The Mac target project is unloaded as it is not supported on Windows.
* The Shared project is only a list of files. They are shared by each target platform and built for each as well.
* Each target project can have items specific to itself - fonts and image assets for each platform are couple of examples.
* Any reference\NuGet package to be added has to be added to all target platforms.
* MainPage.xaml in the shared project is where the navigation is set up using the [NavigationView](https://docs.microsoft.com/en-us/windows/apps/design/controls/navigationview). The default view is also set in MainPage.xaml.cs, and in this case, Dashboard is set as the default.
### The code
From the actual code perspective, it is like writing a C#\XAML UWP app. We're working on the UI, and we need UI controls. I will come back to this in part 2 to talk a bit more, but I will mention the Windows Community Toolkit here. There is a section in the [docs](https://platform.uno/docs/articles/uno-community-toolkit.html?tabs=tabid-vswin) on how to use the toolkit, which is ported to the other platforms. I did include DataGrid, NavigationView and ProgressBar in the project and they worked fine.
I could not try out gesture support as I tried the app in the Android emulator and not on a physical device.
In a real production app, it may be the case that we will need platform-specific behaviour. I didn't really have such a scenario for this app, but I wanted to try it out anyway, so this is what I did: on Android I wanted the navigation menu to be the hamburger menu (3 stacked dashes) on the top left. For desktops\laptops, we have plenty of screen space, so I did not want a hamburger menu and instead wanted menu items at the top, with the selected one being underlined.
```
#if __ANDROID__
RootNavigationView.PaneDisplayMode = NavigationViewPaneDisplayMode.Auto;
#else
RootNavigationView.PaneDisplayMode = NavigationViewPaneDisplayMode.Top;
#endif
```
Uno does support platform specific code both in [C#](https://platform.uno/docs/articles/platform-specific-csharp.html) and via [XAML](https://platform.uno/docs/articles/platform-specific-csharp.html) as well.
Couple of app screenshots here from Linux and Android - Linux was a virtual machine on Windows 10 running Kubuntu 18.04.
<figure>

<figcaption>C# UI app in Linux</figcaption>
</figure>
The Android screenshot is from the emulator, as I didn't root my phone to install this locally developed app. Yeah, the app does not look great, thanks to my choice of painting it gray.
<figure>

<figcaption>Same app running in the Android Emulator</figcaption>
</figure>
### How is Uno so far?
You can target a wide variety of platforms with a single codebase, and that is neat. It includes WebAssembly as well, so it covers the browser space too. In the app, the code in the folders under the Shared project (Models, Interfaces, Services, Helpers, etc.) is code that I've used as-is from the previous project. This is the **biggest advantage** I've seen writing this app with Uno. I have my service and database, and I have all the code to fetch and handle the data. The focus is only on building the UI. Unless we need\want to do something platform specific, Uno abstracts it from us.
XAML Hot Reload is super useful, especially if we're used to, say, React development. Uno has an active Discord channel where core members participate, which is encouraging to see.
There is also decent support for 3rd-party libraries (excluding UI controls here, which I talk about in part 2), including presentation frameworks like MVVM Light and theming options like Material UI.
Link to the [GitHub repo for this app](https://github.com/AmitEMV/PortfolioTrackerCrossPlatform).
Cover image [credit](https://pixabay.com/photos/kings-cross-station-london-england-1647741/) | parallelthreads |
786,381 | My memory in Brisbane | I spent a year working holiday in Australia in the year 2013, and it changed my life. It was not... | 0 | 2021-08-10T01:16:42 | https://victorleungtw.medium.com/my-memory-in-brisbane-7376b1ac22ab | workingholiday, brisbane, life, australia | ---
title: My memory in Brisbane
published: true
date: 2021-08-09 17:38:51 UTC
tags: workingholiday,brisbane,life,australia
canonical_url: https://victorleungtw.medium.com/my-memory-in-brisbane-7376b1ac22ab
---
I spent a year on a working holiday in Australia in 2013, and it changed my life. It was not something I planned out, but life was full of adventures. I knew I had a dream to work overseas, but the chances were low when I was studying chemistry as an undergraduate in Hong Kong. There were not many exchange opportunities, and I was jealous that the business school students could easily travel abroad to study. After I took my last exam and before officially graduating, I went to Brisbane for my graduation trip, took a short English course to brush up my English and did a brief internship working at an adventure centre as free labour. It was an incredible journey. I was meeting new friends, hugging a koala and enjoying the beach. I remember lying on the sand at the Gold Coast, enjoying the sunshine, when a touching moment hit me. I felt: why was life so unfair? I mean, I had suffered all the stress of public exams in Hong Kong, while Australians could just lie on the beach, surf all day and enjoy their lives! The competition in Hong Kong was keen, and our living conditions were poor, while Australia seemed to be a prosperous country, rich in natural resources. I looked at the sky; it was so much bluer in Australia than in Hong Kong.
I grabbed the sand, and my tears came down. Why did I not deserve such a good environment? It reminded me of an ancient Chinese story: like a rat, if you were unlucky enough to be born on a dirty street, you would have a stressful life, worry about being killed by people all day and suffer from hunger. Instead, if you were lucky enough to be born as a mouse inside a kitchen, you would have unlimited clean food to eat all day and a happy life. That was the realisation of Li Si, who decided to move to another state with a better environment. Later on, he became very successful as chancellor to the emperor. I felt the same way: the stress in Hong Kong was inhibiting my growth. I wanted to stay in Australia, and luckily I got a job offer after the internship.
Before I got the official job offer, I went back to Hong Kong after travelling in Australia. I graduated with a bachelor's degree in chemistry and got my first full-time job as a test engineer in a German company's laboratory. The colleagues were friendly and taught me a lot, but the tasks were boring and repetitive, and I worked long hours. I was in the department for food-grade safety testing of kitchen utensils. The daily duties went like this: first, I received the plastic cups to test. I needed to use scissors to cut them into standard sizes, such as 5cm x 5cm, and measure the surface area accurately. Second, depending on the sample size, I needed to put them in different volumes of acid or water in beakers. I had to prepare a couple of samples and put them in different ovens at different temperatures, such as 30°C, 50°C and 100°C. I needed to label them properly and use a timer to measure the time accurately.
Finally, after the plastic had been in the acid or water for a certain period, the liquid was tested further for chemical components, either with a couple more steps mixing different chemicals or by putting it in a machine to generate a report using mass spectrometry. Sometimes, after all the chemical tests, I would need to run a taste test by putting the water soaked with a plastic sample in my mouth, to rate whether the water tasted like plastic or not. It was a tedious job with few career prospects. After ten years, I could be a senior test engineer or a testing manager, but I already knew that this was not the career I wanted to pursue. Therefore, a few months later, I resigned. But before I left, I met a new girlfriend in the marketing department, with whom I always hung out for lunch. She was sad to know I was leaving and, even worse, that I was leaving Hong Kong for Australia, because I had decided to go back to Brisbane to start my new adventure: a second life with a new job title, Assistant Marketing Manager.
It was an excellent title, but it did not represent the real nature of my job. I earned minimum wage as cheap labour at a tourist destination, doing everything from social media, reception and answering the phone to kayaking instruction and cleaning toilets for events. Anyway, the salary in Australia was higher than my wages as a test engineer in Hong Kong, and I was still young enough to apply for the working holiday visa for the job. The host family I stayed with was a lovely couple who offered me cheap rent. Lindsey Timms and Elisabeth Timms were a genuinely lovely old couple, English teachers and Christians who welcomed me with generosity. They lived in a rural area, with their kids visiting them sometimes. Their house was big compared to Hong Kong living standards, and I was scared of their dog. When I tried to enter the house by the front door, it would follow me to the front, so I had to trick him into thinking I was entering by the back door, then enter at the front without the dog barking and waking everyone up. Sometimes the host family had other guests staying, such as students from Japan learning English, and we all became very close friends. Every morning, I took the train from the host family to the city, an hour away. The place I worked was a tourist destination along the Brisbane River in town. It was an adventure centre for kayaking, rock climbing, stand-up paddleboarding, abseiling, Segways, bike hire, etc. It also offered a venue for events, such as weddings and new year celebrations with fireworks.
Since my job title was fake, and I was in Brisbane alone, I had to do everything in this new environment to keep my job. I still remember the night of New Year's Eve. My company organised an event for the celebration. It operated like a bar selling alcohol and drinks, with food and people dressing up to sing and dance. While all the customers were drinking and enjoying themselves, I was working. My task was lifting heavy crates of beer from the fridge to the venue. It was so heavy, and my arms were sore. But this was a pleasant task comparatively; at least I could train my arms. The worse task was emptying the rubbish bins and throwing the mess into the rubbish area. It was very smelly: a terrible smell with a mix of alcohol and human vomit. I have hated drinking since that moment, as it smells so bad.
Meanwhile, the toilet was the worst. It was a biohazard, with all the drunk people throwing up and the tissue paper clogging the toilets when they tried to flush it away. I could not understand the fun of being drunk. To me, they were acting like animals. The men were not behaving like educated gentlemen but were just drunk and yelling at each other. The women were dressed up in a sexy way, but I did not find them attractive when they were drunk and rowdy. When the fireworks went off in the sky at midnight, everyone was cheering and happy, except me, feeling sad in the corner dealing with all the rubbish and the mess left by the drunk people. I wanted to cry, but I knew I needed to stay strong.
I only worked at the events on a few occasions. Instead, my main job was on the tourism side. As I could speak Cantonese, Mandarin and English, my role was dealing with Chinese tourists and local Chinese students, which was a big market. I invited all the student unions to come kayaking at a discounted rate as a promotion. I also helped TV channels shoot tourist shows in Brisbane. One time, Hong Kong's TVB came to our site, and I went kayaking with the Miss Melbourne of that time. It was memorable to kayak with this beautiful lady, and the show can still be found on YouTube. I was the kayaking instructor for her. I started to enjoy kayaking after working there, living a healthy lifestyle with lots of exercise. My skin got so burnt and dark that tourists thought I was from the Philippines instead of Hong Kong. It was laborious work because the kayaks were heavy to lift from the water, but sometimes I enjoyed this hard work instead of sitting in the office. Working in the back office drove me crazy; the phone kept ringing, and I had to answer it to take customers' bookings while multitasking whenever customers came in to buy a drink at the front desk. We also had a coffee cart outside in the cafe area, where I had to work as a barista making coffee. I had no idea of the difference between a flat white and a latte, so I just made the same coffee, and not many customers complained about the amount of milk.
I brought value to the company, not just because of my multiple roles but also because I was digitally savvy. I knew how to use social media, so I was in charge of their online presence. I took good photos of the visitors every day with the beautiful view of the Brisbane River. The boss also wanted to use hacks to boost the number of Twitter followers. There were many hacks; one of them was exchanging follows. First, I used Twitter to follow a lot of different accounts to raise awareness. Then I unfollowed them after they followed me back. As a result, the boss was happy to see ten thousand followers on Twitter, making it the number-one-ranked tourist destination account. It would probably be banned these days, but generating fake followers like this was common practice back in the day. The term SEO (Search Engine Optimisation) was also trendy: hacking the Google algorithm so that you got to the first search results, with tricks like putting in a lot of referral links, etc. At that moment, it was the dream of all the marketing people, and I realised none of them was technically savvy enough to understand how it works. So I decided to learn more about it and specialise in building websites, which is the story for the next chapter.
_Originally published at_ [_http://victorleungtw.com_](https://victorleungtw.com/2021/08/10/my-memory-in-brisbane/) _on August 9, 2021._ | victorleungtw |
786,512 | OpenNMS On the Horizon – Docs, Nephron, Flows, Config Management, Cortex, Twin API, gRPC, Trapd, Non-Root, Vue, Maps, Web | Since last time, we did more work on documentation, Nephron and flows, config management, Cortex, the... | 0 | 2021-08-09T19:12:24 | https://www.opennms.com/en/blog/2021-08-09-opennms-on-the-horizon-docs-nephron-flows-config-management-cortex-twin-api-grpc-trapd-non-root-vue-maps-web/?utm_source=rss&utm_medium=rss&utm_campaign=opennms-on-the-horizon-docs-nephron-flows-config-management-cortex-twin-api-grpc-trapd-non-root-vue-maps-web | ooh, configapi, cortex, documentation | ---
title: OpenNMS On the Horizon – Docs, Nephron, Flows, Config Management, Cortex, Twin API, gRPC, Trapd, Non-Root, Vue, Maps, Web
published: true
cover_image: https://pbs.twimg.com/media/E7PfstXWQAQS9o-?format=jpg&name=large
date: 2021-08-09 19:10:03 UTC
tags: OOH,configapi,cortex,documentation
canonical_url: https://www.opennms.com/en/blog/2021-08-09-opennms-on-the-horizon-docs-nephron-flows-config-management-cortex-twin-api-grpc-trapd-non-root-vue-maps-web/?utm_source=rss&utm_medium=rss&utm_campaign=opennms-on-the-horizon-docs-nephron-flows-config-management-cortex-twin-api-grpc-trapd-non-root-vue-maps-web
---
Since last time, we did more work on documentation, Nephron and flows, config management, Cortex, the Twin API, Trapd, running as non-root, Vue Geomaps, and web input handling.
Also, Tarus Balog − the original project steward of OpenNMS and a co-founder of The OpenNMS Group − [announced his departure](https://www.adventuresinoss.com/2021/08/09/on-leaving-opennms/) this morning. The network monitoring landscape would not be the same without you, and we wish you luck on your next act.
<!--
git log --since='2021-08-02 00:00:00' --until='2021-08-09 00:00:00' --author=bamboo@opennms.org --invert-grep --all --no-merges --color=always --format='%Cblue%ai %Cgreen%aN %Creset%s %Cblue(%H)%Cred%d' --author-date-order | sort | less -R
-->
## Github Project Updates
### **Internals, APIs, and Documentation**
- Bonnie and Mark did more work on various cleanups in topology and provisioning-related documentation.
- Bonnie and Mark continued their work on doc table formatting cleanups.
- Stefan merged his fixes to flapping Nephron tests.
- Freddy continued his work on the new config management API.
- Chandra worked on diagnosing a bug in flow processing.
- Stefan continued his work on Cortex flow aggregation and labeling.
- Yang Li continued his work on config management APIs.
- Chandra worked on a gRPC implementation of the new Twin synchronization API.
- Dustin started working on implementing Trapd config support for the Twin API.
- Dustin created an in-memory implementation of the Twin API.
- I did more work on trying to wrap up run-as-non-root issues.
### **Web, ReST, UI, and Helm**
- Jane continued her work on building a proof-of-concept geomap in Vue.
- Jeff updated the notice wizard to handle input better.
### Contributors
<!--
git log --since='2021-08-02 00:00:00' --until='2021-08-09 00:00:00' --all --color=always --no-merges --pretty=format:'%aN' | sed -e 's,^ *,,' -e 's, *$,,' | sort -u | grep -viE '(dependabot|Atlassian Bamboo|CI.CD System)' | sed -e 's,^,\* ,'
-->
Thanks to the following contributors for committing changes since last OOH:
- Benjamin Reed
- Bonnie Robinson
- Chandra Gorantla
- Dustin Frisch
- Freddy Chu
- Jane Hou
- Jeff Gehlbach
- Mark Mahacek
- Patrick Schweizer
- Stefan Wachter
- Yang Li
## Release Roadmap
### August Releases
<!--
git log --no-merges meridian-2016.1.23-1..origin/release-2016.x -- . ":(exclude).circleci"
-->
OpenNMS is on a monthly release schedule, with releases happening on the second Wednesday of the month.
The next OpenNMS release day is August 11th, 2021.
We currently expect a Horizon 28.0.2 release, plus updates to all supported Meridian releases.
### Next Horizon: 29 (Q4 2021)
The next major Horizon release will be Horizon 29.
The current roadmap for Horizon 29 includes the following goals:
- running as non-root by default
- refactor the Minion's communication to get rid of out-of-band ReST calls to the OpenNMS core
- add support for persistence of flows to Cortex
- start the groundwork for replacing the topology UI with a pure-javascript version
### Next Meridian: 2022 (Q? 2022)
With Meridian 2021 recently out, we do not yet have a specific timeline for Meridian 2022.
Expect it to include -- at the very least -- the JDK11 requirement and flow aggregation improvements from Horizon 28.
Ideally it will contain work going into Horizons 29 (and probably 30) if our timeline holds. 
### Disclaimer
_Note that this is just based on current plans; dates, features, and releases can change or slip depending on how development goes._
_The statements contained herein may contain certain forward-looking statements relating to The OpenNMS Group that are based on the beliefs of the Group’s management as well as assumptions made by and information currently available to the Group’s management. These forward-looking statements are, by their nature, subject to significant risks and uncertainties._
_...We apologize for the excessive disclaimers. Those responsible have been sacked._
_Mynd you, møøse bites Kan be pretti nasti..._
_We apologise again for the fault in the disclaimers. Those responsible for sacking the people who have just been sacked have been sacked._
<!--
## Calendar of Events
### **[OpenNMS Training](https://hs.opennms.com/training-registration-2020) - Moonachie, New Jersey - April 27th through May 1st, 2020**
The OpenNMS Group [still hopes to be offering training](https://hs.opennms.com/training-registration-2020) at SecureWatch 24 Fusion Center in Moonachie, New Jersey the week of April 27th. 8 seats are available, and the deadline for signing up is April 17th.
-->
## Until Next Time…
If there’s anything you’d like me to talk about in a future OOH, or you just have a comment or criticism you’d like to share, don’t hesitate to [say hi](mailto:twio@opennms.org).
- Ben
<!--
https://github.com/OpenNMS/twio-fodder/blob/ee50b0f05f3f93a66c242448a70fdca3993b972a/scripts/twio-issues-list.pl
-->
## Resolved Issues Since Last OOH
- [NMS-1231](https://issues.opennms.org/browse/NMS-1231): Change the webUI so it runs as a non-root user easily and reliably
- [NMS-11730](https://issues.opennms.org/browse/NMS-11730): The Dev Documentation doesn't have information about the Hardware Inventory
- [NMS-11970](https://issues.opennms.org/browse/NMS-11970): Create opennms user on install
- [NMS-11982](https://issues.opennms.org/browse/NMS-11982): syslogd as non-root user
- [NMS-12005](https://issues.opennms.org/browse/NMS-12005): opennms.service in non-root environment
- [NMS-12007](https://issues.opennms.org/browse/NMS-12007): opennms init script "runas" setting
- [NMS-12026](https://issues.opennms.org/browse/NMS-12026): TrapD won't run as non-root user
- [NMS-12978](https://issues.opennms.org/browse/NMS-12978): Add missing Prometheus collectd example in our documenation
- [NMS-13370](https://issues.opennms.org/browse/NMS-13370): Hardware Inventory Plugin needs docs
- [NMS-13457](https://issues.opennms.org/browse/NMS-13457): Geo-map POC: Investigate using AG-Grid to display nodes list on the geo-map page
- [NMS-13459](https://issues.opennms.org/browse/NMS-13459): Unable to create report on Horizon 28.0.1
- [NMS-13466](https://issues.opennms.org/browse/NMS-13466): Nephron: add more tests | rangerrick |
786,540 | Bug Smash is Back — Join In! 🐛 | Bug Smash aims to surface issues in the Forem codebase that are particularly interesting! If you're looking for some open source issues to tackle, we'd love to have you! | 0 | 2021-08-10T18:04:51 | https://dev.to/devteam/bug-smash-is-back-join-the-challenge-g41 | devbugsmash, opensource, forem, contributorswanted | ---
title: Bug Smash is Back — Join In! 🐛
published: true
description: Bug Smash aims to surface issues in the Forem codebase that are particularly interesting! If you're looking for some open source issues to tackle, we'd love to have you!
tags: devbugsmash, opensource, forem, contributorswanted
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2rbw9t2do95ai017bwhg.png
---
> _Author's note: A few folks from our community surfaced valid concerns that Bug Smash might perpetuate the exploitation of early-career developers' time in exchange for inadequate rewards. We hear these concerns and do not want this program to encourage overwork or unhealthy attitudes about newer developers._
> _As such, we wanted to clarify our goal. Bug Smash is a way for us to surface issues that active contributors might be interested in or newer contributors might enjoy working on. The Forem codebase indeed benefits from our contributors' time but our goal here is not free labor — it is to enjoy collaborating with one another on open source for those interested. It is also to surface issues that might not be easy to find within our product roadmap. As such, we have edited the post to remove language that might lead people to believe we view open source experience, stickers, or badges as payment. Bug Smash is meant to be a supplement to the work our paid maintainers and contributors do. We appreciate our community for pushing us to clarify this!_
### Announcing the Second DEV Community Bug Smash!
You might remember that a few months back, we announced a fun new opportunity to collaborate with other developers, participate in open source, and help maintainers make Forem a better place.
{% link https://dev.to/devteam/join-us-for-the-first-dev-community-bug-smash-3plm/ %}
We saw such a fantastic response from all participants and merged so many amazing fixes during the first DEV Community Bug Smash that we decided to bring it back for another round!
#### Here’s everything you need to know about the DEV Community Bug Smash this time around.
_Rules have been updated since the first Bug Smash. Even if you participated last time, please read them!_
### Details
> #### What is Bug Smash?
The DEV Community Bug Smash invites members of our community to resolve one (or several!) of our predetermined bugs in the Forem codebase [here](https://github.com/forem/forem/labels/Bug%20Smash). _We will send a limited-edition sticker pack and profile badge to anyone who smashes a bug, writes a reflection post, and would like them!_
> #### When is Bug Smash?
Bug Smash will be running from August 10th through September 3rd, 2021.
Please note that anyone actively working on Pull Requests can go over the September 3rd deadline.
> #### Who can participate?
Bug Smash is open to anyone in our community who would like to participate.
> #### Where will I be smashing bugs?
**In the Forem repository!**
For Bug Smash, you’ll be tackling issues labeled with (you guessed it,) `bug smash` in our repo (click [here](https://github.com/forem/forem/issues?q=is%3Aissue+is%3Aopen+label%3A%22bug+smash%22+no%3Aassignee) to see the full list). Any issue not labeled with `bug smash` is not part of the DEV Community Bug Smash.
> #### Why should I participate?
The DEV Community Bug Smash is a great way for you to get more practice with tackling issues in GitHub if you’re a newer developer. If you have more experience, this initiative is a fantastic opportunity to help a community you know and love (DEV!). The Forem engineering team is excited to feature this list of issues that they feel active contributors and newer developers might enjoy working on.
We’ll be awarding limited-edition DEV profile badges and a sticker pack to anyone who submits a `bug smash` PR that gets approved.
---
## How to Participate, Step-by-Step
1. Please read our [Contributing to Forem](https://docs.forem.com/contributing/forem/) guide for contribution guidelines, rules, and etiquette related to working in our codebase. Also, please revisit our [Code of Conduct](https://dev.to/code-of-conduct) for overall expectations on how to treat one another.
2. **Claim an [issue labeled `bug smash`](https://github.com/forem/forem/issues?q=is%3Aissue+is%3Aopen+label%3A%22bug+smash%22+no%3Aassignee), by commenting on it.**
3. You will be notified by our team with a confirmation if you've been assigned to that issue. If the issue is already taken, we'll reply with a suggestion for a different one for you to tackle.
4. Once you are assigned a bug, create a pull request (PR) ([don’t forget to link to the original issue!](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
5. After you’ve submitted a pull request (PR) for your assigned bug, you’ll be notified by our team if it was approved.
6. <mark>Once it’s approved (🎉) please use [this template](dev.to/new/devbugsmash) to write a reflection post about the bug you smashed, right here on DEV!</mark>
7. Find another post on DEV labeled [devbugsmash](dev.to/t/devbugsmash) and leave a comment or a question! Keep it encouraging, kind, and collaborative.
8. After you’ve completed steps 1-7, we’ll reach out with details on getting your sticker pack if you would like one. We will also send badges at this time.
_Note: We will only be sending one badge and one sticker pack per participant. **We determine who to send stickers to based on reflection posts shared on DEV** and the validity of their smashed bugs._
Thanks in advance for your patience following the publication of your Bug Smash post on DEV. The Forem engineering team is small but mighty and it will take focused time on their behalf to review Bug Smash PRs.
<mark>**For the list of Bug Smash-eligible issues, click [here](https://github.com/forem/forem/issues?q=is%3Aissue+is%3Aopen+label%3A%22bug+smash%22+no%3Aassignee).**</mark>
---
We hope you enjoy round two of the DEV Community Bug Smash! Remember, the goals here are to have fun and collaborate with your peers while getting involved in the Forem repository if you've been wanting to do so. Please keep basic [open source contribution etiquette](https://dev.to/azure/contributing-to-open-source-projects-contributors-etiquette-1bdm) in mind and abide by our [Code of Conduct](https://dev.to/code-of-conduct).
From all of us at [Team Forem](https://dev.to/about), thank you for your community spirit and interest in working on our code. 💖
| coffeecraftcode |
786,544 | Node.js with MySQL database. | Connecting node.js project with MySQL database is easy, follow the following steps to connect to the... | 0 | 2021-08-09T21:13:20 | https://dev.to/popoolatopzy/connecting-node-js-with-mysql-database-36dh | node, mysql | Connecting node.js project with MySQL database is easy, follow the following steps to connect to the the database
`By reading this article I believe you have installed nodee Js on your system and familiar with it.`
<h4>Step 1</h4>
Open your code editor.
<h4>Step 2</h4>
Create a new project.
Open your command prompt or terminal and enter:
```cmd
mkdir project_name
cd project_name
```
<h4>Step 3</h4>
Create the project file by entering
```cmd
npm init -y
```
into the command prompt or terminal.
<h4>Step 4</h4>
To connect to a MySQL database, you need to install the MySQL module in your project.
Doing this is very simple: open your CMD or terminal and enter
```cmd
npm install mysql
```
This command will install the module into your project.
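If the install succeeds, the module shows up under `dependencies` in your package.json, roughly like this (a sketch: your project name and the exact version number will vary):

```json
{
  "name": "project_name",
  "version": "1.0.0",
  "dependencies": {
    "mysql": "^2.18.1"
  }
}
```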
<h4>Step 5</h4>
Now we can open a connection from our project code to the MySQL database.
Add the code below to your app.js file to open a connection.
```js
const mysql = require('mysql');
// Creating a mysql connection
const conn = mysql.createConnection({
host: "localhost",
user: "db_username",
password: "db_password",
database: "db_name"
});
```
<h4>Step 6</h4>
Let's create a simple table in the database:
```js
conn.connect((err) => {
  if (err) throw err;
  console.log("Database connected successfully");
  var sql = "CREATE TABLE books (book_name VARCHAR(255), book_description VARCHAR(255))";
  conn.query(sql, (err, result) => {
    if (err) throw err;
    console.log("Table created");
  });
});
```
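With the table in place, inserting and reading rows follows the same callback pattern. The small helpers below are a sketch (the names `addBook` and `findBooks` are mine, not part of the mysql module); they take the connection as a parameter and assume the `books` table created above:

```js
// Sketch helpers (the names are my own, not part of the mysql module).
// `conn` is any mysql-style connection object that exposes
// query(sql, values, callback), like the one created in Step 5.
function addBook(conn, book, done) {
  // "INSERT INTO books SET ?" lets the mysql module expand an object
  // into escaped `column = value` pairs.
  conn.query("INSERT INTO books SET ?", book, (err, result) => {
    if (err) return done(err);
    done(null, result);
  });
}

function findBooks(conn, name, done) {
  // "?" placeholders are escaped by the driver, which avoids SQL injection.
  conn.query("SELECT * FROM books WHERE book_name = ?", [name], done);
}
```

With the real connection from Step 5 you would call, for example, `addBook(conn, { book_name: "Node.js", book_description: "Server-side JavaScript" }, callback)` and then `conn.end()` once you are done.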
You can leave a comment...
Thanks❤️ | popoolatopzy |
786,674 | Help me find websites dedicated to a single piece of data! | Hey Devs! I'm working on a project and am trying to collect websites that a) display a single piece... | 0 | 2021-08-10T01:13:25 | https://dev.to/genreshinobi/looking-for-simple-data-driven-sites-26g8 | webdev | Hey Devs! I'm working on a project and am trying to collect websites that a) display a single piece of real-time data to answer a question b) are niche. They're usually extremely specific to questions that aren't often asked. This can make them hard to find especially if you don't already know they are there. So I'm looking to crowd source some help!
I have a small handful of examples, but I'm looking for more. If you have built, or know of, a niche real-time data display for a single specific piece of data, I'd love to see it!
It can be something no one would ever care about. It can be useless info. Any website dedicated to displaying a single piece of data counts. I look forward to seeing what's out there! Thanks!
[Is it still stuck?](https://isitstillstuck.com/)
A website dedicated to tracking the status of the Ever Given ship that was stuck in the Suez Canal.
[Is California on Fire?](http://iscaliforniaonfire.com)
A website dedicated to displaying California's fire status by answering a simple question "Is California on fire?"
[How many people are in space right now?](http://www.howmanypeopleareinspacerightnow.com)
This website displays the number of people currently in space, along with some basic information about them, such as their names and the countries they represent.
[Is Mercury in retrograde?](http://www.ismercuryinretrograde.com)
This website displays whether or not Mercury is currently in retrograde. | genreshinobi |
786,905 | Get working Synergy on Ubuntu 20.04, Linux Mint 20.x | No doubt that Synergy is a great tool. But shifted into closed source and lack of update for open... | 0 | 2021-08-10T07:30:02 | https://dev.to/shawon/get-working-synergy-on-ubuntu-20-04-linux-mint-20-x-2pjc | linux, ubuntu, linuxmint, debian | No doubt that Synergy is a great tool. But shifted into closed source and lack of update for open source version makes it difficult to work across all platforms. Synergy is available on `pacman`. All you need to do is `sudo pacman -S synergy`. It's that simple but on Ubuntu 20.04 or similar Ubuntu based debian distro there is no easy solution. Most of the `.deb` are old. After searching for a while find a blog post from [Florian Panzer](https://rephlex.de/blog/2020/04/24/synergy-1-11-1-package-for-ubunu-20-04-focal-fossa/) where he rolled his own `.deb` package.

Blog Post: https://rephlex.de/blog/2020/04/24/synergy-1-11-1-package-for-ubunu-20-04-focal-fossa/
Mirror Download Link: https://github.com/shawonsaha/shawonsaha.github.io/tree/main/files/synergy_1.11.1_deb | shawon |
787,118 | Global Notification Bar: Best WordPress App for Ecommerce Business | We at Universe Technologies focus on client satisfaction and transform their project into Revenue... | 0 | 2021-08-10T10:56:44 | https://dev.to/universetechnologies/global-notification-bar-best-wordpress-app-for-ecommerce-business-3god | wordpress, wordpressplugin, ecommerce | We at Universe Technologies focus on client satisfaction and transform their project into Revenue Generating Business around the Globe.
We share the happiness of creating something fresh and new that keeps us composed. If you have any online services to offer, this is for you.
Want to boost 🚀your Business? Don’t know how?
Here it is.
The key to any successful business is a healthy sales funnel🛒💰. It is especially true for #eCommerce stores whose survival relies on transforming their site visitors from casual browsers into committing customers.
So, We develop a #globalnotificationbar based on wordpress.
Find new leads with us and transform them into clients for your product or service.
To attract more customers with the focus, we have added a new feature. You customize the background of the image for every notification.
Keep your visitors engaged so you can collect more emails and use #emailmarketing to increase sales.
Businesses Are Struggling, They Need Digital Services.
Never Pay For Leads Again.
📱 Get more info and Download App from here 👉
https://wordpress.org/plugins/global-notification-bar/ | universetechnologies |
787,161 | Why would a React component recreate all its memoized children ? | Hello from Belgium, I have a tree of similar components, each memoized with a custom areEqual. They... | 0 | 2021-08-10T12:29:08 | https://dev.to/mamorukunbe/why-would-a-react-component-recreate-all-its-memoized-children-4pdm | react | Hello from Belgium,
I have a tree of similar components, each memoized with a custom areEqual. They represent a directory tree in reality. Only the root component knows the full directory structure, and it passes that information down recursively to its children as a prop. It is working perfectly well: if I "fold" a child, for example, the areEqual functions are called only for that child's parents and nothing else is rerendered.
My only problem is when I fold the root component: the root areEqual is tested, returns false (as expected)... and then all the children of the whole tree are unmounted/remounted, without a single call to their areEqual functions! In the inspector's Profiler companion, I see those children "created for the first time", indicating that the root parent, when rerendering, considers all its children as being new.
So my question is as follows: since the children's areEqual functions are not being called, the children are obviously not recreated because of a prop change. But after several hours of testing, I still don't have a clue why the root parent considers all of its children to be new components. Do you have any idea how to determine why a component considers its child components to be new instances – even though they are all supposed to be memoized instances? I'm completely lost on this one :(
Thank you very much in advance ! | mamorukunbe |
787,181 | A JSON Based Serverless Quasi-Static Platform | Introduction I've been working with large NGOs to architect their multi-faceted systems.... | 0 | 2021-08-10T13:16:28 | https://arif.co/posts/serverless-json-based-quasi-static-platform/ | aws, devops, serverless, showdev | ---
title: "A JSON Based Serverless Quasi-Static Platform"
published: true
tags: ["aws", "devops", "serverless", "showdev"]
canonical_url: https://arif.co/posts/serverless-json-based-quasi-static-platform/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o78vge9af7kbzo7nrh5v.png
---
## Introduction
I've been working with large NGOs to architect their multi-faceted systems. These systems are responsible for information dissemination, data collection, analysis, and for acting as sources and sinks for other systems. Our near-term goal was to build an information platform (IP). The MDP was narrowed down to the following feature set.
### Target persona
The IP was intended for 3 main personas.
- Villagers
- Direct consumers
- Low tech capability
- Language barriers
- Varied devices - sizes and capabilities
- Irregular bandwidth
- Ops team
- Responsible for training using the content
- Submitting feedback for content from users
- Updates to content (limited)
- Content team
- Primarily responsible for content
- Regular content updates
### Requirements
#### Media support
The portal has to support different media types, including but not limited to video, digital books, images, and audio (podcasts). The content is generated by a marketing and education team and is then uploaded to a public repository for downloads. All content is in the public domain.
#### Multilingual and region support
The content itself is versatile. The information and instructions change based on the local language, diet, and availability of resources. The portal has to support reuse of content as well as specific content for a particular region. Ease of management of the portal data by the content team was paramount.
#### Interval based updates
The content team updates the data several times a day. However, there was no need for real-time updates; new content can show up within the hour.
#### Analytics
Measurement is core to any successful deployment, especially for large and diverse ones. We factored in the need for granular measurement of clicks, bounces, playbacks, and skips right from day one.
#### 3rd Party API
The content for the portal has to be made available to 3rd party applications for their internal consumption. The interesting part here is that the content served over the API is the same as the website's; however, the 3rd party applications must be throttled to prevent proxying or overuse of our endpoint.
## Final Architecture
Apart from the requirements above, we were also tasked with ensuring cost-effectiveness along with speed of delivery. The obvious choice was a typical three-tier architecture that would achieve most of the objectives, and the team had the right experience. However, I decided to go a different route. In the recent past I had deployed a JSON-based architecture that had scaled well to 20 million visitors, though nowhere close to this complexity.
I made a few changes and architected the solution loosely on the CQRS (Command Query Responsibility Segregation) pattern. All the data that needed to be displayed (the read path) was served from JSON files that were continuously refreshed by a fleet of Lambda functions. The write path, on the other hand, was served by API Gateway (HTTP).

### Origin Database
These are the primary sources of information. They can be Google Sheets, Airtable, or RDBMS data sources. They provide the actual content metadata and the rules of transformation.
### Pull Lambda Fleet
The PLF runs on either a reactive or a scheduled basis. The functions fetch data from the data sources and merge it based on rules. The output is split by language, region, content, and use case. At the end of the process they generate JSON files and upload them to an S3 bucket using a convention that the webapp can understand.
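To make that convention concrete, here is a minimal sketch of the splitting step in Python (the key pattern `region/language/type.json` and the record fields are illustrative assumptions, not the platform's actual schema; the real fleet would upload each body to S3 rather than return it):

```python
import json

def split_content(records):
    """Group merged records into one JSON payload per (region, language, type).

    Returns a mapping from an S3-style key to a serialized JSON body; the key
    convention is what lets the webapp find its data without an API call.
    """
    grouped = {}
    for record in records:
        # Hypothetical field names; the real rules engine would drive this.
        key = f"{record['region']}/{record['language']}/{record['type']}.json"
        grouped.setdefault(key, []).append(record)
    # In the real fleet, each body would be uploaded to the S3 data bucket.
    return {key: json.dumps(group) for key, group in grouped.items()}

payloads = split_content([
    {"region": "north", "language": "hi", "type": "video", "title": "Irrigation basics"},
    {"region": "north", "language": "hi", "type": "audio", "title": "Crop rotation"},
])
print(sorted(payloads))  # ['north/hi/audio.json', 'north/hi/video.json']
```

The upload step itself is a straightforward `put_object` per key, so the whole read path stays a pure data pipeline.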
### Information Portal (IP)
Built on React and Tailwind CSS, the IP delivers the content to users. It is a lightweight, responsive PWA that works with limited bandwidth and across all device display sizes. Once it loads, it pulls the appropriate JSON from the server via CloudFront, depending on the region and language settings. All UI actions, such as filtering and search, are done on the client side. Each JSON file is of a manageable size and uses aggressive compression to deliver data quickly. The auto-revalidation mechanism (React SWR) ensures that the client refetches the JSON every 15 minutes or on a page reload.
### API Clients
For the 3rd party content API, we use the main API Gateway (APIGW). Using APIGW's method integration, we can connect the GET method directly to an S3 resource. This requires no glue code or handling; it is handled seamlessly by APIGW.

Using API Keys & Usage Plans, we ensured that only authenticated clients got access to the data, and we also rate-limited their API calls.

### Analytics & Read
All of the write paths use the main API Gateway (APIGW) and Lambda functions to write to DynamoDB. We capture granular events such as playback, download, and visit, and batch them up to the server. Partial data loss is acceptable at the volume we expected.
We created an internal portal, based on the data in DynamoDB, to create a feedback loop on usage. The web app used HTTP API endpoints within the same APIGW to fetch data from DynamoDB.
### Aggregator Lambda
To monitor and analyse the content changes, we deployed a cron-based Lambda function that pulled the current JSON from the data bucket and created a snapshot of it. The snapshots were aggregated over a time interval and uploaded to a reporting S3 bucket. A webapp fetched the latest aggregate data and charted it for the admin to review.
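A minimal sketch of that interval aggregation, assuming each snapshot carries an ISO timestamp and an item count (both field names are hypothetical, and the real Lambda would read snapshots from S3 rather than a list):

```python
import json
from collections import defaultdict
from datetime import datetime

def aggregate_snapshots(snapshots):
    """Roll individual snapshots up into one total per hour bucket."""
    buckets = defaultdict(int)
    for snap in snapshots:
        taken_at = datetime.fromisoformat(snap["taken_at"])
        # Truncate to the hour; this is the aggregation interval.
        hour = taken_at.replace(minute=0, second=0, microsecond=0)
        buckets[hour.isoformat()] += snap["item_count"]
    # The real Lambda would serialize this and upload it to the reporting bucket.
    return dict(buckets)

report = aggregate_snapshots([
    {"taken_at": "2021-08-10T13:05:00", "item_count": 40},
    {"taken_at": "2021-08-10T13:35:00", "item_count": 2},
    {"taken_at": "2021-08-10T14:10:00", "item_count": 45},
])
print(json.dumps(report, indent=2))
```

The charting webapp then only ever fetches one small aggregate file, keeping the admin view on the same cheap read path as everything else.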
## Results and Conclusion
The entire setup took about 1.5 months to build. Once deployed, we never exceeded the free tier of most services for processing. Our biggest cost center was data transfer. We had also enabled all edge locations for CloudFront, which cost us a bit more. Latency issues were non-existent. Our DR story largely solved itself, since S3 became our primary data store and DynamoDB was highly available but not in the critical path.
## Final notes
We had a specific use case that was served quite well by our design choices. This may not be the most optimal architecture for more demanding, real-time applications.
| arifamirani |
787,391 | Debugging a Rails App in Vim With Vimspector | Debugging a Rails App in Vim With Vimspector | 0 | 2021-08-14T15:08:11 | https://dev.to/iggredible/debugging-a-rails-app-in-vim-with-vimspector-pi | ---
title: Debugging a Rails App in Vim With Vimspector
published: true
description: Debugging a Rails App in Vim With Vimspector
tags:
//cover_image: https://direct_url_to_image.jpg
---
*Follow [@learnvim](https://twitter.com/learnvim) for more Vim tips and tricks!*
I recently published an article on Vimspector ([Debugging in Vim with Vimspector](https://dev.to/iggredible/debugging-in-vim-with-vimspector-4n0m)). There I covered different ways to run Vimspector for various Javascript environments. If you're a Rails developer like me, you may ask, "I'm sold on Vimspector, but how can I run it in Rails?"
To be frank, there are not a lot of resources online on how to accomplish this. After tinkering for a few days, I found a few setups that work on a basic Rails application.
I am not claiming that this is the foolproof way to debug any Rails application. There is room for improvement and exploration. But as a starting point, this is sufficient.
This article assumes that you have had some experience with Vimspector. At minimum, you need to know how to step into, step over, and step out of breakpoints. You also need to know how to launch and restart Vimspector. If you don't, please read my previous article first.
## Create a Basic Rails App
I am a huge fan of practical learning. I believe you'll get far more mileage if you actually follow the steps as you read this article. So, for the sake of a hands-on approach, let's create a brand new Rails app. Don't worry, it should take less than 5 minutes.
Create a new Rails app:
```
rails new hello
```
Create a controller:
```
rails generate controller Says hello
```
Let's go to the controller and write some code. Inside `./app/controllers/says_controller.rb`, modify the `hello` action:
```
class SaysController < ApplicationController
def hello
@time = DateTime.now
@greetings = "Greetings"
end
end
```
Modify the `hello.html.erb` file:
```
<h1>Says#hello</h1>
<p><%= @greetings %></p>
<p>It is now <%= @time %></p>
```
Excellent! Let's quickly test if the Rails app is running properly.
```
rails s
```
Visit http://localhost:3000/says/hello. You should see the values of the `@greetings` and `@time` instance variables.
### Adding Important Gems
Vimspector isn't a debugger. It's a "middle-man" that talks to a debugger. Vimspector provides a standard protocol to communicate with different debuggers. With Vimspector, you can communicate the same way with a Node debugger, Python debugger, Go debugger, etc.
For Vimspector to work with Ruby, you need to install a Ruby debugger. We will use [ruby-debug-ide](https://github.com/ruby-debug/ruby-debug-ide).
In addition, you also need to install `debase` ([source](https://github.com/rubyide/vscode-ruby/blob/main/docs/debugger.md)). Add these two to your Gemfile (in a real project, you probably want to put them inside the `group :development, :test do ...` block):
```
gem 'ruby-debug-ide'
gem 'debase'
```
## Vimspector JSON
Create a `.vimspector.json` at the Rails project root. Inside it:
```
{
"configurations": {
"rails": {
"adapter": "cust_vscode-ruby",
"default": true,
"configuration": {
"name": "Debug Rails server",
"type": "Ruby",
"request": "launch",
"cwd": "${workspaceRoot}",
"pathToBundler": "/Users/iggy/.rbenv/shims/bundle",
"pathToRDebugIDE": "/Users/iggy/.rbenv/versions/2.6.6/lib/ruby/gems/2.6.0/gems/ruby-debug-ide-0.7.2",
"program": "${workspaceRoot}/bin/rails",
"args": [
"server"
]
}
}
}
}
```
Note that you have to update `pathToRDebugIDE` and `pathToBundler` with your own paths. I'll explain below.
### Bundler, Debugger, and Adapter
There are three things that you need to provide Vimspector with:
1. The path to bundler.
2. The path to the debugger.
3. Which adapter to use.
To get the path for `pathToBundler`, run:
```
which bundler
```
In my case, it returns `/Users/iggy/.rbenv/shims/bundle`. Use whatever path your machine uses.
Assuming you have installed the ruby-debug-ide gem via your Rails' Gemfile, to get the `pathToRDebugIDE` path, run:
```
bundle show ruby-debug-ide
```
In my case, it returns `/Users/iggy/.rbenv/versions/2.6.6/lib/ruby/gems/2.6.0/gems/ruby-debug-ide-0.7.2`. Use whatever path you see.
Finally, recall that Vimspector requires a special adapter for each language / environment you use (in my previous article, I installed adapters - also known as "gadgets" - for node and chrome). Since we're debugging a Ruby framework, we need a Ruby adapter.
### Adding a Ruby Adapter
If you look at the Vimspector config file above, you'll see:
```
"adapter": "cust_vscode-ruby",
```
Unfortunately, Ruby is not one of the [supported languages on the Vimspector page](https://github.com/puremourning/vimspector#supported-languages) (*darn it!*). Don't worry: if you dig into the Vimspector repo deep enough, you will find instructions on how to "install" a Ruby gadget there.
Here's [the page](https://github.com/puremourning/vimspector/wiki/languages#ruby-gadget-installer-file) with information for languages not officially mentioned in the README. If you scroll down, you'll find an instruction for Ruby.
Follow the instructions in the [introduction section](https://github.com/puremourning/vimspector/wiki/languages#introduction):
1. Inside the Vimspector *directory* (in my case, it is in `~/.vim/plugged/vimspector/gadgets/custom/cust_vscode-ruby.json` - yours could be in a different directory depending on what plugin manager you use), add a custom json file for your language.
2. Run `./install_gadget.py --upgrade`. Vimspector should install some files from `vscode-ruby`.
Phew! We are done with the preliminary setup.
If you're still curious what just happened, here are a few pages to read:
- [Ruby gadget installer](https://github.com/puremourning/vimspector/wiki/languages#ruby-gadget-installer-file)
- [VSCode-Ruby Debugger](https://github.com/rubyide/vscode-ruby/blob/main/docs/debugger.md)
### Program and Args
Let's take another look at a section inside the Vimspector config file:
```
"program": "${workspaceRoot}/bin/rails",
"args": [
"server"
]
```
Recall from the previous article, `program` is the program that Vimspector will run when you tell it to launch something and `args` is the argument that gets passed to `program`.
When running a rails app, you would (usually) run `bin/rails server`. The config does exactly that.
## Running the Vimspector
Now we are ready to run Vimspector. Our config is set to `launch`, so do not run `rails s` from the terminal. We will run it from the debugger.
Go to the `says_controller.rb` and add a breakpoint on `@time`:
```
class SaysController < ApplicationController
def hello
@time = DateTime.now # add a breakpoint here
@greetings = "Greetings"
end
end
```
Excellent. Now here comes the moment of truth - let's launch Vimspector!
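If you haven't set up any mappings yet, Vimspector ships a ready-made key set; this sketch assumes a standard Vimspector install (see `:h vimspector` for details):

```vim
" In your vimrc, before Vimspector loads: enables the predefined 'HUMAN'
" mappings, where <F5> starts/continues a session and <F9> toggles a breakpoint.
let g:vimspector_enable_mappings = 'HUMAN'
```

Alternatively, start a session without any mappings by running `:call vimspector#Launch()` from a buffer in the project; either way, Vimspector picks up the `.vimspector.json` at the project root.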
Wait a few seconds and you should see in the `Console` window that Vimspector is launching the Rails app.
Now, visit http://localhost:3000/says/hello. The app should pause.
Check your Vimspector. You should see it paused at the breakpoint.

If this is what you see, congratulations! You've successfully launched a Rails debugger - from Vim!
From there, you can step into, step over, and step out of different lines of code.
## Attach Vs Launch
There are two ways you can run Vimspector: attach and launch. The former attaches the debugger to an already running process. The latter launches a process from the debugger.
The Rails example above is an example of launch, as it launches a Rails process directly from the debugger. Theoretically, you should be able to perform either attach or launch.
### Attaching a Debugger to a Rails Server
You've seen how to launch a Rails app from Vimspector. Let's see how you can attach Vimspector to a Rails process.
First, modify our `.vimspector.json` file:
```
{
"configurations": {
"rails": {
"adapter": "cust_vscode-ruby",
"default": true,
"configuration": {
"name": "Debug Rails server",
"type": "Ruby",
"request": "attach",
"cwd": "${workspaceRoot}",
"remoteHost": "0.0.0.0",
"remotePort": "1234",
"pathToBundler": "/Users/iggy/.rbenv/shims/bundle",
"pathToRDebugIDE": "/Users/iggy/.rbenv/versions/2.6.6/lib/ruby/gems/2.6.0/gems/ruby-debug-ide-0.7.2"
}
}
}
}
```
Here are the changes:
- `request` is now `attach`.
- We added `remoteHost` and `remotePort`.
- We removed `"program"` and `"args"` (they are unnecessary for attach).
The `remoteHost` and `remotePort` are the IP address and port number that we will be running the debugger on. The host is set to `0.0.0.0` and the port is set to `1234`. These numbers will make sense in a little bit.
Once your `vimspector.json` file is configured, let's run the app. Instead of running the regular `bin/rails s`, run:
```
rdebug-ide --host 0.0.0.0 --port 1234 --dispatcher-port 1234 -- bin/rails s
```
This will launch the `ruby-debug-ide` program installed earlier. Note the host and port numbers: we are running the debugger on host `0.0.0.0` and port `1234` in addition to running the rails server.
Next, add the breakpoints inside the `says_controller.rb` file, then launch Vimspector. Since we are running Vimspector in attach mode, it won't launch a Rails server this time. Head to the page handled by this controller: http://localhost:3000/says/hello. Watch Vimspector pause at the breakpoint(s).

Sweet chocolate pancake! Super cool, isn't it? Well, this also concludes this article.
# Conclusion
Congratulations! You've successfully debugged a Rails app. You are one step closer to becoming a supreme master developer.
There is still much to explore about Vimspector and Rails applications. There are different settings, environments, and configs that I don't mention in this article. Experiment. Share this article. Let me know how you do things differently.
In the end, I hope that this article has given you a good place to start. Happy Vimming! | iggredible | |
787,434 | A Simple Guide to create Popup Like AdSense Ad Style | Step 1 — Creating a New Project In this step, we need to create a new project folder and... | 0 | 2021-08-10T17:20:55 | https://dev.to/stackfindover/a-simple-guide-to-create-popup-like-adsense-ad-style-3580 | html, css, javascript, beginners | ### Step 1 — Creating a New Project
In this step, we need to create a new project folder and files (**index.html, style.css, main.js**) for creating an HTML Popup. In the next step, we will start creating the structure of the webpage.
### Step 2 — Setting Up the basic structure
In this step, we will add the HTML code to create the basic structure of the project.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Google Adsense Style Popup</title>
<link rel="stylesheet" href="style.css">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;500;600;800&display=swap" rel="stylesheet">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
</head>
<body>
</body>
</html>
```
This is the base structure of most web pages that use HTML.
Add the following code inside the `<body>` tag:
```html
<div id="content">
<div class="container">
<div class="click-me"><a href="#">Click Me</a></div>
</div>
</div>
<!-- Start popup code -->
<div id="ad_position_box">
<div class="card">
<div class="top-row flex-row">
<div class="colmun">
<span>Ad</span>
</div>
<div class="colmun">
<button class="report"><svg viewBox="0 0 14 24" fill="none"><path fill-rule="evenodd" clip-rule="evenodd" d="M2 8C3.1 8 4 7.1 4 6C4 4.9 3.1 4 2 4C0.9 4 0 4.9 0 6C0 7.1 0.9 8 2 8ZM2 10C0.9 10 0 10.9 0 12C0 13.1 0.9 14 2 14C3.1 14 4 13.1 4 12C4 10.9 3.1 10 2 10ZM0 18C0 16.9 0.9 16 2 16C3.1 16 4 16.9 4 18C4 19.1 3.1 20 2 20C0.9 20 0 19.1 0 18Z" fill="#5F6368"></path></svg></button>
<button class="skip"><svg viewBox="0 0 48 48" fill="#5F6368"><path d="M38 12.83L35.17 10 24 21.17 12.83 10 10 12.83 21.17 24 10 35.17 12.83 38 24 26.83 35.17 38 38 35.17 26.83 24z"></path><path d="M0 0h48v48H0z" fill="none"></path></svg></button>
</div>
</div>
<div class="ad-content">
<img src="ad.jpg" alt="ad">
</div>
</div>
</div>
<script src="main.js"></script>
```
### Step 3 — Adding Styles for the Classes
In this step, we will add styles for the classes inside the style.css file.
```css
* {
padding: 0;
margin: 0;
text-decoration: unset;
list-style: none;
font-family: 'Poppins', sans-serif;
}
html, body {
width: 100%;
height: 100%;
background: url(bg.jpg) no-repeat center / cover;
position: relative;
overflow-x: hidden;
display: flex;
align-items: center;
justify-content: center;
}
.click-me a {
color: #ffffff;
padding: 5px 20px;
background: rgb(255 255 255 / 20%);
border-radius: 50px;
}
/* Adsense style popup */
svg {
width: 1.2em;
height: 1.2em;
}
div#ad_position_box button {
background: transparent;
border: unset;
font-size: 20px;
cursor: pointer;
}
.flex-row {
display: flex;
align-items: center;
justify-content: space-between;
}
div#ad_position_box {
display: none;
align-items: center;
justify-content: center;
height: 100%;
width: 100%;
position: fixed;
top: 50%;
transform: translateY(-50%);
backdrop-filter: blur(50px);
}
div#ad_position_box.active {
display: flex;
}
div#ad_position_box .card {
background: #fff;
padding: 10px 24px 25px;
border-radius: 6px;
position: relative;
box-shadow: 0px 8px 12px rgb(60 64 67 / 15%), 0px 4px 4px rgb(60 64 67 / 30%);
}
.ad-content {
display: block;
box-shadow: 0px 10px 22px rgb(0 0 0 / 65%);
}
.ad-content img{
display: block;
width: 100%;
}
```
### Step 4 — Add a few lines of jQuery code inside the main.js file
```js
$(".click-me a").click(function(){
$("#ad_position_box").addClass("active");
});
$(".skip").click(function(){
$("#ad_position_box").removeClass("active");
});
```
#### #Final Result
{% youtube 2KAfqqMNdC4 %}
> <a href="https://blog.stackfindover.com/html-popup-box/">Best Collection of Popup designs</a>
| stackfindover |
787,442 | Just JavaScript things… | Hello again, my dear readers and followers👋. Here I am back with another blog on JavaScript. This... | 0 | 2021-08-10T18:40:43 | https://dev.to/codereaper08/just-javascript-things-2klc | javascript, todayilearned, programming | Hello again, my dear **readers** and **followers**👋. Here I am back with another blog on JavaScript. This time it's going to be much more like a knowledge sharing than a technical thing. So, let's begin with today's topic, “Just JS things”.
We are going to discuss some peculiar features of JavaScript that most of us don't know about. These peculiarities make JS a great language to learn and, for me, the most fun one to explore. So, let's **BEGIN**!
## undefined and null :
Most of us would have come across the JS data types `undefined` and `null`. But many of us don't know the real difference between the two. Let's start with `undefined`.
### undefined :
The `undefined` type represents a variable whose declaration is done but which has not been assigned a value. Such a variable comes under `undefined`, as its name suggests: it is literally the **lack of a value for the variable**.
### null :
`null` is a value assigned to a variable. Unlike `undefined`, it's not the lack of a value; `null` by itself is a value. `null` is the voluntary absence of a value for the variable.
The below picture clearly explains the difference.

We'll see how they compare with each other in the below gist, where we use a simple conditional statement to know how `undefined` and `null` work.
{% gist https://gist.github.com/code-reaper08/52dd719332190d450450760f38447db5 %}
_**Note**_: Line numbers are referred to as L below.
Here, only L4 and L8 in our code get to run. This means that the variable `a` is not assigned a value and thus gives `undefined`, whereas the variable `b` is assigned the value `null`, which makes L8 execute.
You can also use this JSFiddle https://jsfiddle.net/Vishwa_R/ha8tqL69/5/ for execution.
## First class citizens, FUNCTIONS!
In the JavaScript world, functions enjoy many privileges as first class objects. We can pass one function as an argument to another function, and a function can also be returned for later execution if needed. YES! That's possible in JS. Functions passed as arguments are called **“callback functions”**, and they are commonly used across the JS world. We use callback functions in asynchronous programming to defer execution until a previous function gets its job done.
Let's see a simple example: the operation of reading a file and displaying its size. Here we have two functions to perform:
1. Reading a file.
2. Displaying size.
This must be done in sequence; we cannot display the size without first reading the file. Scenarios like this make callback functions “**HEROES**”.
We'll see an example where we mimic the above operation (we are not going to actually read a file and display the size). Let us take a look at the below gist.
{% gist https://gist.github.com/code-reaper08/4ecb3b1b6bb310c7ec26bf1628cc6ed4 %}
Here we have two functions, namely `Readfile` and `sizefinder`. As per our sequence of execution, we want `Readfile` to be executed first, so we pass `sizefinder` into `Readfile` as an argument. In this way we can sequence two functions, even asynchronous ones, using a callback. This is why callback functions are so widely used.
You can also use this JSFiddle https://jsfiddle.net/Vishwa_R/hce58f39/9/ to have a look at execution.
And that's it for today, I think these two things are great in JavaScript and that's why folks like us LOVE JS 📜✨. JavaScript dominates all the possible domains of technology, from Web to Native (A big thanks to NodeJS), and reigns as the most famous programming language. Let us love JS, as we all do every time.
### Thanks for reading, and give a 💖 if you liked the content. Have some feedback? Put it down in the comments. Have a great time😄🎉
### Attributions:
Cover image : https://wallpaperaccess.com/javascript | codereaper08 |
787,638 | Developer Communities You Need to Join Today! | Three developer communities (other than Dev.to) that offer free opportunities to learn from and network with your fellow developers. | 0 | 2021-08-10T19:57:40 | https://remotesynthesis.com/blog/developer-communities | beginners, codenewbie | ---
title: Developer Communities You Need to Join Today!
published: true
description: Three developer communities (other than Dev.to) that offer free opportunities to learn from and network with your fellow developers.
tags: beginners,codenewbie
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u93k100l813vmkuh28d2.jpg
canonical_url: https://remotesynthesis.com/blog/developer-communities
---
Coding is hard. And there is so much to learn that no one person can really know it all. I think that's why developers tend to rely on the developer community so much (more than in other types of jobs). It's why welcoming community hubs like [dev.to](https://dev.to) have exploded.
In this post, I wanted to share three communities that offer the opportunity to learn from and connect with your fellow developers. They are communities that I participate in actively and have gained a lot from. Even when they cater to new or aspiring developers (as a couple of them tend to do), I've found amazing opportunities to learn, even as someone with decades of experience as a developer.
## [CFE.dev](https://cfe.dev)

I am totally biased about this one as I created it and run it, but I do that as a way to give back to the developer community and share my love for learning about code and technology. This month is actually the anniversary of 4 years of running CFE.
**What is it?**
[CFE.dev](https://cfe.dev) runs live, virtual meetups pretty much every two weeks on a wide range of developer topics – everything from JavaScript or IoT to career and culture. We also run occasional conferences and workshops, like [TheJam.dev](https://thejam.dev) in January and [Moar Serverless](https://moarserverless.com) (which is this week!). These are paid, which is how I am able to keep the lights on. However, all 4 years' worth of prior recordings (including those from our past events and workshops) are [available for free](https://cfe.dev/sessions/)!
## [Virtual Coffee](https://virtualcoffee.io)

Virtual Coffee was created by [Bekah Hawrot Weigel](https://twitter.com/BekahHW) and [Dan Ott](https://twitter.com/danieltott). It was inspired by the need to build connections and networking opportunities for developers, especially new and aspiring developers, during the pandemic. Virtual Coffee has grown enormously because it offers a laid-back and welcoming place to meet and talk about important developer topics.
**What is it?**
[Virtual Coffee](https://virtualcoffee.io/) offers numerous opportunities to meet and connect via Zoom every week. They generally host two "morning crowd" get-togethers each week at 9am ET and two "afternoon crowd" get-togethers at noon ET. They also have a "brownbag session" most Fridays at noon ET. The brownbag sessions are for members only and are more focused on learning a particular topic, taught by a member of the Virtual Coffee community. Membership is completely free but does require that you've attended at least one morning or afternoon crowd get-together. This also gives you access to their private Slack community, which is very active and extremely helpful.
## [MintBean](https://mintbean.io/)

Mintbean was created by [Monarch Wadia](https://twitter.com/monarchwadia) and [Navi Mann](https://twitter.com/Navi1Mann). Inspired by their own experiences in tech, they created it as a community aiming to help new and aspiring developers gain the knowledge and experience they need to land a job and be successful as developers. In particular, they help devs bridge the gap between a boot camp and a successful career.
**What is it?**
[MintBean](https://mintbean.io/) hosts regular virtual meets every week with speakers on a variety of topics as well as virtual hackathons. This gives you the opportunity not just to learn from the community, but also to team up with your fellow developers and build something fun and useful that you can add to your portfolio. Mintbean also hosts a very active Discord server where you can communicate in between meets and hackathons.
## Be a Part of the Developer Community!
Each of these communities emphasizes different learning styles and opportunities. You can, of course, join all of them (I did), but you can also find the one that fits you the best. I hope to see you there!
Cover Photo by <a href="https://unsplash.com/@cwmonty?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Chris Montgomery</a> on <a href="https://unsplash.com/s/photos/zoom?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
| remotesynth |
787,662 | Remote First, Not Remote Friendly | Shweta Saraf, the Senior Director of Engineering at Equinix, has a unique remote work story: she... | 0 | 2021-08-10T21:27:11 | https://devinterrupted.com/remote-first-not-remote-friendly/ | discuss, management, agile, leadership |

Shweta Saraf, the Senior Director of Engineering at Equinix, has a unique remote work story: she experienced a fully remote acquisition during the pandemic.
Her former employer — Packet — was acquired by Equinix, a huge company with more than 30,000 employees and over 200 locations around the world. Suddenly, the small team at Packet who were experts at remote work found themselves in the position of trying to onboard not just themselves at a new company, but onboard an organization of 30,000+ to the principles, structure, and best practices of a fully remote work environment. Because Equinix is the largest data center company in the world, operating data centers and office hubs all over the globe, the switch to remote work had to be as seamless and efficient as possible.
One of the first areas where Equinix looked for efficiency was its meeting practices. It began what it refers to as 'Better Way Wednesdays' (its name for a best practice also utilized by Shopify) as a way to better inform employees and leadership. These meetings, paired with a monthly business memo, capture the state of the business along with key achievements, challenges, or blockers, and give senior leaders KPIs and metrics.
This practice made it possible to cut down on the number of weekly status meetings, where the same information is passed on in different formats or through different levels of abstraction. The investment paid off immediately. Teams found that the "Better Way" meetings would often take only an hour but would save tons of time across the board, with the added benefit of reducing Zoom fatigue. More focus time and better communication were realized through a single meeting shift.
{% youtube ovJ5hkUGO4U %}
Equinix's biggest focus was [asynchronous communication](https://linearb.io/blog/asynchronous-communication-future/), given the many time zones involved and the number of people, including engineering teams, spread all over the world. Rather than restrict productivity to a specific set of time zones, async communication gives employees the agency to be held accountable for completing their work on their own time: it is no longer necessary to pull employees on separate continents onto the same Zoom call if that information can be transcribed in a chat app.
However, for companies where office culture is strong, with ceremonies happening in-office, it can be a learning process to adapt to working completely remote. With Packet’s experience aiding the transition for Equinix, a cross-synergy of ideas was realized. Employees from both companies found themselves questioning former agile ceremonies, such as stand-ups and retros, and whether these can be done asynchronously, or if they require a meeting at all. The merger resulted in an easier working environment for everyone.
Equinix, a company of tens of thousands of employees and hundreds of locations, transitioned to remote work successfully during the pandemic not because it was remote-friendly, but because it adopted a mindset of remote-first, meaning that a developer on the other side of the globe can participate meaningfully and not feel left out. While not every company underwent an acquisition during the pandemic, Equinix's journey to a fully-remote organization is a familiar story for many tech companies this year. To learn more about Equinix and how other companies transitioned to remote work, check out Dev Interrupted's Remote Work Panel on August 11, from 9–10am PST.
## Interested in learning more about how to implement remote work best practices at your organization?
[Join us](https://linearb.io/new-leaders-remote-work-panel/?utm_source=Referrals&utm_medium=devinterrupted.com&utm_campaign=devinterrupted%20referrals&__hstc=75672842.ea2a35812d5192739a119c7ab37040a0.1624488310794.1628548846055.1628626654695.77&__hssc=75672842.13.1628626654695&__hsfp=1483215930) **tomorrow,** August 11, from **9am-10am PST** for a panel discussion with some of tech’s foremost remote work experts. This amazing lineup features:
* Darren Murph, Head of Remote at GitLab & Guinness World Record holder as the most prolific blogger ever
* Lawrence Mandel, Director of Engineering at Shopify & Hockey Enthusiast
* Shweta Saraf, Senior Director of Engineering at Equinix & Plato Mentor
* And the Panda himself, Chris Downard, VP of Engineering at GigSmart
Dan Lines, COO of LinearB, will be moderating a discussion with our guests on how they lead their teams remotely, how the current workplace is changing, and what's next as the pandemic continues to change.
Don't miss the event afterparty, hosted on Discord from 10–10:30am with event speakers Chris and Shweta, as well as LinearB team members Dan Lines and Conor Bronsdon.

If you haven’t already joined the best developer discord out there, WYD?
Look, I know we talk about it a lot but we love our developer discord community. With over 1500 members, the Dev Interrupted Discord Community is the best place for Engineering Leaders to engage in daily conversation. No salespeople allowed. [Join the community >>](https://discord.gg/tpkmwM6c3g)
*Originally published at [https://dzone.com](https://dzone.com/articles/remote-first-not-remote-friendly).*
| conorbronsdon |
787,681 | Monitoring GitHub Pull Requests with Prometheus | The problem Have you ever wanted to track your open source contributions? Or perhaps... | 0 | 2021-08-10T23:13:06 | https://dev.to/circa10a/monitoring-github-pull-requests-with-prometheus-57p2 | github, hacktoberfest, devops, monitoring | ## The problem
Have you ever wanted to track your open source contributions? Or perhaps monitor contributions made by multiple users for some sort of event? Well, I had the exact same problem 😊.
With [Hacktoberfest](https://hacktoberfest.digitalocean.com/) just around the corner, I wanted a way to automatically track open source contributions to be able to incentivize participation via prizes or similar. It's fairly difficult to track who opens pull requests to which projects on GitHub at scale within a large organization, but boy do I have the solution for you.
## The solution
I built a [prometheus exporter](https://github.com/circa10a/github-pr-exporter) that takes in a config file with a list of users and uses the GitHub search API to find pull requests created by said users. This exporter exposes some useful data such as `user`, `created_at`, `status`, and `link`.
## What is a prometheus exporter?
[Prometheus](https://prometheus.io/) is an open source time series database that uses a "pull" model: it reaches out to configured clients (basically plugins) called exporters, then ingests the data from those exporters at a configured interval, typically 15 seconds.
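For context, hooking any exporter into Prometheus is done with a scrape configuration roughly like the fragment below; the job name and target here are placeholders, not taken from this project's docs:

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: "github-pr-exporter"   # arbitrary label for this exporter
    scrape_interval: 15s             # the typical interval mentioned above
    static_configs:
      - targets: ["localhost:8080"]  # host:port where the exporter listens
```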
### I want to see some users' pull requests, what now?
If you're familiar with prometheus, you can view the github-pr-exporter docs [here](https://github.com/circa10a/github-pr-exporter).
Maybe you're not familiar with prometheus, follow along for how to get started!
## How does it work?
The exporter is written in [Go](https://golang.org/) and utilizes a [GitHub client library](https://github.com/google/go-github) to execute searches using the GitHub search API. The exporter is basically a CLI that takes in configuration options via arguments, then runs a web server in a loop that periodically looks for new pull request data.
### Searches
The exporter uses the search API and runs this search for every user: `"type:pr author:<user> created:>=<calculated timeframe>"`
The search is executed once per user instead of as a bulk search due to the 256-character limit of the search API. Because the search API has a rate limit of 10 searches per minute for unauthenticated clients, there is a hard 6-second wait between collecting pull request data for each user to avoid said rate limit. This isn't a huge deal, since at that pace it processes around 10 users per minute (roughly 900 users every 90 minutes), which is completely fine for this kind of data.
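To make the search concrete, here is a small Python sketch (not the exporter's actual Go code) that builds the same kind of query string from a username and a `--days-ago`-style window:

```python
from datetime import datetime, timedelta, timezone

def build_pr_query(user: str, days_ago: int = 90) -> str:
    """Build a GitHub search query like the one the exporter runs per user."""
    since = datetime.now(timezone.utc) - timedelta(days=days_ago)
    # GitHub's search API accepts dates in YYYY-MM-DD form
    return f"type:pr author:{user} created:>={since:%Y-%m-%d}"

print(build_pr_query("octocat", 90))
```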
### Configuration Options
- `--config` YAML config file with usernames
- `--days-ago` How many days prior to the current to get pull requests from. Defaults to `90`
- `--ignore-user-repos` Ignore pull requests that the user made to their own repositories (no cheating!). Defaults to `false`.
- `--interval` How long (in seconds) to sleep between checking for new data. Defaults to `21600` (6 hours)
- `--port` Which port to run the web server on. Defaults to `8080`
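To tie the options together, a users file might look like the sketch below. Note that the exact YAML schema and the binary name are my assumptions for illustration; check the exporter's README for the real format.

```yaml
# config.yaml - GitHub usernames whose pull requests should be tracked
users:
  - circa10a
  - octocat
```

You would then start the exporter along the lines of `github-pr-exporter --config config.yaml --days-ago 30 --port 8080`.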
## Deploying github-pr-exporter, prometheus, and grafana

There are 3 components needed to get up and running to make use of the data.
- [github-pr-exporter](https://github.com/circa10a/github-pr-exporter)
- [Prometheus](https://prometheus.io/)
- [Grafana](https://grafana.com/)
You will also need Docker, docker-compose, and Git installed. This will run the 3 components above as containers and connect them together via a Docker network.
- Start by cloning the repo and cd'ing into the directory
```bash
git clone https://github.com/circa10a/github-pr-exporter.git
cd github-pr-exporter
```
- Start the containers
```bash
docker-compose up -d
```
- Profit!
You should now be able to access grafana at http://localhost:3000 and the preconfigured dashboard at http://localhost:3000/d/h_PRluMnk/pull-requests?orgId=1
> The default admin login is username: `admin` password: `admin`
## Monitoring your own user or others
To change the default configuration, simply update `examples/config.yaml` to include your GitHub username (or others'), then recreate the containers by running:
```bash
docker-compose down
docker-compose up -d
```
Then check out the new details on the dashboard! Please note that it can take around 2 minutes for the dashboard to reflect the new data so just try refreshing or turn on auto-refresh. | circa10a |
787,690 | AluraChallenges #2 ( Intro e Config ) | Introduction In this series, I will demonstrate my version of the challenge proposed in the Alura... | 14,064 | 2021-08-16T22:13:52 | https://dev.to/delucagabriel/alurachallenges-2-intro-e-config-3fd0 | alura, node, typescript, nestjs | ## Introduction
In this series, I will demonstrate my version of the challenge proposed in [Alura Challenges #2.](https://www.alura.com.br/challenges/back-end)
### What is an Alura Challenge?
> *"It is a way of implementing the Challenge Based Learning approach that Apple helped create. A mechanism in which you engage with a problem first, and only then investigate solutions through courses, content, and conversations, or even with the knowledge you already have! Finally, you act and put your project live. All of this while commenting on and helping with the projects of other students."*
What does this challenge consist of?
Over 4 weeks, one board per week was made available, with cards for the features that should be implemented.
#### My version
I chose the [NestJs](https://nestjs.com/) framework for the challenge; it brings a series of conveniences for development, along with a great architecture.
##### And how will this series work?
Each week, I will publish posts with the implementations, which should go more or less like this:
###### Week 1:
* _Videos API with routes implemented following the REST standard;_
* _Validations done according to the business rules;_
* _Implementation of a database to persist the information;_
* _Tests for the GET, POST, PATCH and DELETE routes._
###### Week 2:
* _Add `categorias` and its fields to the database;_
* _CRUD routes for `/categorias`;_
* _Include a `categoriaId` field in the `video` model;_
* _Write the necessary tests._
###### Week 3:
* _Video pagination_
* _Security for the resources_
###### Week 4:
* _Documenting the API with Swagger_
* _Integrating with the front-end_
* _Conclusion_
Today, as "day 0", I will show how to prepare and configure everything we need to get this project started, go go go!
## Setting up the project
To get things rolling, we need to install and configure our environment.
I will use VSCode as my IDE, which you can download [here](https://code.visualstudio.com/).
As the programming language, I will use NodeJs (Typescript) version 14, [download here](https://nodejs.org/en/).
After downloading and installing them, let's create a folder called aluraChallenge2 and open it in VSCode.
There, let's open the integrated terminal

and type the commands:
```bash
npm i -g @nestjs/cli
```
to install the Nest CLI globally, and
```bash
nest new alura-challenges-2
```
to create the new Nest project.
The CLI will ask which package manager we want to use, and we will choose npm.
 once that's done, the installation will run and a success message will appear at the end.

Cool, we've created the project, but it doesn't stop there. Let's reopen VSCode in the alura-challenges-2 folder that Nest created, and notice that it already gives us a whole folder structure plus configuration for tests, linter, prettier and git, which saves us a lot of work. To raise the bar even higher, though, let's add a few other tools that will help us standardize our code.
To protect and standardize our commits, we will use the packages:
husky and lint-staged
```bash
npm install husky@4 lint-staged --save-dev
```
and add to our package.json
```json
"husky": {
"hooks": {
"pre-commit": "lint-staged"
}
},
"lint-staged": {
"*.ts": [
"eslint --fix", "npm test", "git add"
]
}
```
let's also install the commitlint and commitizen packages
```bash
npm i @commitlint/config-conventional @commitlint/cli commitizen --save-dev
```
As soon as the installation finishes, run:
```bash
npx commitizen init cz-conventional-changelog --save-dev --save-exact --force
```
we now need to create a file at the project root named commitlint.config.js with the content
```javascript
module.exports = {
  extends: ['@commitlint/config-conventional'],
};
```
after creating this file, let's run the command:
```bash
npx mrm lint-staged
```
and after that, let's add two more hooks to husky, with the commands:
```bash
npx husky add .husky/commit-msg 'npx commitlint --edit "$1"'
```
```bash
npx husky add .husky/prepare-commit-msg 'exec < /dev/tty && git cz --hook || true'
```
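With these hooks in place, commit messages must follow the Conventional Commits standard; a hypothetical example of the kind of message this flow produces (the type, scope and text below are made up for illustration):

```text
feat(videos): add GET /videos route

Implements the video listing following the REST standard.
```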
With this, we can be sure that commits can only be made following the [conventional commits](https://www.conventionalcommits.org/pt-br/v1.0.0/) standard and passing the tests.
Shall we test it?
Let's add all the changed files:
```bash
git add .
```
And make the commit:
```bash
git commit
```
When we do this, husky will call lint-staged, which in turn will run the commands we put in package.json, calling the commitlint CLI if everything is correct.
From there, you fill in the prompts according to your change:

After answering everything, it will make the commit (already standardized)

Example: (git log)

Phew! That's it for now...
> "One of the best ways to learn is to set out to teach..."
I created these posts with the intention of exercising and consolidating my own knowledge, but maybe they will also help you, the reader.
If you liked this post, react, comment, share... in short, do something so I can feel I wasn't talking to myself. lol
Hugs, and see you in the next ones...
| delucagabriel |
788,280 | Passwordless SSH on Raspberry Pi | This post is a reference for me and others who want to improve their InfoSec hygiene. As a software... | 0 | 2021-08-17T00:34:21 | https://dev.to/ductapedev/passwordless-ssh-on-raspberry-pi-4l60 | security, pki, ssh, raspberrypi | This post is a reference for me and others who want to improve their InfoSec hygiene. As a software engineer who deals with lots of servers, accounts, and IoT devices, one task that is part of my daily routine is SSHing into various computers. SSH login is commonly based on username and password. For Raspbian, the [default](https://www.raspberrypi.org/documentation/computers/using_linux.html#:~:text=User%20management%20in%20Raspberry%20Pi,and%20change%20each%20user%27s%20password.) is:
```
raspberrypi login: pi
Password: raspberry
```
That is convenient when starting out with a new board, or for new users, but it is not the most secure, especially when enabling SSH so you can connect to your devices remotely (even if just for engineering and development). <sarcasm> I've never been guilty of forgetting to change the default login on the devices I leave connected to my network. 🙄 </sarcasm> Having those devices around makes your engineering and development network a great pivot point for attackers (see [Mirai](https://en.wikipedia.org/wiki/Mirai_(malware))).
Let's start with a Raspberry Pi device.
1. Connect to your Raspberry Pi device over the serial port, or by using a monitor and keyboard and log in.
2. Use the raspi-config to configure Wi-Fi or plug in Ethernet cable.
3. Enable SSH
4. Upload your SSH public key using ssh-copy-id. This automatically creates the .ssh directory with the correct permissions and puts your public key in the authorized_keys file.
```
ssh-copy-id pi@[ip_address]
```
**NOTE:** Sometimes, if you are using a key manager like [Krypt.co](https://krypt.co/), you will not have the typical `.pub` file to copy, in which case the `ssh-copy-id -f` option will force it to copy anything close to a public key; this works for me.
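If you don't yet have a key pair to copy in step 4, you can generate one first. A sketch (it writes into a temp directory so it is self-contained; in practice you would use `~/.ssh/id_ed25519` and a real passphrase instead of `-N ""`):

```bash
# Generate a fresh Ed25519 key pair, non-interactively, into a temp dir
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -C "pi-access" -N "" -f "$keydir/id_ed25519_pi" -q

ls "$keydir"   # id_ed25519_pi (private key) and id_ed25519_pi.pub (public key)

# Then copy the public half to the Pi as in step 4:
# ssh-copy-id -i "$keydir/id_ed25519_pi.pub" pi@[ip_address]
```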
5. Disable the password/challenge-response login so that only your SSH key will work. (But first, make a backup in case you make a mistake! If you do make a mistake, you will have to connect directly to the UART or use a local mouse/monitor/keyboard to fix it, and the backup file will be super handy.)
```
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sudo vi /etc/ssh/sshd_config
```
Uncomment and/or set the following parameters in the sshd_config.
```
ChallengeResponseAuthentication no
...
PasswordAuthentication no
...
UsePAM no
```
Then restart the ssh server.
```
sudo systemctl reload ssh
```
### Downsides
Now, once you disable password/challenge-response login, you get the security benefit that no one can access your Pi unless their key is in the authorized_keys file. However, if you ever lose your SSH private key, you can no longer get into your Pi remotely. But with commodity hardware like the Raspberry Pi, you can always pull the SD card and manually edit the authorized_keys file, reflash the card and start again, or connect using a local keyboard/monitor/mouse or via the UART console.
## References
Original base of Cover Photo by <a href="https://unsplash.com/@alex13naf?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Takacs Alexandra</a> on <a href="https://unsplash.com/s/photos/key?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>, modified by me to add Raspberry Pi.
https://www.cyberciti.biz/faq/how-to-disable-ssh-password-login-on-linux/
https://phoenixnap.com/kb/setup-passwordless-ssh
https://community.perforce.com/s/article/6210#:~:text=ssh%20directory%20permissions%20should%20be,-----). | ductapedev |
787,801 | My Textable Cat Printer | I was scrolling on twitter one afternoon and saw that xssfox had a Bluetooth Thermal Cat Printer. I... | 0 | 2021-08-11T04:39:08 | https://dev.to/mitchpommers/my-textable-cat-printer-18ge | twilio, showdev | I was scrolling on twitter one afternoon and saw that xssfox had a Bluetooth Thermal Cat Printer. I have had an idea that required a thermal printer for a while now and this looked like a really good printer for what I wanted to do, so I ordered one.
{% twitter 1420254881613451272 %}
The original idea I had was to set up a thermal printer so you could message it a picture, have it print off and stick it in a guest book, like you might do with a polaroid at a wedding. But it doesn't look like I will be going to a wedding any time soon. So instead, to allow people to interact with it remotely, I decided to give it a phone number and let people send messages that would print out on my desk.
To contact it, message `cat:[message to print]` to one of the below numbers to have it printed.
**Aus:** +61 480 029 995
**US:** +1 520 521 0228
# Giving The Cat Printer A Number
To be able to give the cat printer a number, first I had to get it to print. I pulled out a Raspberry Pi I had from the time my apartment lights were controlled using Amazon Dash buttons. xssfox had some code that allowed you to send messages and images to be printed via HTTP requests. After getting that running on my Pi and debugging some differences in how Python handles bytes between Windows and Linux, I was able to send my first messages to print!
The python code I am using is available at https://gist.github.com/mpomery/6514e521d3d03abce697409609978ede
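The gist above has the real implementation. Purely for illustration, here is a minimal stand-in built on Python's standard library that accepts the same kind of `?text=` request; the `/print` path, the `Printed!` response, and the stubbed `print_to_cat` function are my assumptions, not the gist's actual API:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def print_to_cat(text: str) -> None:
    # Stub: the real code would push bytes to the printer over Bluetooth
    print(f"[cat printer] {text}")

class PrintHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Pull the "text" query parameter out of a request like POST /print?text=hi
        query = parse_qs(urlparse(self.path).query)
        text = query.get("text", [""])[0]
        print_to_cat(text)
        body = b"Printed!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# To actually run it:
#   HTTPServer(("0.0.0.0", 8080), PrintHandler).serve_forever()
```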
{% twitter 1420599852061237248 %}
Once I had it printing, I went into my Twilio console and started adding some functions to my textable light project. I created a new function that would look at the incoming message for the prefix `cat:` and redirect the message to the cat printer. Later I went back, made my light require the prefix `light:`, and added an error message if neither of the prefixes was present.
The function looks like this:
```javascript
exports.handler = function(context, event, callback) {
let twiml = new Twilio.twiml.MessagingResponse();
if (event.Body.toLowerCase().startsWith("cat:"))
twiml.redirect("/cat_printer");
else if (event.Body.toLowerCase().startsWith("light:"))
twiml.redirect("/textable_light");
else
twiml.message(`"light:[colour]" to add a colour to the light
"cat:[text]" to print on the cat printer`);
return callback(null, twiml);
};
```
I then set up a second function `cat_printer` to format the message to be sent to the printer. It takes the incoming message, removes the `cat:` prefix, redacts part of the number it was received from, and formats it with a timestamp for printing. Then it makes a call to the web server running on my Raspberry Pi (made available to the internet using ngrok) to print it. Once that request has been made, the response from the printer gets sent back to the number that texted it.
```javascript
const axios = require('axios');
exports.handler = function(context, event, callback) {
let twiml = new Twilio.twiml.MessagingResponse();
let incomingMessage = event.Body;
if (incomingMessage.toLowerCase().startsWith("cat:"))
incomingMessage = incomingMessage.substr(4).trim();
let from = event.From;
from = from.substr(0,3) + "*".repeat(from.length-6) + from.substr(from.length-3);
let now = new Date();
let printMessage = now.toLocaleString('en-GB', { timeZone: 'Australia/Sydney' }) + "\r\nFrom: " + from + "\r\n" + incomingMessage;
console.log(printMessage)
axios.post(context.catURL + '?text=' + encodeURIComponent(printMessage))
.then((response) => {
twiml.message(`${response.data}`);
return callback(null, twiml);
})
.catch((error) => {
twiml.message(`You sent a message with special characters or the cat printer is offline.`);
console.error(error);
return callback(null, twiml);
});
};
```
# Getting People To Message It
Once I had it working I started telling people about it. Some of my friends had already seen my textable light, so the idea of sending a text message to something on my desk wasn't new to them. I was initially cryptic about what I had made, but once people knew what it was they started messaging it!
{% twitter 1420892406476468228 %}
Some of my friends made me regret telling them about the cat printer:
{% twitter 1421019519485571072 %}
I streamed it briefly so people could see what they sent print in real time, including this really long message:
{% twitch https://www.twitch.tv/mitchpommers/clip/PatientInexpensivePigTakeNRG-W0ojMkY5ohNpk5ha %}
Then I started to see several people sending it messages internationally, so I purchased a second Twilio number so I could respond to them:
{% twitter 1424959545302937601 %}
And then the TwilioDev twitter account tweeted it, expediting this blog post:
{% twitter 1425255647017324546 %}
Now that I have it built and it has a semi-permanent position on my desk, I need to make a better way to keep all of my projects displayed on my desk. I would really like to make them all visible on the internet permanently. I have ideas on how I could achieve this, but don't have access to what I need to make it possible or the space to do it at the moment.
And if you are considering making something similar yourself using Twilio Programmable Messaging, you can [get $10 when you upgrade using this link](https://www.twilio.com/referral/QqpNkR).
| mitchpommers |
787,811 | Quick Notes Based On "Learning JavaScript Exercise" Section of Frontend Masters' "Complete Intro to Web Development, v2" | What I did (in code): const character = 'a' const timesRepeated = 50 let answer =... | 0 | 2021-08-11T02:40:11 | https://dev.to/benboorstein/quick-notes-based-on-learning-javascript-exercise-section-of-frontend-masters-complete-intro-to-web-development-v2-51gm | javascript, fundamentals, forloops, frontendmasters | What I did (in code):
```javascript
const character = 'a'
const timesRepeated = 50
let answer = ''

for (let i = 0; i < timesRepeated; i++) {
    answer += character
}

console.log(answer)
console.log(answer.length)
```
Logs: `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`
Logs: `50`
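As a small aside (not part of the course exercise), modern JavaScript also has the built-in `String.prototype.repeat`, which produces the same result without a loop:

```javascript
// Same exercise, using repeat() instead of a for loop
const character = 'a'
const timesRepeated = 50

const answer = character.repeat(timesRepeated)

console.log(answer)        // fifty 'a' characters
console.log(answer.length) // 50
```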
What I did (in English):
- Assign the string `'a'` to the variable `character`.
- Assign `50` to the variable `timesRepeated`.
- Assign `''` (an empty string, i.e., a string with a length equal to 0) to the variable `answer`.
- The `for` loop: Set the variable `i` to start at `0`, run the loop as long as `i` is less than `timesRepeated`, and at the end of each iteration of the loop, increment `i` by 1. Each iteration of the loop, store `answer + character` in `answer` (i.e., update `answer`).
- Log to the console `answer`.
- Log to the console `answer.length`.
What I practiced:
- Same as above.
What I learned:
- From the above code, I didn't learn anything new.
- For this section, there was no instruction outside of the exercise itself.
What I still might not understand:
- Nothing for this section. | benboorstein |
787,852 | ASP.NET Core -Blazor Datagrid server side paging | In this video we will discuss how to implement datagrid server side paging in ASP.NET Core Blazor... | 0 | 2021-08-11T04:23:43 | https://dev.to/techguy/asp-net-core-blazor-datagrid-server-side-paging-12b2 | webdev, blazor, csharp, dotnet | In this video we will discuss how to implement datagrid server side paging in ASP.NET Core Blazor Webassembly project. It's also called on-demand paging or database paging.
Syncfusion Website: https://www.syncfusion.com/
Syncfusion Blazor Components: https://www.syncfusion.com/blazor-components
Blazor WebAssembly Tutorial Playlist
https://www.youtube.com/playlist?list=PL6n9fhu94yhXXmhl1U4_oHZS5nhaabpPN
Blazor WebAssembly Course Page
https://www.pragimtech.com/courses/blazor-webassembly-tutorial/
{% youtube d8GbAa14uZ0 %} | techguy |
787,969 | How to manage CSS with esbuild | In this article, I'll show how to add styling to our application. The starting point is where we left... | 14,013 | 2021-08-11T05:05:53 | https://how-to.dev/how-to-manage-css-with-esbuild | javascript, esbuild | In this article, I'll show how to add styling to our application. The starting point is where we left off in [step 2](https://how-to.dev/how-to-set-up-a-dev-server-with-esbuild).
# JS
To start, let's replace our dummy JS with code that at least puts something on the screen. I go with vanilla JS because frameworks tend to complicate the esbuild setup. Let's set `src/index.js` to:
```JS
import "./style.css";
const header = document.createElement("h1");
header.innerHTML = "Hello world";
document.body.appendChild(header);
```
* `import "./style.css";`- by default, esbuild has CSS loader set up, but styles are not included in JS bundle. Before we get to it, we have to add the `./style.css` because now it's failing to build
* `const header = ...` & the following lines - simple code to add element to the page. By doing it in JS, by one glimpse, we can tell if the JS is working or not.
# CSS
The styling goes to `./src/style.css`:
```CSS
body {
color: #66f;
}
```
If we build our application with `npm run build` or start the server with `npm run start`, we will see the header without the color. That's because styles are emitted to a separate style file with the same name as our bundle, but with a `.css` extension.
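For reference, the `build`/`start` scripts behind those commands can be as small as the sketch below; the output path and flags are assumptions based on this tutorial's layout, so the repo's actual scripts may differ slightly:

```json
{
  "scripts": {
    "build": "esbuild src/index.js --bundle --outfile=www/main.js",
    "start": "esbuild src/index.js --bundle --outfile=www/main.js --servedir=www"
  }
}
```

With `--outfile` set, esbuild writes the extracted styles next to the bundle as `main.css`, which is what the HTML below links to.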
# HTML
To include the styling we have to add:
```HTML
<link rel="stylesheet" type="text/css" href="./main.css"/>
```
With this in place, the application should look like this:

# Links
The [repo](https://github.com/marcin-wosinek/esbuild-tutorial), branch [step 3](https://github.com/marcin-wosinek/esbuild-tutorial/tree/step-3).
You can check out my [video course about esbuild](https://bit.ly/esbuild-course).
# Summary
In this article, we have seen how to add styling to our esbuild application. If you are interested in the hearing when there is a new part ready, you can sign up [here](https://landing.mailerlite.com/webforms/landing/b8k4x6). | marcinwosinek |
788,021 | What is VCS ? | In simple terms, version control is nothing but a system that keeps track of the changes made to... | 0 | 2021-08-11T07:25:31 | https://dev.to/itsdr9/what-is-vcs-5bji | github, git, vcs | In simple terms, version control is nothing but a system that keeps track of the changes made to source code or files. With a version control system, you can look back at the changes made to a particular file, either by you or another person, by accessing the version control database. This system gives you the ability to compare different versions of a file, thus allowing you to stay informed about the changes that have happened over a period of time.
A version control system can be thought of as a database that stores snapshots of the different files in a project. A snapshot is taken every time a file is modified, and all records of the different versions of a file are kept. In addition to comparing different versions of a file, VCSs also allow you to switch between them. VCSs can be either distributed or centralized. Let us see how these two types differ.
Centralized version control systems use a single, centralized server to store all the different versions of a file. Users access these files by gaining access to this centralized server. There is a disadvantage associated with this type of VCS: if the central server fails for any reason, the entire history stored on it will be gone, and no one will be able to recover any versions of the lost files.
Git is one of the most popular VCSs, and I will discuss Git in the next post.
Till then, keep learning and keep sharing.
| itsdr9 |
788,280 | Coding formalities | Another holy war Recently I was reading blog posts and heated forums discussions on what... | 0 | 2021-08-11T20:06:49 | https://dev.to/rdentato/coding-formalities-387h | programming, theory | ## Another holy war
Recently I was reading blog posts and heated forum discussions on what seems to be the latest flame war: *Object Oriented* vs *Functional programming*.
It was rather funny to read opinions ranging from "OOP considered harmful" to "FP is just a toy" but, after the amusement, I wondered whether we can use these discussions to take a step forward.
Let's face it: we (as *The Software Industry*) are lagging behind. In all the past years we did not progress much in the way we develop our code.
Yes, we now have much better tools (our IDEs are a breeze to work in), we are more disciplined in the way we manage the development process (I remember one of my first projects, where version control was done on paper! I had to introduce the latest and greatest version control tool of that time: RCS!), and we have access to much higher-level abstract concepts, but there's not much difference in how we write our programs.
## Elephant on sight!
There is so much rage about this or that feature of a programming language, but the elephant in the room, the one everybody pretends not to see, is this question:
> Are we getting better at ensuring our programs are *correct*?
There are many aspects of *correctness* but, basically, we should be interested in being confident that what we create is what was intended and that it works the way it is expected to.
The elephant question has many sub-elephant questions: "*Are we getting better at specifying what we want to develop?*", "*Are we getting better at ensuring that the system still has the same behaviour after we made the last change to the code?*", etc. etc.
## Formality
To me, it all has to do with *formality*, i.e. the ability to express precise and unambiguous statements.
In the end, the code we write is a formal system. It must abide by strict rules so that our wonderful computers can run it.
In contrast, our starting point is completely *informal*. We start with specifications like: "I want you to write a software that allows plumbers to compete in a go-kart race" and have to move forward until we create a good clone of a well-known videogame. Except that, in the end, we discover that we were expected to create a software to schedule and manage the results of go-kart races held by the local *Plumber Association*.
Our job as developers (a term that will include coders, architects, designers, analysts, ...) is to reduce this *formality gap* over time, until we'll release a "*correct software*".

It would be great if we could devise a *formal method* (i.e. an always precise and unambiguous way) to express requirements and (within this formal system) refine them into code, but this is just impossible (I could invoke Gödel here, but let's not digress). And even if it were, it would be extremely impractical and undesirable.
So, what can we do to ensure that, at least, we are heading in the right direction?
## Measuring formality
Quite some time ago, I came across an interesting article from professor Francis Heylighen about [Formality](http://pespmc1.vub.ac.be/DEFFORM.html) and it struck me as spot on.
Following the thesis in the report, we can say that an expression is more and more *formal* if its meaning is less and less dependent on the *context*.
The formality of an expression can never be 100%. Even programming language statements like `printf("Hello %s!\n",name);` depend on some context (namely, the fact that the programming language used is C) to be interpreted correctly.
However, the more context a group of people have in common, the easier it is to have formal expressions whose meaning is unambiguous (for that group of people).
The *plumbers go-kart races* story above is a clear example. We assumed that the shared context between us was related to videogames; they assumed the shared context was the recreational activities of the Plumbers Association. Different context, different meaning!
## What is *good*?
We can use these concepts of *formality* and *common context* as a compass to see if we are heading in the right direction.
We can increase *formality* by adding tools, notations and processes that remove "syntactical ambiguity", and we can increase the *common context* with the practices we use nowadays: user stories, stand-up meetings, industry standards, ...
An example: "*release early, release often*" is good because it allows us to add a formal object (the released functionality) to the *common context*, so that it's easier to see if something has been left out until then!
An experiment: think about something you believe is crucial for successful development and let's see if we can describe it with these two concepts.
## The Focus
What we should focus on in our development process is ensuring that, at each step, the *formality* increases, so that things are really progressing.

This means that each actor in the picture above should use the common background with the actor on his left to understand what needs to be done, and use the common background he has with the actor on his right to describe what he understood.
Things go wrong when someone just ignores this fact; for example, when an Analyst writes a User Story purely on the background he shares with the Client, without adding anything that is part of his common background with the Designer. The Designer will most probably misunderstand, or will have to go back asking for further clarification.
"What do you mean, "you didn't know a *Plumber Association* existed"? And who is this short Italian plumber with mustache?"
## Back to the beginning
What has all this to do with the FP vs OOP flame war I talked about?
Very little. And that's the point!
At the very bottom we all have the same view of a *program*. It is:
- a *state* (represented by the values in a set of *variables*)
- a set of rules that specify how the state changes
We use the *variables* to represent the chunk of reality we are interested in and specify the *rules* that will change those variables so that they will represent the evolution of a process.
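To make this "state + rules" view concrete, here is a deliberately tiny, hypothetical counter written both ways (the names are mine, purely for illustration) — neither version is "the" right one, which is rather the point:

```javascript
// OOP lens: the state lives inside an object; methods are the rules that mutate it.
class Counter {
  constructor() { this.count = 0; }
  increment() { this.count += 1; }
}

// FP lens: the state is a plain value; a pure function returns the next state.
const increment = (state) => ({ ...state, count: state.count + 1 });

const c = new Counter();
c.increment();            // state mutated in place

const s0 = { count: 0 };
const s1 = increment(s0); // s0 is untouched; s1 is the next state
```

Both sketches describe the same variables and the same rule; they differ only in how they let us reason about the transition.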
So in the "OOP vs FP" debate, the questions for me are:
- which one helps me reason better about transitions from one state to another? (increase of formality)
- which one helps me represent my *reality* at different levels of abstraction in a consistent way? (ease of handling common context)
- which one will "shorten the distance" from end user to the code? (reduce the need for multiple common contexts)
My answer is that they are roughly at the same level. One may be better in one situation, the other may be better in another, but neither of them solves the inherent problems we face when creating a new application.
## Conclusion
All this sounds very abstract, I know, but I think it has very deep practical impact.
Maybe what we need to move forward is some better common view that we can use as a *context* to increase the *formality* of our development process.
Don't know about you, but I feel now is the time to focus on finding what could make us work **substantially** better, rather than spending time debating which programming language fad is "*better*".
| rdentato |
788,358 | 10 Free Public APIs for developers you need to use for your next projects | If you found value in this thread you will most likely enjoy my tweets too so make sure you follow me... | 0 | 2021-08-17T13:32:51 | https://blog.vladpasca.dev/10-free-public-apis-for-developers-you-need-to-use-for-your-next-projects | webdev, javascript, programming, beginners | _If you found value in this thread you will most likely enjoy my tweets too so make sure you follow me on [Twitter](https://twitter.com/VladPasca5) for more information about web development and how to improve as a developer. This article was first published on my [Blog](https://vladpasca.hashnode.dev/)_
### 1. New York Times
Provides news
🔗https://developer.nytimes.com/?ref=apilist.fun
### 2. Cat Facts
Daily cat facts
🔗https://alexwohlbruck.github.io/cat-facts/?ref=apilist.fun
### 3. GeoName API
Geographical database covering all countries, with over eleven million placenames available for download free of charge
🔗http://www.geonames.org/export/?ref=apilist.fun
### 4. Food API
Lets you access over 330,000 recipes and 80,000 food products
🔗https://spoonacular.com/food-api
### 5. Fixer.io
Exchange rates and currency conversion
🔗https://fixer.io/?ref=apilist.fun
### 6. Currencylayer
Exchange rates and currency conversion
🔗https://currencylayer.com/documentation?ref=apilist.fun
### 7. Adorable Avatars
Generate random cartoon avatars
🔗https://adorable.io/?ref=apilist.fun
### 8. Ipstack
Locate and identify website visitors by IP address
🔗https://ipstack.com/?ref=apilist.fun
### 9. Random Facts API
Get random Facts on different topics
🔗https://fungenerators.com/api/facts/?ref=apilist.fun
### 10. Superhero API
Get all SuperHeroes and Villains data from all universes under a single API
🔗https://superheroapi.com/?ref=apilist.fun
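Most of these APIs follow the same pattern: a base URL plus query parameters (and often an API key). As a minimal sketch — the endpoint path and parameter names below are made up for illustration, so check each API's own documentation for the real ones — building such a request in JavaScript might look like:

```javascript
// Illustrative only: the endpoint and parameter names are hypothetical.
function buildUrl(base, params) {
  const query = new URLSearchParams(params).toString();
  return `${base}?${query}`;
}

const url = buildUrl("https://api.example.com/recipes", {
  query: "pasta",
  number: "5",
  apiKey: "YOUR_API_KEY",
});

// Then fetch it (in a browser or Node 18+):
// const data = await fetch(url).then((res) => res.json());
```

`URLSearchParams` takes care of encoding the values, so you don't have to escape them by hand.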
_I hope found this useful and if you did please let me know. If you have any questions feel free to DM me on [Twitter](https://twitter.com/VladPasca5)._ | pascavld |
788,409 | 🤯 10 [Insightful] Programming Wisdom Quotes! | Clean Code StudioFollow Clean Code Clean... | 14,000 | 2021-08-11T14:31:59 | https://dev.to/cleancodestudio/10-insightful-programming-wisdom-quotes-18ba | programming, writing, beginners, discuss | {% user cleancodestudio %}
[](https://twitter.com/cleancodestudio)
So this is a bit of a fun response to @inhuofficial's ([10 Tips For Clean DEV Articles!](https://dev.to/inhuofficial/10-tips-for-clean-dev-articles-59id)) - a fun post **critiquing my @cleancodestudio article** [10 Tips For Clean Code](https://dev.to/cleancodestudio/10-tips-for-clean-code-4nm6) for its structure and accessibility.
**To be clear**, the tips in the article are great, this is purely a post pointing out that I read and appreciated @inhuofficial's accessibility tips (and am down to use them).
There is no hate or malice here, I - like @inhuofficial - am just mischievous (as most of you know) and the article had loads of **accessibility advice** given by a passionate spokesperson for digital accessibility...so you know I'm going to have to respond with my own article mimicking the structure of the article that critiqued my own.
Anyway, it is silly with an important message, please treat it as such!
Here is my list of **10 [Insightful] Programming Wisdom Quotes!** (I would suggest going to read the article first or read them at the same time, otherwise some of these points used in this article and pointed out in @inhuofficial's article may not make sense!)
## The Quotes
1 **John Carmack Quote:**
> _"Sometimes the elegant implementation is a function. Not a
method. Not a class. Not a framework. Just a function."_
2 **Doug Linder Quote:**
> _"A good programmer is someone who always looks both ways before crossing a one-way street."_
3 **Mikko Hypponen Quote:**
> _"Rarely is anyone thanked for the work they did to prevent the disaster that didn't happen."_
4 **Jeff Atwood Quote:**
> _"Hell isn't other people's code. Hell is your own code from 3 years ago."_
5 **Rich Hickey Quote:**
> _"Programming is not about typing, it's about thinking"_
6 **Unknown Quote:**
> _"Weeks of coding can save you hours of planning"_
7 **Ron Jeffries Quote:**
> _"Always implement things when you actually need them, never when you just foresee that you need them."_
8 **Nicoll Hunt Quote:**
> _"The first step of any project is to grossly underestimate its complexity and difficulty"_
9 **Richard Pattis Quote:**
> _"When debugging, novices insert corrective code; experts remove defective code."_
10 **Filipe Fortes Quote:**
> _"Debugging is like being the detective in a crime movie where you are also the murderer."_
## The End
Obviously this may all be a bit of fun but the **quotes are all valid and important** (I might have included a quote or two that aren't necessarily helpful and more so simply state truths about how projects typically go for programmers, but _"8 valid quotes for programmers with a filler quote or two"_ wouldn't make for a good title - now would it? 🤣).
**Advice for everyone:** put some effort in to formatting your articles properly, it makes them easier to read and also has the added bonus of including everyone in the conversation!
And yes, for those of you who read the original article, I did even steal the cover image style and design, if you are going to copy someone's work do it right!🤣 Technically, I just stole it back - I used that cover photo on my 10 Clean Code Tips article first and it's **a damn good looking cover photo**! Feel free to steal it for yourself!
## Bonus Quote 11
Always have a bonus quote, people seem to love that!
Bonus **Clean Code Studio Quote:**
> _"Follow [inhuofficial on Twitter](https://twitter.com/InHuOfficial), I have as they put some insightful tweets out there!_
> _Oh and I have some followers as well (although I only started tweeting daily this past week), so you could always follow @[CleanCodeStudio](https://twitter.com/cleancodestudio) if you fancy it, my knowledge sharing will continue there."_
Hope this article made you smile and I hope you are having a great week!
---
<center>
[Clean Code Studio](https://cleancode.studio)
☕️ Code Tips
☕️ Career Advice
☕️ Developer Memes
<small>Shiny button, Clean Code 𝗡𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿 👇, juicy dev tips...wanna join?</small>
[](https://cleancodestudio.paperform.co/)
<small>(Discover [50+ pages] of my personal FAANG interview notes!)</small>
<a href="https://www.bluehost.com/track/cleancodestudio/" target="_blank"> <img border="0" src="https://bluehost-cdn.com/media/partner/images/cleancodestudio/120x90/120x90BW.png"> </a>
</center>
| cleancodestudio |