Dataset columns (dev.to articles):

| column | type | min | max |
|---|---|---|---|
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
938,771
[BTY] Day 2: Fancy packages to work with Dataframe
Two packages I want to mention are pandas-profiling and Mito. pandas-profiling It will...
16,070
2021-12-28T16:19:07
https://dev.to/nguyendhn/day-2-fancy-packages-to-work-with-dataframe-203p
betterthanyesterday, python, pandas, dataanalysis
Two packages I want to mention are [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling) and [Mito](https://trymito.io). ### pandas-profiling It generates profile reports from a pandas DataFrame and is far more powerful than the default `df.describe()`. The statistics are presented in an **interactive** HTML report: - Type inference: detect the types of columns in a dataframe - Essentials: type, unique values, missing values - Quantile statistics: minimum value, Q1, median, Q3, maximum, range, interquartile range - Descriptive statistics: mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness - Most frequent values - Histograms - Correlations: highlighting of highly correlated variables; Spearman, Pearson and Kendall matrices - Missing values: matrix, count, heatmap and dendrogram of missing values - Text analysis: learn about categories (uppercase, space), scripts (Latin, Cyrillic) and blocks (ASCII) of text data - File and image analysis: extract file sizes, creation dates, and dimensions, and scan for truncated images or images containing EXIF information ### Mito The main functionality is exploring, transforming, and presenting your data with the ease of Excel, all without leaving Jupyter (see the [video demo](https://www.youtube.com/watch?v=b94KgHZyt_E&list=TLGGGtIOMhraeGQyOTEyMjAyMQ&t=11s)). It's easy to use and really does make "advanced data analysis accessible to all." Check it out!
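To give a feel for what the report automates, here is a small plain-pandas sketch that computes a few of the statistics listed above by hand (the data is made up for illustration; pandas-profiling itself produces the whole interactive report with `ProfileReport(df).to_file("report.html")`):

```python
import pandas as pd

# Made-up toy data standing in for a real dataset.
df = pd.DataFrame({
    "city": ["Hanoi", "Hue", "Hanoi", None, "Danang"],
    "temp": [30.5, 28.0, 31.2, 29.9, None],
})

# Essentials: missing values and unique values per column
print(df.isna().sum())      # city: 1, temp: 1
print(df.nunique())         # city: 3, temp: 4

# Quantile statistics for a numeric column: Q1, median, Q3
print(df["temp"].quantile([0.25, 0.5, 0.75]))

# Descriptive statistics beyond df.describe(): skewness and kurtosis
print(df["temp"].skew(), df["temp"].kurtosis())
```

pandas-profiling computes all of these (plus the correlations, histograms and missing-value visualizations) in one pass and renders them into the HTML report.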
nguyendhn
938,780
Vagrant with xdebug
Finally i found the problem , change the vagrant timezone to be same as host timezone server xdebug...
0
2021-12-28T16:33:37
https://dev.to/toqadev91/vagrant-with-xdebug-41ke
xdebug, php, vagrant, vscode
Finally I found the problem: change the Vagrant timezone to be the same as the host timezone. The server's Xdebug configuration `.ini` file was changed to:

```
xdebug.idekey=VSCODE
xdebug.mode=debug
xdebug.start_with_request=yes
xdebug.remote_autorestart = 1
xdebug.client_port=9003
xdebug.discover_client_host=1
xdebug.max_nesting_level = 512
xdebug.log_level=10
xdebug.connect_timeout_ms=600
xdebug.log=/var/log/xdebug/xdebug33.log
xdebug.show_error_trace=true
```

Note that this configuration applies to Xdebug v3. In the Vagrantfile, add port 80 as a forwarded port, like this:

```
Vagrant.configure(2) do |config|
  config.vm.box_url = "file:///Users/toqaabbas/projects/theqar_vagrant/ubuntu16_php7.4_vagrantbox"
  config.vm.box = "baazbox"
  config.vm.provider "virtualbox" do |v|
    v.memory = 5120
    v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate//var/www","1"]
    v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/var/www","1"]
  end
  #config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", ip: "192.168.33.33"
  config.vm.network "forwarded_port", guest: 80, host: 80
  config.vm.provision :shell, :inline => "sudo rm /etc/localtime && sudo ln -s /usr/share/zoneinfo/Asia/Amman /etc/localtime", run: "always"
  config.vm.synced_folder "../", "/var/www"
  config.vm.provision "fix-no-tty", type: "shell" do |s|
    s.privileged = false
    s.inline = "sudo sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n/' /root/.profile"
  end
  config.vm.provision "file", source: "root", destination: "~"
  config.vm.provision :shell, path: "setup_vagrant.sh"
  config.vm.box_check_update = false
end
```

Then run `vagrant reload` to rebuild the box. Finally, change `launch.json` to:

```
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for Xdebug",
      "type": "php",
      "request": "launch",
      "port": 9003,
      "pathMappings": {
        "/var/www" : "/Users/myusername/projects"
      },
    },
    {
      "name": "Launch currently open script",
      "type": "php",
      "request": "launch",
      "program": "${file}",
      "cwd": "${fileDirname}",
      "port": 9003
    }
  ]
}
```

The most important part was getting the location of `"pathMappings"` right, and finally it works :)
toqadev91
938,806
Anybody working with MERN? I have an error.
I have a problem with reading data from mongo. I can't find the problem. I get an empty array from...
0
2021-12-28T17:38:10
https://dev.to/ivkemilioner/anbody-working-with-mern-i-have-error-16pk
mern, react, help
I have a problem with reading data from mongo. I can't find the problem. I get an empty array from mongo. https://github.com/dragoslavIvkovic/MERN-WOLT/tree/master
ivkemilioner
938,822
A brief look at Go's new generics
In this post, I'm describing my experience of trying out Go 1.18's new generics, pitfalls I came across and how I solved them.
0
2021-12-28T19:19:59
https://dev.to/akrennmair/a-brief-look-at-gos-new-generics-52j3
go, generics
--- title: A brief look at Go's new generics published: true description: In this post, I'm describing my experience of trying out Go 1.18's new generics, pitfalls I came across and how I solved them. tags: golang,generics --- I have been following Go's development fairly closely ever since it was first publicly released in November 2009. I remember a time when you still had to terminate all statements with semi-colons, the build system was rudimentary and Makefile-based, and the standard error type was not `error` but `os.Error` (having the effect that virtually every package was importing the `os` package). At the time, I was mostly doing Unix systems programming with C and a bit of C++, on both Solaris and Linux. Go impressed me because it really felt like an evolution of C that kept most of C's simplicity while adding memory safety, garbage collection and a whole set of features enabling users to build complex concurrent algorithms. And even better: the language runtime implemented an M:N threading model that distributed potentially many light-weight goroutines among a few heavy-weight OS threads, effectively parallelizing the execution of your concurrent code. Among the many early review articles of pre-1.0 Go, a common criticism was the lack of generics, or parameterized types. Having used Go for my personal projects [since late 2009](https://github.com/akrennmair/gopcap/commit/d141b24473a7aaadf723b2eb02b9804749214e71) and professionally since 2013, I kind of understood the criticism because it was the kind of feature that you would naturally expect in a new language as other modern languages of the time would often feature generics in some shape or form. C++ went to the extreme with it, as its template system goes beyond just type parameterization and was even found to be [Turing-complete](https://wiki.c2.com/?TemplateMetaprogramming). 
Practically though, in the several years of actively using Go pretty much every work day, writing many tens of thousands of lines of code, I've only ever come across one situation where I thought that parameterized types would have been really handy and would have made it so much easier to write less repetitive code. So when [Go 1.18 beta1](https://go.dev/blog/go1.18beta1) was released a few weeks ago, I decided to revisit that code to try and remove some of the code duplication for several different types by changing this over to use parameterized types. I'll spare you the most basic introduction. The Go team has already written a very [straightforward intro](https://go.dev/doc/tutorial/generics) to this new language feature, so there's no point in repeating it here. So let me describe a simplified situation of my existing code and the issue I ran into. I'm starting with multiple implementations of encoding `int32` and `int64` values to `byte`s.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

type int32encoder struct{}

func (e *int32encoder) EncodeList(values []int32) (ret []byte) {
	for _, v := range values {
		ret = append(ret, e.Encode(v)...)
	}
	return ret
}

func (e *int32encoder) Encode(v int32) (ret []byte) {
	ret = make([]byte, 4)
	binary.BigEndian.PutUint32(ret, uint32(v))
	return ret
}

type int64encoder struct{}

func (e *int64encoder) EncodeList(values []int64) (ret []byte) {
	for _, v := range values {
		ret = append(ret, e.Encode(v)...)
	}
	return ret
}

func (e *int64encoder) Encode(v int64) (ret []byte) {
	ret = make([]byte, 8)
	binary.BigEndian.PutUint64(ret, uint64(v))
	return ret
}

func main() {
	var (
		enc32 int32encoder
		enc64 int64encoder
	)
	fmt.Printf("int32 list: %+v\n", enc32.EncodeList([]int32{23, 42, 9001}))
	fmt.Printf("int64 list: %+v\n", enc64.EncodeList([]int64{23, 42, 9001}))
}
```

While this is a very simplified example, you do see some code repetition: the `Encode` methods are similar (but not identical) between `int32encoder` and `int64encoder`, while the `EncodeList` methods are virtually the same, with the only difference being the use of `int32` vs `int64`. So how would we turn this into a more generic version that reduces this duplication to only the minimum necessary? The most straightforward way, which I implemented first, looked like this:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

type intEncoder[T int32 | int64] struct{}

func (e *intEncoder[T]) EncodeList(values []T) (ret []byte) {
	for _, v := range values {
		ret = append(ret, e.Encode(v)...)
	}
	return ret
}

func (e *intEncoder[T]) Encode(v T) (ret []byte) {
	switch interface{}(v).(type) {
	case int32:
		ret = make([]byte, 4)
		binary.BigEndian.PutUint32(ret, uint32(v))
	case int64:
		ret = make([]byte, 8)
		binary.BigEndian.PutUint64(ret, uint64(v))
	}
	return ret
}

func main() {
	var (
		enc32 intEncoder[int32]
		enc64 intEncoder[int64]
	)
	fmt.Printf("int32 list: %+v\n", enc32.EncodeList([]int32{23, 42, 9001}))
	fmt.Printf("int64 list: %+v\n", enc64.EncodeList([]int64{23, 42, 9001}))
}
```

This is better already: there's only one `intEncoder` type parameterized to support both `int32` and `int64`, and only one `EncodeList` and `Encode` method each. But what's that? Oh no, a type switch! Since `int32` and `int64` are encoded slightly differently, I initially chose this hack-ish way of coding the specialization for each supported type.
It works, but it's not great, as it encodes the list of supported types both in the type constraint of the type parameter and in the `Encode` method. That means that every time I want to add support for another type, I need to add it both in the type constraint and in the `Encode` method. Not only is this tedious and error-prone, as it means that multiple changes in different parts of the code need to be made, it also strictly locks in the set of supported types. If this were in a package used by other developers, it would not allow them to add encoding support for their own custom types. So that approach is a no-go. I then looked around for another solution, but due to the lack of comprehensive documentation (the feature has only been released in a beta version), it took me a little bit of thinking and playing around to find one that works. Back from my C++ days, I remembered that you could specialize templates for concrete types to provide concrete implementations. I have to admit, though, my C++ is pretty rusty these days. I couldn't find a specific way of doing that with Go, and the Go generics proposal even documents as a limitation that it doesn't allow specialization. Surely, I thought, there must be some way of doing it; this new feature, into which many years of discussion, design and implementation went, ought to support more than just the most basic use cases, no? After a bit more experimentation, I eventually found a solution that works: it keeps the code extendable with your own types, and it also ensures type safety. And here is my first attempt at it:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

type encodeImpl[T any] interface {
	encode(T) []byte
}

type encoder[T any] struct {
	impl encodeImpl[T]
}

func (e *encoder[T]) EncodeList(values []T) (ret []byte) {
	for _, v := range values {
		ret = append(ret, e.Encode(v)...)
	}
	return ret
}

func (e *encoder[T]) Encode(v T) (ret []byte) {
	return e.impl.encode(v)
}

type int32impl struct{}

func (i int32impl) encode(v int32) []byte {
	ret := make([]byte, 4)
	binary.BigEndian.PutUint32(ret, uint32(v))
	return ret
}

type int64impl struct{}

func (i int64impl) encode(v int64) []byte {
	ret := make([]byte, 8)
	binary.BigEndian.PutUint64(ret, uint64(v))
	return ret
}

type float64impl struct{}

func (i float64impl) encode(v float64) []byte {
	ret := make([]byte, 8)
	binary.BigEndian.PutUint64(ret, math.Float64bits(v))
	return ret
}

func main() {
	var (
		enc32  = encoder[int32]{int32impl{}}
		enc64  = encoder[int64]{int64impl{}}
		encf64 = encoder[float64]{float64impl{}}
	)
	fmt.Printf("int32 list: %+v\n", enc32.EncodeList([]int32{23, 42, 9001}))
	fmt.Printf("int64 list: %+v\n", enc64.EncodeList([]int64{23, 42, 9001}))
	fmt.Printf("float64 list: %+v\n", encf64.EncodeList([]float64{23.5, 42.007, 900.1}))
}
```

What I've essentially done is move the code that differs per type, the innermost encoding functionality, into separate concrete types, each carrying a specific implementation. As you can see, I could even effortlessly add support for another concrete type, `float64`. There's one problem though: if you forget to provide the encoding implementation, calling `Encode` will panic. So here's an improved version:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

type encodeImpl[T any] interface {
	encode(T) []byte
}

type encoder[T any, I encodeImpl[T]] struct {
	impl I
}

func (e *encoder[T, I]) EncodeList(values []T) (ret []byte) {
	for _, v := range values {
		ret = append(ret, e.Encode(v)...)
	}
	return ret
}

func (e *encoder[T, I]) Encode(v T) (ret []byte) {
	return e.impl.encode(v)
}

type int32impl struct{}

func (i int32impl) encode(v int32) []byte {
	ret := make([]byte, 4)
	binary.BigEndian.PutUint32(ret, uint32(v))
	return ret
}

type int64impl struct{}

func (i int64impl) encode(v int64) []byte {
	ret := make([]byte, 8)
	binary.BigEndian.PutUint64(ret, uint64(v))
	return ret
}

type float64impl struct{}

func (i float64impl) encode(v float64) []byte {
	ret := make([]byte, 8)
	binary.BigEndian.PutUint64(ret, math.Float64bits(v))
	return ret
}

func main() {
	var (
		enc32  = encoder[int32, int32impl]{}
		enc64  = encoder[int64, int64impl]{}
		encf64 = encoder[float64, float64impl]{}
	)
	fmt.Printf("int32 list: %+v\n", enc32.EncodeList([]int32{23, 42, 9001}))
	fmt.Printf("int64 list: %+v\n", enc64.EncodeList([]int64{23, 42, 9001}))
	fmt.Printf("float64 list: %+v\n", encf64.EncodeList([]float64{23.5, 42.007, 900.1}))
}
```

The only change here is that the type of the type-specific implementation has been added to the list of type parameters, still with the constraint that it must implement the encoding interface for the encoder's own type parameter. So even though this adds a bit of stutter (in the code above, the type name appears multiple times when declaring an instance variable of the `encoder` type), it's still perfectly type-safe and provides the concrete encoder implementation for whatever type you want to use. You can even go further and have multiple encoder implementations per type, e.g. one for big-endian and one for little-endian. All in all, I'm quite happy with Go generics. They're fairly simple, easy to learn and to understand, but strike just the right balance to provide enough power for more complex things. Not nearly at the level of C++ templates, but more than good enough for the vast majority of use cases. My only criticism is that the amount of documentation is rather sparse.
There's the brief tutorial from the Go team itself, the original proposal that eventually led to this implementation (which even differs slightly in syntax), and a few more blog articles, but nothing like a guide to doing more complex things with generics. But then, this was the motivation to write this article, and I'm certain that the Go community will soon produce more great documentation about all the details and pitfalls of the new generics feature. As with other language features before it, it will probably take a while until a set of best practices emerges and new surprising properties of the feature are discovered. Work on Go started in 2007 and was first made public in 2009. In 2022, we'll see the first stable release of Go with support for type parametrization. Even though it took 15 years for that feature to reach consensus within the developer community and to be implemented, I'd say it was totally worth the wait.
akrennmair
939,065
Adding Cypress to an existing Angular project
(Originally published on my blog December 23, 2021) When I set up my Angular13 single-page...
0
2021-12-29T02:14:54
https://annardunster.com/programming/2021/adding-cypress-angular.html
angular, cypress, testing, typescript
(Originally published on my blog December 23, 2021) When I set up my Angular 13 single-page application (SPA) at work, I initially considered setting up Selenium for end-to-end testing, but due to the steep learning curve I was experiencing with the stack in general (and some other projected hurdles, such as figuring out how to mock data that would be accessed through the `window.external` property in the live environment), I kept putting it off. But in the past month I encountered a situation in which the tests I had written for this project did not actually catch a dependency error that caused the page not to load when the template was accessed at runtime. I suspect this is because I am *not* using the `TestBed` for the majority of my unit tests, since TestBed has a high overhead and slows down my couple thousand test cases dramatically. I also recently read some articles (particularly [From Zero to Tests](https://corgibytes.com/blog/2019/12/16/From-Zero-to-Tests/) on Corgibytes) that got me rethinking my test structure. Most of my tests currently are simple unit tests, with possibly a few integration tests - mostly, does this function do what we expect when given the inputs we expect it to get (or handle it gracefully if we give it bad inputs)? I definitely don't have any tests that address DOM rendering. So, when I had a stretch of downtime without anyone waiting on any specific projects, I figured it would be a great time to set something up. --- > **Why Cypress?** > You may have noticed that I mentioned Selenium to begin with, but this article is about Cypress. Going back to Corgibytes, this time to [Integration Tests Can Be Fun!](https://corgibytes.com/blog/2017/02/21/integration-tests-fun/), I didn't really care for the idea of setting up a hard-to-understand black box to handle my higher-level tests. Plus, I've heard good things about Cypress from other sources, and it looks like it is relatively easy to set up compared to Selenium.
--- ## Getting Started The first thing I did after deciding to set up Cypress was hop over to [their website](https://docs.cypress.io/guides/overview/why-cypress) and start watching the [introduction video](https://www.youtube.com/watch?v=LcGHiFnBh3Y). This video suggests installing Cypress in any project with `npm install -D cypress`. I'm sure this would work, but I am also aware that Angular has its own package addition process, and sure enough, double-checking for an Angular-specific install path led me to [End-to-End testing with Cypress - Testing Angular](https://testing-angular.com/end-to-end-testing/), which gives the Angular CLI command `ng add @cypress/schematic` instead. --- > **NOTE:** > If, like me, the first thing you do after installing Cypress is run it with `npm run cypress:open`, you may be confused by the immediate error message "Warning: Cypress could not verify that this server is running." Don't worry about this for now if you're following the tutorials; later on, they go over running your development server alongside Cypress for testing. --- From there I hopped into the [introductory tutorials that Cypress provides](https://docs.cypress.io/guides/getting-started/writing-your-first-test) to get started. This is my first experience with a browser automation suite, and I have to say, it's pretty fun to enter commands as text, hop back over to the browser window, and see it executing them! Definitely exciting to push forward and get to a state where I could test my actual application. So, the next step was to ensure that Cypress actually could test against it. The Cypress tutorials direct the user to enter a localhost address into `cy.visit()`, but the Angular CLI `ng add` command we used earlier already set up a default localhost address that's *different* from the one mentioned in Cypress's tutorials... *and* matches the one available from `ng serve`.
If you have customized `ng serve`, I don't know if this will hold true, but in my case the address Cypress was looking for is a match. (If you want to customize the localhost address, the Cypress tutorial goes over configuring this in [Step 3 of Testing Your App](https://docs.cypress.io/guides/getting-started/testing-your-app#Step-3-Configure-Cypress).) Angular also provides a basic test file with a couple of `contains()` assertions when you use `ng add` to include Cypress in your project. In my case, since this is an existing project with actual content, neither passed. I'm not sure whether they're valid for the empty app that Angular's CLI generates when creating a new application; I suspect they might be. My case is a bit unusual in that a user should never actually see this page; users access individual pages by URLs that are stored in the software the pages are embedded in. The only time a user would end up at the root of the application is if somehow the stored link in the software didn't match any valid route. Nonetheless, it's useful to ensure that the page loads and renders, since if it doesn't, none of the rest of the application will either! I ended up just writing a few basic test cases against the minimal content and links that I have at the root of my application as a proof of concept. I also took one of the user-facing pages that was fairly simple and worked up an initial-state test for it, too. I added `data-cy="whatever"` attributes to the important elements, and had Cypress check that the ones that should exist on page load do, and the ones that should be hidden on page load are. ## Setting up the Mock Environment Most developers at this stage are going to be starting to test their login flow and looking at building stubs for server requests. For my application, though, the software that the pages get loaded in handles all of that; my code doesn't have anything to do with authenticating a user.
So what I needed to do next was figure out how to stub the entire interface that lives on `window.external` when the pages are live in the production environment. ### What Didn't Work First, I decided to try using the custom commands feature to add a command that applies the mock I've been using for my unit tests to `window.external`. There was an immediate hurdle involving TypeScript and parsing the commands file: as soon as I copied the example namespace declaration from the comment at the top of `commands.ts` and adapted it to name my custom command, I got `Argument of type 'mockWindow' is not assignable to parameter of type 'keyof Chainable<any>'.ts(2345)` from my linter. I tried a few things and got other errors, then ended up following along with [Adding Custom Commands to Cypress Typescript](https://medium.com/@gghalistan/adding-custom-commands-to-cypress-typescript-28d23f90c2fd). I created an `index.d.ts` file in `./support` to hold my modified namespace declaration, and added `./support` to the `tsconfig.json` in the `cypress` folder, which resolved the type errors.

```typescript
// index.d.ts
declare namespace Cypress {
  interface Chainable {
    mockExternal(): Chainable<Element>
  }
}
```

Unfortunately, the part of the custom command where I import the mock then caused a huge error, the first part of which looked like this:

```javascript
Error: Webpack Compilation Error
./node_modules/@angular/router/fesm2015/router.mjs
Module not found: Error: Can't resolve '@angular/common' in 'C:\Users\adunster\Documents\repos\HtmlApp\node_modules\@angular\router\fesm2015'
resolve '@angular/common' in 'C:\Users\adunster\Documents\repos\HtmlApp\node_modules\@angular\router\fesm2015'
  Parsed request is a module
  using description file: C:\Users\adunster\Documents\repos\HtmlApp\node_modules\@angular\router\package.json (relative path: ./fesm2015)
    Field 'browser' doesn't contain a valid alias configuration
  resolve as module
// ...
```

I tried getting around this by duplicating my mock to a file in Cypress's directory and editing out any references that imported anything from my Angular project, such as types or mocks. Only then did I find out that nothing I assigned to a property of `window.external` was actually showing up in the browser. ### How did I fix it? My ultimate solution was to skip using Cypress commands entirely and change program behavior based on environment variables. I'll probably refine this to a more specific npm script in the future, maybe with an additional `environment.ts` file (as opposed to using the development environment in general). For now, the code that assigns the `.external` property in my Angular service changed from:

```typescript
function _external (): any {
  return window.external
}
```

To:

```typescript
function _external (): any {
  if (environment.production) {
    return window.external
  } else {
    return new MockCEMRWindow().external
  }
}
```

Now, as I develop my specific Cypress tests, I can mock data into the functions in the `MockCEMRWindow.external` property to produce the results I need to test. It's not ideal, but for the moment it does what I need it to do, and it's actually something I'd meant to do for a while for manual testing in a browser window running the development environment. I'm sure I will want to dig further into `cy.stub()` and `cy.spy()` when I get the chance, too. ## Running the Tests Automatically I'm sure there's a lot of room for growth in automating our deployment processes, but for now, building and deploying to production is set up through the npm scripts in the root `package.json`. After completing code and testing on a feature or bug fix, the code gets merged into the `fresh` branch, and then we run an npm script that causes the production server to pull the `fresh` branch, run the Karma/Jasmine unit tests, and then build it if the tests don't fail.
(It's similar for the non-production server we use for semi-live testing on real data, only usually on the current development branch instead of `fresh`.) Unfortunately, it looks like this will be neither simple nor straightforward to set up for this situation (especially given that all of these have to live inside a `pushd` command since we are using UNC paths), so this will be an adventure for another week. But Cypress has some direction here in their [Continuous Integration](https://docs.cypress.io/guides/continuous-integration/introduction#Boot-your-server) documentation if you're looking for where to head next. ## Conclusion After getting my feet wet with Cypress and seeing a few of the things it can do, I'm actually really genuinely excited about it. It looks like a great tool and I can't wait to use it to help guarantee the resiliency of my code! It isn't too hard to get the basics set up on an existing project, although there are a few tricks you may have to deal with depending on your environment. Their tutorials are great, though, and their documentation is clear and easy to follow. I think if I were building a page that works through more typical API calls and HTTP requests it would be a lot easier to set up appropriate mocks and stubs, or at least have a lot more clear direction. Still, even with the unique challenges of my environment, the basic setup has not been difficult at all. Don't be afraid to just install it and get started!
ardunster
938,827
Pass secrets to Docker build to fetch private Github repositories
To fetch private repositories as dependencies in a Docker image build procedure, you must set the...
0
2021-12-28T18:20:17
https://dev.to/theredrad/pass-secrets-to-docker-build-to-fetch-private-github-repositories-291i
docker, github
To fetch private repositories as dependencies in a Docker image build, you must set GitHub credentials. A popular error when an invalid credential is set: > fatal: could not read Username for 'https://github.com': No such device or address --- ## Git URL config There are lots of posts on the internet about doing this by setting your GitHub personal token in the git URL config. All of them are insecure, because they're actually suggesting you pass your GitHub personal token to the Dockerfile as an argument and set it as the username for the GitHub repositories. Remember that every command in a Dockerfile is printed in the build log and is accessible from the build history, so your private token would end up printed in the logs. ## The SSH key method You can also use an SSH key to download private repositories during the Docker build. For this, you need to generate a new SSH key and assign it to your GitHub account, then store the private key in the repository secrets, write it to a file in the GitHub Actions job step, and copy the file from the context directory to the `.ssh` directory of the build container. With this method, you are not in control of the SSH key's scopes: the SSH key is your identity and has access to your whole account. ## Using a .netrc file [The .netrc file contains login and initialization information used by the auto-login process.](https://www.gnu.org/software/inetutils/manual/html_node/The-_002enetrc-file.html) To use a .netrc file, you must generate [a new personal access token](https://github.com/settings/tokens/new) with the scopes related to your private repositories. For example, if your private repositories exist under your account, the `repo` scope is enough, but if they exist under your organization or team account, the `admin:org` scope is required. Then store the generated personal access token in the repository secrets; here I saved it under the name `API_TOKEN_GITHUB`.
Create a GitHub Actions workflow file:

```yaml
name: Docker Image CI

on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Store netrc file
        run: echo 'machine github.com login ${{ secrets.API_TOKEN_GITHUB }} password x-oauth-basic' > .netrc
      - name: Build the Docker image
        run: DOCKER_BUILDKIT=1 docker build --file Dockerfile --tag test-image:$(date +%s) --secret id=github,src=.netrc .
```

In this action, we first store our secret in the .netrc file, then pass the file as a secret to the Docker build command. The GitHub Actions job masks secrets in the logs, so it's safe to use them in the run command. After that, you must mount the secret file at `/root/.netrc` in the Dockerfile (in this example we're running the Go module download command after mounting the secret):

```dockerfile
RUN --mount=type=secret,id=github,dst=/root/.netrc \
    go env -w GOPRIVATE=github.com/YOUR_USERNAME/* \
    && env GIT_TERMINAL_PROMPT=1 go mod download
```

**Notice**: You must run the `go mod download` or `npm install` command right after mounting the secret file, i.e. in the same Docker `RUN` command.

## Using docker/build-push-action@v2

If you are using a GitHub Action to build or push your image, you should pass the .netrc file like this:

```yaml
# bluh bluh bluh
      - name: Store netrc file
        run: echo 'machine github.com login ${{ secrets.API_TOKEN_GITHUB }} password x-oauth-basic' > .netrc
      - name: Push to GitHub Packages
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          secret-files: |
            "github=./.netrc"
```

## TL;DR

* Don't pass your secret to the Docker build command as an argument; your secrets get printed in the build logs and history.
* Using an SSH key limits your control over scopes.
* You can store secrets in a .netrc file via the GitHub Action and then pass it as a secret file to the Docker build command (see the .netrc section).
theredrad
938,869
Hooks
React's new feature is the React hook. It has made many difficult things easier since it came. The...
0
2021-12-28T19:35:58
https://dev.to/dev_learner/hooks-4d7p
Hooks are one of React's newer features, and they have made many difficult things easier. A React hook can be compared to a physical hook: just as a hook holds something in place, a React hook lets a component hold on to data. With hooks we can store data as state and read or update it wherever we need it, which makes components more flexible. Hooks can only be used in React, not in plain JavaScript, and they must be called at the top level of a function component: not inside loops, conditions, or nested functions.

**Pre-requisites for React Hooks**

- Node version 6 or above
- NPM version 5.2 or above
- The create-react-app tool for running the React app

**Custom Hooks**

A custom hook is a JavaScript function whose name starts with "use" and which can call other hooks. A custom hook is just like a regular function; the word "use" at the beginning tells React that this function follows the rules of hooks. Building custom hooks allows you to extract component logic into reusable functions.
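To make the "use" naming convention and the call-order rule concrete, here is a toy, runnable sketch. This is not real React: `useState` below is a simplified stand-in so the example runs in plain Node, and all names are illustrative.

```javascript
// Toy hook machinery: state lives outside the component, indexed by call order.
let state = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;                       // each useState call gets the next slot
  if (state[i] === undefined) state[i] = initial;
  const setState = (value) => { state[i] = value; };
  return [state[i], setState];
}

// A custom hook: its name starts with "use" and it composes another hook.
function useCounter(start) {
  const [count, setCount] = useState(start);
  return { count, increment: () => setCount(count + 1) };
}

// "Rendering" resets the cursor, which is why hooks must always run
// in the same order and never inside conditions or loops.
function render() {
  cursor = 0;
  return useCounter(0);
}

let c = render();
c.increment();
c = render();
console.log(c.count); // 1
```

Real React does essentially this bookkeeping for you, per component instance, which is why breaking the call order breaks your state.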
dev_learner
938,880
Speeding up geodata processing with feather
Previously, on speeding up geodata processing... In this post, I compare the read and write...
0
2021-12-28T20:15:04
https://dev.to/spara_50/speeding-up-geodata-processing-with-feather-4bmk
feather, geodata, pickle, geopandas
Previously, on [speeding up geodata processing](https://dev.to/spara_50/speeding-up-geo-data-processing-ig7)... In this post, I compare the read and write performance of the feather file format against the pickle file format.

> From Hadley Wickham's [blog](https://www.rstudio.com/blog/feather/):
> What is Feather?
> Feather is a fast, lightweight, and easy-to-use binary file format for storing data frames. It has a few specific design goals:
> - Lightweight, minimal API: make pushing data frames in and out of memory as simple as possible
> - Language agnostic: Feather files are the same whether written by Python or R code. Other languages can read and write Feather files, too.
> - High read and write performance. When possible, Feather operations should be bound by local disk performance.

Geopandas has supported the feather format since version 0.8, and the test used version 0.10.

```python
import geopandas as gpd
import time
from pyogrio import read_dataframe
import warnings; warnings.filterwarnings('ignore', message='.*initial implementation of Parquet.*')

# read shapefile
read_start = time.process_time()
data = read_dataframe("Streets.shp")
read_end = time.process_time()

# write feather test
write_start = time.process_time()
data.to_feather('test_feather.feather')
write_end = time.process_time()
write_time = write_end - write_start
print(str(write_time/60) + " minutes to write feather file")

# read feather test
read_start = time.process_time()
feather_df = gpd.read_feather('test_feather.feather')
read_end = time.process_time()
read_time = read_end - read_start
print(str(read_time/60) + " minutes to read feather file")
```

## Results

| r/w minutes | pickle | feather |
|-------------|--------|---------|
| read        | 0.92   | 1.07    |
| write       | 4.36   | 1.69    |

Read times are comparable, but feather writes are roughly 2.5x faster (1.69 vs 4.36 minutes). 
The write time is probably dominated by converting geometry to [Well Known Binary](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry), the geometry representation the feather format stores. The caveat is that geopandas' feather support is subject to change, as evidenced by the warnings filter in the import block.

## Thoughts

If your data is static or distributed, the pickle format may be better. Feather may be the right choice if you need to transfer geodata within a processing workflow with a file, e.g., from Python to R.
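The benchmark pattern above (bracketing each call with `time.process_time()`) can be reproduced with nothing but the standard library. A minimal sketch using `pickle`, with a synthetic list standing in for the street data:

```python
import pickle
import time

# Synthetic stand-in for a large dataset
data = list(range(1_000_000))

# Time a serialization round trip the same way the geopandas benchmark does
start = time.process_time()
blob = pickle.dumps(data)
write_time = time.process_time() - start

start = time.process_time()
restored = pickle.loads(blob)
read_time = time.process_time() - start

print(restored == data)  # True: the round trip is lossless
print(f"write: {write_time:.4f}s  read: {read_time:.4f}s")
```

`process_time()` measures CPU time of the current process, so it ignores time the OS spends on other processes; for disk-bound benchmarks like the feather/pickle comparison, wall-clock timing with `time.perf_counter()` would also be a reasonable choice.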
spara_50
938,975
How to emulate iOS on Linux with Docker
After several unsuccessful attempts, I was finally able to virtualize a macOS to run tests on an iOS...
0
2021-12-29T01:06:06
https://dev.to/ianito/how-to-emulate-ios-on-linux-with-docker-4gj3
linux, ios, virtualization, xcode
After several unsuccessful attempts, I was finally able to virtualize macOS to run tests on an iOS app I was working on. Before proceeding, you should know that this is not a stable solution and has several performance issues; however, for my purpose it did what I wanted.

We'll use QEMU to emulate a Mac, and inside it we'll use Xcode to emulate iOS. This process will not be lightweight.

The GitHub repository of [Docker OSX](https://github.com/sickcodes/Docker-OSX) explains how to use a physical iPhone via USB instead of emulating one, but I don't have an iPhone. xD

## Summary

- [What is Docker OSX](#what-is-docker-osx)
- [Hardware Specifications](#hardware-specifications)
- [Installation](#installation)
- [Running a app with React Native](#running-a-app-with-react-native)
- [Running a app with Cordova](#running-a-app-with-cordova)
- [Creating a connection of folders over SSH](#creating-a-connection-of-folders-over-ssh)
- [Final considerations](#final-considerations)

## What is Docker OSX

Docker OSX is a Docker image that uses QEMU so that we can emulate an operating system.

_Read more: [What is Docker?](https://www.docker.com/)_

## Hardware Specifications

My computer's specs are considered OK for this; however, I still noticed some lag while using Docker OSX + Xcode + Visual Studio Code + a dev server. (I was even able to heat my room with that much stuff running.)

- **OS:** Manjaro Linux x86_64
- **Kernel:** 4.19.220-1-MANJARO
- **Shell:** zsh 5.8
- **Resolution:** 1440x900
- **DE:** GNOME 41.2
- **WM:** Mutter
- **WM Theme:** Orchis-orange-compact
- **Icons:** Win11-purple-dark [GTK2/3]
- **Terminal:** gnome-terminal
- **CPU:** Intel i7-3770 (8) @ 3.900GHz
- **GPU:** NVIDIA GeForce GTX 1050 Ti
- **Memory:** 4105MiB / 15985MiB
- **SSD:** Crucial BX500 240gb (**an SSD is highly recommended**)

## Installation

First, you need to have Docker installed on your computer. 
I use Manjaro, so I just opened the terminal and typed:

**Install Docker**

`pacman -S docker`

**Start the Docker service**

`systemctl start docker.service`

**Enable the Docker service to start at boot**

`systemctl enable docker.service`

**Test Docker:**

`docker run hello-world`

![Docker run hello-world](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xgj3c3zue8x13y0tps7c.png)

Alright, now we'll download Docker OSX and run it using the command below:

`docker run -it --device /dev/kvm -p 50922:10022 -e DEVICE_MODEL="iMacPro1,1" -e WIDTH=1440 -e HEIGHT=900 -e RAM=8 -e INTERNAL_SSH_PORT=23 -e AUDIO_DRIVER=alsa -e CORES=2 -v /tmp/.X11-unix:/tmp/.X11-unix -e "DISPLAY=${DISPLAY:-:0.0}" -e GENERATE_UNIQUE=true -e MASTER_PLIST_URL=https://raw.githubusercontent.com/sickcodes/osx-serial-generator/master/config-custom.plist sickcodes/docker-osx:big-sur`

You can check what each flag means at the [Docker OSX GitHub page](https://github.com/sickcodes/Docker-OSX). Briefly, I specified the resolution, memory, processor cores, version and a few other things.

Then Docker OSX will be downloaded and initialized. When the emulator opens, select the option `macOS Base System`.

![Emulador Docker OSX](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fubit7hhx6x50d4rynwe.png)

When the system has booted, select `Disk Utility`.

![Emulador Docker OSX](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f5ksa4x7piwfsj4qe291.png)

Now, find the partition with the most storage space and select the option `Erase`.

![Docker OSX apagando sistema](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/irj97wizelw0hbfoamql.png)

To format, the chosen options must match these exactly:

![Formatação opções](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/drf8gki6wway416rb85h.png)

Click on `Erase`, wait for the process to finish, and then close the Disk Utility window. 
Next, select the option `Reinstall macOS Big Sur`, accept the terms, select the partition we just created (`macOS`), and start the installation. (This process can take 30 minutes to 1 hour.)

![Docker OSX Instalação](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rd67yhek2mjer3bkm1h5.png)

Then the system should restart. In my case it didn't, so I had to do it manually. In that case, close the QEMU window.

![Qemu error](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44eandfh00uh6swzrc0k.png)

In the terminal, type:

`docker ps -a`

to find the ID of our container, then start it with the command below:

`docker start ID`

![Docker IDS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0ythg12xpdnf5x9zhls.png)

Select `macOS Installer` and the installation will continue.

![Docker OSX Instalação](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xq39lesxjerdba85w41k.png)

Then the system will restart automatically (or not), so close QEMU again and start the container again.

![Docker OSX Instalação](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xe5lk5aq4xzw55j0afwj.png)

When the system boots, select the option `macOS Installer` and wait for the process to finish. At the end, the system will restart. (For real this time.)

macOS was installed successfully. Select the option `macOS`.

![Docker OSX Inicialização](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkzdtw9jozj91gi66v2f.png)

Once that's done, it will restart again; select the same `macOS` option. Well done, the welcome screen appears. This configuration part is quite slow, but after it finishes the system works very well.

![Tela de bem vindo macOS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/up2vajvs9p2l8g1vq39f.png)

Configure the system, but don't log in to your Apple ID yet. When that's done, the desktop appears; wait until the dock shows up, because after that the system is more stable. 
![Tela de instalação mac OS Sem dock](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ydprxzwqrnyp88tht3hf.png)

![Tela de instalação mac OS Com dock](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xoucf5ea5kl7r8cd2zr4.png)

Now we'll use `brew` to install packages faster. Open a terminal in macOS and install brew with the command below:

`/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`

Enter your password and wait for the process to finish. Now install `xcode` from the App Store.

![Apple store xcode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zrchopyo4u207fyou05a.png)

Now we can log in with our Apple ID.

![Login apple store](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/elma6mfjd6us7pst5y0l.png)

After that, wait for the installation. Then open `xcode`, accept the terms, and wait for the dependencies to install. Next, go to `Preferences -> Locations -> Command-Line Tools`; it will be blank, so select the option with the Xcode version.

When the installation is done, open the terminal again and install `cocoapods` with the command below. It serves as a package manager for Xcode projects.

`brew install cocoapods`

With that done, `macOS` is installed and configured to run our projects.

## Running a app with React Native

Okay, let's start with a React Native hello world to check that everything we've done so far works. __I'll only check iOS.__

Open the terminal and type:

**Install node:**

`brew install node`

**Install yarn (optional):**

`npm install -g yarn`

**Create a project with RN:**

`npx react-native init teste`

If it asks you to install cocoapods again, select the option with brew. 
**Enter the project directory:**

`cd teste`

**Enter the ios directory:**

`cd ios`

**Install dependencies:**

`pod install`

**Back to the root directory:**

`cd ..`

**List all available simulators:** (optional)

`xcrun simctl list devices`

**Run the project with Xcode:**

`npx react-native run-ios --simulator="iPhone 13"`

![App rodando react native](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jq34wwlfr17dubmhcy4s.png)

For a better experience, see the section: [Creating a connection of folders over SSH](#creating-a-connection-of-folders-over-ssh)

## Running a app with Cordova

Okay, let's start with a Quasar hello world to check that everything we've done so far works. __I'll only check iOS.__ Quasar uses Cordova/Capacitor for iOS and Android.

**Install node:**

`brew install node`

**Install yarn (optional):**

`npm install -g yarn`

**Install quasar:**

`yarn global add @quasar/cli`

**Install cordova:**

`yarn global add cordova`

**Create a project with Quasar:**

`quasar create teste`

**Enter the project directory:**

`cd teste`

**Add cordova to your project:**

`quasar mode add cordova`

**Enter the cordova directory:**

`cd src-cordova`

**Add iOS to the project:**

`cordova platform add ios`

**Verify that everything is okay:**

`cordova requirements`

**List all available simulators:** (optional)

`cordova emulate iOS --list`

**Install dependencies:**

`yarn`

**Back to the root directory:**

`cd ..`

**Install dependencies:**

`yarn`

**Run quasar in development mode on iOS:**

`quasar dev -m iOS -e "iPhone 8, 15.2"`

![macOS quasar](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zjoyvlx1pl9u41n9jy15.png)

For a better experience, see the section: [Creating a connection of folders over SSH](#creating-a-connection-of-folders-over-ssh)

## Creating a connection of folders over SSH

Now that our app is running on macOS, we have a problem: opening our code editor or IDE inside macOS is a very bad experience because of slowdowns, crashes, keyboard mapping issues and so on. 
So I researched a solution: sharing files over SSH. That way I can run the development server inside macOS and create a two-way connection where I can change files directly from Linux or from macOS, and the changes show up on both sides. This lets us keep the benefits of development mode, like fast refresh.

### Connection from linux to mac

First, we need to allow SSH connections with the mac login. To do this, open the terminal and type:

**Command to open the SSH configuration file for editing:**

`sudo nano /etc/ssh/sshd_config`

Search for `PasswordAuthentication`, set it to `yes`, and remove the `#` at the beginning of the line.

![Configuração sshd](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3aqms7coe0l4xwdga0j1.png)

Save the file. Go to `System Preferences -> Sharing -> Remote Login` and activate it for all users:

![macOS configuração](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/77o3ld5524ufbtvue507.png)

**Command to restart SSH:**

`sudo launchctl stop com.openssh.sshd && sudo launchctl start com.openssh.sshd`

Now, in the Linux terminal:

**Install sshfs:**

`sudo pacman -S sshfs`

**Get the container IP:**

`docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ID_CONTAINER`

**Create a folder:**

`mkdir projeto`

**Command to open a new connection to the mac:**

`sudo sshfs USER_MAC@IP_CONTAINER:/PATH/OF/PROJECT/ON/MAC /PATH/ON/LINUX -p 23`

**Example:**

![Exemplo de conexão](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwcd70m354t5hnviy9h9.png)

Alright, now we can open VSCode on Linux and edit the files directly on the mac. If you have another computer, you can make this connection from it and leave the computer with the emulator just emulating. 
### Connection from Mac to Linux

Same process as before, but the sshfs package on the mac can be installed using the command below:

**Install sshfs**

`brew install --cask macfuse && brew install gromgit/fuse/sshfs-mac`

On Linux:

**Command to edit the SSH configuration file:**

`sudo nano /etc/ssh/sshd_config`

Search for `PasswordAuthentication`, set it to `yes`, and remove the `#` at the beginning of the line.

![Configuração sshd](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3aqms7coe0l4xwdga0j1.png)

Save the file.

**Command to restart SSH on Manjaro:**

`sudo systemctl restart sshd.service`

On the Mac, we'll create a folder for the mount point.

**Create a folder:**

`mkdir projeto`

**Command to open a new connection to Linux:**

`sudo sshfs USER_LINUX@IP_HOST:/PATH/LINUX /PATH/MAC -p 23`

When we type the command, an error will occur.

![Erro MAC](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgq3r1245m3s94qe5yql.png)

Open the preferences and click `Allow`.

![MacOS Preferencias](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/smhcxmmfcvvxdmrf8ssg.png)

Restart your mac.

**Now we can open the connection:** (My SSH listens on a different port; the default is 22.)

![Conexão SSH OK](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wr871w7pezvkz80t1k53.png)

Once that's done, we can edit files from either side and the change is reflected on both.

## Final considerations

Thank you very much for reading this tutorial; by the way, it's the first one I've published in years. Any questions or suggestions (mostly about my English) are always welcome.

Oh, and never update your mac. :)
ianito
938,989
HTML-Form 📝
Going to start my journey and mark 100 days calendar.👨‍💻🚀 Day: 1/100 📅 Project - 1 🔨 # HTML...
0
2021-12-28T23:36:31
https://dev.to/itsahsanmangal/html-form-1k56
html, css, javascript, webdev
Going to start my journey and mark my 100 days calendar.👨‍💻🚀 Day: 1/100 📅

<u>**Project - 1 🔨**</u>

```
# HTML Form📝
> Client side form validation with HTML 📜

`Check required, length, email and password match 📧`

- Create form UI 👨‍💻
- Show error messages under specific inputs ⚠️
- checkRequired() to accept an array of inputs 🔣
- checkLength() to check min and max length 〽️
- checkEmail() to validate email with regex ✔️
- checkPasswordsMatch() to match the confirm password 🔑
```

Published ➡️ [Netlify](https://keen-jennings-d13a2c.netlify.app)

Source Code </>👉 [GitHub](https://github.com/itsahsanmangal/HTML-Form)
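The validators in the checklist might look something like this. This is a hedged sketch in plain JavaScript (the function names and the email regex are my assumptions, not taken from the linked repo):

```javascript
// Illustrative client-side validators; names and regex are assumptions.
function checkRequired(values) {
  return values.every((v) => v.trim().length > 0);
}

function checkLength(value, min, max) {
  return value.length >= min && value.length <= max;
}

function checkEmail(value) {
  // Simple (not RFC-complete) email pattern
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}

function checkPasswordsMatch(password, confirm) {
  return password === confirm;
}

console.log(checkRequired(['user', 'secret'])); // true
console.log(checkLength('abc', 3, 15));         // true
console.log(checkEmail('user@example.com'));    // true
console.log(checkEmail('not-an-email'));        // false
console.log(checkPasswordsMatch('a1', 'a1'));   // true
```

In the real form, each validator would toggle an error message under the corresponding input instead of returning a boolean.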
itsahsanmangal
939,071
Reporting and logging in PostgreSQL
Goals: log queries that run for 2 seconds (2000 ms) or more, and make sure the log files live in...
0
2021-12-29T02:48:46
https://dev.to/sihar/reporting-dan-logging-di-postgresql-5b07
logging, postgres
## Goals

- Log queries that run for 2 seconds (2000 ms) or longer
- Keep the log files in the /var/log/postgresql directory
- Generate one log file per day
- Record when each query was executed

The PostgreSQL version used here is 12.

Create the /var/log/postgresql directory and set its owner:

```
# mkdir /var/log/postgresql
# chown -R postgres.postgres /var/log/postgresql
```

Adjust the logging section of the /var/lib/pgsql/12/data/postgresql.conf file:

```
...
log_min_duration_statement = 2000
log_directory = '/var/log/postgresql'
log_filename = 'postgresql-%Y-%m-%d.log'
...
```

Reload the PostgreSQL configuration.
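The reload command itself is not shown above; one common way to apply these settings without restarting the server (an assumption on my part, since any reload mechanism works) is to ask PostgreSQL to re-read its configuration from SQL:

```sql
-- Run as a superuser; returns true when the reload signal was sent
SELECT pg_reload_conf();
```

On PGDG-style RHEL installs (which the /var/lib/pgsql/12 path suggests), `systemctl reload postgresql-12` typically achieves the same from the shell.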
sihar
939,109
Secret Key Cryptography
Hi. I have just finished writing my first "real" book (that is, not counting my 5 books of Sudoku...
0
2021-12-29T05:18:44
https://dev.to/contestcen/secret-key-cryptography-10ck
security, computerscience
Hi. I have just finished writing my first "real" book (that is, not counting my 5 books of Sudoku puzzles and SumSum puzzles.) The title is, you guessed it, "Secret Key Cryptography," and that is exactly what it's about. If anyone is into writing cryptography primitives, this book is loaded with ideas, 140 different types of ciphers in all, at every level of security, including a practical method for achieving the One-Time Pad. A big feature of the book is a novel method to accurately measure the security of a block cipher. The book is aimed at professional engineers, however I have made a great effort to make the book readable for non-technical people. You won't find a single footnote, or theorem or proof. You won't find integrals, summations, Venn diagrams, transforms or circuits. There isn't a single line of code. In short, there is nothing that would keep a smart high school student from learning cryptography. Let me say a bit about what else is in there, besides cryptography. There is a lot about text compression. Anyone who has worked in this field knows that arithmetic coding beats discrete codes, such as Huffman codes, and that adaptive coding like Lempel-Ziv-Welch beats fixed codes. So far as I know, my book has the first algorithm that combines arithmetic coding with adaptive coding to produce a compression that beats LZW, while having essentially the same speed and storage requirements. A portion of the book is already available through the Manning Early Access Program at https://www.manning.com/books/secret-key-cryptography There is a discount code **mlrubin** that gets you 50% off all versions of the book until Jan. 11.
contestcen
939,253
5 WEB UX LAWS EVERY DEVELOPER SHOULD KNOW
1. JAKOB’S LAW Users spend most of their time on other sites. This means that users prefer...
0
2021-12-29T08:21:55
https://dev.to/visualway/5-web-ux-laws-every-developer-should-know-f6h
javascript, webdev, design, programming
---
title: 5 WEB UX LAWS EVERY DEVELOPER SHOULD KNOW
published: true
description:
tags: javascript, webdev, design, programming
cover_image: https://i.imgur.com/aglkI3m.png
---

## 1. JAKOB’S LAW

Users spend most of their time on other sites. This means that users prefer your site to work the same way as all the other sites they already know. Websites do better the more standardized their design is.

![jakob](https://4ciipx2wt5iq3qj5qr4cu74k-wpengine.netdna-ssl.com/wp-content/uploads/2021/07/Jakobs-Law.png)

## 2. FITT’S LAW

The time it takes someone to select an object on the screen depends on how far the cursor is from the object and the size of the object. Thus, the longer the distance and the smaller the target, the longer it takes.

![fitt](https://4ciipx2wt5iq3qj5qr4cu74k-wpengine.netdna-ssl.com/wp-content/uploads/2021/07/Fittss-Law-1.png)

## 3. MILLER’S LAW

The average person can only keep 7 (plus or minus 2) items in their working memory. Organize content into smaller chunks to help users process, understand, and memorize it easily.

![miller](https://4ciipx2wt5iq3qj5qr4cu74k-wpengine.netdna-ssl.com/wp-content/uploads/2021/07/Millerss-Law.png)

## 4. LAW OF PROXIMITY

Objects that are near, or proximate to, each other tend to be grouped together.

- Proximity helps to establish a relationship with nearby objects.
- Proximity helps users understand and organize information faster and more efficiently.

![proximity](https://4ciipx2wt5iq3qj5qr4cu74k-wpengine.netdna-ssl.com/wp-content/uploads/2021/07/Law-of-Proximity.png)

## 5. HICK'S LAW

The time it takes to make a decision increases with the number and complexity of choices. Hick’s Law is a fairly commonsense idea: the more choices you present to a person, the longer they take to make a decision. It’s essentially a fancier way to describe the KISS rule: Keep It Simple, Stupid! 
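Hick's law is often stated quantitatively as T = a + b·log2(n + 1), where n is the number of choices and a, b are empirically fitted constants. A tiny sketch (the constants here are illustrative, not empirical):

```python
import math

def hick_time(n, a=0.0, b=1.0):
    """Hick-Hyman law: decision time grows logarithmically with the number of choices n."""
    return a + b * math.log2(n + 1)

print(hick_time(1))  # 1.0
print(hick_time(3))  # 2.0
print(hick_time(7))  # 3.0 -- doubling the choices adds a constant amount of time
```

The logarithm is the interesting part: going from 3 options to 7 costs the user about as much extra decision time as going from 1 to 3, which is why trimming a long menu pays off quickly.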
![hick](https://4ciipx2wt5iq3qj5qr4cu74k-wpengine.netdna-ssl.com/wp-content/uploads/2021/07/Hicks-Law.png) ### Thank you for reading If you liked this post, subscribe to our newsletter to never miss out on our blogs, product launches and tech news. [Subsribe to Visualway's newsletter](https://tinyletter.com/visualway)
visualway
939,264
Striver's SDE Sheet Journey - #8 Merge Overlapping Subintervals
Problem Statement :- Given an array of intervals where intervals[i] = [starti, endi], merge all...
0
2021-12-29T09:14:24
https://dev.to/sachin26/strivers-sde-sheet-journey-8-merge-overlapping-subintervals-4jff
beginners, programming, computerscience, codenewbie
> **<u>Problem Statement</u> :-** _Given an array of intervals where `intervals[i] = [starti, endi]`, merge all overlapping intervals, and return an array of the non-overlapping intervals that cover all the intervals in the input._

**Example 1:**

```
Input: intervals=[[1,3],[2,6],[8,10],[15,18]]
Output: [[1,6],[8,10],[15,18]]
```

**Explanation :** _Since intervals [1,3] and [2,6] are overlapping, we can merge them to form the interval [1,6]._

_So, in this problem, we need to merge the overlapping intervals, that is, intervals whose start point lies between the start and end points of another interval._

## <u>Solution 1</u>

**step-1** First, we need to **sort** the `intervals` by their starting point. By doing this we can easily merge overlapping adjacent intervals.

**unsorted intervals**

![dsa](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7ahbqnqqn5rplzzmmq7.png)

**sorted intervals**

![dsa](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iig6aamvyjncml77sgcm.png)

**step-2** Take an interval and compare its `end` with the next interval's `start`.

> **1** If they overlap, update the `end` of the first interval with the maximum end of the overlapping intervals.

> **2** If they do not overlap, move to the next interval.

See below the Java version of this approach. 
> Java

```java
// Uses java.util.* (Arrays, List, ArrayList)
class Solution {
    public int[][] merge(int[][] intervals) {
        final int start = 0, end = 1;
        List<int[]> validIntervals = new ArrayList<>();

        // sort the array by starting point
        Arrays.sort(intervals, (a, b) -> Integer.compare(a[0], b[0]));

        // store the first interval as valid
        validIntervals.add(intervals[0]);

        for (int i = 1; i < intervals.length; i++) {
            int[] interval = intervals[i];
            int[] validInterval = validIntervals.get(validIntervals.size() - 1);

            // if the intervals overlap, merge them
            if (validInterval[end] >= interval[start]) {
                validInterval[end] = Math.max(interval[end], validInterval[end]);
            } else {
                validIntervals.add(interval);
            }
        }

        return validIntervals.toArray(new int[validIntervals.size()][]);
    }
}
```

**Time Complexity :** sorting the intervals array + traversing through the array => **O(nlogn)** + **O(n)**

**Space Complexity :** creating a new list of up to n intervals => **O(n)**

Thank you for reading this blog. If you find something wrong, let me know in the comment section.
sachin26
939,288
How to specify your Xcode version on GitHub Actions
Hello ! I’m Xavier Jouvenot and in this small post, I am going to explain how to specify your Xcode...
15,933
2021-12-29T18:05:29
https://10xlearner.com/2021/12/29/how-to-specify-your-xcode-version-on-github-actions/
cpp, howto, tutorial, githubactions
---
title: How to specify your Xcode version on GitHub Actions
published: true
date: 2021-12-29 05:00:00 UTC
tags: Cpp,Howto,tutorial,GitHubActions
canonical_url: https://10xlearner.com/2021/12/29/how-to-specify-your-xcode-version-on-github-actions/
series: Xcode switch
---

Hello! I’m Xavier Jouvenot and in this small post, I am going to explain how to specify your Xcode version on GitHub Actions.

_Self promotion_: You can find other articles on computer science and programming on my [website](www.10xlearner.com) 😉

## Problematic

If you have CI/CD processes, or if you want to create some for your project, it may be wise to make sure that you can always compile your project using the same version of Xcode as the one you are using on your machine, or the one used by your development team.

Another use case where specifying a version of Xcode can be very useful is upgrading the version you, and/or your dev team, are actually using. Indeed, by specifying a newer version of Xcode in your CI/CD configuration, you can make sure everything works with the new version before asking everybody to upgrade the version of Xcode on their machines, and have a smooth transition.

## Solution

The short answer, for the people who don’t want to read through the entire article (I know you do that! I do it too 😆), is to insert the following setup in your GitHub Actions workflow, before trying to compile anything:

```yaml
jobs:
  build:
    runs-on: macos-latest
    steps:
      - uses: maxim-lobanov/setup-xcode@v1
        with:
          xcode-version: 'version_number'
```

So, for example, if you want to select version `13.1` of Xcode on GitHub Actions, it would look like this:

```yaml
jobs:
  build:
    runs-on: macos-latest
    steps:
      - uses: maxim-lobanov/setup-xcode@v1
        with:
          xcode-version: '13.1'
```

This solution uses a free GitHub Action, under the MIT license, named "setup-xcode", and you can find all the details about it on the [GitHub Marketplace](https://github.com/marketplace/actions/setup-xcode-version). 
If you are not sure which version of Xcode you are actually using on GitHub Actions, you can always look at the documentation for the image you are using ([osx11](https://github.com/actions/virtual-environments/blob/main/images/macos/macos-11-Readme.md#xcode) or [osx10.15](https://github.com/actions/virtual-environments/blob/main/images/macos/macos-10.15-Readme.md#xcode)), or you can add the following step to your workflow and see the information directly in the GitHub Actions logs:

```yaml
jobs:
  build:
    runs-on: macos-latest
    steps:
      - name: "Displays Xcode current version"
        run: sudo xcode-select -p
```

## Diving deeper with concrete examples

To make sure that, even in the future, you have working examples to look at, I have made a [GitHub repository](https://github.com/Xav83/tutorials) with a GitHub Actions workflow for each version of Xcode present on each OSX virtual environment available.

In the [xcodes.yml file](https://github.com/Xav83/tutorials/blob/main/.github/workflows/xcodes.yml) in the GitHub Actions folder (`.github/workflows`) of the repository, you will see jobs set up for each OSX environment using a matrix, and the ["setup-xcode" action](https://github.com/marketplace/actions/setup-xcode-version), which selects the Xcode version defined in the matrix.

![Xcode jobs in GitHub Actions](https://raw.githubusercontent.com/Xav83/Xav83.github.io/master/res/GitHub%20Actions/Xcode_jobs.png)

![Xcode job detail in GitHub Actions](https://raw.githubusercontent.com/Xav83/Xav83.github.io/master/res/GitHub%20Actions/Xcode_job.png)

I will maintain those GitHub Actions workflows when GitHub updates their environment images, so that we will always be able to use this repository as a reference! 
So if you see a version of Xcode or a version of OSX missing, or a way to improve the GitHub Actions script, do not hesitate to create a new issue 😉

* * *

Thank you all for reading this article. And until my next article, have a splendid day 😉

## Interesting links

- [GitHub repository with the actual working code up to date](https://github.com/Xav83/tutorials)
- [Xcode releases](https://xcodereleases.com/)
- ["setup-xcode" GitHub Action](https://github.com/marketplace/actions/setup-xcode-version)
- [GitHub Actions Microsoft-hosted agents](https://github.com/actions/virtual-environments#available-environments)
- [Xcode version for OSX 11](https://github.com/actions/virtual-environments/blob/main/images/macos/macos-11-Readme.md#xcode)
- [Xcode version for OSX 10.15](https://github.com/actions/virtual-environments/blob/main/images/macos/macos-10.15-Readme.md#xcode)
- [10xlearner website](www.10xlearner.com)
10xlearner
939,316
Quickly ship your changes to git? Use 'ship'
Ship is a small cli command I wrote that automatically adds and commits all your changes and pushes...
0
2021-12-29T10:25:31
https://dev.to/karsens/quickly-ship-your-changes-to-git-use-ship-1i4c
bash
Ship is a small CLI command I wrote that automatically adds and commits all your changes and pushes them to your current branch. It then lets you know whether it succeeded or failed.

To add ship to your shell, copy and paste this into your terminal:

```shell
echo 'ship () { BRANCH=$(git branch --show-current); git add . && git commit -m "${1:-Improvements}" && git push -u origin "$BRANCH" && say you shipped it || say something went wrong; }' >> ~/.zshrc && source ~/.zshrc
```

The single quotes keep `$(git branch --show-current)` and `$BRANCH` from being expanded at the moment you run the `echo`, so the function resolves the branch each time you call `ship`.

Go ahead and try it! Hope you like it ;)

P.S. Only tested on macOS (`say` is a macOS command).
karsens
939,349
7 front-end interview processes I did in December 2021
I recently went through the task of getting myself a new job and, to do this, I took part of 7...
0
2021-12-31T09:08:49
https://dev.to/anabella/7-front-end-interview-processes-i-did-in-december-2021-5484
career, webdev, javascript, react
I recently went through the task of getting myself a new job and, to do this, I took part in **7 simultaneous interviewing processes for front-end roles** with React and Typescript. I learned a lot as days, weeks, and interviews went by. I learned about myself and about the way companies evaluate candidates. I think this knowledge, paired with a real view into what front-end interviewing looks like today, could be really useful for other people in search of a new job and teams who are looking to hire (to get interview ideas!). In this article I'll go through each of the companies I interviewed with (without giving names, sorry paparazzi! 📸), outline the process and its stages, and try to give my view on the pros and cons of each approach. **_Disclaimer_** > _To be completely honest, doing 5-6 interviews per week **wasn't such a wonderful idea**._ > > _It was stressful, tiring and came with a constant state of context switching. Interviewing is, in a way, a performance and you have to be at the top of your game on every call, 'cause it won't matter how well it went with the other company you spoke to earlier in the day._ > > _I'd recommend job seekers **focus their energy on 2, max 3 processes at the same time.** Job hunting really is a full time job, and limiting your options will help you focus on the ones you're really interested in._ --- ## Company 1️⃣ <table> <tbody> <tr> <td><strong>Size</strong></td> <td>< 20</td> </tr> <tr> <td><strong>Domain</strong></td> <td>work management tool</td> </tr> <tr> <td><strong>Position</strong></td> <td>front-end developer</td> </tr> <tr> <td><strong>Process</strong></td> <td> <ul> <li> Initial call with one of the founders <em>(45 min)</em> </li> <li> Show n' tell of a project with a FE engineer <em>(1 hr)</em> </li> <li> Call with the other founder <em>(45 min)</em> </li> <li> Demo of the product (at my request) <em>(30 min)</em> </li> <li> Call with the FE technical lead <em>(1 hr)</em> </li> </ul> </td> </tr> <tr> 
<td><strong>Experience</strong></td> <td> good! 👍🏼 </td> </tr> </tbody> </table> ### My take on it **The good 😇** - Fair and easygoing process - Show and tell of a project is one of the best ways to evaluate a candidate's tech skill without going through the dreaded "live coding" or the tedious "take home test" - "No wrong answers" approach to tech talks - Talks with C-level people (founders) were very interesting and laid back **The bad 😈** - The talk with the front-end lead was confusing. They seemed indecisive and sloppy and not a "leader type". This had a big influence on my decision to drop out **The ugly 👹** - They were trying to hire remotely but hadn't figured out anything about how to do it **Conclusion** I dropped out before they made an offer (they said they were ready to do so). I realized I wanted to join a bigger engineering organization. --- ## Company 2️⃣ <table> <tbody> <tr> <td><strong>Size</strong></td> <td>> 3000</td> </tr> <tr> <td><strong>Domain</strong></td> <td>technical tools for developers</td> </tr> <tr> <td><strong>Position</strong></td> <td>front-end engineer</td> </tr> <tr> <td><strong>Process</strong></td> <td> <ul> <li> Initial call with an internal recruiter <em>(30 min)</em> </li> <li> Algorithms live coding (w/ study material provided by them) <em>(1 hr)</em> </li> <li> Take home test <em>(~a week)</em> </li> <li> "More complex" live coding exercise <em>(1 hr)</em> </li> <li> Software design (FE) with whiteboarding <em>(1 hr)</em> </li> <li> Final talk with an engineering manager <em>(1 hr)</em> </li> </ul> </td> </tr> <tr> <td><strong>Experience</strong></td> <td> bad 😒 </td> </tr> </tbody> </table> ### My take on it: **The good 😇** - Clearly structured process - They provided study material for the algorithms test - They provided thorough feedback after dropping me - They sent an anonymous Greenhouse survey about my experience **The bad 😈** - Too many technical tests, all of them stressful - Slow (~weekly) communication - 
Unclear live coding test (they didn't say there were 2 problems so I took too much time on the first and simpler one) - Untrained technical interviewers reading questions from a script **The ugly 👹** - Dropping an experienced candidate based on their ability to solve basic algorithms while under peer and time pressure 🚩 (personally, that's not a company I want to work for) - During the algorithms call they either gave me false tips (leaning me into the wrong approach) or were too ambiguous with their words (I *really really* hope it's the latter) **Conclusion** They dropped me so I might be a bit bitter about it but: cracking long-solved, highly googleable problems or implementing existing algorithms is very far from the value I can bring to a product team. If that's the first thing they care about, then that's not a company for me. --- ## Company 3️⃣ <table> <tbody> <tr> <td><strong>Size</strong></td> <td>~ 300</td> </tr> <tr> <td><strong>Domain</strong></td> <td>payments</td> </tr> <tr> <td><strong>Position</strong></td> <td>Senior front-end engineer</td> </tr> <tr> <td><strong>Process</strong></td> <td> <ul> <li> Initial call with in-house recruiter <em>(30 min)</em> </li> <li> Technical talk with a FE developer <em>(1 hr)</em> </li> <li> FE system design w/ 2 devs (more in this below!) <em>(1 hr)</em> </li> <li> Values interview with an eng. manager and a non-technical teammate <em>(1 hr)</em> </li> <li> Meet the potential team (at my request) <em>(45 min)</em> </li> </ul> </td> </tr> <tr> <td><strong>Experience</strong></td> <td> very good! 
❤️ </td> </tr> </tbody> </table> ### My take on it: **The good 😇** - All kind and nice people, all around - The in-house recruiter took the time to speak with me after *every* interview, this built a friendly bond - (Almost) no live coding, no whiteboarding, no take home tests - Favorite interview (of them all!): **FE system's design** - No whiteboarding - Look at app screen designs, break them down, find problems, think of implementation, evaluate options and their pros and cons. - 👆🏻 Literally one of the things you'll do the most while at the job (aside from writing/reviewing code). - Finally, a small algorithms coding challenge (bit of a surprise :/ ) but I was already warmed up and confident and it went well :) **The bad 😈** - The live coding part of that interview came as a surprise, which is usually seen as a bad practice. Candidates should know about every part of the interview right when it starts. It gives them the chance to manage time and energy accordingly. - I spoke to the team lead and a teammate of my potential team. They weren't ready to pitch an interesting challenge for my position, which in the end resulted in my loss of interest. **The ugly 👹** - **Managers need to be trained in diversity matters** - When I asked the manager I spoke to about how they were giving a voice to underrepresented people in the company he said "we have an open doors policy, anyone can talk to anyone, no matter their rank" - For the record, **"open doors" is not enough for underrepresented folks**, as most of us won't feel entitled to speak our minds openly - Humble advice: put underrepresented people in situations where they are _expected_ to speak their minds **Conclusion** They made an offer which was tough to say no to (no pun 🐴). But I felt like the work I'd be doing wasn't very clear and the team lead fell really short in pitching the project, so with a heavy heart I went a different way. 
--- ## Company 4️⃣ <table> <tbody> <tr> <td><strong>Size</strong></td> <td>< 20</td> </tr> <tr> <td><strong>Domain</strong></td> <td>logistics</td> </tr> <tr> <td><strong>Position</strong></td> <td>software engineer</td> </tr> <tr> <td><strong>Process</strong></td> <td> <ul> <li> Initial call with third-party recruiter <em>(30 min)</em> </li> <li> Initial call with CTO <em>(45 min)</em> </li> <li> Take home test <em>(~a week, took me about 6 hrs)</em> </li> <li> Call to review take home test + add a feature <em>(1 hr)</em> </li> <li> Call with CEO / founder <em>(45 min)</em> </li> <li> Call with 2 team members (at my request) <em>(30 min each)</em> </li> </ul> </td> </tr> <tr> <td><strong>Experience</strong></td> <td> regular 😕 </td> </tr> </tbody> </table> ### My take on it: **The good 😇** - They were very clear about their intention to make me an offer almost from the start **The bad 😈** - The take home test was really low quality. - They gave me a boilerplate project and some designs to implement. There were no specs or acceptance criteria, icons couldn't be exported, entities were inconsistently named, and it was hard to match the data coming back from the API with the designs. **The ugly 👹** - Bad manners from a C-level interviewer - During the review of my solution the CTO questioned the file structure of the project (wut?) and seemed to be trying to find things I "did wrong". - Later on, when I was verbosely and carefully refactoring my code to introduce a new feature he interrupted me because he didn't "understand what I was doing". - After I was done with a working and clean implementation he said "there was an easier and faster way to get to the same result". - All of this was inconsistent with the external recruiter's claims that they were incredibly excited for me to join. - In a later call with the CTO he asked me to name which other companies I was interviewing with and, even though this made me really uncomfortable, I told him. 
I wish I had stood my ground and refused to share that info. **Conclusion** They made a three-part offer (different distributions of salary and stock) which I declined. --- ## Company 5️⃣ <table> <tbody> <tr> <td><strong>Size</strong></td> <td>~ 150</td> </tr> <tr> <td><strong>Domain</strong></td> <td>Finance</td> </tr> <tr> <td><strong>Position</strong></td> <td>Senior front-end engineer</td> </tr> <tr> <td><strong>Process</strong></td> <td> <ul> <li> Initial call with third-party recruiter <em>(30 min)</em> </li> <li> Technical talk with 2 front-end devs <em>(60 min)</em> </li> <li> Live coding with 2 devs (they shared the tasks in advance) <em>(90 min)</em> </li> <li> Round table w/ people from different teams/areas <em>(60 min)</em> </li> </ul> </td> </tr> <tr> <td><strong>Experience</strong></td> <td> great 1st impression, bad ending 💔 </td> </tr> </tbody> </table> ### My take on it: This was the company I was most excited about, and the one which broke my heart when they dropped me. **The good 😇** - They have public salary bands and career paths - The process was short and focused - They shared a highly realistic project (with tickets) in advance, which I'd have to work on during the live coding **The bad 😈** - We spent a lot of time during the live coding debugging incidental things which they suggested but then weren't sure how to implement. **The ugly 👹** - 2 weeks have passed and they still haven't provided any feedback about what made them drop me after the live coding. I've requested it twice, no answer 🚩 **Conclusion** No matter how cool a company can look, they need to walk the walk and treat their candidates with respect. I was sad they dropped me, but the fact that they've ghosted me for feedback makes me feel they weren't as cool as they presented themselves. 
--- ## Company 6️⃣ <table> <tbody> <tr> <td><strong>Size</strong></td> <td>~ 150</td> </tr> <tr> <td><strong>Domain</strong></td> <td>Open source messaging</td> </tr> <tr> <td><strong>Position</strong></td> <td>Front-end engineer</td> </tr> <tr> <td><strong>Process</strong></td> <td> <ul> <li> Initial call with third-party recruiter <em>(30 min)</em> </li> <li> Initial call with in-house HR person (to whom I was meant to ask the questions) <em>(45 min)</em> </li> <li> "Domain-agnostic" take home test <em>(~a week, should take 3-4 hrs)</em> </li> <li> Pair programming on a very basic (and legacy) react app <em>(60 min)</em> </li> <li> Prep call with HR people for the systems design interview <em>(30 min)</em> </li> <li> "Deceivingly simple" systems design discussion with the VP of Engineering and a team lead <em>(60 min)</em> </li> <li> Talk with a member of the front-end team (at my request) <em>(30 min)</em> </li> </ul> </td> </tr> <tr> <td><strong>Experience</strong></td> <td> good! 👍🏼 </td> </tr> </tbody> </table> ### My take on it: **The good 😇** - All interesting, respectful, and kind people - Fun and simple take home test, actually doable in 2-3 hrs (although I spent more 'cause I wanted to get it just right, that's just me) - The pair programming interview was *actually* a pair programming exercise (not live coding in disguise). **The bad 😈** - A bit of a long process, too many technical tests for my taste. The one focused on React was very outdated (class components, no Typescript). It didn't reflect the actual state of the app I'd be working on. **The ugly 👹** - The person to whom I spoke when I requested a talk with a team member wasn't really prepared to pitch the project and that had the biggest impact on my decision. **Conclusion** They made an offer, which I declined in favor of another one (read below!). But they said the terms of the offer would stand for about 6 months! How nice! 
😍 --- ## Company 7️⃣ <table> <tbody> <tr> <td><strong>Size</strong></td> <td>~ 300</td> </tr> <tr> <td><strong>Domain</strong></td> <td>Payments</td> </tr> <tr> <td><strong>Position</strong></td> <td>Software engineer</td> </tr> <tr> <td><strong>Process</strong></td> <td> <ul> <li> Initial call with third-party recruiter <em>(30 min)</em> </li> <li> Pair programming to which I had to bring the problem to work on <em>(60 min)</em> </li> <li> Technical / values talk with an engineering manager <em>(90 min with a break half-way)</em> </li> <li> Values talk with the in-house recruiter <em>(45 min)</em> </li> <li> Meet the team and the team lead (at my request) <em>(30 min each)</em> </li> </ul> </td> </tr> <tr> <td><strong>Experience</strong></td> <td> good! 👍🏼 </td> </tr> </tbody> </table> ## My take on it **The good 😇** - Short and fast paced process - Every interviewer gave feedback at the end of each interview (including whether I passed!) - Pair programming was *actually* pair programming (not live coding in disguise) - Bring-your-own coding challenge felt like I was in control of how I'd be evaluated - They arranged 2 calls to meet my potential team - All the talks gave me a clear sense of what it's like to work with them **The bad 😈** - I was a bit confused / annoyed to have to "put in work" preparing a challenge to bring before I even spoke to anyone in the company. That might have been different if I had been contacted by an in-house recruiter and learned more about them first. **The ugly 👹** - The person doing the pair programming with me had very little knowledge about React. This was beneficial to me because I love explaining React to people, but we might have gotten more done if they had been front-end focused. **Conclusion** They made an offer and I accepted it! 
🎉 The biggest selling point for me was the ways of working (XP/Lean, pair programming by default) combined with the fact that I'd be way out of my comfort zone working a lot on backend projects and being the person-of-reference for front-end and React matters. --- ## My overall learnings 🧠 ### For candidates 👩🏻‍💻 **Show and tell interview** - Bring something you're really excited about or proud of - It can be something small, you can even build it specifically for the interview (that way it'll show your most up-to-date skills!) - Start with _why_ you wanted to build that - Think in advance about how you're going to walk through it, the reasons for your decisions and things you'd like to add or improve on **Live coding** - Make sure you know how many exercises you'll have to go through - You can even ask how much time they think they should take. That way you can adapt your rhythm. **Helping your decision** - If you have doubts about joining a company, or if you are trying to decide between competing offers, requesting a call with potential teammates can help a lot in picturing what the day-to-day work will feel like. To me that was a dealmaker because: - I'll be working with a certain group of people - In certain projects - And with a certain dynamic - 👆🏻 that should have more weight in your decision than anything else, since it'll have the most impact on you while at the job. - In my experience companies and recruiters will be more than happy to arrange a call with the team for you at a final stage of the process **Decide how much you want to share** - You'll probably get asked about other processes you're taking part in. - Companies often ask this to make sure they're not lagging behind timewise. - They might ask you about "where they stand" in your preference list. - They might ask you for details of other companies, size, domain. - Be as honest or evasive as you want. None of this should affect your chances of getting an offer. 
**Just don't give them names** **Ask questions, give feedback** - Everyone knows you're supposed to bring questions to every interview. If you didn't, now you do! - Ask about things you care about, anything that'll help you picture yourself working with them or make up your mind about joining. - Take the opportunity to give feedback to companies and interviewers after each call. - Include what you liked about it and what could be improved - This, if done right, could make you stand out as a candidate! ## For hiring teams 🏢 **Show and tell interview** - This is a great way of evaluating a candidate's experience and skill without putting them on the spot! - Instead, it puts them in control of the situation and you'll get to see a lot more of what it's like to work with them on a daily basis. - You won't see much of that 👆🏻 with a coding kata or an over-simplified feature development exercise. **Train people on how to interview candidates** - Especially for bigger organizations: train your interviewers in conducting conversational and technical interviews. They're the face of the company to potential employees. **Live coding interviews** - Especially for kata style ones, make sure the candidate is aware of how many problems they'll go through during the call and give them an estimated time budget for each one. - Mention if they're going overtime with one problem and give the option to move on to the next or work on solving the current one. **Pitching the project** - When reaching the final stages of interviewing, especially if you're a small/medium company, prepare your interviewers to pitch the team and the company to candidates - Those final conversations usually make or break the deal for people trying to decide between more than one offer. - If you have all positive feedback across the board about a candidate, **make sure you can give them an offer that's interesting to them**. 
- By this **I don't mean money**: most experienced candidates will get similar offers and you can probably match whatever they're getting somewhere else. - Pitch them a position and a project that they'll feel excited about, and it might even be worth not going for the highest paying offer! **Give feedback to candidates** - This can be before the interview ends - It can be in "catch up" talks with the recruiter - It can be as a warm up before making an offer - **And it should definitely be there if the company drops a candidate**, especially after the candidate requests it. - Idea 💡: ask candidates for feedback on each interview! --- That's it, thanks for reading this far, please leave comments about your own experiences interviewing and being interviewed. I hope some of this is useful for you in 2022!
anabella
939,419
Blur Animation: CSS Transition
Little experiment for creating a blur movement effect using CSS animation.
0
2021-12-29T11:42:23
https://dev.to/argonauta/blur-animation-css-transition-2ndc
codepen, javascript, css, tutorial
<p>Little experiment for creating a blur movement effect using CSS animation.</p> {% codepen https://codepen.io/riktar/pen/bdEVPP %}
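The pen's source is not reproduced in this post, so purely as an assumed illustration of the idea (not the pen's actual code), a blur "movement" effect can be as small as a transition on `filter` combined with one on `transform`:

```css
/* Assumed minimal sketch: the element softens while it slides,
   which reads visually as motion blur. */
.blur-move {
  filter: blur(0);
  transition: filter 0.4s ease-out, transform 0.4s ease-out;
}
.blur-move:hover {
  filter: blur(6px);
  transform: translateX(24px);
}
```

Animating `filter` and `transform` keeps the effect on the compositor in most browsers, so it stays smooth.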
argonauta
939,471
Force Https with .htaccess
After installing an SSL certificate, your website will be accessible via HTTP and HTTPS. However, it...
0
2021-12-29T12:34:00
https://dev.to/dhuettner/force-https-with-htaccess-5bb9
htaccess, https
After installing an SSL certificate, your website will be accessible via HTTP and HTTPS. However, it is better to use only the latter, because it encrypts and additionally secures the data of your website. With some hosts there is a setting in the web interface to force HTTPS with just one click. Unfortunately, it happens again and again that this setting does not work correctly with various CMS systems. You can also use the .htaccess file (which we recommend) to force an HTTPS connection. This tutorial shows you how to do that. ## Force HTTPS for all traffic One of the many options you have with .htaccess files is to redirect HTTP traffic to HTTPS. You can enable the feature to force HTTPS for all incoming traffic by following the steps below: 1. Go to the file manager in your hosting panel and open the .htaccess in the appropriate folder where your domain points to. If the file is not there, you may need to create it and/or make hidden files visible. 2. Scroll down until you find "RewriteEngine On" and paste the following lines of code below it: ```apache RewriteEngine On RewriteCond %{HTTPS} off RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] ``` 3. Save the change. > IMPORTANT: Make sure that the line "RewriteEngine On" does not appear twice. If the line already exists, simply copy the rest of the code without it. ## Forcing HTTPS for a specific domain If you have two domains that both point to the same website, but only the first domain should be redirected to https (for whatever reason), you can use the following code: ```apache RewriteEngine On RewriteCond %{HTTP_HOST} ^yourdomain-1.de [NC] RewriteCond %{HTTPS} off RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] ``` ## Force HTTPS for a specific folder The .htaccess file can also be used to force HTTPS for specific folders. However, the file should be placed in the folder that is to establish the HTTPS connection. 
```apache RewriteEngine On RewriteCond %{HTTPS} off RewriteRule ^(folder1|folder2|folder3) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] ``` Replace the folder names with your own directory names. After you have made the changes, clear your browser's cache and try to connect to your website via HTTP. If everything was added correctly, it will now redirect to the HTTPS version. ## Conclusion Congratulations! You have just successfully edited your .htaccess file and redirected all HTTP traffic to the secure version, HTTPS. Depending on the platform you developed your site on, there may be alternative methods to enable this feature. If you have any tips, tricks, or suggestions to share with us, we welcome your comments! Connect with me on [Twitter], [LinkedIn] and [GitHub]! Visit my [Blog] for more articles like this. [Blog]: https://waterproof-web-wizard.com/blog [Twitter]: https://www.twitter.com/proofwebwizard [LinkedIn]: https://www.linkedin.com/company/waterproof-web-wizard
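As a footnote to the rules above: the first rule simply answers any `http://` request with a 301 pointing at its `https://` twin, keeping host, path and query intact. Here is a hypothetical Python sketch (not part of the tutorial, just an illustration of the mapping) you can use to reason about what the rewrite does:

```python
# Hypothetical sketch of the "force HTTPS for all traffic" rule:
# http:// URLs get (301, https-equivalent); https:// URLs pass through.
from urllib.parse import urlsplit, urlunsplit

def force_https(url):
    """Return (status, location) if the redirect would fire, else None."""
    parts = urlsplit(url)
    if parts.scheme != "http":  # mirrors: RewriteCond %{HTTPS} off
        return None
    # mirrors: RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
    return 301, urlunsplit(("https",) + tuple(parts)[1:])

print(force_https("http://example.com/shop?id=1"))
```

The `[L,R=301]` flags mean "last rule, permanent redirect", which is why browsers and search engines remember the HTTPS version.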
dhuettner
939,479
Crypto business that makes you a Billionaire (Part-1)
Crypto is booming at a rapid pace. According to projections, digital money would be over $1100...
0
2021-12-29T12:49:15
https://dev.to/avalaauren/crypto-business-that-makes-you-a-billionaire-part-1-bg1
businessideas, cryptobusinessideas, cryptoexchange
Crypto is booming at a rapid pace. According to projections, the digital money market will be worth over $1100 million by 2025. Cryptocurrencies are expected to grow at an annual rate of around 10%, which indicates that we will be talking about them even more in the future. Customers already buy and sell items using cryptocurrency through crypto businesses. In addition, the growing reach of cryptocurrencies gives people more confidence to start a crypto business of their own. Cryptocurrencies are accepted by top companies like Microsoft, AT&T and Wikipedia; these are organizations where you can pay with cryptos like Bitcoin. Looking to step into the crypto business, but only have a blurry idea of where and how to start? The good news is that there are a plethora of opportunities at hand to make a great living! Here are ideas for starting a highly profitable business in 2022. The cryptocurrency business is the topmost business idea among millionaires. The first one is Cryptocurrency Exchange Platform Development. A cryptocurrency exchange platform is a business that allows users to trade, buy and sell their assets using fiat money or other digital currencies. There are two ways to create your crypto exchange platform: 1) start from scratch, or 2) use a clone script — more secure, safe, and cost-efficient. There are a lot of crypto exchange clone scripts available in the market, such as Binance, LocalBitcoins, WazirX, etc. But for the inevitable features and functionality, most people prefer the [Binance Clone Script](https://maticz.com/binance-clone-script). In 2020, the Binance exchange generated over $570 million in revenue. Before you launch, you have to find the best Crypto Exchange Development Company to build your own Crypto Exchange platform like Binance. Maticz is a pioneering Crypto Exchange Development Company that has delivered 200+ successful projects worldwide. Before choosing us, take a look at proven demos from our experts. 
([Get a Live Demo](https://maticz.com/requestquote)). We also offer [Pancakeswap Clone Script](https://maticz.com/pancakeswap-clone-script), [NFT Development Services](https://maticz.com/nft-development-company), [BSC NFT Marketplace](https://maticz.com/bsc-nft-marketplace-development) and [BEP20 Token Development](https://maticz.com/bep20-token-development). Hope this article helps you step into the crypto business. Stay connected with us for more business ideas; we will engage with you soon in part-2.
avalaauren
939,482
Top 15 Web Application Templates with Perfect Design [2021]
Web application templates are turnkey solutions for your website. They are affordable and easy to adjust. In this article, we list our favorite web app templates and explain which ones we recommend and why.
0
2021-12-29T13:05:05
https://flatlogic.com/blog/web-application-templates-with-perfect-design/
webapp, webdesign, webdev
--- title: Top 15 Web Application Templates with Perfect Design [2021] published: true description: Web application templates are turnkey solutions for your website. They are affordable and easy to adjust. In this article, we list our favorite web app templates and explain which ones we recommend and why. tags: webapp, webdesign, webdev //cover_image: ![Top 15 Web Application Templates with Perfect Design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0ay8znwdt1wtq7q4jn1.png) canonical_url: https://flatlogic.com/blog/web-application-templates-with-perfect-design/ --- # Introduction The most general meaning of the word template is an object used to produce or mould new objects with a high degree of uniformity. This is an old concept. A well-shaped spearhead mould meant the whole army could be supplied with spears of adequate quality. Uniformity in a set of bricks increased the chances of a building staying intact for generations to come. It was all about efficiency over individuality. The principle of a template originated in harsh times when custom solutions were an unaffordable luxury for all but very few. Today words like uniformity and mass production get a mixed reception. As the technology evolved, mass production became a given. We started exaggerating its minuses and taking its pluses for granted. This is understandable. When students of prestigious universities in developed countries say they support socialism they usually mean Scandinavian countries, not North Korea. When a biker says he wants a custom motorcycle, he probably means Orange County Choppers, not an old bicycle someone fixed a motor to. Don't get this wrong, wanting premium custom solutions is perfectly fine. Not only are they tailored to your taste, but they also have this air of premium build that's hard to resist. Well, they do if they were made by true masters. 
If someone offered you a mass-produced Harley and the aforementioned ‘custom’ motorized bicycle, for the same price, what would you lean towards? Custom solutions aren’t created equal. Some are most useful as an example of what to avoid. #### Takeaway 1: a custom solution must be produced by professionals, often established masters of mass-produced solutions who outgrew them There’s another issue with all things custom. The individual design drives the cost up. Orange County Choppers bikes' prices start around $50k. You can find an equally dependable and brandy but mass-produced counterpart for one fifth the price. As established earlier, there’s nothing wrong with a custom solution. Except when the cost is of importance. Remember the individuality/efficiency dilemma. If you want a custom lifestyle solution, the main question is often if you can afford it. If you want your website’s mechanics to work flawlessly, you might want to check existing solutions first. Few customers care if the website’s waterworks were crafted by hand. And the ones who do will still care more about reasonable prices. #### Takeaway 2: a custom solution drives the cost up and is usually better in specific ways not always relevant to your business. Custom solutions are for custom cases. If your case is similar to many others, see if you can stick with a mass product. But enough about bikes and bricks. You’re here to know about web applications, so that’s where we’re going with this article. ## Web Apps Most popular websites incorporate web applications to some extent. Web apps offer lots of interactive features, decrease security risks and make the content scalable to multiple devices without any significant increase in traffic. Furthermore, some web apps can be great substitutes for conventional desktop apps. One example is Google apps. 
They let you keep your text, spreadsheets and presentations online, update them from anywhere where there’s internet, and not have to think which copy of the file is the contemporary one. ## Web App Templates We’ve covered the benefits of templates in their most general meaning earlier. Web application design is different from heavy industry but some principles still apply. When we need lower base cost, a mass item is preferable. When we need flawless compatibility, chances are there is a suitable solution. When we need satisfied customers, chances are they’ll forgive the fact that you didn’t handcraft every line of code on your website. Web app templates are our speciality. We could go on and on about them for hours but this time will be best spent with you having some applicable data and examples. So, without further ado, let’s dive into the benefits of web apps and our picks of the best web app templates on the market! ## Web Application Benefits Web applications have various pluses, and let’s name just a few: - No installation required - Compatible with any device and works seamlessly across various browsers - UI is highly customizable - Easy to upgrade and maintain # Examples Of Web Application Templates ## 1. Histogram ![Histogram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tj3nrcejcvdcgp1rp3zj.jpg) Source: http://preview-histogram.ucraft.site/ The first template we would like to tell you about is called Histogram and, as you can see, it wears the app that inspired it on its sleeve. And, we are sure that you have already guessed its main purpose by this point, as Histogram is a template whose main focus is visual content. Perfect as a portfolio app, Histogram conveys the simple message that less is more. It is uncluttered by any unnecessary and distracting details and presents the photos in the portfolio in large boxes, which helps the viewer to concentrate on the picture itself. 
And, due to the attractive layouts and eye-pleasing shapes, Histogram, in our opinion, is pretty successful in conveying its main message, so, despite its initial simplicity, we strongly recommend paying attention to this web app template.

## 2. Flatlogic Ecommerce

![Flatlogic eCommerce](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zcbf3ymb06rtpops9zan.png)

Source: https://flatlogic.com/templates/ecommerce-react-template/demo

Flatlogic Ecommerce is a perfect web app template for your online eCommerce store. Whether you build a website or a web application, the list of options available inside is more than rich: a landing page, category pages, product description pages, a CMS for the blog, the basic support pages like FAQ, contact, etc. Thanks to Next.js, the Flatlogic Ecommerce template uses server-side rendered code that makes your site SEO-friendly.

## 3. Stylepoint

![Stylepoint](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l7kou7v1uu97twnflq12.png)

Source: https://preview-stylepoint.ucraft.site/

Stylepoint also delves into the visual, showcase side of the question, but takes a rather different approach to it. Although it still utilizes minimalism in text and monochromatic backgrounds, Stylepoint places a much greater emphasis on details such as transitions and elements, while at the same time not overusing them. Rather than making the user focus on individual details, this creates an overall sense of interconnection between each and every detail, making the template something of a stylistic trip and an experience in itself.

## 4. Composer

![Composer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1gkf4cygx9xw97odazi5.jpg)

Source: http://themeforest.net/item/composer-responsive-multipurpose-highperformance-wordpress-theme/13454476

The Composer template packs a lot in, as it is a compilation of over 50 ready-made, gorgeous-looking demo sites to choose from and work with. Such an abundance of demos allows Composer to cover such an impressive variety of web designs and features that we could have spent an entire article just talking about them. So, in the hands of a capable and crafty web app developer, such a template can become something of a constructor, borrowing interesting aspects from the different demos Composer possesses and making a new worthwhile app just with them.

## 5. Zeen

![Zeen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3bl4d5viy0yxq79uo2g.jpg)

Source: http://themeforest.net/item/zeen-next-generation-magazine-wordpress-theme/22709856

If you are looking for ideas for a news or magazine project, then this web app template is for you. The Zeen web template is something of an embodiment of minimalism, making it an exemplary modern app template. But that is not all of Zeen’s advantages: it also carries such cool features as dark mode, voice search capabilities, gradients and compatibility with services such as MailChimp.

## 6. Wunderkind

![Wunderkind](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cecj1k4b3meavw5w8e9s.jpg)

Source: http://templateshub.net/template/wunderkind-one-page-parallax-personal-portfolio

In stylistic regard, the Wunderkind web application template ticks all the boxes of what a currently relevant project should be: it is clean, sleek and, most importantly, ultra-smooth. From a developer’s standpoint, Wunderkind is extremely easy to tinker with and customize, and its multipurpose capabilities are beyond your wildest dreams.
Such features as full-screen touch-friendly sliders, video backgrounds, an abundance of gallery options and smooth, performant parallax render Wunderkind applicable to any project of your choosing and make it unbelievably developer-friendly. It also helps that this web app template is based on the latest Bootstrap. A definite catch if you’re looking for a versatile and worthwhile template.

## 7. DashCore

![DashCore](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7k5gz1xxurcq1ogh457t.png)

Source: https://themeforest.net/item/dashcore-saas-startup-software-template/22397137

DashCore is super customizable and lightweight, which is pretty easy to explain, as this template is made on WordPress. We also cannot talk about DashCore without mentioning how responsive it is, and how precise and straight to the point its documentation is. Most importantly for those who have just started their dive into the swirling seas of app development, the documentation is step-by-step. The same beginners will appreciate the round-the-clock email support. We are more than sure that such a feature will be useful not only to them but even to the most hardened sea wolves of app development, as it is always pleasant to feel backed up. So, summing up this entry: use DashCore if you seek a reliable and flexible web app template for start-up, SaaS, marketing and social projects.

## 8. theNa

![theNa](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09w2po26fc5tisepqczx.jpg)

Source: https://themeforest.net/item/thena-photography-portfolio-wordpress-theme/22953759

The feature that makes the theNa web app template stand out from the majority of other templates is its incredible horizontal-scrolling feature.
Don’t get us wrong, we are not saying it is a unique feature, but theNa implements it tremendously well, and it can be a definite attention-grabber and an eye-catching feature for any portfolio-oriented project. But portfolios are not the be-all and end-all of theNa, as its module structure allows a handy and crafty developer to repurpose and restructure this template into absolutely anything he or she can imagine.

## 9. Definity

![Definity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8th9k6u0zmogsgc2msy.jpg)

Source: https://themeforest.net/item/definity-multipurpose-onemulti-page-template/12379946

Definity is precise and straight to the point, although these qualities do not prevent it from being stylish and packed to the brim with useful features. Definity is unbelievably responsive, has such cool features as video backgrounds, hover effects and parallax scrolling, and its modular design will definitely be appreciated by any developer. Thus, the Definity web app template seamlessly takes its rightful place among the other cream-of-the-crop templates by being versatile and multipurpose.

## 10. Flaunt

![Flaunt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8xfe5cfuoktx6jzmysgu.jpg)

Source: http://themeforest.net/item/definity-multipurpose-onemulti-page-template/12379946

The Flaunt template is for all Adobe Muse enthusiasts, especially those who find its hover effects tricky to implement. Flaunt has you covered in that regard, as it bypasses Adobe Muse’s restrictions with some custom CSS magic. But that is not all Flaunt is good for: it is simply a fully responsive and simple template with over 50 slick hover effects for texts and images alike.

## 11. Enfold

![Enfold](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pfpm8ffcigx7x64h8wbk.jpg)

Source: http://themeforest.net/item/enfold-responsive-multipurpose-theme/4519990

Based on WordPress, the Enfold web app template is exceptionally user-friendly and prides itself on that. In fact, Enfold was designed to be the most user-friendly WordPress web app template out there, with a versatile and fully responsive theme. This template is quite fitting for business sites, online stores and portfolios. What is also great about Enfold is its drag-and-drop template builder and a stack of ready-made demos, which allow you to create your own app layout swiftly and easily. Basically, it is an ideal web app template for developers who do not want to spend lots of time creating greatness.

## 12. Maple

![Maple](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8i8l4bwfmyud4ylqu3nu.jpg)

Source: http://themeforest.net/item/maple-responsive-wordpress-blog-theme/12843046

Another WordPress template on our list is Maple. There are not one, not two, but six reasons for you to fall in love with this web app template:

- Bold and unique design
- Responsive and retina-ready
- Features both light and dark styles
- Parallax header backgrounds
- Multi-sidebar support
- Finally, Maple is ridiculously easy to use

Combine those six reasons with 15 layout combinations and a handful of features and widgets, and you have got yourself a crazy mix worth your time called Maple.

## 13. NOHO

![NOHO](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3hyukdzw0akdajjjt3g.jpg)

Source: http://themeforest.net/item/noho-creative-agency-portfolio-muse-template/11174979

One more Adobe Muse web app template is called NOHO. As you can see in the picture, NOHO was designed with creative professionals in mind, so it is remarkably easy to edit in the above-mentioned Adobe Muse.
What is also great about it is the presence of pre-installed desktop, tablet and mobile versions, as well as multiple layouts, image sliders, parallax scrolling and CSS rollover effects for developers to play around with.

## 14. BeTheme

![BeTheme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h3feazuour9erfhr51f.jpg)

Source: http://themeforest.net/item/betheme-html-responsive-multipurpose-template/13925633

If you, as a developer, are more of an HTML person, then BeTheme is the one for you. That’s because BeTheme packs within itself an astounding number of different themes – 450, to be exact. Each of them is comprehensive, flexible and fully complete, ready to be used on any business or personal website. So, the biggest problem you are going to have while using BeTheme is which fully responsive, retina-ready theme with parallax scrolling (and a smooth one at that) to actually choose, as there are so many great ones.

## 15. Valenti

![Valenti](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9loehjyul9dpti5c17h.jpg)

Source: http://themeforest.net/item/valenti-wordpress-hd-review-magazine-news-theme/5888961

Valenti is also a WordPress template, and once again a magazine-oriented one, but quite deserving of attention in its own right. What makes Valenti so deserving is its flexibility and richness: it boasts an impressive variety of vibrant and colorful home pages for your app and is packed to the brim with different background image styles. The parallax scrolling it possesses does no harm in that regard either. So, what we finish this list with is actually quite representative of the theme of web app templates as a whole. But more on that in the conclusion.

# Conclusion

As you can see from the plethora of different web app templates we presented to you today, the market is bursting with different variants and options for you to choose from.
And that brings nothing but good, even though the perceived oversaturation with options can initially be quite scary. If you get to the core of the issue, such oversaturation allows you to find the one particular template that will fit your project like a glove. And with that, we wrap up today’s article. Have a nice day and, as always, feel free to read more in the blog of Flatlogic!
anaflatlogic
939,501
Api Gateway Simple Tutorial
CREATE A SIMPLE API GATEWAY ENDPOINT How to create a very simple API with API...
0
2021-12-29T13:48:23
https://dev.to/aws-builders/api-gateway-simple-tutorial-548m
CREATE A SIMPLE API GATEWAY ENDPOINT
====================================

*How to create a very simple API with API Gateway.*

Log into AWS and open the API Gateway module. This is where we will create an API gateway (a mock for this purpose) to be used later.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.06.40-AM-500x377.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.06.40-AM.png)1. Under API, click on Create API. Your screen will then look like the one to the left.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.07.52-AM-500x378.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.07.52-AM.png)2. Click on Build under HTTP API. Your screen should look like the one to the left.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.10.08-AM-500x379.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.10.08-AM.png)3. For our example, enter “test-api” as the API name and hit Review and Create, since this will be a basic mock API. This will create a basic templated API. Hit Create again at the bottom. Your screen should look like the one to the left.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.11.09-AM-500x377.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.11.09-AM.png)4. Now that your API is created, you can click on Routes on the left. Your screen should look like the one to the left.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.12.00-AM-500x377.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.12.00-AM.png)5. Click “Create“ to create a test route. Your screen should look like the one to the left.
[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.13.31-AM-500x378.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.13.31-AM.png)6. We will now create a test method. Click on the dropdown that says “ANY” and choose “GET”. Then enter the path /test-call in the box next to it. Your screen should look like the one to the left.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.15.39-AM-500x377.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.15.39-AM.png)7. Click Create. After the endpoint is created, click on the action “GET”. Your screen should look like the one to the left.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.19.24-AM-500x379.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.19.24-AM.png)8. Click on Attach Integration. We will choose “HTTP URI” as the integration type. Also enter the HTTP method, “GET”, and the URL: <https://catfact.ninja/fact>. I entered an optional comment; you can as well if you want to. Your screen should look like the one to the left.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.20.24-AM-500x377.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.20.24-AM.png)9. You now have an endpoint created. To call this endpoint, click on “Stages” on the left. Your screen should look like the one to the left.

[![](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.26.55-AM-500x378.png)](http://norristechnologies.com/wp-content/uploads/Screen-Shot-2021-12-20-at-11.26.55-AM.png)10. This API has $default as a default stage, which is set to auto-deploy by default. This makes it easy for us. Your screen should look like the one to the left.
[![Text Description automatically generated](http://norristechnologies.com/wp-content/uploads/text-description-automatically-generated-500x412.png)](http://norristechnologies.com/wp-content/uploads/text-description-automatically-generated.png)11. Now you will see the invoke URL: that is the URL we will use to test our endpoint. Copy and paste it into a browser and add /test-call to the end of it. Your screen should look like the one to the left. When you browse to this link, you should see the screen above. If so, congratulations, you have created a simple API with one endpoint.
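If you would rather test from code than from a browser, the last step can be sketched in JavaScript. The invoke URL below is a placeholder (substitute the one your $default stage shows), and the actual network call is left commented out since it only works against a live deployment:

```javascript
// Placeholder invoke URL -- replace with the one shown on the Stages page.
const invokeUrl = "https://abc123.execute-api.us-east-1.amazonaws.com";

// Our route was GET /test-call, so the full endpoint is the stage's
// invoke URL with the route path appended:
const endpoint = `${invokeUrl}/test-call`;
console.log(endpoint);

// In a browser console or Node 18+, a plain GET is enough to hit it:
// fetch(endpoint).then((res) => res.json()).then(console.log);
```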
chrisnorristech
939,513
Software Architecture Design and Engineering at a Startup
The great thing about starting a new project is that you get a clean slate. No baggage of design...
0
2021-12-29T14:03:34
https://dev.to/scgupta/software-architecture-design-and-engineering-at-a-startup-3lc0
programming, design, architecture
The great thing about starting a new project is that you get a clean slate. No baggage of design choices that you hated to look at every day in your last project. But how many times have you seen a shiny new project not turning into the same intractable mess? It is more likely to happen in a fast-paced startup. The faster the pace, the sooner it happens. So how do you balance moving fast, without being trapped in analysis paralysis, and keep technical debt at a manageable level? You design for change. Ignore the refrain that prevention is better than cure. Instead of preventing the mess, you should embrace it and mitigate it when it happens. That’s what we have done at Slang Labs. In this article, I discuss:

- Startup Reality: forces and constraints in a startup.
- Engineering Philosophy: our philosophy to manage that reality.
- Slang Architecture: evolution of Slang microservices and SDKs guided by our philosophy.

[Continue reading »](https://medium.com/@scgupta/microservices-software-architecture-design-and-engineering-at-a-startup-c2df9587debd)
scgupta
939,580
Server Backend concepts
CRUD Operations: CRUD is an acronym that comes from the world of computer programming and refers to...
0
2021-12-29T16:21:08
https://dev.to/rased/server-backend-concepts-2dnb
CRUD Operations: CRUD is an acronym that comes from the world of computer programming and refers to the four functions that are considered necessary to implement a persistent storage application: create, read, update and delete.

Create: The create function allows users to create a new record in the database. In SQL relational database applications, the create function is called INSERT. In Oracle HCM Cloud, it is called create. Remember that a record is a row and that columns are termed attributes. A user can create a new row and populate it with data that corresponds to each attribute, but only an administrator might be able to add new attributes to the table itself.

Read: The read function is similar to a search function. It allows users to search for and retrieve specific records in the table and read their values. Users may be able to find desired records using keywords, or by filtering the data based on customized criteria. For example, a database of cars might enable users to type in "1996 Toyota Corolla", or it might provide options to filter search results by make, model and year.

Update: The update function is used to modify existing records in the database. To fully change a record, users may have to modify information in multiple fields. For example, a restaurant that stores recipes for menu items in a database might have a table whose attributes are "dish", "cooking time", "cost" and "price". One day, the chef decides to replace an ingredient in the dish with something different. As a result, the existing record in the database must be changed and all of the attribute values updated to reflect the characteristics of the new dish. In both SQL and Oracle HCM Cloud, the update function is simply called "Update".

Delete: The delete function allows users to remove records from a database that are no longer needed. Both SQL and Oracle HCM Cloud have a delete function that allows users to delete one or more records from the database.
Some relational database applications may permit users to perform either a hard delete or a soft delete. A hard delete permanently removes records from the database, while a soft delete might simply update the status of a row to indicate that it has been deleted while leaving the data present and intact.

JWT: JWT, or JSON Web Token, is an open standard used to share security information between client and server. JWTs differ from other web tokens in that they contain a set of claims. Claims are used to transmit information between two parties. What these claims are depends on the use case at hand. A JWT is a string made up of three parts, separated by dots (.), and serialized using base64. The most common serialization format is compact serialization.

Mongoose: Mongoose is a Node.js-based Object Data Modeling (ODM) library for MongoDB. It is akin to an Object Relational Mapper (ORM) such as SQLAlchemy for traditional SQL databases. The problem that Mongoose aims to solve is allowing developers to enforce a specific schema at the application layer.

Relational database (MySQL): A relational database management system (RDBMS) is a program used to maintain a relational database. The RDBMS is the basis for all modern database systems such as MySQL, Microsoft SQL Server, Oracle, and Microsoft Access.

Express: Express is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications. The primary use of Express is to provide server-side logic for web and mobile applications, and as such it's used all over the place. Companies which use Express as a foundation of their internet presence include:
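To make the three-part JWT structure described above concrete, here is a minimal sketch in Node.js (Node 16+ assumed for Buffer's `base64url` support). The header and payload values are invented for illustration, and the signature is only a placeholder, since no signing or verification is performed here:

```javascript
// A JWT is three base64url-encoded parts joined by dots:
//   header.payload.signature
const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");

// Illustrative (unsigned) token -- these values are made up for the example.
const header = { alg: "HS256", typ: "JWT" };
const payload = { sub: "user-42", admin: true };
const token = `${b64url(header)}.${b64url(payload)}.signature-placeholder`;

// Reading the claims back is just splitting on dots and decoding.
// Note: this does NOT verify the signature, so never trust claims
// decoded this way without verification.
const decode = (jwt) => {
  const [h, p] = jwt.split(".");
  return {
    header: JSON.parse(Buffer.from(h, "base64url").toString()),
    payload: JSON.parse(Buffer.from(p, "base64url").toString()),
  };
};

const decoded = decode(token);
console.log(decoded.payload.sub); // "user-42"
```

In real applications a library such as jsonwebtoken handles the signing and verification steps; the point here is only the dot-separated, three-part shape of the token.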
rased
939,721
Hyperscript - the hidden language of React
JSX is the starting point React uses JSX to make things easier for the developers. So when...
0
2021-12-29T18:38:31
https://dev.to/fromaline/hyperscript-the-hidden-language-of-react-3d1f
react, javascript, webdev, tutorial
## JSX is the starting point

React uses JSX to make things easier for the developers. So when you write something like this:

```jsx
<div id="foo">
  Hello!
</div>
```

Babel with a React preset transforms it to this:

```js
React.createElement("div", {
  id: "foo"
}, "Hello!");
```

Check out this example in [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20ie%206&build=&builtIns=false&corejs=3.6&spec=false&loose=false&code_lz=MYewdgzgLgBAJgSwG4wLwwDyJQuqBEAZiCPgHwASApgDY0gCEGA9NmUA&debug=false&forceAllTransforms=true&shippedProposals=false&circleciRepo=&evaluate=false&fileSize=true&timeTravel=false&sourceType=module&lineWrap=true&presets=env%2Creact&prettier=false&targets=&version=7.16.6&externalPlugins=&assumptions=%7B%7D). `React.createElement` is a function that creates a virtual node. It's a well-known fact, and you probably already know it. So what's the point?

## Preact way

If you've used Preact before, you may have noticed it has [an unobvious export](https://github.com/preactjs/preact/blob/master/src/index.js#L4) in its source code.

```js
export {
  createElement,
  createElement as h,
} from './create-element';
```

To make things clear, the `createElement` function from Preact serves the same needs as `React.createElement`. So the question is, why is it exported as `h` as well? The reason is dead simple. It's exported as `h` because it's [a hyperscript function](https://github.com/hyperhype/hyperscript/tree/9237f590f3bc82b841ba6e7c4df946f21dff0045). So what exactly is hyperscript?

## Hyperscript is the key

Hyperscript is a kind of language for creating HyperText with JavaScript; it was started by Dominic Tarr in 2012. He was inspired by [markaby](https://github.com/markaby/markaby), the "short bit of code" to write HTML in pure Ruby. Markaby allows doing things like this:
```ruby
require 'markaby'

mab = Markaby::Builder.new
mab.html do
  head { title "Boats.com" }
  body do
    h1 "Boats.com has great deals"
    ul do
      li "$49 for a canoe"
      li "$39 for a raft"
      li "$29 for a huge boot that floats and can fit 5 people"
    end
  end
end

puts mab.to_s
```

And the `h` function allows doing essentially the same thing, but with different syntax.

```js
h = require("hyperscript")
h("div#foo", "Hello!")
```

It also supports nesting and CSS properties.

```js
h = require("hyperscript")
h("div#foo",
  h("h1", "Hello from H1!", {
    style: { 'color': 'coral' }
  })
)
```

Check out [an interactive demo](https://hyperhype.github.io/hyperscript/) to see how it works.

## Get your hands dirty

Now that we know what the `h` function does and why we need it, let's write our own version of it. The complete example can be found on [CodeSandbox](https://codesandbox.io/s/h-edxqp).

First, let's write a `render` function that creates real DOM elements from our virtual nodes.

```js
const render = ({ type, children, props }) => {
  const element = document.createElement(type);

  if (props) {
    for (const prop in props) {
      element.setAttribute(prop, props[prop]);
    }
  }

  if (children) {
    if (Array.isArray(children)) {
      children.forEach(child => {
        if (typeof child === 'string') {
          // Append a text node so earlier siblings are preserved
          // (assigning innerText here would overwrite them).
          element.appendChild(document.createTextNode(child));
        } else {
          element.appendChild(render(child));
        }
      })
    } else if (typeof children === 'string') {
      element.innerText = children;
    } else {
      element.appendChild(render(children));
    }
  }

  return element;
}
```

Then, let's create the `h` function.

```js
const h = (type, children, props) => {
  const handledType = typeof type === 'string' ? type : 'div';

  return {
    type: handledType,
    props,
    children
  }
}
```

Finally, let's create actual content with our `h` function, render it with our `render` function and mount the result to the DOM.

```js
const div = render(
  h('div', [
    h('h1', 'Hello!', { id: 'foo' }),
    h('h2', 'World!', { class: 'bar' })
  ])
);

document.querySelector('#app').appendChild(div);
```
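Since `render` needs a browser DOM, a quick way to check the intermediate result is to inspect the virtual node that `h` returns, which is plain data and works in any JavaScript runtime. The `h` definition is repeated here so the snippet stands alone:

```javascript
// Same h as above, repeated so this snippet is self-contained.
const h = (type, children, props) => {
  const handledType = typeof type === "string" ? type : "div";
  return { type: handledType, props, children };
};

// A single virtual node is just a plain object:
const vnode = h("h1", "Hello!", { id: "foo" });
console.log(vnode); // { type: 'h1', props: { id: 'foo' }, children: 'Hello!' }

// Nesting produces nested plain objects, which is all render() walks over:
const tree = h("div", [h("span", "a"), h("span", "b")]);
console.log(tree.children.length); // 2
```

Because virtual nodes are plain objects, they are trivial to snapshot and unit-test without a browser.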
fromaline
939,728
Test Post
A post by John S.
0
2021-12-29T17:18:42
https://dev.to/outofgamut/test-post-52bh
outofgamut
939,741
cross-env for front-end (Nextjs) on Windows 10 11
If you are doing front-end projects on windows 10 or 11. specially, you want to run some of sample...
0
2021-12-29T17:48:53
https://dev.to/gemcloud/front-end-nextjs-on-windows-10-11-3idj
nextjs, webdev, startup
If you are doing front-end projects on Windows 10 or 11, and especially if you want to run sample code from GitHub, do not forget to install cross-env in your project:

```
> npm i cross-env
```

cross-env helps us save a lot of time; otherwise the code throws errors. For example:

1. reading files from a folder (markdown for a blog, etc.) in Next.js
2. using "NODE_ENV=development" in package.json; see the command below about lingui.

```
"__NoWorkOnWindow_lang:extract": "NODE_ENV=development lingui extract --clean",
"lang:extract": "cross-env NODE_ENV=development lingui extract --clean",
```

Happy Coding!
gemcloud
939,801
Software Developer Vs Software Engineer — Which Suits You Best?
Have you ever wondered if software development and software engineering are the same thing? According...
0
2021-12-29T19:48:14
https://www.daxx.com/blog/development-trends/software-developer-vs-software-engineer
softwaredeveloper, softwareengineer, softwareskills
Have you ever wondered if software development and software engineering are the same thing? According to the Computer Science Degree Hub, these two jobs are different in terms of their functions. Software developers do the small-scale work, writing a program that performs a specific function or set of functions, while software engineers apply engineering principles to the database structure and development process. Keep reading to learn more about these two jobs and find out which one better suits your business needs.

## Table of Contents

- Who is a Software Developer?
- Who is a Software Engineer?
- What are the Main Differences Between a Software Engineer and a Software Developer?

## Who is a Software Developer?

A software developer is a tech expert who develops, designs and builds desktop and mobile programs and web applications. They are the driving creative force that deals with design and program implementation. Their popularity has recently gone up because of user and business needs and process automation. They are responsible for the entire development process. This job requires collaborating with the client to create a theoretical design. Software developers use various source debuggers and visual development environments to modify, write, and debug software for client applications. Their responsibilities include documenting and testing client software and writing code to create applications that either stand alone or boost access to servers and services.

### Top Skills For a Software Developer

- Data Structures and Algorithms

Data structures and algorithms are among the most important skills for modern software developers. Most employers are looking for experts who are familiar with basic data structures like arrays, linked lists, maps, and sets. These are the fundamentals that help developers build applications.

- Git and GitHub

More than half of all organizations use Git and GitHub for source code management, so this hard skill is essential for software developers.
- Cloud Computing

All software developers should be highly skilled in cloud computing, since most companies are choosing the cloud to save money and improve their scalability. Tech experts who are proficient in services like Google Cloud Platform are in demand in 2022.

- IDEs (like Visual Studio Code)

Apart from knowing programming languages and databases, software developers should know source-code editors like Visual Studio Code to be able to debug, perform code refactoring and use syntax highlighting.

- The Ability to Learn

Being a software developer is a lifelong process of continuous learning and improvement. Knowing several programming languages is good, but progress is not always a guarantee, and the skills that are relevant today can soon become outdated. To stay in demand, developers need to devote time to building their skill set each day, analyze their code with a critical eye, and always seek new opportunities.

Your average software developer will be judged by their position, level of experience, and their familiarity with certain programming languages and databases. This list is not exhaustive: a developer must also possess a number of soft skills and competencies to be considered a valuable expert on the job market.

### How to Test a Software Developer’s Skills?

Although CVs give you a basic understanding of a software developer’s abilities, there are some additional ways to test their skills.

- Check out their portfolio

A portfolio is the first thing that helps recruiters understand a developer’s skill level. A portfolio is useful for assessing the candidate’s experience and seeing their source code before inviting them to an interview.

- GitHub account

GitHub is a place where software developers demonstrate their ability to write readable code. You would want to look at certain things like the number of followers a developer has, when they joined GitHub and the number of repositories they follow.
- Live coding

Potential employers can assess the way a candidate thinks and communicates while they are coding, and it gives a good understanding of how a developer applies logic and even works under pressure.

## Who is a Software Engineer?

A software engineer is a person who applies engineering principles to the database structure and development process, that is, the product life cycle. Engineering principles relate to the separation of concerns, modularity, abstraction, anticipation of change, generality, incremental development, and consistency. An engineer also ensures that a program interacts the way it should with the hardware in question. Software engineers apply mathematical analysis and the principles of computer science in order to design and develop computer software. Software engineers operate on a bigger scale than software developers, creating new tools for software development, while software developers write software by using already existing tools.

### Top Skills For a Software Engineer

Many software engineers are highly experienced in at least one or two programming languages; however, nowadays they have to be skilled in most modern languages to attract employers and continue to be in great demand. The list may include, but is not limited to:

- Computer programming, coding;
- Software engineering;
- Object-Oriented Design;
- Strong interpersonal and communication skills;
- Problem-solving skills;
- The ability to work in teams.

### How to Test a Software Engineer’s Skills?

Testing a software engineer’s skills is similar to that of a software developer, since both jobs require an in-depth understanding of code. There are many platforms that help employers evaluate a candidate’s knowledge of the fundamental principles and topics of software engineering, like algorithm analysis, linear data structures and computer science fundamentals.
The most popular ones are **Codility**, **CodeSignal**, **TestGorilla**, **Coderbyte** for Employers, **Vidcruiter** and **HackerEarth**.

### What are the Main Differences Between a Software Engineer and a Software Developer?

Although these job titles are sometimes used interchangeably, few people know how they differ in terms of their scope, skills and responsibilities. The core difference between the two jobs is that software developers are the creative force that deals with design and program implementation, while software engineers use the principles of engineering to build computer programs and applications.

In general, software engineers deal with a bigger variety of tasks. All software engineers are, to some extent, developers, but few software developers may be considered software engineers.

**Frequently asked questions about the difference between a software developer and a software engineer**

- Who earns more: a software developer or a software engineer? According to ZipRecruiter, an [average software developer salary](https://www.daxx.com/blog/development-trends/it-salaries-software-developer-trends) in the US is **$86,523/year ($42/hour)**, while a software engineer earns **$99,729/year ($48/hour)**.
- Are software engineers and developers the same? The core difference between the two jobs is that software developers are the creative force that deals with design and program implementation, while software engineers use the principles of engineering to build computer programs and applications.
- Is a software engineer a developer? Software engineers operate on a bigger scale and create new tools for software development, while software developers write software by using pre-existing tools. All software engineers are, to some degree, developers, but few software developers may be considered software engineers.
martakravs
939,816
Highlighting: sync-contribution-graph
A couple of weeks ago, I nearly scrolled past this gem on my twitter feed: sync-contribution-graph,...
0
2021-12-29T20:39:25
https://dev.to/mtfoley/highlighting-sync-contribution-graph-6o8
github, javascript, bash, git
A couple of weeks ago, I nearly scrolled past this gem on my twitter feed: [sync-contribution-graph](https://github.com/kefimochi/sync-contribution-graph), by @kefimochi. Go have a look!

You can use this tool to have your GitHub contribution graph accurately reflect contributions from other accounts you make use of. For example, outside of work I use the handle [mtfoley](https://github.com/mtfoley), but I have a separate account I use for my job. I like the idea that I could use this to accurately reflect my activity level, and that no private information about that work handle is revealed.

The way it works is pretty straightforward. When you configure it with a username and a time frame (year), it performs an HTTP request to the appropriate URL, and parses the HTML in the response for the dates/counts of contributions (these correspond to those little green squares). Based on this data, it creates appropriate `git` shell commands. The shell commands are saved to a file that can optionally be run immediately.

Here's a snippet that's the meat of it in [src/index.js](https://github.com/kefimochi/sync-contribution-graph/blob/main/src/index.js):

```javascript
import { parse } from "node-html-parser";
import axios from "axios";
import fs from "fs";
import shell from "shelljs";

// Gathers needed git commands for bash to execute per provided contribution data.
const getCommand = (contribution) => {
  return `GIT_AUTHOR_DATE=${contribution.date}T12:00:00 GIT_COMMITER_DATE=${contribution.date}T12:00:00 git commit --allow-empty -m "Rewriting History!" > /dev/null\n`.repeat(
    contribution.count
  );
};

export default async (input) => {
  // Returns contribution graph html for a full selected year.
  const res = await axios.get(
    `https://github.com/users/${input.username}/contributions?tab=overview&from=${input.year}-12-01&to=${input.year}-12-31`
  );

  // Retrieves needed data from the html, loops over green squares with 1+ contributions,
  // and produces a multi-line string that can be run as a bash command.
  const script = parse(res.data)
    .querySelectorAll("[data-count]")
    .map((el) => {
      return {
        date: el.attributes["data-date"],
        count: parseInt(el.attributes["data-count"]),
      };
    })
    .filter((contribution) => contribution.count > 0)
    .map((contribution) => getCommand(contribution))
    .join("")
    .concat("git pull origin main\n", "git push -f origin main");

  fs.writeFile("script.sh", script, () => {
    console.log("\nFile was created successfully.");
    if (input.execute) {
      console.log("This might take a moment!\n");
      shell.exec("sh ./script.sh");
    }
  });
};
```

I made some suggestions in the setup workflow on the repo and submitted a [PR to update the README](https://github.com/kefimochi/sync-contribution-graph/pull/8). I hope you find this and other work by @kefimochi to be of interest!
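To make the command-generation step concrete, here is a small standalone sketch (a hypothetical re-implementation, not code from the repo) showing how one contribution day expands into repeated backdated empty commits. One detail worth noting: git itself expects `GIT_COMMITTER_DATE` with two T's, so the sketch uses that spelling.

```javascript
// Hypothetical helper mirroring the getCommand logic: each contribution day
// becomes `count` identical backdated empty-commit commands.
const buildCommands = ({ date, count }) =>
  `GIT_AUTHOR_DATE=${date}T12:00:00 GIT_COMMITTER_DATE=${date}T12:00:00 git commit --allow-empty -m "Rewriting History!" > /dev/null\n`.repeat(
    count
  );

// Three contributions on 2021-06-01 produce three commit commands.
const script = buildCommands({ date: "2021-06-01", count: 3 });
console.log(script);
```

Running the generated script then replays those empty commits into a dedicated repository, which is what lights up the green squares.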
mtfoley
939,853
Take care of your soft skills
I’m not sure how unpopular this opinion is but I believe that for most of the positions I have...
0
2021-12-31T15:08:05
https://www.manuelobregozo.com/blog/take-care-of-your-soft-skills
softskills, interview, behavioralinterview
---
title: Take care of your soft skills
published: true
date: 2021-12-29 00:00:00 UTC
tags: softskills, interviews, behavioralinterview
canonical_url: https://www.manuelobregozo.com/blog/take-care-of-your-soft-skills
---

I’m not sure how unpopular this opinion is, but I believe that for most of the positions I have covered, soft skills were way more important than technical ones. As long as you are willing to learn and motivated enough, you can always get better technically speaking. But [soft skills](https://en.wikipedia.org/wiki/Soft_skills) are a different topic: no matter how much you read about them, they are mainly related to behaviors and manners. I am not saying that they can’t be taught or learned, but it's definitely harder.

If we look back and think about our previous experiences, we tend (or at least I do) to remember them as good or bad based on the people and the project as a whole, and not linked to a specific technology.

The fun fact is that, on average, companies normally assess soft skills in depth (in what is usually called a cultural fit or behavioral interview) at the end of the round of interviews, while for me this should be the first filter. I hate to call it a "filter”, but it makes it easier to understand.

Both types of interviews are really hard to conduct and assess; people doing interviews should be trained, and paid to do it. This is a different skillset to have, and doing these types of sessions can produce context-switching chaos, causing the interviewee to have a bad first impression. More to come on this subject soon, so stay tuned!

Whether you are a senior or a junior developer, if you were to measure the time you spend coding and not coding, what would be the result? By not coding I mean any agile meetings you can think of, whiteboard discussions with colleagues, reading and understanding problems, helping others, etc.
Well, in my case, taking a rough guess based on what I have seen in my calendar, I would say I spend only 40% of my time coding solutions, sometimes less. The rest of the time is somehow linked to activities where my soft skills are way more important than my technical skills.

And that's the main reason why, whenever I get a question about career paths or what people should learn when taking their first steps into this field, my answer is to focus on the processes, the communication, and the people. The technicalities will come with time; you will eventually feel more comfortable and get better at them.

Learn how to give and receive feedback, lean communication, learn English if you are not a native speaker, feel comfortable sharing that you don't know anything about X, how to put the team above your personal opinions, how to listen to others, how to respect and support underrepresented groups, among other things to keep in mind. Be the person that you would like to work with, not the one you would rather avoid.

## **Content Online**

I do believe that writing/blogging about technical things is important and necessary. But since there is already so much technical content, I took a different path to where I think I can give more to the community and decided to start writing about other skills and socially related topics, also known as soft skills. On a daily basis, we focus too much on technical aspects and tend to forget or put aside other aspects of this combination of social and behavioral abilities.

During the last few years, after so many people got burnouts, or maybe became open to sharing them, this has thankfully become a serious topic, and just like that, I started to see more people caring about subjects such as open communication, proactivity, teamwork, mentoring, frustration, and work-life balance, just to name a few. It is not always easy, and it will never be!
Even more so now, with globalization/internationalization booming in the market, we often see cultural barriers (on top of different personalities) making relationships at work even harder. How do you deal with your day-to-day relationships? Are you afraid of asking for help? This can be more important than you think for having good relationships with your peers.

During a round of interviews I was asked:

- How would I react if I had to ask other colleagues something technical? Would I feel bad for not knowing that specific thing?

As you can imagine, my answer was clearly no, but in other situations I would have just spent a useless amount of time trying to find an answer on my own. Today things are different: I am not afraid of what people might think about me because of the questions I ask.

Selling yourself can also be considered a nice soft skill to have; negotiating, believing in yourself, and the way you explain things to others could be a game-changer in your professional life. Even in 1:1 meetings with our managers, the ability to communicate our goals and expectations might change the way you progress through your career.

A story I can relate to this: one time I got mad, thinking it was unfair that a person was getting way more money than me when we were actually doing the same work on a daily basis. Well, that was a wrong assumption. The thing is, I was only looking at one side of the coin: the technical expertise. Truth be told, that person was way better than me at managing expectations, communication, presentation skills, and most of all negotiating his own salary.

Like many other situations that you can imagine, I just thought about sharing some of them to show the impact your social or soft skills can have on your daily work.
manuelobre
939,861
Generative Art With Python
Let's Create Art With Code Join us as we learn how to generate art with Python! Even if...
12,727
2021-12-29T21:41:24
https://dev.to/iceorfiresite/generative-art-with-python-347h
python, programming, tutorial
# Let's Create Art With Code

Join us as we learn how to [generate art with Python](https://www.iceorfire.com/post/generative-art-with-python)! Even if you aren't artistic you can give Jackson Pollock a run for his money.

# Support Me

If you find these tutorials helpful, please consider [buying me a coffee](https://www.buymeacoffee.com/z4F8QVS6w). Thanks!
iceorfiresite
940,093
Holidays, Entrepreneurship and SLOs with Nobl9
It's finally here, the end of season 1 of the podcast is upon us! To celebrate, Santa is bringing...
0
2021-12-30T20:57:22
https://devinterrupted.com/podcast/holidays-entrepreneurship-and-slos-with-nobl9/
leadership, podcast, techtalks, startup
It's finally here, the end of season 1 of the podcast is upon us! To celebrate, Santa is bringing something special - entrepreneurship advice for all the would-be founders of the world, [ages 1 to 92.](https://www.youtube.com/watch?v=hwacxSnc4tI&ab_channel=WalterTan)

Brian Singer, co-founder & CPO of Nobl9, sits down with Dev Interrupted to help us close out season 1 with a conversation on what it takes to found your own company. Having founded a pair of companies, one of which he sold to Google, Brian has a deep understanding of what it takes to successfully found and scale a startup. More than that, he knows what VCs are looking for.

In addition to our conversation on entrepreneurship, we also discuss Service Level Objectives, the ins and outs of Nobl9’s SLO platform and why SLOs and error budgets will become commonplace approaches in the industry, much in the same way we practice Agile today.

From the entire team at Dev Interrupted, we want to give a heartfelt thank you to everyone who has supported us and continued on this journey with us. Producing this podcast every week has been an absolute pleasure and we are so thankful for the outpouring of support we have received this past year. Expect big things - and even bigger stories - in season 2 of the podcast.

Have a wonderful New Year, we’ll return on **January 8th** with a HUGE episode for the official start of season 2.

{% spotify spotify:episode:5ht1sKI7v43cLhZhywCqkE %}

## Episode Highlights Include:

* Why VCs don’t like single founder companies
* Scaling beyond the first 20 employees
* What are [SLOs](https://nobl9.com/)?
* Understanding when [tech debt matters](https://linearb.io/blog/technical-debt-ratio/?__hstc=75672842.b37abbbdf4f34a742895a6b2675da07e.1632418321637.1640804382855.1640838597523.186&__hssc=75672842.1.1640838597523&__hsfp=1615045989)
* The reason sales is the #1 skill for an entrepreneur

## Join the Dev Interrupted Community

With over 2000 members, the Dev Interrupted Discord Community is the best place for Engineering Leaders to engage in daily conversation. No sales people allowed.

[Join the community >>](https://discord.com/invite/devinterrupted)

![https://discord.com/invite/devinterrupted](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qous1521acseykryclzp.png)
conorbronsdon
940,133
Unlock The Worth Of Logo By Following The Best Practices
It is not a surprise to see what value a logo holds for the business these days. It can impact...
0
2021-12-30T06:34:17
https://dev.to/jack46986117/unlock-the-worth-of-logo-by-following-the-best-practices-4nan
discuss, design, beginners, opensource
![Unlock The Worth Of Logo By Following The Best Practices](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6j1oaxnu6g3cwa9uhjlz.jpg)

It is not a surprise to see what value a logo holds for a business these days. It can impact customers in such a powerful way that they end up drawn to the brand it portrays. You must comprehend that it is essential for a business to have a professional custom logo design to appear credible. There is challenging competition in the market these days, and as a matter of fact, a logo can help you with it.

It is the foremost aspect of the company that customers interact with, so you must ensure it looks good. If it has a lousy design, customers assume that the brand it represents is unprofessional and does not value its customers. You must always strive to have the best logo, as it becomes the face of your business in the industry.

There are many elements that make a logo look professionally good, and one of them is color. The perfect colors in your logo can help highlight your business and its strengths to grab the right customers. Just like that, a lousy combination of colors can make your logo look poorly designed. You have to be attentive while working with colors, as they are crucially important for your logo.

We all know how impactful colors are on one's emotions and behavior. They have a whole perspective of portraying things in an individual manner. How the colors naturally exist is how they portray the message. Yellow is bright because of how the sun shines during the day. The color green shows calmness and a relaxing sensation because of how peaceful grass looks. Each color holds its importance and has to be used appropriately in the right place.

**Learn What Each Color Means Before Using Them In Your Custom Logo Design**

Here, you will find some of the most used colors in logos, briefly described: how they are used and what they portray.
The wrong combination of colors in a [custom logo design](https://www.logomagicians.com/) can make it lose its importance. You need to know what each color indicates, because the better you use them, the more professional your logo will look in the end. This may come as a surprise, but there has to be comprehensive research before using a particular color in a logo to ensure that it does not look unappealing.

![Learn What Each Color Means Before Using Them In Your Custom Logo Design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ou9chsgefpiclylj52qu.jpg)

**Red Logos**

You must know that red is a color that universally indicates anger, love, and passion. This color can gain your customers' attention to help you stand out from the competition. If you have a youthful, loud, or modern brand, then red is the color you should be going for. Scientists say that babies see red as the first color after black and white.

Ever noticed why the red fruits on trees look so sharp? It is simply because of their striking nature. As humans get angry, their faces turn red; this color is shown to portray that anger. This is how it works emotionally. We use colors to understand the emotion behind love, passion, anger, and sadness. You need to know what you want your customers to feel, and this is how you can see if red is the color that will help you do it.

**Orange Logos**

Before using this color in your custom logo design, let us dive in a little deeper to see what it represents. It is a playful and refreshing color. It will also help you stand out from the crowd because of its shining and sharp nature. Although it has less usage than red logos, it still manages to look distinctive. If you want your brand to appear classic or serious, then avoid going orange altogether. This is something that this color would not be able to do. People use orange to leave a lasting impression before anything changes.
Notice how the leaves turn orange when the weather changes? It is surprising how the human brain is capable of having the right emotion for the right color. Sometimes people do not even know how they make that happen. Brands use this changing nature of orange to show some type of change in their work. This is how deep the effect of a logo goes. You can create different combinations with this color, keeping it orange-dominated, to deliver the right message. Customers appreciate seeing the right message being conveyed through color.

**Yellow Logos**

These logos reflect their friendly and bright nature. If you have an energetic brand that wants to portray how youthful it is through color, then yellow is your best choice. You should know that customers do not see yellow as the right color for businesses that want to appear more mature or luxurious. This is why you need to spend good time on research to understand whether this color will do you wonders or not.

This color can be mixed with other colors to give a refreshing and bright sensation to customers. Moreover, we see this color having cultural significance in many parts of the world. You will need solid research to see what your customers associate with it, so you can make them appreciate you using this color accordingly. It will help if you understand what your brand portrays so you can use this color to add more value. Yellow can give a light and soft feel to brands, while gold, on the other hand, feels heavy and strong. This indicates how well you can use this color to make its appearance look relevant and appropriate.

**Conclusion**

An [animated logo design](https://animetus.com/services/logo-animation/) can have yellow clearly and carefully used to represent the brand with full appeal. The competition in the market is no joke these days, and with the proper usage of colors, you can do well.
You will indeed need research skills to see how your customers will be impacted by the right color combination in the logo.
jack46986117
940,243
JSON WEB TOKEN(JWT)
JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for...
0
2021-12-30T10:08:38
https://dev.to/delwarjnu11/json-web-tokenjwt-4ofk
JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object.

So in this tutorial, I introduce how to implement an application, “Reactjs JWT Token Authentication Example”, with detailed steps and 100% running source code.

– I give you an epic of the application, a fullstack executive flow from frontend to backend with an overall architecture diagram.
– I give you a layer diagram of the Reactjs JWT application.
– I give you an implementation of the security backend source code (SpringBoot + Nodejs JWT RestAPIs).
– I guide you step by step through developing a Reactjs JWT authentication application.
– Finally, I do integration testing from the Reactjs JWT authentication application to the backend security RestAPIs.

For the Reactjs JWT Authentication tutorial, we have 2 projects:

– A backend project (using SpringBoot or Nodejs Express) that provides secured RestAPIs with a JWT token.
– A Reactjs project that requests the RestAPIs from the backend system with the JWT token authentication implementation.

User Registration Phase:

– The user uses a React.js register form to post the user’s info (name, username, email, role, password) to the backend API /api/auth/signup.
– The backend will check the existing users in the database and save the user’s signup info to the database. Finally, it will return a message (success or failure) to the user.

Login Phase:

– The user posts a username/password to sign in to the backend RestAPI /api/auth/signin.
– The backend will check the username/password; if it is right, the backend will create a JWT string with a secret and then return it to the Reactjs client.

After signing in, the user can request secured resources from the backend server by adding the JWT token in the Authorization header. For each request, the backend will check the JWT signature and then return the resources based on the user’s registered authorities.
delwarjnu11
940,301
Topic: JS Promise vs Async await
As JavaScript is an Asynchronous(behaves synchronously), we need to use callbacks, promises and async...
0
2021-12-30T11:51:32
https://dev.to/zahidulislam144/topic-js-promise-vs-async-await-3dp1
javascript
Because JavaScript handles long-running operations asynchronously, we need to use callbacks, promises and async/await. You need to learn what async/await is, what a promise is, how to use promises and async/await in JavaScript, and where to use them.

## **Promise**

- **What is a Promise?**

A Promise represents the eventual success or failure value of an asynchronous operation. A Promise has three states.

1. pending: initial state, neither fulfilled nor rejected.
2. fulfilled: meaning that the operation was completed successfully.
3. rejected: meaning that the operation failed.

## How to use a Promise?

See the example below

```
const calculate = (a, b, c) => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (a < 0 || b < 0 || c < 0) {
        return reject("no number can be negative.");
      }
      resolve("Calculation: " + (a + b + c));
    }, 1000);
  });
};

calculate(5, 7, 8)
  .then((resolveOfCalculate) => {
    console.log(resolveOfCalculate);
  })
  .catch((rejectOfCalculate) => {
    console.log(rejectOfCalculate);
  });
```

From the above example, I am trying to give a little explanation about promises. A promise is created in the calculate function. When the operation of the function is fulfilled, it calls a callback function and the success value is passed through the **resolve** argument. Similarly, a callback function is called and the failure value is passed through the **reject** argument when the operation is not fulfilled. The success and rejected values are logged by taking arguments named **resolveOfCalculate** and **rejectOfCalculate** respectively. Promises can be written in a chain. See below...
```
const calculate = (a, b, c) => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (a < 0 || b < 0 || c < 0) {
        return reject("no number can be negative.");
      }
      resolve(a + b + c);
    }, 1000);
  });
};

calculate(5, 7, 8)
  .then((resolveOfCalculate) => {
    console.log(resolveOfCalculate);
    return calculate(resolveOfCalculate, 3, 2);
  })
  .then((chaining1) => {
    console.log(chaining1);
  })
  .catch((rejectOfCalculate) => {
    console.log(rejectOfCalculate);
  });
```

## **Async await**

Async/await is a lighter syntax on top of promises, because promises are what work behind the await keyword. The await operator is used to wait for a Promise. It can only be used inside an async function within regular JavaScript code.

## Return value:

Returns the fulfilled value of the promise, or the value itself if it's not a Promise.

```
const calculate = (a, b, c) => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (a < 0 || b < 0 || c < 0) {
        return reject("no number can be negative.");
      }
      resolve(a + b + c);
    }, 1000);
  });
};
```

```
const add = async () => {
  const sum1 = await calculate(1, 2, 3);
  const sum2 = await calculate(sum1, 2, 3);
  return sum2;
};

add().then((result) => { console.log(result); }).catch((error) => { console.log(error); });
```

## Why use async await:

Async/await is cleaner than promise chains, so most programmers suggest using async/await to make code more readable and cleaner.
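One thing worth adding: inside an async function, rejections are handled with `try`/`catch` instead of `.catch()`. A small sketch (reusing a `calculate`-style helper of my own; the zero delay just keeps the example fast):

```javascript
// A calculate-style helper: resolves the sum, rejects on negative input.
const calculate = (a, b, c) =>
  new Promise((resolve, reject) => {
    setTimeout(() => {
      if (a < 0 || b < 0 || c < 0) return reject("no number can be negative.");
      resolve(a + b + c);
    }, 0);
  });

// try/catch replaces .then()/.catch() inside an async function.
const add = async () => {
  try {
    const sum = await calculate(1, 2, 3);
    return await calculate(sum, -1, 0); // rejects: negative input
  } catch (error) {
    return `handled: ${error}`;
  }
};

add().then((result) => console.log(result)); // handled: no number can be negative.
```

The rejection from the second `await` is thrown at the `await` expression, so the `catch` block sees it just like a synchronous exception.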
zahidulislam144
941,146
Why We Use React Js Instead of Angular Js?
Introduction The framework you choose to use is crucial in the development's success....
0
2021-12-31T08:51:15
https://dev.to/bhaviksadhu/why-we-use-react-js-instead-of-angular-js-54il
programming, angular, react
## Introduction

The framework you choose to use is crucial to a project's success. AngularJS and ReactJS remain the most popular JavaScript frameworks for development. You can utilize both of these frameworks to build mobile and web-based applications. Let us investigate the [differences between AngularJS and ReactJS](https://www.techavidus.com/blogs/angular-vs-react?utm_source=referral&utm_devto=devto&utm_campaign=content_sharing) to figure out which is the best choice for your requirements.

## What exactly is React JS?

React is a JavaScript library, developed by Facebook, that lets you build components for your user interface. To offer a flexible and efficient solution, React supports server-side rendering.

## What is the exact meaning of Angular?

Angular can be utilized to build dynamic web applications. Developers use HTML as a template language, extending HTML syntax to define the components of the application in a concise manner. Angular is a fully-featured JavaScript framework that assists in the creation of dynamic single-page web applications. It also helps structure the program.

## The Benefits Of React

The most important benefit of React JS is that it doesn't have a long learning curve. To grasp another important benefit, and why React may be superior to Angular, understanding the Document Object Model (DOM) and how it works is essential. The Document Object Model (DOM) is a model that defines the layout of a web page.

## The Benefits of Angular

Angular is a comprehensive Model-View-Controller (MVC) framework powered and maintained by Google.

**The benefits of React over the benefits of Angular**

If you ask about the advantages of AngularJS, most developers will refer to its implementation of the MVC pattern as the primary benefit.
Why should you prefer Angular or React over other frameworks that require you to divide your app into MVC components? Where other frameworks force you to create your own MVC pipeline to connect the parts, Angular manages the components for you.

## The disadvantages of both:

One drawback of ReactJS is that it is a library instead of a fully-fledged framework such as AngularJS. It is likely that you'll require different libraries to handle certain aspects of your application. Additionally, certain developers are not satisfied with ReactJS's documentation.

AngularJS technology offers a series of powerful benefits for your mobile or web application, but it also has its limits. The steep learning curve of the framework is its biggest drawback; however, the challenge of learning the framework has been exaggerated a bit.

**Data Binding:**

Data binding is the process that synchronizes data between the business logic and the user interface. Angular 2 differs from React JS in that it utilizes both one-way and two-way data binding: changes in the data affect the view, while changing the view can cause changes to the data.

React utilizes one-way binding, which means that when designing a React application, developers often create child components inside higher-order parent elements. One-way binding enhances the stability of the code and makes it easier to debug in a React rather than an Angular application. The two-way binding of Angular, in contrast, is what allows the framework to be more flexible and simpler to use.

**DOM and Performance:**

In fact, each of React and Angular is a fantastic front-end choice. They are also excellent for large-scale application development. Let's say you want to change the user's name within their personal profile. Every developer must be able to update their architecture to include new libraries and modules.
For starters, you have to install updates one version at a time. React can be described as a framework with full backward compatibility: you can add new versions of libraries into the application, as well as update older ones. If you plan to build your project gradually, adding new capabilities over time, React may be the most suitable choice due to its total backward compatibility.

## Which one is more fun to Work With, Angular or React?

**Can You Have Fun Working with Angular?**

Angular has excellent documentation and a multitude of built-in functions, allowing users to build complicated applications without the need for third-party packages. However, it has a steeper learning curve, which results in a longer ramp-up. People who are familiar with traditional statically typed OOP languages like C++, C#, or Java might find it easier to work in Angular, since TypeScript has a syntax similar to these languages.

**Does it feel enjoyable to work with React?**

React expects you to learn various third-party packages to build a multifaceted application, and its documentation is substantially smaller, yet it is of good quality and has numerous examples.

## What is the reason we use React?

For debugging isolated issues, we can make use of ReactJS, which allows developers to achieve stability in their apps. Moreover, the component-driven architecture of React allows us to reuse components, which saves developers' time and also money during development.

We're currently working on a web app that can help car owners schedule maintenance. It also tracks the number of visits to the car, which makes the process of buying and selling a vehicle completely safe. We decided to develop a progressive application to provide this service to mobile users. For single-page applications like this, each of AngularJS and ReactJS can be a great alternative. But they're completely different tools.
There is no absolute sense in which React is superior to Angular, or the reverse.

## Conclusion

AngularJS and ReactJS have their differences, but ReactJS developers and AngularJS developers are able to agree on one perspective: once you have some knowledge of both of them, you'll have the option to develop top-quality applications using either. Whether you use React or Angular JS, the framework you select will heavily be determined by the needs of your project, as well as individual preference. ReactJS is a good choice for developers with less experience, simply due to its simplicity. However, AngularJS development provides a complete solution for front-end development, which can benefit large-scale projects.
bhaviksadhu
955,372
Emulating the Sega Genesis - Part III
Originally published at jabberwocky.ca Written December 2021/January 2022 by...
16,249
2022-01-14T18:13:41
https://jabberwocky.ca/posts/2022-01-emulating_the_sega_genesis_part3.html
rust, emulator, sega, genesis
*Originally published at [jabberwocky.ca](https://jabberwocky.ca/posts/2022-01-emulating_the_sega_genesis_part3.html)*

###### *Written December 2021/January 2022 by transistor_fet*

A few months ago, I wrote a 68000 emulator in Rust named [Moa](https://jabberwocky.ca/projects/moa/). My original goal was to emulate a simple [computer](https://jabberwocky.ca/projects/computie/) I had previously built. After only a few weeks, I had that software up and running in the emulator, and my attention turned to what other platforms with 68000s I could try emulating. My thoughts quickly turned to the Sega Genesis and, without thinking about it too much, I dove right in. What started as an unserious half-thought of "wouldn't that be cool" turned into a few months of fighting documentation, game programming hacks, and my sanity, with some side quests along the way, all in the name of finding and squashing bugs in the 68k emulator I had already written.

This is Part III in the series. If you haven't already read [Part I](https://dev.to/transistorfet/emulating-the-sega-genesis-part-i-1ao5) and [Part II](https://dev.to/transistorfet/emulating-the-sega-genesis-part-ii-16k7), you might want to do so. Part I covers setting up the emulator, getting some game ROMs to run, and implementing the DMA and memory features of the VDP. Part II covers adding a graphical frontend to Moa, and then implementing a first attempt at generating video output. Part III is about debugging the various problems in the VDP and CPU implementations to get a working emulator capable of playing games.

For more details on the 68000 and the basic design of Moa, check out [Making a 68000 Emulator in Rust](https://dev.to/transistorfet/making-a-68000-emulator-in-rust-1kfk).
* [Previously](#previously)
* [Fixing The Colours](#fixing-the-colours)
* [Drawing A Blank](#drawing-a-blank)
* [What About Those Interrupts](#what-about-those-interrupts)
* [And Now For Something (A Little) Different](#and-now-for-something--a-little--different)
* [Back to the Genesis](#back-to-the-genesis)
* [VRAM Discrepancies](#vram-discrepancies)
* [You Can't Write There, Sir](#you-can-t-write-there--sir)
* [Fixing Sprites](#fixing-sprites)
* [Not All The Data](#not-all-the-data)
* [Scrolling The Scrolls](#scrolling-the-scrolls)
* [Fixing Line Scrolling](#fixing-line-scrolling)
* [Rewriting](#rewriting)
* [Conclusion](#conclusion)

Previously
----------

After about two weeks of work on adding Sega Genesis support to my emulator, I had implemented memory operations for the video display processor (VDP), and written a draw loop to generate the video frames according to the [SEGA documentation](https://segaretro.org/images/a/a2/Genesis_Software_Manual.pdf). The result of all that work was this:

<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/sonic1-broken-oct-26.png" title="Sonic 1 broken SEGA screen" />
</p>

This is Sonic 1 attempting to show the SEGA logo at startup. It's better than Sonic 2, which was just a black screen, a few log messages, and then... nothing... The few other games I tried were no better.

When I started this project, I thought it probably wouldn't be too hard to get something as simple as the SEGA logo working, but I was wrong. After spending a day or two fiddling with quick fixes that didn't fix much of anything, I committed my work in progress to git, so that I could track and undo any changes I made, and started in on some serious debugging. The following is my journey of debugging, on and off over the next six weeks, until I managed to get Sonic 2 running well enough to play.

Fixing The Colours
------------------

The most obvious thing that was wrong was the colours, so I looked into this first.
Since I couldn't be sure that all the data was getting into the VDP correctly, I needed to simplify the output a bit, so I wrote an alternate `draw_frame()` function to display just the patterns instead of the scroll tables. It would draw each pattern in memory across the screen from left to right, top to bottom, so that I could inspect them better. They might not look like a coherent picture, being only 8x8 pixels each and arranged in an unintended order, but it should at least show something. The result was this:

<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/patterns-broken-colours.png" title="Display all patterns in memory" />
</p>

There is definitely some kind of pattern data being displayed, because the patterns are not a solid colour, but the colours are clearly wrong. I was expecting some blue colours since it should be printing the SEGA logo.

For about a day I was doubting and testing the transfer of data into CRAM; I had found a minor bug earlier in that code. After staring at the values in CRAM for a while, I noticed that the colour values were actually correct. There were values of 0xEEE and 0xE00 and a few others, so it had to be a problem with reading the CRAM to get the u32 colour values. The code to convert CRAM values into colours was:

```rust
let rgb = read_beu16(&self.cram[((palette * 16) + colour) as usize..]);
(((rgb & 0xF00) as u32) >> 4) | (((rgb & 0x0F0) as u32) << 8) | (((rgb & 0x00F) as u32) << 20)
```

There had definitely been some problems with those complex shift operations, but the trickier problem turned out to be the index into CRAM, which was wrong. Since the CRAM is an array of u8, chosen in order to reuse the same transfer and DMA code as VRAM, I needed to multiply the index by 2 before reading the word at that location.
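As a sketch of the corrected lookup — assuming the same byte-array CRAM layout, with `read_beu16` replaced by a standard-library equivalent — the conversion might look like this (a standalone sketch, not Moa's actual code):

```rust
/// Convert one 9-bit Genesis colour from CRAM into a 0x00RRGGBB value,
/// mapping each 4-bit channel nibble into the high nibble of the output channel.
fn cram_colour(cram: &[u8], palette: u16, colour: u16) -> u32 {
    // Each CRAM entry is a 16-bit big-endian word, so the byte index
    // must be doubled -- forgetting the `* 2` was the bug here
    let index = (((palette * 16) + colour) * 2) as usize;
    let rgb = u16::from_be_bytes([cram[index], cram[index + 1]]) as u32;
    ((rgb & 0xF00) >> 4) | ((rgb & 0x0F0) << 8) | ((rgb & 0x00F) << 20)
}
```

With this, a white CRAM entry of 0xEEE comes out as 0x00E0E0E0, and a blue 0xE00 entry lands entirely in the low (blue) byte.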
Now the colours actually make sense:

<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/patterns-fixed-colours.png" title="Displaying all the patterns for the SEGA logo" />
</p>

Switching back to displaying the scrolls, I was now getting a white screen, but not much else. *sigh* In Sonic 1, parts of the SEGA logo were displayed if I only drew Scroll A or Scroll B, but displaying both together didn't work. I needed to add the mask colour, which is always colour 0 in each palette. I modified the `.blit()` method to not draw anything if colour 0 is used (later changed to 0xFFFFFFFF to avoid a conflict with the colour black, represented by 0), and now I was getting something.

<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/sonic1-sega-logo-slow.gif" title="It's working, but it's very slow" />
</p>

Now it's actually showing the SEGA logo! The scrolls are finally working, even if they still don't look right and the animation is painfully slow.

Drawing A Blank
---------------

While Sonic 1 seemed to at least try to display something, Sonic 2 and a few other games wouldn't display anything at all. With the various debug messages turned on, the logs showed each game initializing various devices and then getting caught in a loop where it would read the status word of the VDP over and over again. Clearly it was looking for a specific bit value in the status word before it would move on, but I didn't know which one.

The [status word](https://wiki.megadrive.org/index.php?title=VDP_Ports&t=20120714071022#Read) is returned when reading (instead of writing) from the VDP's control port. It contains a number of status flag bits and is one of the few ways the CPU can get feedback from the VDP, with interrupts being the other. In my existing code, the FIFO and NTSC bits were set statically, and the DMA bit was being set and reset during DMA operations, so it probably wasn't related to those.
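To keep those bits straight, the status word's flags can be written out as constants — bit positions as commonly documented for the Genesis VDP, with names that are illustrative rather than Moa's actual definitions:

```rust
// Status word flag bits, as commonly documented for the Genesis VDP
const STATUS_PAL: u16             = 1 << 0; // 1 = PAL, 0 = NTSC
const STATUS_DMA_BUSY: u16        = 1 << 1;
const STATUS_IN_HBLANK: u16       = 1 << 2;
const STATUS_IN_VBLANK: u16       = 1 << 3;
const STATUS_ODD_FRAME: u16       = 1 << 4; // interlace mode only
const STATUS_SPRITE_COLLISION: u16 = 1 << 5;
const STATUS_SPRITE_OVERFLOW: u16 = 1 << 6;
const STATUS_V_INTERRUPT: u16     = 1 << 7; // "V Interrupt Happened"
const STATUS_FIFO_FULL: u16       = 1 << 8;
const STATUS_FIFO_EMPTY: u16      = 1 << 9;

/// Check whether the status word says we are in the vertical blanking period.
fn in_vblank(status: u16) -> bool {
    status & STATUS_IN_VBLANK != 0
}
```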
Given that this problem happens right away, it's probably not looking at the sprite flags either. I reckoned it was something to do with the `HBLANK`/`VBLANK` bits, or possibly the `V Interrupt Happened` bit.

The `HBLANK` and `VBLANK` bits are set when the video output signal is in its blanking phases. On a CRT, it takes time after a line has been drawn for the electron beam to move back to the start of the next line and be ready to output the next line of data. It also takes time (a lot more time) after the entire screen has been drawn for the beam to move back to the top of the screen to start the next refresh. Since the video signal's data is output directly to the CRT as soon as it's received (the joys of analogue signals), the video signal itself needs to incorporate these blanking delays, during which no data is sent.

These blanking periods just so happen to be convenient times for the CPU to update or change data in the VDP, when those changes won't affect the output. This is especially important during the vertical blanking period, when the positions of everything on the screen can be updated at once before the next frame is drawn, to prevent artifacts and glitches in the image.

I was moving fast to get something working, so I quickly implemented the vertical blanking bit by setting it just before the end of the frame, at 14_218_000 ns, and then resetting it at 16_630_000 ns, when the frame is drawn and the vertical interrupt is triggered. This worked for the time being, but it turned out to cause another error that slowed the animation down by half, which I didn't notice until after I had the scrolling working. It wasn't until I could actually play the games that I noticed the problem, and by that point I had forgotten about this bit. It took me a day or two of debugging before I finally tracked the problem down to the `VBLANK` bit.
After the vertical interrupt occurs, some games busy-wait until the vertical blanking bit is set before actually running the game loop. Sonic 2 is one such game, but Sonic 1 doesn't do this check. Since the bit was only set about 2 ms before the next vertical interrupt, the game's frame updater would only start 2 ms before the next interrupt, and would still be updating the frame at that point, so it would ignore the second vertical interrupt. As a result, it would take two frames of time (two vertical interrupts) before one frame of the game would be drawn, and only one cycle of the game loop would execute. Sonic was moving at exactly half speed. Doubling the amount of simulated time fixed the issue (which didn't make any sense at first). I even went to the trouble of implementing more accurate instruction timing in the 68000 to see if the problem was caused by all the instructions previously running in 4 clock cycles.

The following code, from the updated VDP's `.step()` function, shows the fixed blanking behaviour, including the `VBLANK` bit handling. The `HBLANK` code looks similar but with different timing values.

```rust
self.state.v_clock += diff;
if (self.state.status & STATUS_IN_VBLANK) != 0 && self.state.v_clock >= 1_205_992 && self.state.v_clock <= 15_424_008 {
    self.state.status &= !STATUS_IN_VBLANK;
}
if (self.state.status & STATUS_IN_VBLANK) == 0 && self.state.v_clock >= 15_424_008 {
    self.state.status |= STATUS_IN_VBLANK;
    ... // Vertical Interrupt and Frame Update Code
}
if self.state.v_clock > 16_630_000 {
    self.state.v_clock -= 16_630_000;
}
```

<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/sonic2-sega-almost-working.png" title="Finally the SEGA logo displays correctly" />
</p>

Finally! The SEGA logo in Sonic 2 is (almost) displaying correctly. There are a few glitches in the logo, but that's because I hadn't implemented the reverse patterns yet.
Adding support for that fixed the logo right up.

What About Those Interrupts
---------------------------

While Sonic 2 was now advancing enough to show the scrolls, it was very slow, the same as Sonic 1: from the start of the program, through displaying the logo, to finally reaching the title screen was taking half a minute or more. My first suspicion was to check the interrupts, since it's usually the vertical interrupt that drives the progression of time in these games. It's a reliable signal for knowing how long to show the logo screen, or when to read the controller input, calculate movement, and then update the screen. Turning on the debugging output for the interrupts showed that they weren't occurring anywhere near as fast as they should have been. It would take seconds before an interrupt occurred, and they occurred randomly rather than at a regular pace. I wasn't all that surprised, given that I knew there were issues with the implementation; I had run into problems with interrupts when working on Computie support, but I hadn't been sure how I wanted to fix them. Now I *needed* to fix them.

In the original implementation, there was a trait for `Interruptable` devices with a function that would be called by the interrupt controller when an interrupt occurred, which would trigger the interrupt handler. That works in theory, but an interrupt might not be handled right away if interrupts are disabled, and the callback might not be re-called when interrupts were re-enabled. There was also no mechanism for acknowledging an interrupt, and the 68k implementation's handling of the interrupt priority mask was buggy. The result was that interrupts would only occur when everything happened to line up, which wasn't very often.

For the 68000, an interrupt can occur with a priority between 1 and 7. A higher number is a higher priority, and interrupts below a certain priority can be disabled using a priority mask value stored in the `%sr` register.
When an interrupt occurs, the CPU will check that priority number against the priority mask. If the requested interrupt number is strictly higher than the mask, then the `%sr` and `%pc` registers will be pushed onto the stack, the priority mask will be changed to the current number (to prevent duplicate handling of the same interrupt), and the handler will be run. If the interrupt priority equals or is lower than the mask, the CPU will keep running whatever it had been running before, at least until the priority mask changes or a higher-priority unmasked interrupt occurs.

For devices like the serial controller in Computie, the interrupt signal will be asserted and stay asserted until the cause of the interrupt is manually acknowledged by writing a certain value to the serial controller. For the Genesis, on the other hand, the interrupts behave more like one-shots: there is no manual acknowledgement, and the signal should be de-asserted as soon as it's acknowledged, essentially. As for the CPU, if an interrupt is masked when the signal is asserted, and then unmasked while the signal is still asserted, it will run the handler (i.e. the interrupt signals are level-triggered, not edge-triggered). If the signal goes away before the interrupt is unmasked, the handler will never be run.

In hardware, interrupts will only be checked at a certain point in the CPU's cycle, usually between the execution of instructions, so it's actually pretty reasonable for the emulated CPU to manually check for interrupts at the end of an instruction cycle. All it has to do is check the interrupt controller object in `System`. The `Interruptable` trait wasn't needed anymore. Devices call the interrupt controller to set an interrupt, and the CPU calls the interrupt controller to check if any are active. It's not a terribly complicated problem, but it's easy to get wrong in subtle ways, such that it might work for some devices but not others.
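The per-instruction check described above boils down to a comparison between the pending priority and the mask. A minimal sketch, with hypothetical names rather than Moa's actual code:

```rust
/// Decide whether a pending interrupt should be taken, per the 68000's
/// rule that the request must be strictly higher than the mask in %sr.
fn should_interrupt(pending: Option<u8>, priority_mask: u8) -> bool {
    // Because the signal stays asserted until acknowledged, repeating this
    // check at every instruction boundary gives level-triggered behaviour:
    // a request masked now is still taken once the mask drops below it.
    matches!(pending, Some(priority) if priority > priority_mask)
}
```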
<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/sonic2-sprites-broken.gif" title="Sonic 2 running with broken sprites" />
</p>

Now it runs at what seems like the right speed! Let's ignore, for a minute, the other glaring issues...

And Now For Something (A Little) Different
------------------------------------------

At this point, I had the colours and interrupts sorted out, the scrolls were being displayed somewhat correctly, and the sprites were sort of working, but multi-cell sprites were still broken. Everything I had tried to fix the sprites hadn't worked, and I had no idea if the cause was the VDP implementation or a bug in the CPU. And to make matters worse, Sonic was falling through the floor during gameplay. And here I got stuck.

I had been doing nothing but debugging for a week at this point, three weeks after starting on the Genesis and about five weeks since I had started the emulator. I had made good progress, but this last week was a grind. There were multiple issues, both in the VDP and the CPU. I had already fixed the few things that really stood out, but I was running out of threads to pull on, and getting frustrated. I needed to try something else.

I had not yet proven out the 68000 implementation, so some of the problems I was encountering could be in there and not in the VDP code. There was no easy way to tell where a problem was without tracing a lot of assembly code to figure out what it was *supposed* to do, looking for a one-bit change somewhere in the CPU registers or in memory. I needed a way to test the 68000 better, and why not try implementing another system? The Macintosh 512k also used the 68000, and it's a fairly simple computer in terms of I/O. It had a very basic video display circuit made from generic logic that looped through memory addresses and shifted the bits into the video output stream. The display only supported black and white, so each pixel was a single bit that was either on or off.
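That shifting circuit is simple enough to model in a few lines. Here is a sketch that expands one framebuffer byte into eight pixels, most significant bit first (the 32-bit colour values are arbitrary stand-ins, not taken from Moa):

```rust
/// Expand one byte of a 1-bit-per-pixel framebuffer into eight pixels,
/// MSB first; on the Mac a set bit is a black pixel on a white screen.
fn expand_mono_byte(byte: u8) -> [u32; 8] {
    let mut pixels = [0u32; 8];
    for bit in 0..8 {
        // Shift out each bit in turn, just as the video circuit did in hardware
        pixels[bit] = if byte & (0x80 >> bit) != 0 { 0x0000_0000 } else { 0x00FF_FFFF };
    }
    pixels
}
```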
The ROMs that were embedded on the motherboard are available at [archive.org](https://archive.org/details/mac_rom_archive_-_as_of_8-19-2011), so I started making some devices and running the ROMs to see if I could find some bugs in the 68000 emulation alone.

At the same time, I looked into implementing the Z80 that the Genesis would also need. Some games seemed to get stuck waiting for the non-existent Z80 to respond, so I thought I might as well start a Z80 implementation too. It would be something different to work on when I was stuck on everything else, and at least then I'd make some progress, which would encourage me to keep going.

In order to develop a Z80 implementation, I needed some Z80 code to run on it, and any I/O devices that the code needed. I could write my own Z80 code of course, but that wouldn't test the implementation well enough, beyond the basic functioning of the instructions. I needed code for an existing platform, with all its expectations of how the real system behaves embedded in its logic, and that meant implementing devices for an existing platform. I looked around for the simplest Z80 platform I could find, which turned out to be the TRS-80. I'm not the biggest fan of the TRS-80, but I did have a "Model I" in my computer collection at one point (that I sadly had to sell), so it wasn't entirely foreign to me. I could get away with implementing just the video display and the keyboard in order to run the BASIC interpreter that comes in its ROM.

Over the next month, I mostly worked on these sub-projects, as well as on another Computie hardware iteration. The TRS-80 implementation came together fairly smoothly, apart from a bug in the Z80 implementation's shift operation that took me a day or two of tracing the [Level I BASIC ROM's assembly code](http://48k.ca/L1Basic.html) to fix. (Thanks to George Phillips for the well-documented assembly code). The Macintosh implementation didn't go as smoothly, however.
I did manage to find and fix a few bugs in the 68000, and I got far enough to display the Dead Mac screen, but I got stuck just before the end of the ROM's initialization, where it opens the default device drivers. At some point, it attempts to write to a location in the ROM. In hardware that shouldn't have an effect, except that I have some code in Moa to raise an error when that happens, since it's likely a bug. Ignoring that error didn't make it get any farther. I couldn't for the life of me find out what was wrong, but at one point, using another [emulator](https://github.com/TomHarte/CLK), I was able to confirm that if the ROMs run on a system that doesn't mirror the RAM and ROM addresses exactly as the hardware does, the ROM won't boot. *facepalm* Effort went into making sure the Macintosh was not cloned like the IBM PC, so I was fighting against those efforts as well. After a while, I decided to give the Genesis another try.

Back to the Genesis
-------------------

After getting stuck on the Macintosh implementation, I picked up the Genesis again. I had spent almost an entire month away. In that time, I had worked on another hardware revision for Computie and written the article "Making a 68000 Emulator In Rust". I had also improved the Moa debugger, implemented the Z80 entirely, filled in a number of missing 68k instructions, and finished implementing all of the 68k instruction decoding (although a few instructions are still not implemented because they aren't used by any code I've tried to run). I had also fixed some bugs in existing instructions, such as MOVEM, which copies data to or from multiple registers at a time. Perhaps some things would be fixed now?

On the surface though, the results were the same as last time. The scrolls were mostly working, but the sprites were broken, and Sonic was still falling through the floor to his death.
I had added the Z80 coprocessor into the system now that it was implemented (I might as well), but I had left the Z80 address space as one big 64 KB `MemoryBlock`. The Z80 alone didn't change anything in Sonic 2, or in Sonic 1, which was still getting stuck at the title screen as it had before.

I needed a way to isolate the drawing of sprites so I could better figure out what was wrong, and it was only at this point that it occurred to me to search for demo and test ROMs that might help. That immediately turned up [ComradeOj's demos](https://www.mode5.net/), particularly Tiny Demo, which scrolls some text across the screen, and GenTest v3, which contains a number of screens with different graphics to test for possible issues, including a display of some static sprites.

I also came across the [BlastEm](https://www.retrodev.com/blastem/) emulator, written in C, which has a built-in debugger. I was able to modify and compile a local version that dumps out the contents of VRAM at a specific point in a ROM's execution. With this, I could verify that the data in VRAM in Moa was correct and that the DMA and transfer code was in fact working correctly. I ended up not digging into the BlastEm code much beyond this, but the validation it provided was extremely helpful.

VRAM Discrepancies
------------------

<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/tinydemo-broken.png" title="TinyDemo running inside moa but broken" />
</p>

The above image is the result of running TinyDemo. Clearly the text is all garbled, but I hadn't a clue what could be causing it. At least it was a very small ROM, with straightforward assembly code. The first thing I could do was try to isolate where the problem was: was it caused by getting data into VRAM, or was it somewhere else? I started by running the demo in BlastEm and dumping the VRAM at the point in the ROM just after the VDP is initialized, at address `0xDE`.
I went to the same point in Moa and again dumped the contents of VRAM to compare them. From BlastEm, the start of VRAM where the patterns are stored looks like this:

```
0000: 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0010: 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0020: 0x0110 0x0110 0x0110 0x0110 0x0110 0x0110 0x0111 0x1110
0030: 0x0110 0x0110 0x0110 0x0110 0x0110 0x0110 0x0000 0x0000
0040: 0x0011 0x1111 0x0011 0x0000 0x0011 0x0000 0x0011 0x1110
0050: 0x0011 0x0000 0x0011 0x0000 0x0011 0x1111 0x0000 0x0000
0060: 0x0110 0x0110 0x0110 0x0110 0x0110 0x0110 0x0011 0x1100
0070: 0x0001 0x1000 0x0001 0x1000 0x0001 0x1000 0x0000 0x0000
0080: 0x0011 0x1111 0x0011 0x0000 0x0011 0x0000 0x0011 0x1110
0090: 0x0011 0x0000 0x0011 0x0000 0x0011 0x0000 0x0000 0x0000
00a0: 0x0111 0x1110 0x0001 0x1000 0x0001 0x1000 0x0001 0x1000
00b0: 0x0001 0x1000 0x0001 0x1000 0x0001 0x1000 0x0000 0x0000
00c0: 0x0011 0x0000 0x0011 0x0000 0x0100 0x0000 0x0000 0x0000
00d0: 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
00e0: 0x0111 0x1110 0x0110 0x0011 0x0110 0x0011 0x0111 0x1110
00f0: 0x0110 0x1000 0x0110 0x0110 0x0110 0x0111 0x0000 0x0000
```

And from Moa, it looks like this:

```
0000: 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0010: 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0020: 0x0110 0x0110 0x0110 0x0110 0x0110 0x0110 0x0111 0x1110
0030: 0x0110 0x0110 0x0110 0x0110 0x0110 0x0110 0x0000 0x0000
0040: 0x0011 0x1111 0x0011 0x1111 0x0011 0x1111 0x0011 0x1111
0050: 0x0000 0x0011 0x0011 0x1111 0x0011 0x1111 0x0011 0x0011
0060: 0x1100 0x0110 0x0110 0x0110 0x0110 0x0110 0x0000 0x0000
0070: 0x1100 0x1100 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
0080: 0x0011 0x1111 0x0011 0x1111 0x0011 0x1111 0x0011 0x1111
0090: 0x0000 0x0011 0x0011 0x1111 0x0011 0x1111 0x0000 0x0000
00a0: 0x1111 0x1110 0x0110 0x1100 0x0000 0x0000 0x0000 0x0000
00b0: 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000 0x0000
00c0: 0x0011 0x1111 0x0011 0x1111 0x0011 0x0100 0x0000 0x0011
00d0: 0x1111 0x1111 0x1111 0x1111 0x1111 0x1111 0x1111 0x1111
00e0: 0x1111 0x1110 0x0110 0x0110 0x1111 0x0110 0x1111 0x0110
00f0: 0x0110 0x0110 0x1011 0x0000 0x0110 0x0111 0x1110 0x0000
```

It's almost the same, but if you look closely there are a few discrepancies. Of course I was expecting the cause to be the transfer code, but I traced the assembly for TinyDemo to see where the data in VRAM was coming from. There's a loop that simply copies data from RAM address `0xFF0000` into VRAM address `0x0000`. I dumped the contents of RAM at that location and, sure enough, the difference occurred there too, so it was something further up the chain. Finally I was making some progress, now that I could narrow down the problems better. Tracing back in the disassembled output quickly led to the `decompress` function, which loads and decompresses the raw binary data in the ROM into an in-memory representation that the VDP can use.

```asm68k
...
1f2: e24d           lsrw #1,%d5        ; start of decompress loop
1f4: 40c6           movew %sr,%d6
1f6: 51cc 000c      dbf %d4,0x204
1fa: 1f5d 0001      moveb %a5@+,%sp@(1)
1fe: 1e9d           moveb %a5@+,%sp@
200: 3a17           movew %sp@,%d5
202: 780f           moveq #15,%d4
204: 44c6           movew %d6,%ccr
206: 6404           bccs 0x20c
208: 12dd           moveb %a5@+,%a1@+
20a: 60e6           bras 0x1f2         ; jump to start of outer loop
20c: 7600           moveq #0,%d3
20e: e24d           lsrw #1,%d5
210: 40c6           movew %sr,%d6
212: 51cc 000c      dbf %d4,0x220
216: 1f5d 0001      moveb %a5@+,%sp@(1)
21a: 1e9d           moveb %a5@+,%sp@
21c: 3a17           movew %sp@,%d5
21e: 780f           moveq #15,%d4
220: 44c6           movew %d6,%ccr
222: 652c           bcss 0x250
224: e24d           lsrw #1,%d5
226: 51cc 000c      dbf %d4,0x234
22a: 1f5d 0001      moveb %a5@+,%sp@(1)
22e: 1e9d           moveb %a5@+,%sp@
230: 3a17           movew %sp@,%d5
232: 780f           moveq #15,%d4
234: e353           roxlw #1,%d3
236: e24d           lsrw #1,%d5
238: 51cc 000c      dbf %d4,0x246
23c: 1f5d 0001      moveb %a5@+,%sp@(1)
240: 1e9d           moveb %a5@+,%sp@
242: 3a17           movew %sp@,%d5
244: 780f           moveq #15,%d4
246: e353           roxlw #1,%d3
248: 5243           addqw #1,%d3
24a: 74ff           moveq #-1,%d2
24c: 141d           moveb %a5@+,%d2
24e: 6016           bras 0x266
250: 101d           moveb %a5@+,%d0
252: 121d           moveb %a5@+,%d1
254: 74ff           moveq #-1,%d2
256: 1401           moveb %d1,%d2
258: eb4a           lslw #5,%d2
25a: 1400           moveb %d0,%d2
25c: 0241 0007      andiw #7,%d1
260: 6710           beqs 0x272
262: 1601           moveb %d1,%d3
264: 5243           addqw #1,%d3
266: 1031 2000      moveb %a1@(0000000000000000,%d2:w),%d0
26a: 12c0           moveb %d0,%a1@+
26c: 51cb fff8      dbf %d3,0x266
270: 6080           bras 0x1f2         ; jump to the start of the outer loop
...
```

The above snippet only shows the main loop of the decompress function, and not the beginning and ending parts of the function. Instructions `0x266` and `0x26a` are where a byte of data is written to the location in RAM where the decompressed data goes, and which will then be loaded into VRAM verbatim. I knew from the above dumps that the first byte that differs occurs at offset 0x46, and dumping the registers showed the address `0xFF0000` in register `%a1`, which is incremented each time the loop runs. To get to the point of failure, I just needed to set a breakpoint at `0x266`, continue until register `%a1` contained `0xFF0046`, and then dump all the register values to look for a difference between Moa's register values and BlastEm's.

Aha! The value of `%d6` is different. Moa has 0x2710 while BlastEm has 0x2700. Looking at the disassembly, the only use of `%d6` is to temporarily hold the contents of the flags register (`%ccr`, which is the lower byte of the status register `%sr`). The flag register values are also different! The `Extend` bit, which is the 5th bit in the status register, is the only difference between the two emulators. I was already suspicious of the flags, since they are rather complicated to simulate and can behave differently for different instructions. Of all the flags, `Extend`, which isn't used by many instructions, is probably the one I'm not emulating correctly, so I seemed to be on the right track.
Stepping through the program in BlastEm shows that the `Extend` flag is set after the `lsrw #1,%d5` instruction, which occurs a few times in the function. The [Motorola documentation](https://www.nxp.com/files-static/archives/doc/ref_manual/M68000PRM.pdf) for the LSR instruction shows that both the `Extend` flag and the `Carry` flag should be set to the bit value shifted out (the least significant bit). The Rust code for the `LSd` instruction, which sets the flags, is shown below.

```rust
self.set_logic_flags(pair.0, size);
if pair.1 {
    self.set_flag(Flags::Carry, true);
    self.set_flag(Flags::Extend, true);
}
```

I must have assumed that the `.set_logic_flags()` function would clear the `Extend` flag when I originally wrote this code, as it does for the other four flags. Most logic operations don't affect the `Extend` flag though, so the `.set_logic_flags()` function only clears the lower 4 bits (the `Extend` flag being the 5th bit). After the call, the `Extend` and `Carry` flags are set to true only if the bit shifted out, which is stored in `pair.1`, is true. If the `Extend` flag was set to true by a previous instruction, it wouldn't be cleared. That was enough of a discrepancy to cause the garbled text, and a whole lot more. Effing flags...

While the `Extend` flag is never directly tested in a comparison in this function, there are some `ROXd` instructions (where d is the direction, (L)eft or (R)ight). Unlike the `ROd` instruction, which rotates bits within the same value, the `ROXd` instruction rotates through the `Extend` flag, so the value in `Extend` will be put into the number (at either the left or right end), and the bit rotated out of the opposite end will be put into `Extend`. So an error in the `Extend` flag could definitely cause some problems in the `decompress` code.

Adding a line of code to clear the `Extend` flag before the `.set_logic_flags()` function is called is enough to fix it. Now the text in the demo is showing legibly.
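The corrected behaviour can be modelled in isolation. The following sketch (standalone, not Moa's actual code) computes an LSR-by-one against a status register, clearing `Extend` along with `Carry` before conditionally re-setting both, plus a matching ROXL-by-one that rotates through `Extend`:

```rust
const FLAG_CARRY: u16 = 0x0001;
const FLAG_EXTEND: u16 = 0x0010; // the 5th bit of the status register

/// LSR by one: Carry and Extend both take the shifted-out bit.
/// Clearing Extend first is the fix -- the buggy version only cleared
/// the lower flags, so a stale Extend could survive.
fn lsr_word(value: u16, mut sr: u16) -> (u16, u16) {
    let shifted_out = value & 1 != 0;
    sr &= !(FLAG_CARRY | FLAG_EXTEND);
    if shifted_out {
        sr |= FLAG_CARRY | FLAG_EXTEND;
    }
    (value >> 1, sr)
}

/// ROXL by one: the old Extend bit rotates into the low end, and the
/// bit rotated off the high end becomes the new Extend (and Carry).
fn roxl_word(value: u16, mut sr: u16) -> (u16, u16) {
    let rotated_out = value & 0x8000 != 0;
    let result = (value << 1) | ((sr & FLAG_EXTEND != 0) as u16);
    sr &= !(FLAG_CARRY | FLAG_EXTEND);
    if rotated_out {
        sr |= FLAG_CARRY | FLAG_EXTEND;
    }
    (result, sr)
}
```

A stale `Extend` fed into `roxl_word` changes the rotated result, which is exactly how the bug corrupted the decompressed data.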
It's still nothing like what it looks like in BlastEm, which has a moving background that stretches the text vertically, but I'm still calling it a win.

<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/tinydemo-sorta-working.png" title="TinyDemo running inside Moa" />
</p>

And looking at Sonic 2, it's still very garbled, but Sonic is no longer falling to his death! The `Extend` flag in the shift and rotate instructions was the cause of whichever comparison led to Sonic not being on firm ground. I didn't even have to dig into the source of that problem in the Sonic 2 ROM to fix it, which was a relief.

<p align="center">
  <img src="https://jabberwocky.ca/posts/images/2022-01/sonic2-dma-broken.png" title="Sonic 2 scrolls still broken" />
</p>

You Can't Write There, Sir
--------------------------

Switching gears, I tried GenTestV3, which would immediately fail when run because it attempted to write to what should have been a read-only memory area (the ROM data itself). I had added a way to mark a `MemoryBlock` as read-only, which would raise an error when the `.write()` function was called on that block, as a means of catching errors. It had helped catch a few things when working on the Macintosh support, so I had added it to the Genesis ROMs as well. Since I was getting an error when the attempted write occurred, I knew exactly where the fault was, address `0x2976`, and I also knew what the values of the registers at that point were:

```objdump
...
292c: 7000           moveq #0,%d0
292e: 7200           moveq #0,%d1
2930: 7400           moveq #0,%d2
2932: 7600           moveq #0,%d3
2934: 7800           moveq #0,%d4
2936: 7a00           moveq #0,%d5
2938: 7c00           moveq #0,%d6
293a: 7e00           moveq #0,%d7
293c: 207c 0000 0000 moveal #0,%a0
2942: 227c 0000 0000 moveal #0,%a1
2948: 247c 0000 0000 moveal #0,%a2
294e: 287c 0000 0000 moveal #0,%a4
2954: 2a7c 0000 0000 moveal #0,%a5
295a: 2e7c 0000 0000 moveal #0,%sp
2960: 4ed6           jmp %fp@
2962: 303c 7fff      movew #0x7fff,%d0
2966: 207c 00ff 0000 moveal #0xff0000,%a0
296c: 30fc 0000      movew #0,%a0@+
2970: 51c8 fffa      dbf %d0,0x296c
2974: 4ed2           jmp %a2@
2976: 297c 4000 0000 movel #0x40000000,%a4@(4)    ; invalid write here
297c: 0004
297e: 383c 7fff      movew #0x7fff,%d4
2982: 38bc 0000      movew #0,%a4@
2986: 51cc fffa      dbf %d4,0x2982
298a: 4ed2           jmp %a2@
...
```

And the register values:

```
Breakpoint reached: Attempt to write to read-only memory at 4 with data [64, 0] @ 18201056 ns
0x0000297e: 383c 7fff      movew #00007fff, %d4

Status: Running
PC: 0x0000297e
SR: 0x2700
D0: 0x00000000    A0: 0x00000000
D1: 0x00000000    A1: 0x00000000
D2: 0x00000000    A2: 0x00002592
D3: 0x00000000    A3: 0x00000000
D4: 0x00000000    A4: 0x00000000
D5: 0x00000000    A5: 0x00000000
D6: 0x00000000    A6: 0x00002588
D7: 0x00000000    SSP: 0x00000000
USP: 0x00000000
Current Instruction: 0x0000297e MOVE(Immediate(32767), DirectDReg(4), Word)

0x00000000: 0x00ff 0xfffe 0x0000 0x0200 0x0000 0x30e2 0x0000 0x30ee
0x00000010: 0x0000 0x3076 0x0000 0x308e 0x0000 0x309a 0x0000 0x30a6
0x00000020: 0x0000 0x30b2 0x0000 0x30be 0x0000 0x30ca 0x0000 0x30d6
0x00000030: 0x0000 0x306a 0x444f 0x4e27 0x5420 0x4c4f 0x4f4b 0x2041
```

The register `%a4` contains `0x00000000`, plus an offset of 4, so it's trying to write to address `0x00000004`, the reset vector. That can't possibly be right. In BlastEm, I tried setting the same address as a breakpoint and, would you look at that, the breakpoint isn't reached! That code isn't even running in BlastEm when GenTest is run.
If you notice from the snippet above, `jmp` instructions are being used to return to the calling function, and `bra`nch instructions are being used to make the calls, rather than using the stack. So the return address is not on the stack, but in register `%a2`, which contains `0x2592`, the address of the instruction *after* the one that called this function. We're on to something here.

```objdump
    256c:       6700 dcae       beqw 0x21c
    2570:       60d8            bras 0x254a
    2572:       7400            moveq #0,%d2
    2574:       3e7c 0000       moveaw #0,%sp
    2578:       2c7c 0000 0000  moveal #0,%fp
    257e:       4df9 0000 2588  lea 0x2588,%fp
    2584:       6000 03a6       braw 0x292c     ; jump to a different function (shown above)
    2588:       45f9 0000 2592  lea 0x2592,%a2
    258e:       6000 03e6       braw 0x2976     ; jump to the troublesome function
    2592:       297c 6000 0002  movel #1610612738,%a4@(4)
    2598:       0004
    259a:       4bf9 0000 8156  lea 0x8156,%a5
```

Address `0x258e` contains a branch instruction to the exact address that the erroneous write occurs on, and before that, the return register `%a2` is loaded with the return address. What about the instruction before that? It's a branch to `0x292c`, which appears in the previous snippet and seems to be a function that sets all the register values to `0`! Wait, why would it do that? The register values were almost all `0` when the error occurred, except for the two registers used as return addresses, so it did run that code, but why would it clear everything just before using a now-zeroed register as an address?

I set a breakpoint for `0x2572`, which looked like the start of the current function, given that there's a branch instruction just before. The `%a4` register, interestingly enough, contains `0xc00000`, which would make sense as the intended value of `%a4` where the erroneous write occurred, if all the registers hadn't been cleared just before. Most of the other registers are `0` except for `%a2`, which contains `0x2554`, possibly the return address of the caller.

```objdump
...
    253e:       297c 6000 0002  movel #1610612738,%a4@(4)
    2544:       0004
    2546:       6000 0396       braw 0x28de
    254a:       45f9 0000 2554  lea 0x2554,%a2
    2550:       6000 04f8       braw 0x2a4a
    2554:       1e03            moveb %d3,%d7
    2556:       0007 00ef       orib #-17,%d7
    255a:       0c07 00ef       cmpib #-17,%d7  ; the value of %d7 should
                                                ; be 0xff, but it's 0xef
    255e:       6700 0012       beqw 0x2572     ; this is where the problem
                                                ; occurs (shouldn't jump but does)
    2562:       1e03            moveb %d3,%d7
    2564:       0007 00bf       orib #-65,%d7
    2568:       0c07 00bf       cmpib #-65,%d7
    256c:       6700 dcae       beqw 0x21c
    2570:       60d8            bras 0x254a
...
```

There's a jump at `0x255e` to the start of our function that shouldn't run, which... isn't quite what I was expecting. I was somehow expecting the previous code to make sense, but alright, it's maybe taking a jump that shouldn't happen (even though it seems like it should *never ever* happen), so why is it jumping when it shouldn't?

I set a breakpoint for `0x2554` in both emulators to see if that code would run and this time, BlastEm runs that code. Stepping through the code in both emulators shows the status register values are different just after the comparison at `0x255a`. *groan* Not the flags again. Looking closer at the code though, the values of `%d7` are different between the emulators as well. The comparison in Moa is setting the flags correctly for the data used, but the data values are different, and so BlastEm doesn't make the branch where Moa does. Ok, so maybe it's not the flags this time.

So why are the values of `%d7` different? Well, it's set just a few instructions earlier with the lower byte value of `%d3`, which in Moa is `0`. In BlastEm, it's `0xff`. Aha! So where is `%d3` set? It's not set in the code just above the comparison, but there is a branch to `0x2a4a` just before, which looks like a register-returning function call, and the code at that location does change `%d3`.
```objdump
    2a4a:       7600            moveq #0,%d3
    2a4c:       7e00            moveq #0,%d7
    2a4e:       13fc 0040 00a1  moveb #0x40,0xa10009
    2a54:       0009
    2a56:       13fc 0040 00a1  moveb #0x40,0xa10003
    2a5c:       0003
    2a5e:       4e71            nop
    2a60:       4e71            nop
    2a62:       1639 00a1 0003  moveb 0xa10003,%d3
    2a68:       0203 003f       andib #0x3f,%d3
    2a6c:       13fc 0000 00a1  moveb #0,0xa10003
    2a72:       0003
    2a74:       4e71            nop
    2a76:       4e71            nop
    2a78:       1e39 00a1 0003  moveb 0xa10003,%d7
    2a7e:       0207 0030       andib #0x30,%d7
    2a82:       e50f            lslb #2,%d7
    2a84:       8607            orb %d7,%d3
    2a86:       4ed2            jmp %a2@
```

Tracing through the debuggers shows that this is the code where BlastEm gets `0xff` into register `%d3`, and it's doing it by reading the controller input. `0xa10003` is the byte address of the data port for controller 1, and `0xa10009` is the control port for controller 1. I had taken a stab at implementing the weird [TH counting](https://segaretro.org/Sega_Mega_Drive/Control_pad_inputs) that the controllers need to do, but I hadn't tested it. I had only hooked up the Start button to a key press, which was all I had needed up until this point to get through the title screen to the game play. Here, from the code, it seemed as if the correct behaviour, at least according to how BlastEm worked, was for the controllers to return `0xff` when no buttons are pressed, rather than `0`.

Changing that one thing in Moa got to the first screen of GenTest asking which test to run! Success! Well, I still needed to fix the controllers properly, since button presses still didn't work, but this was at least the cause of GenTest not running. There turned out to be quite a few minor bugs in the TH counting code. The count was incrementing twice as often as it should have, and the button states needed to be inverted (1 means the button is not pressed and 0 means it is). I also needed to reset the counter when the control port was written to, for the count to be in sync with what the ROM was expecting. Not all ROMs progressed through the entire count, if they only needed to read a few buttons.
Eventually I got it sorted out and buttons were working, but it took a while to get them right. The latest code for the controllers is [here](https://github.com/transistorfet/moa/blob/main/src/peripherals/genesis/controllers.rs).

Fixing Sprites
--------------

I had been back at it for about 4 or 5 days now and I had already ticked off two major issues. I could now control the characters in game play, even though I couldn't see much of what was going on still. The elephant in the room was those sprites not working, so with my enthusiasm high, I pressed on to tackle the sprites.

Fixing the `Extend` flag bug fixed Sonic falling through the floor to his death, so that was a significant step forward, but multi-cell sprites were still being drawn incorrectly. Luckily the GenTest ROM has a page that displays a static multi-pattern sprite, both forward and reversed.

<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/gentest-sprites-broken.png" title="GenTest sprites broken" />
</p>

The forward sprite (Knuckles) works fine, but the reversed sprite (Sonic) is messed up. If you look closely, you can see the vertical columns of cells seem to line up correctly, but the horizontal arrangement of the columns is mixed up.

This one turned out to be a bit subtle. I had tried fiddling with reversing the cell drawing order in multi-cell sprites but to no avail. It turned out that when switching the reverse-sprite code, I was changing both the order of the cells and also reversing the positions they were drawn in, rather than switching only one. I was also adjusting both coordinates instead of just the horizontal arrangement. At the time I didn't have a way of drawing just one sprite in one location to inspect it closely enough, but the GenTest ROM made it much clearer what was wrong. I also had an off-by-one error with reversed sprites, where I needed to subtract one from the size in order to get the right vertical row of patterns to use.
First, the existing code is shown below. Note: Multi-cell sprites are drawn top to bottom, left to right, unlike everything else in the Genesis, so the outer loop is for the horizontal direction, and the inner loop is the vertical direction. The variables that appear are defined as follows:

- `pattern_name` is the 16-bit pattern specifier
- `(h_pos, v_pos)` is the pixel position on screen where the sprite should be drawn
- `(size_h, size_v)` is the size in cells of the sprite
- `(h_rev, v_rev)` are bools of whether the sprite should be reversed in a given direction
- `self.is_sprite_on_screen(x, y)` returns whether those pixel positions are on-screen (sprites can be entirely off the screen, in which case they won't be drawn)

```rust
for ih in 0..size_h {
    for iv in 0..size_v {
        let h = if !h_rev { ih } else { size_h - ih };
        let v = if !v_rev { iv } else { size_v - iv };
        let (x, y) = (h_pos + h * 8, v_pos + v * 8);
        if self.is_sprite_on_screen(x, y) {
            let iter = self.get_pattern_iter(
                (pattern_name & 0xF800)
                | ((pattern_name & 0x07FF) + (h * size_v) + v)
            );
            frame.blit(x as u32 - 128, y as u32 - 128, iter, 8, 8);
        }
    }
}
```

Changing the following lines is enough to fix it. It needs to take an extra 1 off the h and v values when the sprite is reversed, and also use the loop's values to calculate the position where the cell should be drawn, instead of using the previously calculated cell positions, which have already been reversed.

```rust
let h = if !h_rev { ih } else { size_h - 1 - ih };
let v = if !v_rev { iv } else { size_v - 1 - iv };
let (x, y) = (h_pos + ih * 8, v_pos + iv * 8);
```

And now the sprites work! That was surprisingly simple given how broken they looked before. I had been close, but it only takes an off-by-one error to make the output mangled beyond recognition sometimes.
<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/gentest-sprites-fixed.png" title="GenTest sprites fixed" />
</p>

The intro sprites in Earthworm Jim are working now too. I had tried to use that game for testing sprites before I had taken that break, but it wasn't as helpful as the GenTest sprite screen.

<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/earthwormjim-sprites.png" title="Earthworm Jim's sprites on the SEGA logo are working" />
</p>

Not All The Data
----------------

How is Sonic 2 looking now that the sprites have been fixed?

<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/sonic2-dma-broken.png" title="Sonic 2 scrolls still broken" />
</p>

Well... it honestly doesn't look any different. In fact this is the same image from after the `Extend` flag was fixed, but before the sprites were fixed. I literally could not tell the difference between the images before and after fixing the sprites, so I didn't even bother adding another screenshot. No wonder I couldn't fix the sprites before, when I was using Sonic 2 to test with. The garbled sprites in Sonic 2 were caused by something else entirely.

Are there any other test screens in the GenTest ROM that look messed up? Sure enough, all the video output patterns are broken. I'll use the colour bleed test as an example.

<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/gentest-pattern-broken.png" title="GenTest colour pattern broken" />
</p>

Well that's definitely not what it should look like. Inspecting the VRAM shows that only about half of the data that should be loaded is actually present, by comparison to BlastEm. I found the spot in the ROM where the data is loaded into the VDP using a DMA transfer. The source data in RAM actually is complete this time, even though the VRAM data is only partially present, so this time it is an issue with transferring data into the VDP.
Playing around with the debugger in BlastEm, I noticed something in the output for the VDP state:

```
**DMA Group**
13: 00 | 14: 46 | DMA Length: $4600 words
15: 00 | 16: 88 | 17: 7F | DMA Source Address: $FF1000, Type: 68K
```

It says the DMA length is 0x4600 *words* (not bytes). Crap... I had assumed that the DMA count was in bytes, not words. Could it really be that simple a problem? Yup... I was subtracting 2 instead of 1 from the count every iteration of the DMA loop, causing it to end halfway through the intended transfer size. It really was that simple.

<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/gentest-pattern-fixed.png" title="GenTest colour pattern fixed" />
</p>

And now Sonic 2 looks like this:

<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/sonic2-scrolling-broken.png" title="Sonic2 dma fixed but scrolling still broken" />
</p>

Much better! It almost looks right except for the foreground that's out of place. I haven't even attempted to implement the horizontal and vertical scrolling functionality of the VDP yet, so that must be what's going on. This is finally coming together.

Scrolling The Scrolls
---------------------

It had been less than a week since I had returned to it, and I had fixed all the glaring issues that were mangling the graphics. It was finally time to implement something new from the Sega docs that I had left for later. Later was now! It was time to implement the scrolling features.

As mentioned before, the scrolls are much bigger than can fit on the screen at once. In order to quickly update what's shown on the screen without changing all the cell data, the scroll planes can be moved relative to the screen to change what part of the scroll plane will appear on the screen. Each scroll can be moved independently of the other to create a parallax effect. The vertical and horizontal scrolling work a bit differently from each other.
For one, the vertical scroll direction has its own special memory, the VSRAM, whereas the horizontal scroll data is stored in a table in VRAM, with the starting address of the table set by a VDP register.

For the vertical scroll position, either a single offset can be used to move the whole plane, or every two cells can have a different vertical offset. Each offset is an unsigned number between 0 and 1023 (the maximum number of pixels in the largest possible scroll size of 128 cells). Since VSRAM is 80 bytes, that means there can be 40 16-bit words, 20 for each of the two scrolls interleaved with each other, which covers the maximum 40-cell width of the screen.

For the horizontal scroll position, either a single offset can move the whole plane, or every cell can have a different offset, or every line can have a different offset. For the cell offset setting, only a maximum of 30 offsets for each scroll are needed, but they are stored in a table with the same size as used by the per-line scrolling mode. The per-line scrolling mode needs 896 bytes for the NTSC version's 224-line output (960 bytes for the full 240-line resolution of PAL). Like the vertical offsets, each offset is a 16-bit word and ranges from 0 to 1023, and the offsets for Scroll A and Scroll B are interleaved in the horizontal scroll table.
```rust pub fn get_hscroll(&self, hcell: usize, line: usize) -> (u32, u32) { let scroll_addr = match self.mode_3 & MODE3_BF_H_SCROLL_MODE { 0 => self.hscroll_addr, 2 => self.hscroll_addr + (hcell << 5), 3 => self.hscroll_addr + (hcell << 5) + (line * 2 * 2), _ => panic!("Unsupported horizontal scroll mode"), }; let scroll_a = read_beu16(&self.vram[scroll_addr..]) as u32 & 0x3FF; let scroll_b = read_beu16(&self.vram[scroll_addr + 2..]) as u32 & 0x3FF; (scroll_a, scroll_b) } pub fn get_vscroll(&self, vcell: usize) -> (u32, u32) { let scroll_addr = match (self.mode_3 & MODE3_BF_V_SCROLL_MODE) { 0 => 0, _ => vcell >> 1, }; let scroll_a = read_beu16(&self.vsram[scroll_addr..]) as u32 & 0x3FF; let scroll_b = read_beu16(&self.vsram[scroll_addr + 2..]) as u32 & 0x3FF; (scroll_a, scroll_b) } ``` <p align="center"> <img src="https://jabberwocky.ca/posts/images/2022-01/sonic2-scrolling-fixed.gif" title="Sonic2 with vertical and horizontal scrolling offsets sort of working" /> </p> There are some weird glitches in Scroll B but Scroll A seems to work fine. It only moves a whole cell at a time, so Scroll A appears jerky compared to the sprites. It's especially noticeable at the edge of the bridge. The bridge is made of sprites, which can be positioned to the exact pixel, but the ground where the bridge is supposed to be attached to will only move when a whole cell has changed. Fixing Line Scrolling --------------------- It had been about a week and a half since I took up the Genesis again. With the help of the test ROMs and BlastEm, I had made pretty quick work of a whole bunch of little bugs, going from what was still a very garbled output to having the games playable. I wasn't done yet though. After spending a week working on Computie when my new PCBs arrived, I returned to the Genesis to work on the per-line scrolling. 
I also dabbled a bit with audio support, adding a dummy device for the YM2612 audio FM synthesizer chip, which is mapped to the Z80 coprocessor's address space, and fixing the Z80 banked memory area, so that it could access the 68k ROM or RAM data. With that, I was able to get the Z80 coprocessor working well enough that Sonic 1 would get past the title screen and into the game.

I was bothered that per-line scrolling wasn't working, and that the scrolls moved in a jerky fashion. I needed to fix it, but it would require more than a few simple changes. Since the per-cell scrolling was working, I chose to write a completely different version of the `draw_scrolls` function just for per-line scrolling. I could integrate them later if possible, but it would be easier to completely rewrite it without breaking what I already had.

I was still hoping to use the pattern iterator I had written, but I would need to change it to take the line number on initialization, so that I could output only one line of a pattern at a time. I then used another loop inside the horizontal and vertical cell loops to iterate over each of the 8 lines in a pattern, using a different offset for each line of the pattern. My first attempt used the same loop to draw both scrolls at once, but the result was this:

<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/sonic2-scrolling-at-once.gif" title="Sonic 2 with both scrolls drawn at once" />
</p>

There is clearly an issue caused by the scrolling: moving until the screen is on a cell boundary shows the foreground plane (Scroll A) completely on top of the background plane (Scroll B), but when the offset is between cells, Scroll B gets drawn on top. Separating the drawing of each scroll (at the cost of duplicating the loops) fixed this problem, but there is still an issue with these strange black artifacts showing on the screen.
<p align="center">
<img src="https://jabberwocky.ca/posts/images/2022-01/sonic2-scrolling-reversed.png" title="Sonic 2 with black glitches" />
</p>

It took me a while of fussing around with the code before I realized that I had the line and column coordinates backwards when passing them to the scroll fetching functions. That's a little embarrassing. I was sending the cell_x value to the horizontal scroll offset when it should have been getting the cell_y value (i.e. the horizontal offset is based on what *line* is currently being drawn, so you give it the line number and it gives you the x offset). Swapping these around and reorganizing the loops fixed this. Now the scrolling is smooth!

Rewriting
---------

There were still some issues with the left-hand and bottom edges of the screen, where the foreground is not drawn to the edge because the scroll offset is not on a cell boundary. Changing the existing code to add an extra cell was not as trivial as it would appear. Shifting the cells over caused the sprites to be misaligned with the background, and starting the iterators one cell early would mean starting at `-1`, which would require changing to signed numbers, and possibly calculating an invalid offset due to the presence of negatives, or adding many checks to prevent that. I also didn't have drawing priority working, because I didn't have all the cell and sprite priority bits calculated at the same time to determine which to display, and the code was awfully messy at this point. It was time to rewrite all the display code.

I had learned so much and run into so many issues by this point. I had a better understanding of how it was all supposed to work now, and I could incorporate all those lessons in the next version. In order to recreate the video output more accurately, I opted to more faithfully simulate what the hardware VDP would be doing.
Since it's generating a video signal on the fly, it draws the image pixel by pixel, line by line, exactly in step with the CRT. If I did it this way, it would also allow me to implement the priority bits to decide which pixel from the different planes should be drawn to the screen, since everything would be in the same loop. There would be a lot more duplicated calculations and slower performance as a result, but since the existing performance wasn't an issue, it should still be fast enough to emulate at full speed.

To make it easier to debug in the short term, I duplicated the code to calculate the cell indices for the scrolls. Later, I can break this up into multiple functions to reuse code, and also store some of the calculated values across iterations to avoid recalculating, but I wanted everything in one loop to make it easier to adjust while I debugged it. I did break out the vertical drawing loop from the horizontal one, which will eventually be used to step through the drawing line by line, instead of drawing the whole frame before the vertical interrupt, but this isn't yet implemented.
```rust pub fn draw_frame(&mut self, frame: &mut Frame) { self.build_sprites_lists(); for y in 0..(self.screen_size.1 * 8) { self.draw_frame_line(frame, y); } } pub fn draw_frame_line(&mut self, frame: &mut Frame, y: usize) { let bg_colour = ((self.background & 0x30) >> 4, self.background & 0x0f); let (hscrolling_a, hscrolling_b) = self.get_hscroll(y / 8, y % 8); for x in 0..(self.screen_size.0 * 8) { let (vscrolling_a, vscrolling_b) = self.get_vscroll(x / 8); let pixel_b_x = (x - hscrolling_b) % (self.scroll_size.0 * 8); let pixel_b_y = (y + vscrolling_b) % (self.scroll_size.1 * 8); let pattern_b_addr = self.get_pattern_addr(self.scroll_b_addr, pixel_b_x / 8, pixel_b_y / 8); let pattern_b_word = self.memory.read_beu16(Memory::Vram, pattern_b_addr); let priority_b = (pattern_b_word & 0x8000) != 0; let pixel_b = self.get_pattern_pixel(pattern_b_word, pixel_b_x % 8, pixel_b_y % 8); let pixel_a_x = (x - hscrolling_a) % (self.scroll_size.0 * 8); let pixel_a_y = (y + vscrolling_a) % (self.scroll_size.1 * 8); let pattern_a_addr = self.get_pattern_addr(self.scroll_a_addr, pixel_a_x / 8, pixel_a_y / 8); let pattern_a_word = self.memory.read_beu16(Memory::Vram, pattern_a_addr); let mut priority_a = (pattern_a_word & 0x8000) != 0; let mut pixel_a = self.get_pattern_pixel(pattern_a_word, pixel_a_x % 8, pixel_a_y % 8); if self.window_addr != 0 && self.is_inside_window(x, y) { let pixel_win_x = x - self.window_pos.0.0 * 8; let pixel_win_y = y - self.window_pos.0.1 * 8; let pattern_win_addr = self.get_pattern_addr(self.window_addr, pixel_win_x / 8, pixel_win_y / 8); let pattern_win_word = self.memory.read_beu16(Memory::Vram, pattern_win_addr); // Scroll A is not displayed where ever the Window is displayed, so we replace Scroll A's data priority_a = (pattern_win_word & 0x8000) != 0; pixel_a = self.get_pattern_pixel(pattern_win_word, pixel_win_x % 8, pixel_win_y % 8); }; let mut pixel_sprite = (0, 0); let mut priority_sprite = false; for sprite_num in 
self.sprites_by_line[y].iter() { let sprite = &self.sprites[*sprite_num]; let offset_x = x as i16 - sprite.pos.0; let offset_y = y as i16 - sprite.pos.1; if offset_x >= 0 && offset_x < (sprite.size.0 as i16 * 8) { let pattern = sprite.calculate_pattern(offset_x as usize / 8, offset_y as usize / 8); priority_sprite = (pattern & 0x8000) != 0; pixel_sprite = self.get_pattern_pixel(pattern, offset_x as usize % 8, offset_y as usize % 8); if pixel_sprite.1 != 0 { break; } } } let pixels = match (priority_sprite, priority_a, priority_b) { (false, false, true) => [ pixel_b, pixel_sprite, pixel_a, bg_colour ], (true, false, true) => [ pixel_sprite, pixel_b, pixel_a, bg_colour ], (false, true, false) => [ pixel_a, pixel_sprite, pixel_b, bg_colour ], (false, true, true) => [ pixel_a, pixel_b, pixel_sprite, bg_colour ], _ => [ pixel_sprite, pixel_a, pixel_b, bg_colour ], }; for i in 0..pixels.len() { if pixels[i].1 != 0 || i == pixels.len() - 1 { let mode = if pixels[i] == (3, 14) { ColourMode::Highlight } else if (!priority_a && !priority_b) || pixels[i] == (3, 15) { ColourMode::Shadow } else { ColourMode::Normal }; frame.set_pixel(x as u32, y as u32, self.get_palette_colour(pixels[i].0, pixels[i].1, mode)); break; } } } } #[inline(always)] fn get_pattern_addr(&self, cell_table: usize, cell_x: usize, cell_y: usize) -> usize { cell_table + ((cell_x + (cell_y * self.scroll_size.0 as usize)) << 1) } fn get_pattern_pixel(&self, pattern_word: u16, x: usize, y: usize) -> (u8, u8) { let pattern_addr = (pattern_word & 0x07FF) << 5; let palette = ((pattern_word & 0x6000) >> 13) as u8; let h_rev = (pattern_word & 0x0800) != 0; let v_rev = (pattern_word & 0x1000) != 0; let line = if !v_rev { y } else { 7 - y }; let column = if !h_rev { x / 2 } else { 3 - (x / 2) }; let offset = pattern_addr as usize + line * 4 + column; let second = x % 2 == 1; let value = if (!h_rev && !second) || (h_rev && second) { (palette, self.memory.vram[offset] >> 4) } else { (palette, self.memory.vram[offset] & 
0x0f) }; value } fn build_sprites_lists(&mut self) { let sprite_table = self.sprites_addr; let max_lines = self.screen_size.1 * 8; self.sprites.clear(); self.sprites_by_line = vec![vec![]; max_lines]; let mut link = 0; loop { let sprite = Sprite::new(&self.memory.vram[sprite_table + (link * 8)..]); let start_y = sprite.pos.1; for y in 0..(sprite.size.1 as i16 * 8) { let pos_y = start_y + y; if pos_y >= 0 && pos_y < max_lines as i16 { self.sprites_by_line[pos_y as usize].push(self.sprites.len()); } } link = sprite.link as usize; self.sprites.push(sprite); if link == 0 { break; } } } ``` <p align="center"> <img src="https://jabberwocky.ca/posts/images/2022-01/sega-genesis-sonic2-demo.gif" title="Sonic 2 finally working" /> </p> Finally... It's working pretty good, it scrolls smoothly, it sorts out the priority so Sonic appears behind the trees. It works better than this gif even shows. I recorded it at 15 frames a second instead of 30 or 60, to keep the file size small, so when Sonic gets his fast boots, it seems like the sprite isn't animated, but it's actually just moving too fast to be recorded. For those who are curious, out of each 16.6ms interval between updating a frame, the old display code was running in around 2ms, and the new code is running in around 6ms, so the new code is significantly slower (but still well within the time available). This is in part because I'm calculating which cell to draw for each of the planes, and fetching the scroll values, for each pixel on the screen. This could be improved upon by storing the pattern data for the current cells for each plane between iterations and only updating them when the cell changes. That said, doing so will only make a small improvement in performance, while also making the code harder to read. Conclusion ---------- This project definitely turned into more than I was expecting when I started. 
I had hoped to get some pretty graphics after only a few weeks of work (the initial implementation only took about that long), but that didn't happen and it quickly became my white whale. I *had* to finish it. The real journey was the eight weeks of switching between debugging and working on other projects while the problems percolated in the back of my brain. But I did it. I got it to a playable (albeit still buggy) state.

Special thanks to [ComradeOj](https://mode5.net/) for the demo ROMs, and [Mike Pavone](https://twitter.com/mikepavone) and the other contributors for [BlastEm](https://www.retrodev.com/blastem/) ([github mirror](https://github.com/libretro/blastem)). Without these, it would have taken a lot more time to get this working.

There is still a lot to do, and I will likely work on this project on and off for a while to come. Audio needs to be added, and a lot of games don't quite run correctly for one reason or another. Thanks for joining me and I hope you learned something as well, or at least got to enjoy some nostalgic thoughts of the Sega Genesis. If there's anything you'd like me to write more about or you have any feedback about these posts, I'd love to hear it on Twitter or by email.

Happy Emulating!
transistorfet
949,985
What is VOID Operator - Daily JavaScript Tips #3
The void operator returns undefined value; In simple words, the void operator specifies a...
0
2022-01-10T04:16:59
https://codewithsnowbit.hashnode.dev/what-is-void-operator-daily-javascript-tips-3
javascript, node, webdev, tutorial
The `void` operator returns `undefined` value; In simple words, the `void` operator specifies a function/expression to be executed without returning `value` ```js const userName = () => { return "John Doe"; } console.log(userName()) // Output: "John Doe" console.log(void userName()) // Output: undefined ``` [Live Demo](https://jsfiddle.net/psfhozqn/) *** Thank you for reading - Follow me on Twitter - [@codewithsnowbit](https://twitter.com/codewithsnowbit) - Subscribe me on YouTube - [Code With SnowBit](https://www.youtube.com/channel/UCNTKqF1vhFYX_v0ERnUa1RQ?view_as=subscriber&sub_confirmation=1)
dhairyashah
954,331
Copy public IP address to system clipboard
curl ifconfig.me | xclip -sel clipboard Enter fullscreen mode Exit fullscreen...
0
2022-01-13T18:09:04
https://dev.to/csinclair/copy-public-ip-address-to-system-clipboard-4ejd
linux, cli, networking
```bash curl ifconfig.me | xclip -sel clipboard ```
csinclair
954,848
JS Intro
There are 8 fundamental data types in JavaScript: strings, numbers, Bigint, booleans, null,...
0
2022-01-14T07:35:34
https://dev.to/shinyo627/js-intro-397i
javascript, tutorial, beginners
- [There are 8 fundamental data types in JavaScript: strings, numbers, BigInt, booleans, null, undefined, symbol, and object.](https://www.codecademy.com/resources/docs/javascript/data-types?page_ref=catalog) - The first seven data types, all except object, are primitive data types. - BigInt is necessary for very large integers, which are unreliable with the Number type #### example below: ``` console.log(9999999999999999); // 10000000000000000 console.log(9999999999999999n); // 9999999999999999n ``` - Objects, including instances of data types, can have properties: stored information. The properties are denoted with a . after the name of the object, for example: 'Hello'.length. - Objects, including instances of data types, can have methods which perform actions. Methods are called by appending the object or instance with a period, the method name, and parentheses. For example: 'hello'.toUpperCase(). - We can access properties and methods by using the ., dot operator. - Built-in objects, including Math, are collections of methods and properties that JavaScript provides. - Properties of an object can be either a value or a method (a function only accessible to an instance of the object). A method is a property, but that does not make every property a method. A method is a function, so it performs some task; .length is a value only. - String.prototype.trim() is a method that removes whitespace from both ends of a string and returns a new string, without modifying the original string --- **What does it mean by an instance of a data type?** ``` a = 42 ``` Above we assign an integer value (a number) to the variable a. When we poll the type of a we are actually polling the type of 42. a is not an object, but a reference to an object. 42 is identified by the interpreter as being a number type, so it gives it a wrapper object of that type. ``` typeof 42 => 'number' typeof a => 'number' ``` So a refers to an instance of a number type.
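The points above can be tied together in one runnable sketch (the variable `greeting` is just an illustrative name):

```javascript
// Primitive string values get a temporary wrapper object, so
// properties and methods work directly on literals and variables.
const greeting = '  Hello  ';
console.log(greeting.trim()); // 'Hello' — the original string is unchanged
console.log('Hello'.length); // 5 — a property (a value), not a method
console.log('hello'.toUpperCase()); // 'HELLO' — a method (performs a task)

// BigInt keeps precision where Number cannot.
console.log(typeof 42); // 'number'
console.log(typeof 9999999999999999n); // 'bigint'
console.log(9999999999999999 === 10000000000000000); // true — precision lost
```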
shinyo627
954,979
Why practicing DRY in tests is bad for you
This post is a bit different from the recent ones I’ve published. I’m going to share my point of view...
0
2022-01-14T08:58:43
https://dev.to/mbarzeev/why-practicing-dry-in-tests-is-bad-for-you-j7f
testing, react, javascript, webdev
This post is a bit different from the recent ones I’ve published. I’m going to share my point of view on practicing DRY in unit tests and why I think it is bad for you. Care to know why? Here we go - ## What is DRY? Assuming that not all of us know what DRY means here is a quick explanation: “Don't Repeat Yourself (DRY) is a principle of software development aimed at reducing repetition of software patterns” (from [here](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself)). We don’t like duplications since “Duplication can lead to maintenance nightmares, poor factoring, and logical contradictions.” (from [here](http://wiki.c2.com/?DontRepeatYourself)). An example can be having a single service which is responsible for fetching data from the server instead of duplicating the code all over the code base. The main benefit is clear - a single source of logic, where each modification to it applies to all which use it. ## Where does DRY apply in tests? In tests we strive to assert as much as needed in order to give us the future modification confidence we feel comfortable with. This means that there will be a lot of tests that differ in nuances in order to make sure we cover each of the edge cases well. What the previous sentence means in code is that tests tend to have a lot of repetitive and duplicated code, and this is where the DRY principle finds its way in. Let me try and explain with examples from the React world - We are testing a custom component and we’re using the React Testing Library (and jest-dom) in order to test the component’s rendering. It may look something like this: ```javascript describe('Confirmation component', () => { it('should render', () => { const {getByRole} = render(<Confirmation />); expect(getByRole('dialog')).toBeInTheDocument(); }); }); ``` Here I’m testing that once the Confirmation component is rendered, the element with the “dialog” role is present on the document.
This is great but it is just a single test among the many cases this component has, and that means that for each test you will have the same repetitive render code, which sometimes can be complex with props for the component, and perhaps wrapping it in a context provider. So what many choose to do is to create a “helper” render function which encapsulates the rendering and then each test can call it, before starting its assertions: ```javascript function renderConfirmationComponent() { return render(<Confirmation />); } describe('Confirmation component', () => { it('should render', () => { const {getByRole} = renderConfirmationComponent(); expect(getByRole('dialog')).toBeInTheDocument(); }); }); ``` We gain the benefit of DRY, where if we want to change the rendering for all the tests, we do it in a single place. Another example of DRY in tests is using loops in order to generate many different test cases. An example can be testing an “add” function which receives 2 arguments and returns the result for it. 
Instead of duplicating the code many times for each case, you can loop over a “data-provider” (or "data-set") for the test and generate the test cases, something like this: ```javascript describe('Add function', () => { const dataProvider = [ [1, 2, 3], [3, 21, 24], [1, 43, 44], [15, 542, 557], [5, 19, 24], [124, 22, 146], ]; dataProvider.forEach((testCase) => { it(`should return a ${testCase[2]} result for adding ${testCase[0]} and ${testCase[1]}`, () => { const result = add(testCase[0], testCase[1]); expect(result).toEqual(testCase[2]); }); }); }); ``` And the test result looks like this: ```bash Add function ✓ should return a 3 result for adding 1 and 2 (1 ms) ✓ should return a 24 result for adding 3 and 21 (1 ms) ✓ should return a 44 result for adding 1 and 43 ✓ should return a 557 result for adding 15 and 542 ✓ should return a 24 result for adding 5 and 19 (1 ms) ✓ should return a 146 result for adding 124 and 22 ``` >BTW Jest even encourages you to do that with its built-in API, like [test.each](https://jestjs.io/docs/api#testeachtablename-fn-timeout) and [describe.each](https://jestjs.io/docs/api#describeeachtablename-fn-timeout). Here is (somewhat) the same example with that API: ```javascript test.each(dataProvider)('.add(%i, %i)', (a, b, expected) => { expect(add(a, b)).toBe(expected); }); ``` Looks great, right? I created 6 test cases in just a few lines of code. So why am I saying it’s bad for you? ## Searching The scenario is typically this - a test fails, you read the output on the terminal and go searching for that specific failing test case. What you have in your hand is the description of the test case, but what you don’t know is that this description is a concatenation of strings. You won't be able to find “should return a 3 result for adding 1 and 2” in the code because it simply does not exist. It really depends on how complex your test’s data-provider is, but this can become a real time waster trying to figure out what to search for. 
## Readability So you found your test and it looks like this: ```javascript dataProvider.forEach((testCase) => { it(`should return a ${testCase[2]} result for adding ${testCase[0]} and ${testCase[1]}`, () => { const result = add(testCase[0], testCase[1]); expect(result).toEqual(testCase[2]); }); }); ``` You gotta admit that this is not intuitive. Even with the sugar (is it really sweeter?) syntax Jest offers it takes you some time to wrap your head around all the flying variables and string concatenations to realize exactly what’s been tested. When you do realize what’s going on, you need to isolate the case which failed by breaking the loop or modifying your data-provider, since you cannot isolate the failing test case to run alone. One of the best “tools” I use to resolve failing tests is to isolate them completely and avoid the noise from the other tests, and here it is much harder to do. Tests should be easy to read, easy to understand and easy to modify. It is certainly not the place to prove that a test can be written in a one-liner, or with (god forbid) a reducer. ## State leakage Running tests in loops increases the potential of tests leaking state from one another. You can sometimes find out that after you’ve isolated the test which fails, it suddenly passes with flying colors. This usually means that previous tests within that loop leaked a certain state which caused it to fail. When you have each test as a standalone isolated unit, the potential of one test affecting the others dramatically reduces. ## The cost of generic code Let’s go back to our React rendering example and expand it a little. Say that our generic rendering function receives props in order to render the component differently for each test case, and it might also receive a state “store” with different attributes to wrap the component with.
If, for some reason, you need to change the way you want to render the component for a certain test case you will need to add another argument to the rendering generic function, and your generic function will start growing into this little monster which needs to support any permutation of your component rendering. As with any generic code, there is a cost of maintaining it and keeping it compatible with the evolving conditions. ## Wrapping up I know. There are cases where looping over a data-provider to create test cases, or creating “helper” functions is probably the best way for achieving a good code coverage with little overhead. However, I would like you to take a minute and understand the cost of going full DRY mode in your tests given all the reasons mentioned above. There is a clear purpose for your tests and that is to prevent regressions and provide confidence when making future changes. Your tests should not become a burden to maintain or use. I much prefer simple tests, where everything which is relevant to a test case can be found between its curly brackets, and I really don’t care if that code repeats itself. It reassures me that there is little chance that this test is affected somehow by any side effect I’m not aware of. As always, if you have any thoughts or comments about what's written here, please share with the rest of us :) *Hey! If you liked what you've just read check out <a href="https://twitter.com/mattibarzeev?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-show-count="false">@mattibarzeev</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> on Twitter* :beers:
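As a sketch of the style this post advocates, here is the same hypothetical `add` function tested without a data-provider loop. The tiny `describe`/`it`/`expect` stand-ins are mine, only so the sketch runs outside a Jest environment; in a real project Jest provides them. Each case is fully self-contained: everything relevant sits between its own curly brackets, and any case can be isolated on its own.

```javascript
// Minimal stand-ins so this sketch runs outside a Jest environment.
const describe = (name, fn) => fn();
const it = (name, fn) => fn();
const expect = (actual) => ({
  toEqual: (expected) => {
    if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
  },
});

// Hypothetical function under test.
const add = (a, b) => a + b;

describe('Add function', () => {
  it('should return 3 when adding 1 and 2', () => {
    expect(add(1, 2)).toEqual(3);
  });

  it('should return 24 when adding 3 and 21', () => {
    expect(add(3, 21)).toEqual(24);
  });
});
```

Yes, the call shape repeats, but each failing case now maps one-to-one to a string you can actually search for in the code.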
mbarzeev
955,341
Working on my 2nd Project: JavaScript Tic Tac Toe!
Hey Guys, Been a week or 2 but I'm back like I said I would regarding the 2nd Project. This will be...
0
2022-01-14T16:48:19
https://dev.to/mikacodez/working-on-my-2nd-project-javascript-tic-tac-toe-4704
javascript, webdev, beginners, programming
Hey Guys, Been a week or 2 but I'm back like I said I would regarding the 2nd Project. This will be a short post just to document my experience so far of the last 12 days. The challenge of pushing the game out so far has been a bittersweet one. On one hand I was able to get everything coded pretty quickly with the assistance of some YouTube videos and mentors, but by the time my submission deadline came there was an error in the code not allowing the game to function properly to its full potential. And for me that sucked bigtime. At this moment I'm unable to go and make any more changes on the submission as that will count as cheating on my end and could affect my grade, so I will have to wait till marking has been completed on the project till I can make that change again. All in all this project has been a good learning experience for me in terms of JavaScript use. Some of the concepts have now been solidified for me and I know how to use JavaScript lines of code to my advantage and the syntax no longer seems foreign to me anymore, which is the point! One thing I can definitely suggest for other newbies out there learning JavaScript is that they should try to work on a small project right after they have done a course on it or read through some material, as that would be the only way things make sense and stick in your head. Because there is no way you're going to understand: ``` function handleSubmit(event) { event.preventDefault(); let p1 = form.elements['password'].value; let p2 = form.elements['confirm-password'].value; if (p1 !== p2) { let errorDiv = document.getElementById('errors'); errorDiv.innerHTML = "<p>Please ensure your passwords match.</p>"; errorDiv.style.display = 'block'; } } ``` if you are not applying it practically to a project you're building. Making mistakes and then going back and making changes to make the code work is one of the true and better ways to make concepts stick in your head and help you progress.
As for now I will revisit this blog post and paste the link for the finished game but at this present moment I'm not happy with the outcome so I will postpone publishing it for now. Until then I'm going to be working on my 3rd Project which is in Python! As you know I did a bit of Python training before my Bootcamp so the language isn't alien to me. However, applying it practically may be another story so I'm going to try my best to produce the best results this time. Hope you enjoyed this read and stay tuned for the next post. Follow me on Twitter: @CodezMikazuki Mika/Malcz.
mikacodez
955,963
How to apply the AWS Community Builder Program
The AWS Community Builders program offers technical resources, mentorship, and networking...
0
2022-01-15T09:48:07
https://dev.to/santhakumar_munuswamy/how-to-apply-the-aws-community-builder-program-43gl
aws, machinelearning, datascience, artificalintelligene
The **AWS Community Builders program** offers technical resources, mentorship, and networking opportunities to AWS technical enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. Interested in joining the AWS Community Builders program? You should apply to build relationships with AWS product teams, AWS Heroes, and the AWS community. Throughout the program, AWS subject matter experts will provide informative webinars, share insights — including information about the latest services — as well as best practices for creating technical content, increasing reach, and sharing AWS knowledge across online and in-person communities. The program will accept a limited number of members per year. All AWS builders are welcome and encouraged to apply. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xab22gwy2navwhq5iqxp.JPG) The AWS Community Builders application form is once again open to new applicants; it is live now at https://bit.ly/3ndpF9O ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4l0g0nw8yzvjaqc7iwdg.JPG) There are many categories you can apply under for the AWS Community Builders program. This is the list of categories: - Machine Learning - Mobile and Web Apps - Storage - Containers - Developer Tools - Management, Governance and Migration - Serverless - Graviton / Arm Development - Data (Databases, Analytics, Blockchain) - Security, Identity & Compliance - Game Tech - Networking and Content Delivery If you want to become an AWS Community Builder, please apply now in the January 2022 cycle. The application form is open until midnight PST on January 24th, so don’t wait too long to apply.
**What are the benefits of joining the AWS Community Builders program?** - Access to AWS products & services and information about new services and features - Mentorship from AWS subject matter experts on a variety of topics - AWS Promotional Credits and other helpful resources to support content creation - Opportunities to connect with and learn from like-minded developers You can check out more details here: https://lnkd.in/gMt96NuQ **AWS Community Builder swag** You can watch the AWS Community Builder swag kit unboxing video here: https://youtu.be/9xjmY71qcmU <iframe width="560" height="315" src="https://www.youtube.com/embed/9xjmY71qcmU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
santhakumar_munuswamy
956,794
Leetcode 1326 Minimum Number of Taps to Open to Water a Garden
from typing import List class Solution: def minTaps(self, n: int, ranges: List[int]) ->...
0
2022-01-15T22:35:31
https://dev.to/kardelchen/leetcode-1326-minimum-number-of-taps-to-open-to-water-a-garden-4ilh
```python from typing import List class Solution: def minTaps(self, n: int, ranges: List[int]) -> int: # find intervals starting from different locations intervals = [] for center, width in enumerate(ranges): # if the element is 0, then this element is useless if width != 0: # restrict the interval in [0, n] interval = [max(0, center - width), min(center + width, n)] intervals.append(interval) # if every element is 0, then return -1 if not intervals: return -1 # find the furthest location it can reach from a current location d = {} for interval in intervals: if interval[0] not in d: d[interval[0]] = [] d[interval[0]].append(interval[1]) for k in d: d[k] = max(d[k]) # current location loc = 0 # result res = 0 # if it doesn't reach the furthest location (n) while loc != n: # find the furthest location (max(...)) it can reach (<= d[k]) from our position (k) maxLocation = -1 s = set() for k in d: if k <= loc: s.add(k) maxLocation = max(maxLocation, d[k]) if maxLocation == -1: return -1 # delete every element we iterate previously (we won't search elements again) for k in s: del d[k] res += 1 loc = maxLocation return res ```
kardelchen
956,915
Introducing Myself.....
Hey I'm Muhammed Fuhad. Iam New To This Development Community. First of all Iam introducing...
0
2022-01-16T02:52:38
https://dev.to/fuhadkalathingal/introducing-myself-4hob
Hey, I'm Muhammed Fuhad. I am new to this development community. First of all, I am introducing myself. I am a 13-year-old boy interested in coding. Currently I am learning the basics of coding. That's all about me; now introduce yourselves, developers...
fuhadkalathingal
957,006
Front-End Web and Mobile Development on AWS
AWS offers a wide range of tools and services to support development workflows for iOS, Android,...
0
2022-01-16T06:16:13
https://dev.to/nirmalnaveen/front-end-web-andmobile-developmenton-aws-5f7f
aws, webdev, cloud, mobile
AWS offers a wide range of tools and services to support development workflows for iOS, Android, React Native, and web front-end developers. There is a set of services that make it easy to build, test, and deploy an application, even with minimal knowledge of AWS. With the speed and reliability of the AWS infrastructure, mobile and web applications can scale from prototype to millions of users to provide a better user experience and better solutions for the whole integrated system. Amazon services are primarily aimed at developing web and mobile applications: - Ease of use and minimal energy to start. Amazon services allow you to develop an application using existing iOS/Android IDEs and web frameworks. This makes it easy to add UI components for a user-friendly interface and use the CLI tool chain to easily customize the back end. - Provide access to the features you need. You can use Auth, Analytics, API, Storage, Predictions, XR, and others to create rich server infrastructures. GraphQL can be used to access and integrate data in flexible ways. Amazon services offer the ability to test mobile applications on hundreds of real devices. A scalable approach allows you to grow your business quickly with built-in AWS best practices for security, availability, and reliability; your application can easily scale from one request per second with microsecond latency around the world. Let's look at which of the Amazon services will help in the development and operation of web and mobile solutions, as well as speed up the whole process and make it more stable. **AWS Amplify** AWS Amplify is a collection of tools and services that enable developers of mobile applications and web interfaces to build secure and scalable end-to-end systems on AWS. With Amplify, you can easily create custom workflows, develop voice interfaces, connect artificial intelligence to real-time data streams, run targeted advertising campaigns, and so on. 
AWS Amplify will help you develop and deliver quality applications. AWS Amplify includes an open-source platform with separate libraries for specific use cases and a wide range of tools for building cloud functionality and incorporating them into applications, as well as a web hosting service for deploying static web applications. Within minutes of configuring the appropriate service, a developer can automatically configure a best-in-class backend service for mobile and web applications, such as an authentication service, data warehouse, or API based on Amazon S3, Amazon Cognito, and other AWS services. Amplify CLI seamlessly integrates with iOS and Android IDEs, as well as many popular web development frameworks, providing a guided workflow to customize the optimal backend for your applications with a few simple commands. **AWS Amplify Console** The AWS Amplify Console is a static web hosting service that accelerates the application release cycle with an uncomplicated CI/CD process for building and deploying static web applications. You only need to provide a link to the repository with your application code in the console, and all adjustments in the frontend and backend will be deployed in a single workflow every time you commit the code. A complex application includes a frontend hosted on a single-page application framework (such as React, Angular, Vue, Gatsby, or Flutter, which is now in developer preview) and an optional cloud-based backend (such as GraphQL, REST API, file and data stores). 
These main features allow you to integrate web and mobile applications with Amazon: - Authentication (User Registration and Authentication) - Data storage (offline sync and conflict resolution) - API (GraphQL and REST - Accessing Data from Multiple Data Sources) - Storage (User Content Management) - Analytics (Collecting analytic data for your application) - Forecasting (Artificial Intelligence/Machine Learning, including text broadcasts) - Interactions (Conversational Chatbots) - Push notifications (Sending targeted messages) - PubSub (Post and Subscription Management) ![A pie chart showing 40% responded "Yes", 50% responded "No" and 10% responded "Not sure"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bi7qf5shocb6dgbx9y3t.jpg) ![A pie chart showing 40% responded "Yes", 50% responded "No" and 10% responded "Not sure"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6lfk8k2kr64fm27zymi.jpg) **AWS AppSync** AWS AppSync simplifies application development by letting you create a universal API to securely access, modify, and combine data from multiple sources. AppSync is a managed service that uses GraphQL so that applications can easily retrieve only the data they need. With AppSync, you can build scalable applications, including those requiring real-time updates, using a range of data sources such as NoSQL data stores, relational databases, HTTP APIs, and native data sources with AWS Lambda. For mobile and web applications, AppSync also provides access to local data when devices go offline and sync data when they reconnect to the Internet. In this case, the client can customize the order of conflict resolution. AWS AppSync is available in different regions. You can develop your application in a familiar IDE (for example, Xcode, Android Studio, VS Code), and use the intuitive AWS AppSync or AWS Amplify CLI management console to automatically generate APIs and client code. 
AWS AppSync integrates with Amazon DynamoDB, Amazon Aurora, Amazon Elasticsearch, AWS Lambda, and other AWS services, allowing you to build complex applications with nearly unlimited performance and storage that can change based on your business needs. AWS AppSync provides real-time subscriptions to millions of devices and offline access to application data. Once the device is reconnected, AWS AppSync syncs only the updates at the time the device was disconnected, not the entire database. AWS AppSync offers configurable server-side conflict detection and resolution. It is also possible to perform complex queries and generalizations across multiple data sources with a single network call using GraphQL. AWS AppSync makes it easy to protect your application data by using multiple authentication modes at the same time and also allows you to determine the severity of the threat and perform granular access control at the data definition level directly from your GraphQL schema. ![A pie chart showing 40% responded "Yes", 50% responded "No" and 10% responded "Not sure"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fiwljdf6o658b9vwg740.jpg) **Amazon API Gateway** Amazon API Gateway is a fully managed developer service for building, publishing, maintaining, monitoring, and securing APIs at scale. Applications access the data, business logic, or functionality of your backend services through the API. API Gateway allows you to create RESTful and WebSocket APIs, which are the main component of real-time two-way communication applications. API Gateway supports containerized and serverless workloads and Internet applications. API Gateway takes care of all the tasks associated with accepting and processing hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization, and access control, request throttling, and API monitoring and versioning. Working with API Gateway does not require minimum fees or start-up investments. 
You pay only for the API calls received and the amount of data transferred, and you can use API Gateway's tiered pricing model to reduce application costs as you scale your API usage. REST APIs and WebSocket APIs are very important features for web and mobile application development: - REST API. Allows you to create RESTful APIs optimized for serverless workloads and HTTP servers using HTTP APIs. HTTP APIs are the best way to create APIs that only require an API proxy. If your API requires API proxy functionality and API management capabilities in a single solution, API Gateway also provides REST APIs. - WebSocket API. Allows you to create real-time two-way communication applications such as chat apps and streaming panels using the WebSocket API. API Gateway maintains a persistent connection to handle messages passed between your backend service and your clients. API Gateway provides a tiered pricing model for API requests. At just $0.90 per one million API requests at the highest tier, the price per request drops as the number of API requests per region grows across all of your AWS accounts. By the way, it is possible to monitor performance metrics and information about API calls, data latency, and error rates in the API Gateway dashboard. This will allow you to visually monitor calls to your services. It is easy to set up API access using AWS Identity and Access Management (IAM) and Amazon Cognito. By using OAuth tokens, you leverage API Gateway's built-in support for OIDC and OAuth 2.0. To support custom authorization requirements, you can run Lambda authorizers on AWS Lambda. ![A pie chart showing 40% responded "Yes", 50% responded "No" and 10% responded "Not sure"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wnw4vs5fnx43e87ogjcl.jpg) **AWS Device Farm** AWS Device Farm is an application testing service that improves the performance of mobile and web applications.
It uses a variety of desktop browsers and real mobile devices, so the developer doesn't need to create their own test framework. The service allows you to run tests simultaneously in multiple desktop browsers or on real mobile devices. This speeds up the testing process, and the service also generates videos and logs to quickly identify bugs in your application. - Automated testing. Test applications in parallel on multiple physical devices in the AWS Cloud. With the built-in Amazon infrastructure, the user is able to test their applications without any scripting. - Testing on devices that potential users work with. It is possible to run tests on a wide variety of physical devices. Unlike emulators, physical devices allow you to more accurately determine how users interact with your application while taking into account factors such as memory size, processor usage, location, and firmware or software changes made by the manufacturer or operator. The device base at Amazon is constantly growing. - Playback and quick troubleshooting. The service allows you to manually reproduce problems and run automatic testing in parallel. The service collects videos, logs, and performance data, which will provide detailed information about the problem and help you quickly solve it. In automated testing, problems are identified and grouped. In doing so, you can set the location, language settings, network connection settings, application data, and install the required applications. It is possible to use open source testing frameworks such as Appium, Calabash, or Espresso. Testing can also be performed manually using remote access. For automated tests, it is possible to retrieve results from IDEs such as Android Studio or from continuous integration environments such as Jenkins. For web applications, testing is available in multiple desktop browsers and in different browser versions.
This allows tests to run across multiple desktop browsers, including Chrome, Firefox, and Internet Explorer, to ensure that applications work properly across browsers. **Amazon Pinpoint** Amazon Pinpoint is a flexible and scalable service for inbound and outbound marketing communications. It allows you to interact with your customers through channels such as email, SMS, push notifications, or voice. Amazon Pinpoint is easy to set up, easy to use, and flexible enough to fit any marketing interaction scenario. It allows you to segment your campaign audience by customer type and customize your messages by filling them with relevant content. Amazon Pinpoint delivery and campaign metrics measure the success of your engagement. Amazon Pinpoint can grow with your business and scale to billions of messages per day across all communication channels. ![A pie chart showing 40% responded "Yes", 50% responded "No" and 10% responded "Not sure"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3bc8wazn6uu1kz98600.jpg) **Conclusion** In this article, we explored AWS features for front-end web and mobile development. Hope you enjoyed reading this blog post. If you have any questions or feedback, please feel free to leave a comment. Thanks for reading! **Documentation** 1. https://aws.amazon.com/amplify/ 2. https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html 3. https://aws.amazon.com/appsync/ 4. https://aws.amazon.com/api-gateway/ 5. https://aws.amazon.com/device-farm/ 6. https://aws.amazon.com/pinpoint/
nirmalnaveen
957,193
Creating Docker Image with Dockerfile
There are many way where you can create a Docker Image and make a container of it, but using...
0
2022-01-16T11:10:15
https://dev.to/sshiv5768/creating-docker-image-with-dockerfile-1n58
docker, devops, container
There are many ways to create a Docker image and build a container from it, but using a ``Dockerfile`` is an easy one. Let's create an **Apache Server** image using this method. In this method we are going to follow the steps below: - First choose a ``base image``. In our case, the base image is ``Ubuntu``. - Execute some commands while building the image. It doesn't actually build a whole new image from scratch; it layers the **Apache server** setup on top of the base image. Okay, create a new folder and give it whatever name you want. I am naming it ``myapache`` and switching into it. ``` $ mkdir myapache $ cd myapache ``` Create a new file named ``Dockerfile`` (it is not mandatory to use this exact name). Add the commands below to it. ``` FROM ubuntu RUN apt-get update RUN apt-get install apache2 -y ``` Let's break down these instructions step by step. The first line says that we are using **Ubuntu** as the ``base image``. The second and third lines tell Docker to run the given commands. ``apt-get update`` will refresh the package index so the latest versions of packages can be installed. The next command will install the **Apache server** on top of the ``base image``. Here ``-y`` answers yes automatically to any prompts during the installation process. Now let's build a Docker image using this ``Dockerfile``. ``` $ docker image build -t myapache . ``` You don't need to remember the command above; ``docker image --help`` will help you. **myapache** is the name of the image. One thing that I want to clarify is that ``.`` is the build context path, where Docker looks for the ``Dockerfile``. I am currently in the same directory, which is why I used it. So the command above will build the Docker image using our ``Dockerfile``, running the instructions step by step. ``` $ docker image ls ``` Type the command above and you will see that the **myapache** image is listed there. That confirms our image was created successfully.
sshiv5768
957,281
This week in Flutter #37
I have been impressed by the Wordle story this week. Wordle is a word game made by Josh Wardle, that...
12,898
2022-01-16T14:14:40
https://ishouldgotosleep.com/news/this-week-in-flutter-37/
dart, flutter, news
I have been impressed by the [Wordle story](https://www.macrumors.com/2022/01/11/wordle-app-store-clones/) this week. Wordle is a word game made by **Josh Wardle**, that went viral recently. After that, a lot of clones appeared on the mobile stores, trying to make money from its sudden success. And they did make money until the [AppStore banned them](https://www.macworld.co.uk/news/wordle-clones-removed-3812312/). It impressed me that people are paying for a game they can have for **free**. In case you want to feel the excitement of getting **banned** from the AppStore, you can create your own [clone of Wordle](https://ficiverson.medium.com/how-to-build-a-wordle-with-flutter-9dd435f88053) in Flutter.

<small>- Michele Volpato</small>

## Development 🧑‍💻

### 🔗 [Accessibility in Flutter?](https://tonyowen.medium.com/accessibility-in-flutter-592f2e760149)

Curious to know how adding **accessibility** features works in Flutter compared to React Native? In this article, by [Tony Owen](https://github.com/tunitowen), you'll learn the basics of accessibility in Flutter and you'll find out which one is better: React Native or Flutter. Spoiler alert: it's Flutter 😎

### 🔗 [Flutter App Architecture: The Repository Pattern](https://codewithandrea.com/articles/flutter-repository-pattern/)

In this updated article by [Andrea Bizzotto](https://github.com/bizz84/), you'll learn how to use the **repository pattern** in an example Flutter app that connects to the [OpenWeatherMap API](https://openweathermap.org/api). Andrea provides different ways to use the code he shows, for instance by using several Flutter packages like [get_it](https://pub.dev/packages/get_it), [Riverpod](https://pub.dev/packages/riverpod), and [flutter_bloc](https://pub.dev/packages/flutter_bloc). Definitely a must-read.

### 🔗 [Flutter Navigator 2.0: Using go_router](https://www.raywenderlich.com/28987851-flutter-navigator-2-0-using-go_router)

I have been waiting for this article for a while.
The [previous article about Navigator 2.0](https://www.raywenderlich.com/19457817-flutter-navigator-2-0-and-deep-links) from [Kevin D. Moore](https://github.com/kevindmoore) was a little bit **complicated**. This article, on the other hand, is exquisite. It combines the previous approach with the [go_router](https://pub.dev/packages/go_router) package, and I am looking forward to proposing these changes to my team.

## Backend 🗄

### 🔗 [Everything You Need To Know About Appwrite 0.12](https://eldadfux.medium.com/everything-you-need-to-know-about-appwrite-0-12-b90725b3c0a1)

Appwrite released a new version that adds **Permission Models**, a more performant pagination strategy, new dashboards, support for more error logging solutions, and more. 🎉

---

[...] Read the rest on [my website](https://ishouldgotosleep.com/news/this-week-in-flutter-37/). [Join the Flutter and Dart newsletter](https://ishouldgotosleep.com/subscribe-to-the-newsletter-devto/) and receive it weekly in your inbox.
mvolpato
957,324
start new journey
First of all, I am FASILU, a 2nd-year BSc computer science student. I decided to become a freelancer when...
0
2022-01-16T15:42:42
https://dev.to/fasilu/start-new-journey-1moc
First of all, I am FASILU, a 2nd-year BSc computer science student. I decided to become a freelancer when I was studying in 12th. First I created a YouTube channel, but it flopped. Then I started learning HTML and Python for web development. I made my own website, and then I thought I was a full stack developer🤗. Then I joined freelancer.com and Upwork and bid on some job postings, but I couldn't get any clients🥺. Then I stopped. Today I restarted, but I don't have enough skills and my English is also bad 😣. I am trying to improve; it takes time. Today I also tried to contribute to GitHub open source projects😉, but I don't know how it works. Will you help me? The real problem is that I don't know which one to choose. I like all technical and computer-based things (I am always confused 🤔🤔). My real plan: 1. Web development for freelancing, but it's not my real passion. 2. Ethical hacking for my security. 3. I also like mechanical engineering, but not this time. 4. Electronics, my favourite; it's my hobby. 5. App development, AI, machine learning, etc. I need all the basics. 🙄 I always think about this and about how to make money. I know it may not all work out, but I want it. The first time it was just for learning, but this time it's learning + making money🤯. Then I will never stop. What do you think? _**Comment**_. This time, open source projects are my target, and I think [dev](dev.to) will also help. I think writing this article will improve my English 😜. See you next time
fasilu
957,423
Learning Elixir/Phoenix
TL;DR; a rant about how much is missing and why there are no easy setup guides. I've been...
0
2022-01-16T16:24:16
https://dev.to/neophen/learning-elixirphoenix-fh9
elixir, phoenix, beginners, webdev
TL;DR; a rant about how much is missing and why there are no easy setup guides. I've been dabbling with some Elixir/Phoenix for about a year. It looks amazing, but I was always just following tutorials. Now I'm on the road to actually writing a product with this stack. If I were super serious about the product I would just write it in my stack of choice [Laravel/PHP, Inertia, Vue/JS, TailwindCss]. But I want to use/learn Elixir, so I'm switching my stack to [Elixir/Phoenix, TailwindCss]

**Other tools:**

Code editor: [Vs Code](https://code.visualstudio.com/)
Versioning: [Github](https://github.com/)

**Deployment**

Deployment with Elixir/Phoenix is a long way from being beginner friendly. Luckily I stumbled across [Fly.io](https://fly.io/), and even with their guide I ran into issues, which got resolved by the [Fly.io](https://fly.io/) team pretty fast. A year ago I spent multiple hours trying to understand how it's done, but there are so many guides and all of them sound or look very complex; I couldn't get a single thing to deploy. I had some success with [Gigalixir](https://www.gigalixir.com/) after about 10 tries and figuring out esoteric error messages. At the moment I'm happy to start with Fly.io, as their free tier looks good enough for what I need. But I've looked at their pricing for anything with a bit more oomph, and it's quite steep. And it's the same situation with Gigalixir. You can't even compare it to buying a simple VM from [Hetzner](https://www.hetzner.com/cloud) or [hostinger](https://www.hostinger.lt/vps-serveriai) for 3-7 EUR per month, getting a free plan from ploi.io or Laravel Forge, and being good to go, considering that you are able to spin up a few apps on one server like that. Maybe there is an offering somewhere that does this for Elixir/Phoenix as well?
This says a ton about how lacking this stack is for beginners. With Laravel you have lots of choice: ploi.io, Laravel Forge, or anything that runs PHP; it's pretty easy to get up and running. After two days of working I have these pain points, mainly because of the amazing work that Laravel/Vue has done with their docs.

- Docs are hard to navigate, especially as a beginner. I don't know where I need to look for specific info: is it the Elixir hexDocs, the Phoenix hexDocs, or the Phoenix LiveView hexDocs? Being a more niche stack, Stack Overflow or a general web search will usually not solve the problem you're having.
- The VS Code experience is horrible. I've read in some places how to try and set up linting/formatting for HTML or get some autocompletion; it's still not working, and at this point I just want to get on with my project, as I don't have much time. I really miss the simplicity of Prettier/Volar for .vue files. Again, maybe I should use something else, like a different editor? Please help.
- Surface/LiveView: I remember when Surface just started and I thought, wow, this looks very similar to Vue, and I was eager to try it, but I didn't have any project. I've started now with Phoenix Components, and I don't understand why they had to make the experience so difficult and so different from anything JavaScript/PHP. Just give me some `{{ antlers }}` and `:class="interpolated" @click="action"`.

I remember a year ago I actually started looking into this and began writing a watcher which would convert a Vue template into html.leex (now html.heex). And I still think I will need to do this, as I'm missing: autocompletion / Tailwind IntelliSense / HTML highlighting / HTML formatting.

Anyways, it's just the first two days. Hopefully I can finish this project, and then see if I can add something to the ecosystem. Have a nice day!
neophen
957,466
Dealing with asynchronous data in Javascript : Promises
What is a Promise? Well, we use this word in our daily life so many times. We often make...
0
2022-01-16T20:33:00
https://dev.to/swasdev4511/asynchronous-javascript-promises-472f
javascript, webdev, programming, beginners
## What is a Promise?

Well, we use this word in our daily life so many times. We often make promises to ourselves; sometimes we keep them and sometimes break them 😉. Does it have the exact same meaning when it comes to programming? Well, sort of! To understand this we should recall the purpose of handling asynchronous data. In an asynchronous manner, we first complete the primary task, and only then do we try to accomplish the subsequent task. Here, the primary task is a kind of promise. Now, let us find out how promises behave in a programming context.

---

## Promise

- A Promise is an object that produces a single value, either **resolved** or **rejected** (make them / break them).
- A Promise has three states: **pending**, **fulfilled** & **rejected**. Promises are **eager**, i.e. as soon as we call the Promise constructor, it starts the given task.
- It accepts two functions: **resolve** & **reject**.
- Promises are thenable, i.e. we can execute the subsequent task using the **then** method.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30tvgc2t5338xymdqqxi.png)

- In the above code example, we are going to perform one of two actions depending on the lockdown status.
- If there is no lockdown, then we can go for a trip; otherwise we have to sit at home.

Now, take a practical example. A user is trying to sign in to a website. The authentication API is called; only if it returns a token is the user allowed to see the dashboard.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s0h6fpfes5ww6rnw4gnm.png)

---

## Promise Chaining

- If we have multiple asynchronous tasks which depend on one another, then we can chain them using the **then** method.
- As promises are thenable, we can chain any number of promises one after another.
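To make the chaining idea concrete, here is a small runnable sketch of the sign-in scenario (the function names `authenticate` and `loadDashboard` are my own, invented for illustration):

```javascript
// Hypothetical sign-in flow: each then() runs only after the previous
// promise resolves, and its return value feeds the next step.
function authenticate(user) {
  return new Promise((resolve, reject) => {
    if (user === 'alice') {
      resolve('token-123');            // fulfilled: pass the token along
    } else {
      reject(new Error('Unknown user'));
    }
  });
}

function loadDashboard(token) {
  // A dependent async task that needs the token from the previous step
  return Promise.resolve(`dashboard for ${token}`);
}

authenticate('alice')
  .then((token) => loadDashboard(token)) // chain the dependent task
  .then((page) => console.log(page))     // logs: dashboard for token-123
  .catch((err) => console.log(err.message));
```

Because `loadDashboard` itself returns a promise, the second `then` waits for it before running; that is all "chaining" means.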
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cq5kkf0ln43z46pu3uph.png)

---

## Handling Errors

- Errors can occur for multiple reasons, so it is necessary to handle them rather than serving a broken page.
- Promises have a **catch** method; we can chain it onto the promise object.
- The code example shown below illustrates that if there's an error, the user will be redirected to the signup page.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mq8q917iv66o6dfggsjz.png)

---

## Advantages & Drawbacks

- We don't need to nest functions one inside another, as is done with callbacks.
- It improves code readability and provides a better way of handling errors.
- As promises are eager, once fired we cannot cancel them.
- Program execution doesn't stop while a promise is pending; the program continues to run line by line.
- Also, the code still doesn't look synchronous.

---

## Problem

In one of my interviews I was asked to write a promise.

**<u>Problem statement</u>**: Write a promise that resolves if the value is a number, and rejects otherwise.

```
function getPromise(value) {
  return new Promise((resolve, reject) => {
    if (typeof value === 'number') {
      resolve('I am number!');
    } else {
      reject('I am NOT a number!');
    }
  });
}

getPromise("abc").then((res) => {
  console.log('Resolved -------', res);
}, (err) => {
  console.log('Rejected -----', err);
});
```

To overcome these drawbacks, async-await was introduced. We will see what it is and how it works in the next blog. Thanks for reading the article! Feel free to drop me a message here if you have any doubts or suggestions!
swasdev4511
957,618
An introduction to open-wc
Upon dipping your feet into the behemoth that is web development, you will quickly realize how...
0
2022-01-17T20:47:47
https://dev.to/rajivthummalapsu/an-introduction-to-open-wc-27i3
Upon dipping your feet into the behemoth that is web development, you will quickly realize how non-static this field is. Everything is constantly changing and evolving, from updates to web protocols to constant syntax alterations. Consequently, a 1337 developer must periodically update their toolkit with the new fads and revolutions within the industry. With a sector that is never static, it is vital that the fundamentals of web development are mastered to provide a base for expanding a developer's toolkit. Perhaps one of the most important fundamentals to grasp is the basics of JavaScript. As described by tutorialspoint:

> Javascript is the most popular programming language in the world and that makes it a programmer’s great choice. Once you learnt Javascript, it helps you developing great front-end as well as back-end softwares using different Javascript based frameworks like jQuery, Node.JS etc. Javascript is everywhere, it comes installed on every modern web browser and so to learn Javascript you really do not need any special environment setup. For example Chrome, Mozilla Firefox, Safari and every browser you know as of today, supports Javascript. Javascript helps you create really beautiful and crazy fast websites. You can develop your website with a console like look and feel and give your users the best Graphical User Experience.

This article will provide a beginner's perspective on how to begin interacting with JavaScript and will walk through the setup process to start exploring the language.

**NodeJS & NPM**

Our first journey together in the world of JavaScript will begin by installing a piece of software named "NodeJS". What exactly is NodeJS?

> In simple terms, it’s a JavaScript free and open source cross-platform for server-side programming that allows users to build network applications quickly.
[Click here to learn more about NodeJS](https://medium.com/@LindaVivah/the-beginners-guide-understanding-node-js-express-js-fundamentals-e15493462be1)

Follow the listed steps closely and don't hesitate to scour the internet further to diagnose any challenges you encounter.

**1)** Head over to [nodejs.org](https://nodejs.org/en/) and select the big green button on the left that reads LTS. In my case, the version is 16.13.2 LTS.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/acy4bjyhnybk6x2ujpqc.png)

Once you finish downloading, proceed to the next step.

**2)** Next, hold the windows key + R and type "cmd" in the run box that pops up. You can also type "command prompt" into the search bar on your computer if you prefer. This should open up a black box. This is called the terminal or command prompt. Type `node -v` (note the space) into your terminal. This checks which version of NodeJS you have downloaded, which also confirms that NodeJS was actually installed.

![You should get something like this](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/evw47bt1u8jvcv3c1gju.png)

You should have something like this. The number following the "v" is your version number. (Don't worry if it is different from what I have; you may proceed to the next step.) If you are encountering issues, head over to YouTube to search for a NodeJS download tutorial.

**3)** Next we are going to make sure that npm downloaded correctly. npm is a package manager that the JavaScript ecosystem uses to install and manage dependencies. Run the command `npm -v` to see what version of npm you have downloaded. Again, the same thing applies here: don't worry if the numbers following the "v" are different from the following image; this is just your version number. If you do not have npm installed, try running the following command in your terminal:

```
npm install -g npm
```

If you are still running into trouble, head over to YouTube to search for installation tutorials for npm.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/10lhy60b79hxkwxj0bpb.png)

**Installing ohmyzsh**

Next we will work on installing ohmyzsh. Zsh will aid tremendously in operating the terminal. Head over to [this article](https://dev.to/vsalbuq/how-to-install-oh-my-zsh-on-windows-10-home-edition-49g2) for any extra explanation of the following steps.

1) To start, type "windows powershell" into the search bar on your computer.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x93a14lwcckpj4s8thw6.png)

Select "run as administrator" as depicted in the image. If you do not have Windows PowerShell, I would advise downloading it from the Microsoft Store. Otherwise, you can try running the following steps from your regular terminal/command prompt.

2) Run the following command in Windows PowerShell:

```
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
```

3) This leads us into our next step, which is to install Debian. Head over to the Microsoft Store on your Windows machine and search for "debian". Install the application as depicted in the following image.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbo1mhm13bqdtpfin1yh.png)

4) Next open up Debian either through the Microsoft Store or by searching for it in the search bar. Run the following commands in order (note that the last two are separate commands: first install curl, then run the oh-my-zsh installer):

```
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install zsh
$ zsh
$ sudo apt-get install curl
$ sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```

5) Restart Debian and make sure that you see .zsh somewhere in the terminal. This lets you know that you have installed zsh correctly.

6) Next we will be installing Node within zsh. Understanding how Node works and its primary function is important.
As referenced on medium.com,

> Node allows you to execute Javascript code outside of the browser, in a computing environment (such as a server or local development environment) rather than a browser environment.

In layman's terms, Node essentially enables an individual to execute their code in a sandbox environment (without causing any real damage). Think of it like practicing an idea for a painting on a sheet of printer paper before starting on the canvas. Run the following command:

```
$ sudo apt-get install build-essential
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u91t9kbyz8ik6mtt095g.png)

7) Run the following command:

```
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sxhwjmcpojq8nhm0y0x4.png)

8) Make sure that you have VSCode installed before you proceed. If you do not have VSCode installed, follow the steps linked in [this video](https://www.youtube.com/watch?v=V3o57MU5eoE) and then come back. Run the following command:

```
code ~/.zshrc
```

This will open up VSCode with a file. Click on the .zshrc tab and paste the following code after the last line:

```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sew4kw6zxfohtk9usi9g.png)

Save the VSCode file (Ctrl + S), then exit VSCode.

9) Open up Debian and run the following command:

```
nvm install --lts
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k6w7p1c72fsyxrm9bqhd.png)

Next run this command:

```
nvm use --lts
```

Nice work!

**Starting with open-wc**

Now we will take our first dive into working with open-wc.

1) Before we get started, you will need to create your GitHub account and a repository. Follow the steps in this [video](https://www.youtube.com/watch?v=QUtk-Uuq9nE), then proceed to the next step.
2) Next, you will need to establish a secure, SSH-key-based handshake with GitHub. Follow the steps in this [article](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) to complete this task, then proceed to the next step.

3) Open your terminal (windows key + R --> type cmd in the run box). Next you will need to navigate to the GitHub repo that you created. Find your GitHub repo in your file explorer and right click the folder to copy its path to your clipboard.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wphs8dkl3icm74w31mcl.png)

In your terminal, type cd then paste in the file path. This should change your directory to the GitHub repo that you created.

4) In the terminal type npm init @open-wc. Wait for the terminal to complete its processing.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mcf0ai8ykt9xqxhdoqtn.png)

5) You should now see a prompt with some options. Navigate through the UI to select the following:

What would you like to do today? >> Scaffold a new project
What would you like to scaffold? >> Web Component
What would you like to add? _(just press enter)_ >>
Would you like to use typescript? >> No

_For the following option, replace what follows hello-world with whatever you would like._

What is the tagname of your web component? >> hello-world_RajivLab1

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7nm9jm448nhcgl6ojvj2.png)

Write the file to disk and install dependencies.

6) Follow the instructions in the terminal. Change the directory to your web component and then run the following command:

```
npm run start
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lecqublj3pv5k8y0b2m.png)

7) A browser should pop up with localhost in the address.
Right click this page and click "view page source".

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u1r3y5j4itpiqyf63zca.png)

You now have access to the source code of this page.

8) Now let's go ahead and begin to decipher what this code is asking of us. Right click and select "Save as" to download the page.

9) Drag this downloaded file into VSCode so that we can comment on the file and start to decipher what the code is doing. If this is your first time with JavaScript, start by copying and pasting each line of code into Google to try and make sense of it. Leave a comment by typing //your_comment_goes_here to annotate what you believe the code is doing. This [article](https://lit.dev/playground/#sample=examples/full-component) will help.

-----------------------------------------------------------------

You can access my first ever interaction with this code snippet on my GitHub, which is linked [here](https://github.com/RajivThummala-psu/ist402_lab1/blob/main/localhost.htm). Thank you for reading this article, and I wish you good luck on the rest of your JavaScript journey.
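P.S. One part of the Lit source that often puzzles beginners is the html`...` syntax. That is a plain-JavaScript feature called a tagged template, and it can be demystified without Lit at all. Here is a tiny runnable sketch (the `tag` function below is my own toy, not Lit's actual implementation):

```javascript
// A tagged template receives the literal string parts and the
// interpolated values separately; this is the mechanism lit's html`` builds on.
function tag(strings, ...values) {
  return { strings: [...strings], values };
}

const name = 'world';
const result = tag`<p>Hello, ${name}!</p>`;

console.log(result.strings); // [ '<p>Hello, ', '!</p>' ]
console.log(result.values);  // [ 'world' ]
```

Lit uses the separation between static string parts and dynamic values to know exactly which bits of the template can change, which is why its updates are fast.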
rajivthummalapsu
1,002,652
Task force5.0 {Automation}
Welcome back to the taskforce 5.0 blog series In this second episode I will go through what I...
0
2022-02-27T07:35:51
https://dev.to/rkay250/task-force50-automation-47b4
episode2, devops, taskforce, coding
Welcome back to the Taskforce 5.0 blog series. In this second episode I will go through what I learned and experienced in my second week.

## DevOps process

This last week we learnt more about software development operations. Being a software developer always requires efficiency, effective team collaboration, and responsibility.

![Imn](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/go24jdmvkimybjrzva3n.png)

To achieve that we learnt the best use of different IT tools and operations like [git](https://en.wikipedia.org/wiki/Git), [GitLab](https://about.gitlab.com), etc. These tools don't only ease collaboration but also automate the whole software development process.

> Tips: DevOps is a set of practices that combines software development and IT operations. It aims to shorten the systems development life cycle and provide continuous delivery with high software quality.

## Communication

The best thing about Taskforce is that we don't only focus on technical skills but also soft ones. This last week we dived deep into communication, and I can't imagine how much information one can give away without saying a single word! Communication itself is a broad topic that can't be covered within a week; nevertheless we managed to learn a lot, including IMessage, the structure of effective communication, the principles of effective communication, etc.

## The Outstanding session

![hans](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/16r0jmul0abd0fnqzbx5.jpeg)

I always enjoy learning from elders. This week was a blessing for us, as we had a great communication session with Mr Hans. We're glad that he shared a lot of his expertise in management and communication.

To conclude, this was another fruitful, educational and insightful week. Remember, "next episodes are coming soon", and you can find the first episode :point_right: [here](https://dev.to/rkay250/task-force-50-1mhh)
rkay250
957,742
Swift Notes: Enums
This is a collection of code-snippets that I've accumulated on my journey of learning iOS Development...
0
2022-01-17T04:07:43
https://dev.to/mikewestdev/100daysofswift-days-1-2-complex-data-types-2cln
swift
This is a collection of code snippets that I've accumulated on my journey of learning iOS development with Swift; I'm hoping it can serve as a cheat sheet or resource for others learning.

## Enums

Swift is smart and will assign raw values to your enum cases automatically, counting up from 0 or from whatever initial value(s) you set:

```swift
enum WinterMonth: Int {
    case December = 1
    case January  // 2
    case February // 3
}

print(WinterMonth.January.rawValue) // 2
```

Iterate over all values of an enum with the CaseIterable protocol:

```swift
enum WinterMonth: String, CaseIterable {
    case December
    case January
    case February
}

var monthsArray = WinterMonth.allCases
```

Comparing enums using Swift's type system:

```swift
// Switch statement
enum WinterMonth {
    case December
    case January
    case February
}

let currentMonth = WinterMonth.January

switch currentMonth {
case .December:
    print("Happy Holidays!")
case .January:
    print("Happy New Year!")
default:
    print("It is \(currentMonth)")
}
```

An enum whose cases carry different data types (note the generic parameter, which the associated values need):

```swift
enum Result<Value> {
    case success(Value)
    case failure(Error)
}

extension Result {
    /// The error, in case the result was a failure
    public var error: Error? {
        guard case .failure(let error) = self else { return nil }
        return error
    }

    /// The value, in case the result was a success
    public var value: Value? {
        guard case .success(let value) = self else { return nil }
        return value
    }
}
```
mikewestdev
957,771
Discovering LIT
I thought I knew some basics of Javascript and CSS, but I was wrong. To begin, I'm writing this blog...
0
2022-01-17T04:59:48
https://dev.to/jfz5219/discovering-lit-egg
I thought I knew some basics of JavaScript and CSS, but I was wrong. To begin, I'm writing this blog for one of my IST lab assignments. I first had to complete a list of things:

- Download VSCode, NPM, and NodeJS
- Create a GitHub account
- Link SSH

NPM is important because it provides tools that Node.js projects may need. It also helps us interact with and create repos. I already had VSCode and NPM, so all I needed was NodeJS. To download NodeJS, I typed it into Google and clicked the first link. I wanted to download the recommended version and stuck with Ver. 16.13.2.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wv317tqrejpc2m4k8o12.JPG)

I had a GitHub account and just needed to create a repository for this lab assignment. Then, I went on a tangent about what SSH is and how to find my keys. I ended up using this [tutorial](https://medium.com/devops-with-valentine/2021-how-to-set-up-your-ssh-key-for-github-on-windows-10-afe6e729a3c0) on setting up my SSH keys for GitHub.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrl2lb57m4k6suvuof5r.JPG)

With everything installed and ready, I went over the LIT tutorials provided by our professor. This was when I realized I knew nothing about JavaScript. The tutorial started off easy, but got harder as more components came together. I was kinda lost at the last page of the tutorial, but I think it was enough for me to complete the lab.

**Lab 1: Create a Hello World Element**

I looked over the simple sample code given to us as a reference and documented what I understood and didn't. Deleting parts and checking what the code would produce helped me understand some components. After studying the code, I renamed all the element components to "Hello World". I knew I was correct once it produced the same results as the sample code. However, all of this was done on the LIT tutorial website; I still had to run my code using NPM.
In the end, I couldn't completely finish the assignment because it didn't run and I had errors. I'm still trying to figure out what happened, and if there's no result, I will go to my classmates, professor, or TAs for help.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/874qwtzsl2krnbnxyjme.JPG)

**Update Jan 19**

With the help of some of my classmates, I was able to figure out what went wrong. I should have looked over the professor's [tutorial](https://www.youtube.com/watch?v=r_mio0e6v1g) because I completely skipped some steps for "open-wc". Once everything was installed, I could run "npm start". I still came across some errors after running my code, and I'll go over some of them. First, my "HelloWorld" class was not capitalized, so it looked like this (picture below).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/92buki69hbsleoa6fb7p.JPG)

Second, I did not import the source correctly. I should have added two dots to get to my hello-world.js directory; instead I only had one.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4us7nbs1oqgtv9w1s98l.JPG)(WRONG, it should be src="../hello-world.js")

After patching everything up and running "npm start", the outcome looked like this.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eefj8vflurxb5lfqyk10.JPG)

You can check out my code on [GitHub](https://github.com/jfz5219/ist402).
jfz5219
957,781
Developing Express and React on the same port
Without CRA. I was quite annoyed at how difficult it was to integrate React with Express. The...
0
2022-01-17T05:52:38
https://dev.to/codingjlu/developing-express-and-react-on-the-same-server-55p7
node, express, webdev, react
_Without CRA._

I was quite annoyed at how difficult it was to integrate React with Express. The process goes something like:

1. Set up your Express app in one directory
2. Use CRA to generate the frontend in another directory
3. Develop the backend
4. Use a proxy for the frontend and mess with CORS
5. Done. Production? Squash them together... mess

Sounds simple? Not for me. It even felt... hacky. Creating React apps with Express shouldn't be hard. It's been a long time since React and Express came out, and nothing could be better? Oh well. I'll just use NextJS or something. Until things get messy. Websockets? Uncheck. Elegant routing? Double uncheck. I just find it hard to work with. The server side is hard(er than Express; perhaps I'm just lazy though) to configure too. So, we're now officially stuck. So what's the option? Back to Express...

I set off to make a template. A template to use Express and React&mdash;with no mess. Let's dive in.

To start off, let's clone the template:

```sh
git clone https://github.com/codingjlu/express-react-template.git
```

Then move to the directory:

```sh
cd express-react-template
```

Now we'll have to install all the dependencies. Run the `install` command:

```sh
npm install
```

Then we have to install all the dependencies for the frontend:

```sh
cd client
```

```sh
npm install
```

Now that everything's installed we can start the development server:

```sh
npm run dev
```

Once stuff stops printing in the console you can open up http://localhost:3000. Boom! Our React app is now up and running, powered by Express. You should see something like this:

![Starter Express React App](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j1huqr5uplddbq4wk4qx.png)

We're using React Router, so if you click on the link you should see an instant change to the new location. Cool! We have also defined an API endpoint using POST at `/hello`.
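Under the hood, the handler behind that endpoint just sends back a small JSON object. As a quick sanity check, you can exercise the same handler logic without even starting a server by faking the response object (the mock below is my own sketch, not part of the template):

```javascript
// The template's POST /hello handler, extracted so it can be
// exercised without a running server.
function helloHandler(req, res) {
  res.json({ hello: 'world' });
}

// Minimal stand-in for Express's res object: just records what was sent
const fakeRes = {
  body: null,
  json(payload) { this.body = payload; },
};

helloHandler({}, fakeRes);
console.log(fakeRes.body); // { hello: 'world' }
```

This fake-response trick is also a handy way to unit test Express handlers in general.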
You should see some JSON like this:

![json response](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x9x4o9kth9iaeumgmvhr.png)

Yay! Our Express React app is up and running, with no junk. We can customize both the server and the client. Let's go to `index.js` in the root of our project and replace

```js
app.post("/hello", (req, res) => {
  res.json({ hello: "world" });
});
```

with

```js
app.get("/hello/:name", (req, res) => {
  const { name } = req.params;
  res.json({ message: `Hello, ${name}!` });
});
```

Now visit http://localhost:3000/hello/Bob and see what happens. Experiment and change Bob to something else. When you're comfortable with Express (which you probably already are) you can go ahead and change the server however you like. You can create new files, edit files, perform backend operations, and more!

We can also customize the client side (React). Let's make a button on the home page that lets you say hello. Go to your code, open up `/client/src/routes/home.js`, and let's add a button to our `<Home />` component. Inside the parentheses after `return`, change the value to:

```jsx
<>
  <Hello><Link to="/hello">Hello</Link>, world!</Hello>
  <button onClick={() => alert("Hello!!")}>Say hello</button>
</>
```

Now save the file, reload localhost in your browser, and click on the button — you should get a hello message!

Sounds great? Great! For the template, the frontend stack uses Styled Components. You can change it to something else, but you might have to edit the webpack config (like if you wanted to do CSS modules or something). Importing images (including SVG) also works.

That's it. If you used the template please tell me what cool things you made with it! The template repository is https://github.com/codingjlu/express-react-template.
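If you want a feel for how the `/hello/:name` route behaves without opening the browser, the handler's logic can be exercised with plain mock objects. This is just a sketch, not part of the template — `helloHandler` and the mock `req`/`res` shapes are made up for illustration:

```javascript
// The /hello/:name handler pulled out as a standalone function so it
// can be run without starting the Express server.
function helloHandler(req, res) {
  const { name } = req.params;
  res.json({ message: `Hello, ${name}!` });
}

// Exercise it with minimal mock req/res objects.
const sent = [];
helloHandler(
  { params: { name: "Bob" } },          // mock request
  { json: (body) => sent.push(body) }   // mock response
);
console.log(sent[0]); // { message: 'Hello, Bob!' }
```

This is also roughly how you could unit test route handlers later without spinning up a server at all.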
codingjlu
957,899
How to fix QuickBooks Script Error? [Experts’ Tips]
There are situations when you try to get the right of entry to an internet web page from QuickBooks...
0
2022-01-17T07:29:13
https://dev.to/alexpoter0356/how-to-fix-quickbooks-script-error-experts-tips-4j3o
ux
There are situations when you try to access a web page from QuickBooks but the page does not load.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8245wiewvocdz46ige4v.jpg)

Additionally, an error message saying "An issue has taken place within the script of this web page" is shown on the screen. This is known as a [QuickBooks Script Error](https://www.quickbooksphonenumber.com/blog/fix-quickbooks-script-error/) and it can be irritating. In this blog, we will discuss the reasons behind the QuickBooks Script Error, which will help you spot the problem easily. Further, we will walk through the solutions and learn how to fix a script error message that won't go away.

## Learn about the causes of QuickBooks Script Errors

1. The error occurs when you try to import data containing an account that doesn't exist.
2. The error appears while uploading an invoice or bill with an account that doesn't match accounts payable or available assets.
3. The account number or account name already exists.

## Solutions to fix QuickBooks Script Errors

**If the error happens while exporting**

1. First of all, you need a 32-bit version of Internet Explorer.
2. Next, clear all cookies and cache. Follow the steps below to do so:
   - Click on the Tools button in the top-right corner of Internet Explorer
   - Select Safety and tap on Delete browsing history
   - Tick the Cookies and website data checkbox and tap on Delete
3. Now close Internet Explorer and open it again.
4. Go back to QuickBooks Online and then start exporting the file data again.

**If the error happens while importing**

1. First and foremost, open Internet Explorer and open the Tools menu.
2. If you can't find the Tools option, press the Alt key. Hidden menu options will appear on your screen.
3. Choose Internet Options.
4. Then, tap on the Advanced tab.
5. Clear "Display a notification about every script error".
6. Finally, click OK.

## Bottomline

Yes, we do agree that it is complicated to handle issues that have nothing to do with your application but with the internet. We hope that this blog helped you fix the QuickBooks Script Error from your device without losing any data. If the issue persists, it is always best to get in touch with [QuickBooks support](https://www.quickbooksphonenumber.com).
alexpoter0356
958,016
Cross-Platform Game Apps: Trends and Top Picks For the Year 2021
The year 2020 will be remembered as the year of the pandemic. Everything came to a standstill that...
0
2022-01-17T11:03:57
https://dev.to/rvtechnologies/cross-platform-game-apps-trends-and-top-picks-for-the-year-2021-4jhf
gamedev, gameapps, mobilegameapp, android
The year 2020 will be remembered as the year of the pandemic. Everything came to a standstill that year, and humanity faced difficult circumstances. Despite the fact that the rampaging pandemic killed many enterprises across several industries, the gaming industry was spared. Games are still the most popular sort of app for mobile device users in a century dominated by Facebook, Instagram, and Twitter, where influencers are supposed to be the new voice of brands. [**Mobile game developers**](https://rvtechnologies.com/mobile-game-development-company/) worked around the clock to create more innovative and engaging gaming experiences.

Let's get down to business. For starters, let's have a look at the most recent [2021 mobile gaming statistics](https://techjury.net/blog/mobile-gaming-statistics/#gref) that will astound you:

- Games account for 21% of Android and 25% of iOS app downloads.
- Games are responsible for 43% of all smartphone use.
- Within a week of purchasing a phone, 62% of consumers installed a game on it.
- Android is used by 78% of gamers.
- Globally, there are about 2.2 billion active mobile gamers.
- Puzzle games account for 57.9% of all games played.
- Women are more likely than men to spend money on in-game content.
- By the end of 2021, revenue from mobile gaming is set to surpass last year's revenue of $77.2 billion.

Gamers today have access to a wide range of devices, from popular smartphones to high-end consoles. Mobile game app development companies are focusing on [**creating games**](https://rvtechnologies.com/some-key-mobile-game-app-development-trends-to-look-forward/) that work across all main devices to ensure that the game reaches the broadest possible audience.

## **How Do Cross-Platform Games Work?**

To create games from common code and art assets, cross-platform game app development necessitates a number of software tools. All developers can work on the same code base.
And platform-specific elements for the game to be released are created using game engines, software libraries, scripting languages, and software development kits (SDKs). This is accomplished by keeping different elements of the game distinct. Consider it like the layers of a cake: the game logic is layered over the game engine, which is layered over the graphics API, which is layered over the OS, which is layered over the hardware. App developers can port games without having to redo everything from scratch if you keep things separated like this.

Instead of spending time on the details of each game platform, cross-platform mobile game app creation allows creators to focus more on game mechanics and strategic gameplay.

## **Why Cross-Platform Game App Development?**

The advantages of cross-platform game app development are numerous. Cross-platform developers may write code once and then publish the game on all mobile platforms, including iOS, Android, and Steam. This not only saves time but also reduces additional overheads and expenses, resulting in significant cost savings.

- Uniformity: Across numerous devices, cross-platform games keep a consistent design, feel, and experience.
- Better Monetization: Cross-platform mobile games have a much greater retention rate and are comparatively better for monetization. This allows them to reach a wider audience, allowing them to make more money from a single game.
- Greater customer base: The more platforms you target, the larger your client base will be. This gives you the added benefit of being able to tap into a larger market.

There may, however, be certain drawbacks. Native platform development gives you a one-of-a-kind connection to the platform, which usually means better performance. There may be some delays when using non-native code. It's critical to concentrate on the user experience and to test across all platforms.
## **Top 5 Mobile Game Development Tools Preferred by Developers for the Year 2021**

### **1. Unity**

Some of the industry's biggest mobile game app developers use [**Unity as one of their favorite gaming engines**](https://rvtechnologies.com/8-reasons-why-unity-is-the-first-choice-of-most-game-developers/). Unity is compatible with all major platforms, including Windows, Android, iOS, Linux, game consoles, and others. This platform allows for easy collaboration among teams and creates outstanding games with 2D and 3D development features. Unity allows you to import assets from a variety of 3D software, such as Maya or Blender, and also provides a large library of components that can be purchased directly from its store.

Games: 300: Rise of an Empire - Seize Your Glory Game, Angry Birds 2, Pokémon Go

Pricing:

- Free for students and for personal use
- Plus edition: $40/month
- Pro edition: $150/month
- Enterprise edition: $200/month

### **2. Unreal Engine**

If you are a newbie, Unreal Engine is the platform for you because its user-friendly features eliminate the need for programming knowledge. Unreal Engine is a complete product suite that does not require the use of any third-party plugins in order to create a game. It includes tools like Blueprint, which allows app developers to create prototypes without coding, and Sequencer, which gives you access to animation and cinematic capabilities, plus photoreal rendering in real time, lifelike animations, pre-designed templates, and a wealth of learning resources.

Games: Assassin's Creed Chronicles, Batman: Arkham Knight, Lineage II: Revolution

Pricing:

- Free publishing and creators licenses
- Custom license at a custom price
- Enterprise program for $1,000 per seat/year

### **3. Solar2D**

Solar2D, originally known as "Corona SDK, The 2D Game Engine," is a cross-platform game engine that makes use of the Lua scripting language, which is simple to pick up and use.
Developers may use its 2D features and the numerous plugins offered by the Corona Marketplace. Newbies can benefit hugely from its large active community and documentation. It also includes a real-time simulation to let you visualise how your app will look once modifications are made. Apps developed with it are supported across iOS, Android, Mac OS X, Windows, Linux, and other gaming consoles.

Games: Grow Beets Clicker and Gunman Taco Truck

Pricing:

- Core functionalities are free to use

### **4. AppGameKit**

AppGameKit is a user-friendly platform that uses a system called AGK Basic and provides solutions for all types of developers, from beginners to experts. One of its most valuable features is that 3D, Augmented Reality, 2D Sprites, and even VR are all included in the end result. Supported platforms include Windows, iOS, and Android.

Games: Skrobol, Bouncing Brendan, and Echoes III

Pricing:

- Classic App Game Kit: $49.99
- Unlimited App Game Kit: $66.98
- Customised packs as per individual requirements

### **5. Amazon Lumberyard**

Amazon has ventured into the **mobile game app development** arena with the release of Lumberyard, a game creation engine. Games created with it can then be launched on a variety of platforms thanks to its cross-platform capabilities. With Lumberyard's Twitch integration, developers can engage their users with visually rich content. Supported platforms include iOS, Android, PC, Xbox One, and PlayStation 4, among others.

Games: Crucible and Breakaway

Pricing:

- Free

## **Conclusion:**

We've described the most common engines that can assist in the creation of stunning top Android/iOS games. Consider your budget (although some platforms are free), ideas, requirements, and expectations when selecting a platform for game app development.
rvtechnologies
958,332
Best Visual Studio Code Extensions for Developers
1. GitLens — Git supercharged Usage - GitLens simply helps you better understand code....
0
2022-01-17T19:52:06
https://dev.to/samithawijesekara/best-visual-studio-code-extensions-for-developers-1o42
productivity, vscode, webdev, programming
## 1. GitLens — Git supercharged

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8hnjce5ip3yn3xi7p6bc.PNG)

Usage - GitLens simply helps you better understand code. Quickly glimpse who changed a line or code block, why, and when. Jump back through history to gain further insights as to how and why the code evolved. Effortlessly explore the history and evolution of a codebase. ([Install Now](https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens))

## 2. Live Server

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnuy1gcul8y633m7swrj.PNG)

Usage - Launch a local development server with a live reload feature for static & dynamic pages. ([Install Now](https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer))

## 3. Bracket Pair Colorizer

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8vl52uqphdjg3c7h3jdx.PNG)

Usage - This extension allows matching brackets to be identified with colors. The user can define which characters to match, and which colors to use. ([Install Now](https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer))

## 4. Mithril Emmet

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/augax3r09emgpruhk5bx.PNG)

Usage - "Emmet is a plugin for many popular text editors which greatly improves HTML and CSS workflow". Short and to the point. Emmet can speed up your workflow when building sites. Emmet also used to be called Zen Coding, for those of you who find the syntax familiar. ([Install Now](https://marketplace.visualstudio.com/items?itemName=FallenMax.mithril-emmet))

## 5. Material Theme

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/65aui5qd40xpc4lhzpfq.PNG)

Usage - Material Theme is a VS Code IDE theme. ([Install Now](https://marketplace.visualstudio.com/items?itemName=Equinusocio.vsc-material-theme))

## 6. Material Icon Theme

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xe3i7dilt4k3zxmjrsk.PNG)

Usage - Material Icon Theme sets VS Code file and folder icons matching the file extension and folder name. ([Install Now](https://marketplace.visualstudio.com/items?itemName=PKief.material-icon-theme))

## 7. Prettier - Code formatter

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xechm954ef4ov4zsdb23.PNG)

Usage - Prettier is an opinionated code formatter. It enforces a consistent style by parsing your code and re-printing it with its own rules that take the maximum line length into account, wrapping code when necessary. ([Install Now](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode))

## 8. ES7 React/Redux/GraphQL/React-Native snippets

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9vop4d2v35f88efvabmm.PNG)

Usage - By using this plugin you can easily add code from snippets. ([Install Now | See All Snippets](https://marketplace.visualstudio.com/items?itemName=dsznajder.es7-react-js-snippets))

## 9. Vscode-Styled-Components

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cjtgxqj3lds6d6z7ejot.PNG)

Usage - The styled-components extension adds highlighting and IntelliSense for styled-component template strings in JavaScript and TypeScript. ([Install Now](https://marketplace.visualstudio.com/items?itemName=styled-components.vscode-styled-components))

## 10. CodeSnap

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0whatjeyjr2a2833gj4.PNG)

Usage - Take beautiful screenshots of your code in VS Code! ([Install Now](https://marketplace.visualstudio.com/items?itemName=adpyke.codesnap))

## 11. Better Comments

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/in5n0ssd2j7zslhei151.PNG)

Usage - The Better Comments extension will help you create more human-friendly comments in your code. ([Install Now](https://marketplace.visualstudio.com/items?itemName=aaron-bond.better-comments))

## 12. Auto Rename Tag

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8zanq615wuou8jsyvvt.PNG)

Usage - When you rename one HTML/XML tag, automatically rename the paired HTML/XML tag. ([Install Now](https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-rename-tag))

## 13. HTML Snippets

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sivre162s7l9uum823d1.PNG)

Usage - This extension adds rich language support for HTML markup to VS Code. ([Install Now](https://marketplace.visualstudio.com/items?itemName=abusaidm.html-snippets))

## 14. CSS Peek

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/igl5pcxzf6tiuhgxb655.PNG)

Usage - Allows peeking at CSS ID and class strings as definitions, jumping from HTML files to the respective CSS. Allows peek and go-to-definition. ([Install Now](https://marketplace.visualstudio.com/items?itemName=pranaygp.vscode-css-peek))

## 15. PowerShell

![Best Visual Studio Code Extensions for Developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5co2qkth8m7o1hzs0hk.PNG)

Usage - This extension provides rich PowerShell language support for Visual Studio Code (VS Code). Now you can write and debug PowerShell scripts using the excellent IDE-like interface that Visual Studio Code provides. ([Install Now](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell))
samithawijesekara
958,358
This is my first article on DEV.to
So, this is the first article here on DEV.to. I just don't know yet why have I started it. I already...
0
2022-01-17T17:11:11
https://dev.to/peterteszary/this-is-my-first-article-on-devto-5bie
devjournal, devto, firstpost
So, this is my first article here on DEV.to. I don't know yet why I've started it. I already have two blogs. One is for my business and the other one is for fun. The non-business one is similar to a diary. But I just like to try out new things, so that is why I've decided to click around DEV.to. Maybe I will start with some short tweet-like posts here to see if I can keep up with the writing.

The reason to keep this blog up to date and post regularly is to have some consistency in my learning methodology. I am coming from the WordPress world, and my main goal is to become a full-stack developer someday. I will tell you about this in a later article. So I would like to dive deeply into programming. I also have some programming background, because I have studied a lot on my own, from Udemy and YouTube courses. I also attended a two-year programming school that I did not finish; I have only completed the first year so far. But I decided to go back this September, so we will see. Until then, I would like to keep up with studying as much as possible. So I hope I can document this journey here.

I hope I will see you and myself around. That was my first post. I guess there will be a lot more. We'll see.
peterteszary
958,620
Changing AntD locale dynamically
Hello devs, it's new year and here i'm struggling with React and AntD. I'm trying to change AntD...
0
2022-01-17T21:08:53
https://dev.to/dcruz1990/changing-antd-locale-dynamically-3e15
help, react
Hello devs, it's a new year and here I am struggling with React and AntD. I'm trying to change the AntD locale dynamically.

As the documentation says, AntD has a `<ConfigProvider>` context that wraps `<App>`; it receives the locale as a prop. So here I'm doing this dumb thing:

```jsx
import i18n from './i18n'

ReactDOM.render(
  <React.StrictMode>
    <ConfigProvider locale={i18n.languages[0]}>
      <App />
    </ConfigProvider>
  </React.StrictMode>,
  document.getElementById('root'),
)
```

And of course, when I change the language nothing happens. The docs say that we have to set up some local state or so, but I'm really lost there. Any idea?
dcruz1990
958,767
Install MYSQL on Ubuntu server 18.04
Install MYSQL on Ubuntu server 18.04 MySQL is an open-source database management system,...
0
2022-01-18T02:49:30
https://dev.to/ilhamsabir/install-mysql-on-ubuntu-server-1804-47bg
mysql, devops, webdev, ubuntu
# Install MySQL on Ubuntu Server 18.04

MySQL is an open-source database management system, commonly installed as part of the popular LAMP (Linux, Apache, MySQL, PHP/Python/Perl) stack. It uses a relational database and SQL (Structured Query Language) to manage its data.

The short version of the installation is simple: update your package index, install the `mysql-server` package, and then run the included security script.

## Prerequisites

To follow this tutorial you will need:

- One Ubuntu 18.04 server set up by following an initial server setup guide, including a non-root user with sudo privileges and a firewall.

## Installing MySQL

To install it, update the package index on your server with apt:

```
sudo apt update
```

Then install the default package:

```
sudo apt install mysql-server
```

This will install MySQL, but will not prompt you to set a password or make any other configuration changes. Because this leaves your installation of MySQL insecure, we will address this next.

## Configuring MySQL

For fresh installations, you'll want to run the included security script. This changes some of the less secure default options for things like remote root logins and sample users. On older versions of MySQL, you needed to initialize the data directory manually as well, but this is done automatically now.

Run the security script:

```sh
sudo mysql_secure_installation
```

This will take you through a series of prompts where you can make some changes to your MySQL installation's security options. The first prompt will ask whether you'd like to set up the Validate Password Plugin, which can be used to test the strength of your MySQL password. Regardless of your choice, the next prompt will be to set a password for the MySQL root user. Enter and then confirm a secure password of your choice.

Note that even though you've set a password for the root MySQL user, this user is not configured to authenticate with a password when connecting to the MySQL shell.
If you'd like, you can adjust this setting later.

Regardless of how you installed it, MySQL should have started running automatically. To test this, check its status:

```
systemctl status mysql.service
```

You'll see output similar to the following:

```output
● mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: en
   Active: active (running) since Wed 2018-04-23 21:21:25 UTC; 30min ago
 Main PID: 3754 (mysqld)
    Tasks: 28
   Memory: 142.3M
      CPU: 1.994s
   CGroup: /system.slice/mysql.service
           └─3754 /usr/sbin/mysqld
```

## How To Allow Remote Access to MySQL

One of the more common problems that users run into when trying to set up a remote MySQL database is that their MySQL instance is only configured to listen for local connections. This is MySQL's default setting, but it won't work for a remote database setup, since MySQL must be able to listen on an external IP address where the server can be reached. To enable this, open up your mysqld.cnf file:

```
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
```

Navigate to the line that begins with the bind-address directive. It will look like this:

```
. . .
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
. . .
```

By default, this value is set to 127.0.0.1, meaning that the server will only look for local connections. You will need to change this directive to reference an external IP address. For the purposes of troubleshooting, you could set this directive to a wildcard IP address, either *, ::, or 0.0.0.0:

```
. . .
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 0.0.0.0
. . .
```

Then restart the MySQL service to put the changes you made to mysqld.cnf into effect:

```
sudo systemctl restart mysql
```

If you have an existing MySQL user account which you plan to use to connect to the database from your remote host, you'll need to reconfigure that account to connect from the remote server instead of localhost. To do so, open up the MySQL client as your root MySQL user or with another privileged user account:

```
sudo mysql
```

Create a user for remote access:

```
CREATE USER 'user_name'@'%' IDENTIFIED BY 'password';
```

```
GRANT ALL PRIVILEGES ON *.* TO 'user_name'@'%' WITH GRANT OPTION;
```

```
FLUSH PRIVILEGES;
```

```
exit
```

The user for remote access to the database server is now set up.

## Allow UFW

Lastly, assuming you've configured a firewall on your database server, you will also need to open port 3306 — MySQL's default port — to allow traffic to MySQL.

If you only plan to access the database server from one specific machine, you can grant that machine exclusive permission to connect to the database remotely with the following command. Make sure to replace remote_IP_address with the actual IP address of the machine you plan to connect with:

```
sudo ufw allow from remote_IP_address to any port 3306
```

Otherwise, to allow connections from anywhere:

```
sudo ufw allow 3306
```

And now access your MySQL server:

```
mysql -u user -h database_server_ip -p
```

Or use apps like HeidiSQL, Sequel Pro, Navicat, Workbench, etc.
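If the remote client still can't connect, it helps to first check whether anything is reachable on port 3306 at all before debugging credentials or grants. A quick sketch using only the Python standard library — `port_open` is a helper name made up here, and the host IP is a placeholder you'd replace with your server's address:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your database server's IP; 3306 is MySQL's default port.
print(port_open("203.0.113.10", 3306))
```

If this returns False, the problem is the bind-address setting or the firewall, not your MySQL user.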
ilhamsabir
958,857
Dogs not barking
Gregory (Scotland Yard detective): Is there any other point to which you would wish to draw my...
0
2022-02-01T11:46:30
https://dev.to/maddevs/dogs-not-barking-594h
webdev, testing
> *Gregory (Scotland Yard detective): Is there any other point to which you would wish to draw my attention?*
> *Holmes: To the curious incident of the dog in the night-time.*
> *Gregory: The dog did nothing in the night-time.*
> *Holmes: That was the curious incident.*

*The Adventure of Silver Blaze*

![Dogs not barking](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mf5vtfvjgiokf0n54yb.jpg)

What relation does this expression have to IT, and when is it used? We use it for [cases when something deviates from a usual situation](https://www.mikepope.com/blog/AddComment.aspx?blogid=2392). Normally, it is a signal of some kind of problem: the dogs are "not barking" when they should be. Or there are no alerts about issues while, normally, they are present. In other words, we use this expression for something suspicious, for the absence of something that normally is there.

**For example:**

* Suddenly, you stop getting notifications about errors. So, your alerting tools, "the dogs", stopped barking. But it doesn't mean that suddenly everything is fixed and runs smoothly. It rather means that something has happened. So, look for an issue: a bug, an error, or even a failure of a monitoring or alerting system.
* Users stopped complaining about delays, app crashes, bugs? It doesn't mean that the product for some reason started working perfectly, especially if earlier that wasn't the case. It means that the notifications do not reach you. If everything is suspiciously calm, look for a failure.
* The traffic has increased suddenly. It means that something is going on. And it doesn't always mean that something good is going on. So, find out what is behind the traffic growth and act accordingly.
[![Why it projects are late and exceed budgets](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nogz8fj6yqd0imb8ijv6.png)](https://maddevs.io/customer-university/why-it-projects-are-late-and-exceed-budgets/?utm_source=devto)

## Some examples of what might cause the dogs to stop barking

It is time to have a look at what might cause the dogs to stop barking. Or, in other words, to have a look at what might go wrong. We use a set of alerting tools to be updated on any failure. So, if there are no alerts, it means that one or another alerting tool stopped working or its settings aren't alright. Examples? Here we go.

### Users are 100% happy with your new product

Yep, we rely on Sentry for alerting on user-facing errors. Let me elaborate more on this tool, though. It will help me to explain how many things depend on it.

Even if your code seems to be clean, even if you did your best to cover it all in tests, testing everything is impossible. There are PLENTY of new and old browsers across multiple devices: smartphones, PCs, tablets, game consoles, IoT devices, smart watches… Yep, you can run tests on many of them, but I doubt whether you can test just everything.

Moreover, your way of using your app is your way. Users might have a completely different approach to using it. They will for sure do something you would never expect. Bugs might appear just because a sequence of tasks is handled in a way that seems illogical to you. But it is logical for another user. And this sequence might cause an error or make the entire app fail.

[![Dogfooding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z0tcgl65ln8dargk7lrr.png)](https://maddevs.io/insights/blog/dogfooding/?utm_source=devto)

Do all users report the bug? If they are willing to do so, can they describe what bothers them? In most cases, to describe a bug or an error, they would need to have at least some technical knowledge. Well, try to guess what % of users have it.
So, the majority of errors will either not be reported, or users will limit their reports to "This product doesn't work". Guess why they think the product doesn't work and what they did to make it crash.

Here is where Sentry and similar alerting tools come in handy. They eliminate the need to rely on customers to assess and test our products. Sentry collects all errors in real-time mode. Then, depending on the settings, it sends alerts about errors to your Slack chat or email. So, Sentry alerts us about errors that users have encountered and provides us with information about those errors.

By the way, as I have mentioned, it is normal that users find bugs. It is not because your product is bad. It is because every person approaches the app in a different way.

**So, alerts are inevitable, especially if you have just pushed a new version of the app to production.** Now, imagine that you have just released the product, and it works perfectly. You know that it works perfectly because there are no alerts - Sentry is silent.

![Alerting tools](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fk03z1t98c2lmsyl1gmm.jpg)

Don't hurry to celebrate, though. Do you remember I said that errors are inevitable just because some sequences of tasks are managed by users in ways that you couldn't predict? So, if you don't get any alerts, it might be the case that your "dogs are not barking".

Check whether your alerting system is integrated properly. Make sure you have inserted the code into your application correctly. If you are frantically checking Slack for alerts, check your email too - you might have set up Sentry to send alerts to your email, and vice versa. In other words, if you don't get any alerts on errors when the product is released, look for an issue.
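The "silence is itself a signal" idea can even be automated as a dead-man's-switch check: raise an alarm when no alerts (or heartbeats) have arrived within an expected window. A minimal sketch in Python — purely illustrative, and `silence_is_suspicious` is a made-up name, not a Sentry or Datadog API:

```python
from datetime import datetime, timedelta

def silence_is_suspicious(last_alert_at: datetime,
                          now: datetime,
                          expected_max_gap: timedelta) -> bool:
    """Return True when no alert has arrived for longer than expected.

    If errors normally trickle in (and after a release they always do),
    a gap longer than expected_max_gap suggests the alerting pipeline
    itself is broken -- the dogs stopped barking.
    """
    return now - last_alert_at > expected_max_gap

now = datetime(2022, 2, 1, 12, 0)
last_alert = datetime(2022, 2, 1, 9, 0)  # 3 hours of total silence
print(silence_is_suspicious(last_alert, now, timedelta(hours=1)))  # True
```

In practice you would run a check like this on a schedule and page someone when it fires, so a dead alerting pipeline cannot fail silently.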
[![Software bugs occur](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qv7f7j53j957i7prgvek.png)](https://maddevs.io/insights/blog/why-do-software-bugs-occur/?utm_source=devto)

### Changes are implemented, and not a single notification is received

One of the monitoring tools we use is Datadog. It gives us alerts about issues in infrastructure, applications, and services. We love the Datadog alerts because they are very specific, actionable, and contextual. They enable us to minimize service downtime. Also, the alerts provide enough information to prioritize the issues.

But what would you do if your team has implemented changes to, say, a website, and you haven't gotten a single alert? Well, considering that the tool issues alerts on bugs, website changes, and performance, the silence isn't a good sign here.

This is just one example. Along with the website metrics, the tool monitors many other things: backend, frontend, business analytics, etc. For example, with it, you can monitor how your app performs in front of users, trace API requests from end to end, and many more things.

Considering that Datadog not only alerts on bugs and errors but provides a lot of monitoring metrics, if you don't get any notifications from the tool, it means that something doesn't work. So, check settings, plugins, etc. This tool shall send notifications. If it doesn't, your "dogs are not barking".

[![Software testing.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqxj2wpui1rtv0jjyptd.png)](https://maddevs.io/insights/blog/qa-software-testing-life-cycle/?utm_source=devto)

### A new feature is introduced, and website traffic hasn't changed

Your team has introduced a new website feature. What effect do you normally expect from it? It would be something with the website traffic: a new feature might attract more visitors, it might make the existing visitors use the website more, or - and it can happen, too - it might push the visitors away.
Whatever the scenario is, it influences the website traffic. If a change is implemented, users will notice it, and they will want to check the change out. In our case, we are talking about a new feature. So, a change in website traffic is inevitable.

What does Google Analytics (or whatever you use to monitor website traffic) say? Has it noticed any changes in:

* The number of pageviews
* The number of sessions and their length
* The number of users who visited the website?

If there are no changes, your "dogs are not barking" again. Something is wrong either with the service settings or with the service itself.

## Some words to wrap up

Conclusions? We always check what is going on when we are alerted. But when we suddenly stop getting alerts, it doesn't mean that everything works smoothly. In most cases, it means that something broke down. Find the failure ASAP and fix it!

[![Mad Devs services](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uznpvb6pxs56d6o82ufv.png)](https://maddevs.io/services/?utm_source=devto)

_Previously published at [maddevs.io/blog](https://maddevs.io/insights/blog/dogs-not-barking/?utm_source=devto)._
maddevsio
958,860
WebRTC For Beginners - Part 2: MediaDevices
Part 2 - Media Devices Contents: Part 1: Introduction to WebRTC and creating the...
21,651
2022-01-18T03:58:40
https://ethan-dev.com/post/webrtc-part-2-media-devices
javascript, webrtc, node, tutorial
## Part 2 - Media Devices

Contents:

1. Part 1: Introduction to WebRTC and creating the signaling server [Link](https://dev.to/ethand91/webrtc-for-beginners-1l14)
2. Part 2: Understanding the MediaDevices API and getting access to the user's media devices [Link](https://dev.to/ethand91/webrtc-for-beginners-part-2-mediadevices-142d)
3. Part 3: Creating the peers and sending/receiving media [Link](https://dev.to/ethand91/webrtc-for-beginners-part-3-creating-the-peers-and-sendingreceiving-media-4lab)
4. Part 4: Sharing and sending the user's display and changing tracks [Link](https://dev.to/ethand91/webrtc-for-beginners-part-4-screen-share-42p6)
5. Part 5: Data Channels basics [Link](https://dev.to/ethand91/webrtc-for-beginners-part-5-data-channels-l3m)
6. Part 5.5: Building the WebRTC Android Library [Link](https://dev.to/ethand91/webrtc-for-beginners-part-55-building-the-webrtc-android-library-e8l)
7. Part 6: Android native peer [Link](https://dev.to/ethand91/webrtc-for-beginners-part-6-android-231l)
8. Part 7: iOS native peer
9. Part 8: Where to go from here

- - - -

Hello, welcome to part 2 of my beginner WebRTC series :)

In this part I will introduce the MediaDevices API, how to get the user's media devices (camera and microphone), and how to get a certain video resolution, etc.

This part carries on from the previous part, so if you have not seen that, please take the time to do so. (Or you could just clone the repo ;))

Part 1: [WebRTC For Beginners - DEV Community](https://dev.to/ethand91/webrtc-for-beginners-1l14)

In order to use the MediaDevices API, you must host your page on a secure origin. Also, the user must allow the page to get access to their camera and microphone; this behaviour changes depending on what browser is used. (Chrome asks once, whilst Safari asks every session.) If the page is not secure, you may get `undefined` returned when trying to use the MediaDevices API.

Well then, let's get started.
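That insecure-origin pitfall can be guarded against explicitly. Here is a small sketch (the `requireMediaDevices` helper name is my own, not part of the tutorial) that takes a navigator-like object, so the check is easy to unit test outside a browser:

```javascript
// Hypothetical guard for the insecure-origin pitfall: on plain HTTP,
// navigator.mediaDevices is undefined, so fail loudly up front instead
// of crashing later with a confusing TypeError.
function requireMediaDevices(nav) {
  if (!nav || !nav.mediaDevices || typeof nav.mediaDevices.getUserMedia !== 'function') {
    throw new Error('MediaDevices API unavailable - are you on HTTPS or localhost?');
  }
  return nav.mediaDevices;
}
```

In page code you would call it as `requireMediaDevices(navigator)` once at startup and show a friendly message if it throws.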
First we will prepare the static HTML file, so open public/index.html in your preferred IDE and type/copy the following:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Part 2 - Media Devices</title>
    <meta charset="utf-8"/>
  </head>

  <body>
    <h2>Media Devices example</h2>

    <button onclick="startDefault()">Default</button>
    <button onclick="startVGA()">VGA</button>
    <button onclick="startHD()">HD</button>
    <button onclick="startFullHD()">Full HD</button>
    <button onclick="stop()">Stop</button>

    <hr/>

    <video id="localVideo" autoplay muted></video>

    <script src="./main.js"></script>
  </body>
</html>
```

Next we will need to prepare the main.js file, so open public/main.js and type/copy the following: (Don't worry, I will explain what is going on after.)

```javascript
const localVideo = document.getElementById('localVideo');

const startDefault = () => getMedia({
  video: true,
  audio: true
});

const startVGA = () => getMedia({
  video: { width: 640, height: 480 },
  audio: true
});

const startHD = () => getMedia({
  video: { width: 1280, height: 720 },
  audio: true
});

const startFullHD = () => getMedia({
  video: { width: 1920, height: 1080 },
  audio: true
});

const stop = () => {
  if (!localVideo.srcObject) return;

  for (const track of localVideo.srcObject.getTracks()) {
    track.stop();
  }
};

const getMedia = async (constraints) => {
  try {
    console.log('getMedia constraints: ', constraints);
    const mediaStream = await navigator.mediaDevices.getUserMedia(constraints);
    localVideo.srcObject = mediaStream;
  } catch (error) {
    alert('failed to get media devices, see console for error');
    console.error(error);
  }
};
```

Each function basically calls `navigator.mediaDevices.getUserMedia` with different media constraints. I will explain what the constraints mean, but first let's run the examples.
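The four start functions above differ only in the constraints object they pass. A pure helper along these lines (the `buildConstraints` name and preset keys are my own, not part of the tutorial code) makes the presets data-driven and easy to test:

```javascript
// Hypothetical helper: map a preset name to a getUserMedia constraints object.
// The presets mirror the tutorial's startVGA/startHD/startFullHD functions.
const RESOLUTIONS = {
  vga: { width: 640, height: 480 },
  hd: { width: 1280, height: 720 },
  fullhd: { width: 1920, height: 1080 },
};

function buildConstraints(preset) {
  const video = RESOLUTIONS[preset];
  if (!video) {
    // Unknown preset: fall back to the browser's default camera settings.
    return { video: true, audio: true };
  }
  return { video: { ...video }, audio: true };
}
```

With this in place, each button handler could simply call `getMedia(buildConstraints('hd'))` and so on.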
```bash
npm i # If needed
npm run start
```

Now open your browser and go to: https://localhost:3000

You should get an SSL error, but hopefully you trust your own host ;) If you are using Chrome you may not be able to access the page; if so, please type "thisisunsafe".

There you should see the following page:
[example — ImgBB](https://ibb.co/7Qv35qQ)

Feel free to experiment with the various buttons; you can tell whether you got the resolution just from the size of the video :)

You may notice, for example, that if you pick "Full HD" the resolution returned may be just "HD". This is because if the resolution is not supported, the API will automatically choose the supported resolution closest to the one requested.

What if you absolutely wanted to make sure you get a certain resolution? You would need to use "exact", as shown below:

```javascript
const constraints = {
  video: {
    width: { exact: 1920 },
    height: { exact: 1080 }
  }
};
```

This makes absolutely sure the resolution is Full HD; however, if the device does not support Full HD, it will throw an error.

What if you wanted a range? You would define the constraints like so:

```javascript
const constraints = {
  video: {
    width: { min: 600, max: 1300 },
    height: { min: 300, max: 800 }
  }
};
```

One thing you will need to be careful of is that when you are sending the media to another peer, WebRTC may alter the resolution/frame rate according to the available bitrate, network conditions, packet loss, etc. Because of this I generally don't recommend using the "exact" parameter; only use it if you plan to use the video locally.

Well, that wraps up this part. Hope to see you in part 3, where we finally get to send and receive media between peers!

Source Code: https://github.com/ethand91/webrtc-tutorial

- - - -

Bonus: Things to consider:

* Is it possible to get just the camera/microphone without the other?
* See if you can adjust the video frame rate via the constraints.
* How would you handle the user not having a camera/mic?
What if they just blocked access altogether?
* If using a smartphone, can you get the back camera?

MediaDevices.getUserMedia API: [MediaDevices.getUserMedia() - Web APIs | MDN](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia)

---

Like my work? Any support is appreciated. :)

[!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/ethand9999)
ethand91
958,924
Top 5 Content Writing Company
Every business is built on the content as its fundamental foundation, as it describes its company,...
0
2022-01-18T06:32:45
https://dev.to/viveksh41162642/top-5-content-writing-company-2fp8
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib3fxdzf3d4vwct7a9tm.jpg)

Every business is built on content as its fundamental foundation, as it describes the company, its products, or its services. Using social media is becoming a necessity for all businesses today, and a significant reason for using content writing is to gain and keep social media followers. Professional content writing companies become more and more necessary for effective writing every year. Business marketing strategies such as blogging, article writing, press releases, and social media promotion are helping companies gain exposure and brand awareness more than ever.

Companies or individuals who provide content writing services, or who provide written content on behalf of businesses, are content writing companies. These companies can also offer content marketing services to generate as many leads as possible.

If we want to choose the [best Content Writing Company](https://www.bettergraph.com/content-marketing/), we need to look at the following criteria:

- An effective content creation team must be composed of writers who are proficient in writing, grammar, regularity, and the art of storytelling.
- It is vital to invest time in creating a fantastic piece of content as well as in promoting it to target the right audience and bring in traffic.
- Publishers and influencers should be available to them.
- They should have adequate knowledge of the industry so that they can write convincingly about topics that are relevant to the business.

Here are the top 5 content writing companies:

**BetterGraph**

Through a blend of visibility, conversions, and revenue, [BetterGraph](https://www.bettergraph.com) offers a comprehensive service to its customers. The company guarantees growth for everyone involved through its sophisticated solutions.
The company has grown steadily over the past decade: when they began, they employed 20 people, and they have since attracted 200 members to their mission, building an efficient team of 200 employees.

**Pepper Content**

The founders, Anirudh Singla and Rishabh Shekhar, started from their dorm at BITS Pilani, and today they have built one of India's largest fully managed marketplaces bringing writers, editors, and brands under one umbrella. The content writing company recently raised $4.2 million in funding and has onboarded 400+ clients in three years. So far, Pepper Content has received applications from over 30,000 freelancers worldwide. In addition, their services include scriptwriting and podcast production.

**Content Whale**

Content Whale is based in Mumbai and has a team of young writers who specialize in four primary areas of content creation: articles and blogs, technical writing, copywriting, and website content. Having worked with 42 verticals and 13+ industries, this content writing agency has delivered content writing services across a wide range of industries. Their client list includes Quikr, MakeMyTrip, and TechMagnate. Moreover, they offer an online price estimator that can help you evaluate how much your project will cost.

**Scatter**

Scatter, launched in 2015, provides fully managed content writing services through its vast network of freelancers and full-time writers. They are also known for their proprietary software that offers content marketing tools such as workflow management, automated content strategy, and asset management. Aside from their Mumbai headquarters, this content writing agency has offices in Delhi, Bangalore, and the UAE. The company gained a strong reputation after acquiring the Salesforce account for the localization of content for the Indian market.
**Italics**

Italics offers a variety of premium content across several domains to its over 200 clients in 12 countries. They've built up a strong client base through the years, consisting of Dabur, Airtel, Schlumberger, Upgrad, Samsung, Nestle, Canon, and many others. This content writing company offers 21 different types of content along with other services, including graphic design, to help a business grow. They are well known as a premier copywriting agency and pride themselves on well-researched content. Among the content writing services they offer are blog writing, social media content, content editing, SEO content writing, and website content writing.

**Conclusion**

Over the years, content writing in India has seen its market grow at a breakneck pace due to the growing need for digitization. There is no doubt that the demand for Indian content writers extends far beyond the country. As localization takes precedence, content writing will create an ecosystem that caters not just to English-speaking audiences but to local languages as well. For years, these professional content writing agencies have been the place where creative minds come together to create top-tier content for online and traditional media consumers.
viveksh41162642
959,111
Quick Tips to Open a Handyman Business Using Uber for Handyman
Uber for Handyman is a popular on-demand multi-service application servicing millions of global users...
0
2022-01-18T09:12:52
https://dev.to/wademathewsr/quick-tips-to-open-a-handyman-business-using-uber-for-handyman-1ebc
flutter, javascript, programming, node
Uber for Handyman is a popular on-demand multi-service application servicing millions of global users with its fascinating features and 200+ services. The popularity and the advancements of this application have made multi-service businesses adopt the Uber for Handyman business model for the betterment of their businesses.

**Benefits of Adopting the Uber for Handyman Business Model**

[Uber for Handyman](https://www.uberdoo.com/handyman.html) is one solution that solves multiple problems. It comes with alluring benefits that help both customers and businesses save time:

- Time-saving and convenient
- Increased revenue
- Service scheduling facilities
- Advanced filtering of services and providers
- Reasonable service pricing
- Reduced development cost and time
- On-demand availability

Given all that, you must be aware of certain aspects and tips before starting your multi-service business by developing an efficient Uber for Handyman application, which are briefed below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftjgl00cfsw8qsrswz01.jpg)

**Tips to Venture Into On-demand Business With Uber for Handyman**

**#1 Crucial Features to Count**

While creating an Uber for Handyman, make sure all of the below-mentioned features are imparted in your solution.

**Locate Handyman**

The customer must be able to locate nearby handymen or service providers efficiently using the GPS-enabled functionalities. They must be able to filter according to the services they need, experienced handymen nearby, service time allotment, etc.

**Multiple Payment Modes**

Multiple payment gateways must be integrated into the Uber for Handyman app in order to ease the service payment process for the customers. They can pay via card, cash, wallet, net banking, etc.

**Advance Service Bookings**

This feature will help the customers book a service for a specified time and date according to their availability and convenience.
They can also select their preferred handymen who are available at the time they are looking to avail the service.

**Live Tracking**

Users should be able to track the service providers and the order or service status in real time using the live tracking feature. This gives the customer the current details about the order, service request, confirmation, service provider or order status and location, and many more.

**#2 Services Provided by the App**

All the most-demanded services must be included in the app, in addition to the other services provided through Uber for Handyman. Here is a list of services that are in demand at all times, irrespective of a pandemic or no-pandemic situation:

- Grocery services
- Food delivery services
- Cab-booking services
- Home services
- Parcel services
- Transportation services

**#3 Take Customer Feedback Seriously**

Most businesses fail to address the reviews and ratings offered for their handyman application, which is a big flaw. To know if you have satisfied your customers and what further enhancements they expect from the application, it is important to go through each and every piece of feedback given to your Uber for Handyman application.

**#4 Marketing Strategies**

In addition to building your application to be unique and usable compared to other existing handyman applications, it is also necessary to market and promote your application effectively. Just because your Uber for Handyman app is designed and developed efficiently doesn't mean it will reach the audience by itself. It is necessary to carry out effective marketing strategies through ads, social media campaigns, events, etc., to make your application known to people looking for a better Uber for Handyman application.

**Wrap Up**

If you follow all the above strategies and tips while developing an Uber for Handyman for your business, there is no doubt that you can be the conqueror in your sector.
To develop such an efficient [Uber for Handyman](https://www.uberdoo.com/blog/a-brief-overview-about-the-uber-for-handyman-apps/), consider partnering with Uberdoo. They hold expertise in offering the best Uber for Handyman development services and house the best team of clone app developers. Get in touch with them to get a rough estimation for your project.
wademathewsr
968,129
Editing PDFs (Code Example)
C#: // PM&gt; Install-Package IronPdf using IronPdf; using System.Collections.Generic; var...
0
2022-01-26T08:17:31
https://ironpdf.com/examples/editing-pdfs/
---
canonical_url: https://ironpdf.com/examples/editing-pdfs/
---

**C#:**

```csharp
// PM> Install-Package IronPdf
using IronPdf;
using System.Collections.Generic;

var Renderer = new IronPdf.ChromePdfRenderer();

// Join Multiple Existing PDFs into a single document
var PDFs = new List<PdfDocument>();
PDFs.Add(PdfDocument.FromFile("A.pdf"));
PDFs.Add(PdfDocument.FromFile("B.pdf"));
PDFs.Add(PdfDocument.FromFile("C.pdf"));
PdfDocument PDF = PdfDocument.Merge(PDFs);
PDF.SaveAs("merged.pdf");

// Add a cover page
PDF.PrependPdf(Renderer.RenderHtmlAsPdf("<h1>Cover Page</h1><hr>"));

// Remove the last page from the PDF and save again
PDF.RemovePage(PDF.PageCount - 1);
PDF.SaveAs("merged.pdf");

// Copy pages 5-7 and save them as a new document.
PDF.CopyPages(4, 6).SaveAs("excerpt.pdf");
```

**VB:**

```vb
' PM> Install-Package IronPdf
Imports IronPdf
Imports System.Collections.Generic

Private Renderer = New IronPdf.ChromePdfRenderer()

' Join Multiple Existing PDFs into a single document
Private PDFs = New List(Of PdfDocument)()
PDFs.Add(PdfDocument.FromFile("A.pdf"))
PDFs.Add(PdfDocument.FromFile("B.pdf"))
PDFs.Add(PdfDocument.FromFile("C.pdf"))
Dim PDF As PdfDocument = PdfDocument.Merge(PDFs)
PDF.SaveAs("merged.pdf")

' Add a cover page
PDF.PrependPdf(Renderer.RenderHtmlAsPdf("<h1>Cover Page</h1><hr>"))

' Remove the last page from the PDF and save again
PDF.RemovePage(PDF.PageCount - 1)
PDF.SaveAs("merged.pdf")

' Copy pages 5-7 and save them as a new document.
PDF.CopyPages(4, 6).SaveAs("excerpt.pdf")
```

IronPDF allows many PDF file editing manipulations. The most popular are merging, cloning, and extracting pages. PDFs may also be watermarked, stamped, and have backgrounds and foregrounds applied.
ironsoftware
969,466
🎶 Background Music to Get Into the Zone
Maybe the most essential skill for a developer is having a crystal clear focus. Some call it "the...
0
2022-01-27T12:07:02
https://bas.codes/posts/concentration-music
programming, productivity, discuss, motivation
Maybe the most essential skill for a developer is having a crystal clear focus. Some call it "the zone": a state of mind in which you are genuinely focused on the very problem you are dealing with at that moment.

Your surroundings play a crucial role in that. In the office, you might easily be distracted by people coming and leaving or by your co-workers' conversations. Even when working from home, any distraction can snap you out of your "zone": a dishwasher starts to beep, the postman delivers a parcel for your neighbour, or whatever else can happen.

If it were just about creating a quiet environment, you could go with a hearing protection aid, such as the [3M Peltor](https://www.amazon.com/3M-Peltor-Optime-Earmuff-H10A/dp/B007JZCVAQ/). I have tried it, but it did not convince me. I need to hear at least "something" in order to not feel lost.

Here are my personal top picks:

## White Noise by TMSoft

The [White Noise App](https://www.tmsoft.com/white-noise/) by TMSoft is available for mobiles and desktops. No subscriptions, no ads, and a great collection of background sounds. I use the creator to generate the exact noise setting I need.

## Focus@Will

[Focus@Will](https://www.focusatwill.com/) is a collection of music specifically designed for concentration. I like the Electro Bach most. Downside: subscription! However, if you have Spotify, Prime Music or Apple Music, you'll find some of their tracks there, too. Some are even on [YouTube](https://www.youtube.com/watch?v=3PZmIKL2Uho&list=OLAK5uy_lAS8JhZjMUu938hPZKEDsNN1ZDucvU1CI).

## Music To Code By

Music to Code By is a collection of lofi music by Carl Franklin. You can get 9 hours of music [for 20 bucks](https://pwop.e-junkie.com/product/MTCB-MP3/Music-to-Code-By-MP3-Collection). Each track is 25 minutes, aligning nicely with the [Pomodoro Technique](https://en.wikipedia.org/wiki/Pomodoro_Technique).
## Other Mentions

- [Calm](https://www.calm.com) is an app that not only helps with focus but also contains guided meditations and sounds to improve sleep quality. It comes as a subscription but has a lot of value if you use it regularly.
- [freeCodeCamp Radio](https://coderadio.freecodecamp.org/): just open the website, click play, and you will have a free stream of music designed for concentration.
- [brain.fm](https://www.brain.fm/) offers "functional" music and is similar to Focus@Will
- [lofi.cafe](https://www.lofi.cafe/) - a lofi radio
- [music for programming](https://musicforprogramming.net/) - another radio
- [nightride.fm](https://nightride.fm/) - and still another radio
- [Classical Music Only](https://classicalmusiconly.com/) is a collection of classical music

## Some good YouTube Playlists

- [WoW Ambient Music](https://www.youtube.com/watch?v=xTPn_Nk_KrM)
- [Solar Fields](https://www.youtube.com/watch?v=SoYkxKWNzoo) (on [Apple Music](https://music.apple.com/us/artist/solar-fields/26115355))
- [Neotokyo](https://www.youtube.com/watch?v=JI5w1jfGSgU) by Ed Harrison (on [Apple Music](https://music.apple.com/us/album/neotokyo/308384942))
- [Billy Childs](https://www.youtube.com/watch?v=e-KJaeIahZE) (on [Apple Music](https://music.apple.com/us/artist/billy-childs/11776))
- [Anachronist](https://www.youtube.com/watch?v=n_OHjeugEv4)
- [The Witcher Soundtrack](https://www.youtube.com/watch?v=I-cC3wSKAGk)
- [Dreamscape](https://www.youtube.com/watch?v=_RlJig87Px0)
- [Stimulus Progression](https://www.youtube.com/watch?v=AlY3jsxlzVg) - the idea of specially designed music dates back to 1934!
- [Skyrim](https://www.youtube.com/watch?v=hBkcwy-iWt8)
- [LOTR - Nazgul](https://www.youtube.com/watch?v=y1Wum6hQclU)
- [Rain Sounds](https://www.youtube.com/watch?v=UzEfSjTYvDc)
- [Chillhop Music](https://www.youtube.com/channel/UCOxqgCwgOqC2lMqC5PYz_Dg)
- [Lofi Hip Hop Radio](https://www.youtube.com/watch?v=5qap5aO4i9A)

## Suggestions from Friends and Co-Workers

Here are some more suggestions from friends and co-workers. Personally, I find most of these songs too distracting, but maybe there's something in them for you!

- [Eminem](https://www.youtube.com/watch?v=eAck_-B_kv0)
- [Wu-Tang Clan](https://www.youtube.com/watch?v=PBwAxmrE194&list=RDEMfS7XN_ceh10AVMOngmJv6Q)
- [Limp Bizkit](https://www.youtube.com/watch?v=ZpUYjpKg9KY&list=PL119816F23E12BCCB)
- [Linkin Park](https://www.youtube.com/watch?v=xGvIdbB67Qs)
- [Papa Roach](https://www.youtube.com/watch?v=W3l35x8jvJ4)
- [Cornerstone](https://www.youtube.com/watch?v=s4jnLWDyGwM&list=RDs4jnLWDyGwM&index=1)

## Your turn

Feel free to post your favourite music in the comments or in [this Twitter thread](https://twitter.com/bascodes/status/1486673873634480128)
bascodes
996,811
Android Games with Capacitor and JavaScript
In this post we put a web canvas game built in Excalibur into an Android (or iOS) app with...
0
2022-02-21T21:17:54
https://erikonarheim.com/posts/capacitorjs-game/
javascript, android, canvas
---
title: Android Games with Capacitor and JavaScript
published: true
tags:
  - javascript
  - android
  - canvas
cover_image: https://erikonarheim.com/images/capacitorjs-game/examplerunning.png
canonical_url: https://erikonarheim.com/posts/capacitorjs-game/
---

In this post we put a web canvas game built in Excalibur into an Android (or iOS) app with [Capacitor.js](https://capacitorjs.com/)!

In the past I would have used something like Cordova, but this new thing from the folks at [Ionic](https://ionic.io/) has TypeScript support out of the box for their native APIs and support for using any Cordova plugins you might miss.

TLDR [show me the code](https://github.com/eonarheim/capacitor-game-v2)

## Capacitor Setup

The capacitor project setup is pretty straightforward from their docs: it can [drop in place](https://capacitorjs.com/docs/getting-started#adding-capacitor-to-an-existing-web-app) in an existing web app or [create a brand new project](https://capacitorjs.com/docs/getting-started#optional-starting-a-fresh-project) from scratch. I opted for the brand new project:

```
> npm init @capacitor/app
```

Then follow their wizard and instructions to configure. After that step, add the platforms you're interested in, in this case Android:

```
> npx cap add android
```

I recommend reading the [capacitor documentation](https://capacitorjs.com/docs/basics/workflow) on the workflow with a hybrid native app. The gist is this:

1. Run `npx cap sync` to copy your web project into capacitor
2. Run `npx cap run android` to start the project on android (or start in the Android SDK)

### Android Setup

Before you try to run the project:

1. Download [Android Studio](https://developer.android.com/studio)
2. Open it up and check for updates if needed (first-time initialization takes some time)
3. Accept your SDK package licenses. The easiest way I've found to do this is with the SDK command-line tools with Powershell on W.
1. Find the SDK Manager

   ![Android Studio SDK Manager](https://erikonarheim.com/images/capacitorjs-game/sdk-manager.png)

2. In SDK Tools, check `Android SDK Command-line Tools`

   ![SDK Tools Command-line](https://erikonarheim.com/images/capacitorjs-game/android-cli.png)

3. Next we need to accept licenses.

   ![Android SDK location](https://erikonarheim.com/images/capacitorjs-game/sdk-location.png)

   - In powershell, navigate to the Android SDK location for command-line tools: `C:\Users\<username>\AppData\Local\Android\Sdk\cmdline-tools\latest\bin`
   - Set your java home temporarily: `$env:JAVA_HOME = 'C:\Program Files\Android\Android Studio\jre'`
   - Run `.\sdkmanager.bat --licenses` and select `y` for each

### Starting the App

Now that we have Android all set up, we can start the app with the capacitor command line. The gist is that it copies the final compiled html/css/js assets from your favorite frontend frameworks and build tools into the native container:

```
> npx cap sync
```

After that we can open it in Android Studio with the capacitor command line:

```
> npx cap open android
```

Building the project and running it the first time can take some time, so be patient after hitting the big green play button.

![Android Studio Start Bar with Green Play Triangle Button](https://erikonarheim.com/images/capacitorjs-game/start.png)

ProTip<sup>TM</sup> **The Emulator is MEGA slow** to start, so once you get it on, leave it on. You can redeploy the app to a running emulator with the "re-run" highlighted below.

![Android Studio Restart Activity Button](https://erikonarheim.com/images/capacitorjs-game/restart-activity.png)

If your Android emulator crashes on the first try like mine did with something like `The emulator process for AVD Pixel_3a_API_30_x86 was killed`, this [youtube video](https://www.youtube.com/watch?v=AOK9ZxiBOGg) was super helpful.
For me the problem was disk space: the AVD needs 7GB of disk space to start, so I had to clean out some junk on the laptop 😅

## Building Your Canvas Game

The dev cycle is pretty slick: run `npx cap copy android` to move your built JS living in `www` to the right android folder.

The default app looks like this after running it in the android emulator:

![Default Capacitor screen on Android emulator](https://erikonarheim.com/images/capacitorjs-game/emulator.png)

### Setting Up Your JS Build

First let's set up our TypeScript by installing it and creating an empty `tsconfig.json`:

```
> npm install typescript --save-dev --save-exact
> npx tsc --init
```

Recently I've been a big fan of [parcel](https://parceljs.org/) (v1) for quick and easy project setup, and it works great with [excalibur](https://github.com/excaliburjs/template-ts-parcel); also [webpack is cool too](https://github.com/excaliburjs/template-ts-webpack) if you need more direct control of your js bundling.

```
> npm install parcel --save-dev --save-exact
```

I copied the generated `manifest.json`, `index.html`, and `css/` folder out of the original generated `www/` and put them into `game/`.

![Folder structure of capacitor frontend project](https://erikonarheim.com/images/capacitorjs-game/game-folder.png)

We need to set up our development and final build scripts in the `package.json`.
The npm `"start"` script tells parcel to run a dev server, use `game/index.html` as our entry point to the app, and follow the links and build them (notice the magic inline `<script type="module" src="./main.ts"></script>`) ✨

```html
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
  <meta charset="UTF-8">
  <title>Game Test</title>
  <meta name="viewport" content="viewport-fit=cover, width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no">
  <meta name="format-detection" content="telephone=no">
  <meta name="msapplication-tap-highlight" content="no">
  <link rel="manifest" href="./manifest.json">
  <link rel="stylesheet" href="./css/style.css">
</head>
<body>
  <script type="module" src="./main.ts"></script>
</body>
</html>
```

In this setup I'm sending all my built output with `--dist-dir` into the `www` directory, which is what capacitor will copy to android. I went ahead and deleted the provided default app in the `www` directory.

```json
/* package.json */
{
  "name": "my-cool-game",
  "scripts": {
    "start": "parcel game/index.html --dist-dir www",
    "typecheck": "tsc -p . --noEmit",
    "build": "parcel build game/index.html --dist-dir www"
  }
  ...
}
```

### Vanilla Canvas code

To start with, I have a really awesome game that shows the FPS and a red square. This shows how to get started from scratch with the HTML Canvas.
```typescript // main.ts const canvas = document.createElement('canvas') as HTMLCanvasElement; const ctx = canvas.getContext('2d') as CanvasRenderingContext2D; canvas.height = window.innerHeight; canvas.width = window.innerWidth; document.body.appendChild(canvas); let lastTime = performance.now(); const mainloop: FrameRequestCallback = (now) => { const delta = (now - lastTime)/1000; lastTime = now; ctx.fillStyle = 'blue'; ctx.fillRect(0, 0, canvas.width, canvas.height); ctx.font = '50px sans-serif'; ctx.fillStyle = 'lime'; ctx.fillText((1/delta).toFixed(1), 20, 100); ctx.fillStyle = 'red'; ctx.fillRect(canvas.width/2, canvas.height/2, 40, 40); requestAnimationFrame(mainloop); } mainloop(performance.now()); ``` ![Vanilla js game running in Android emulator](https://erikonarheim.com/images/capacitorjs-game/examplerunning.png) ## Using Excalibur🗡 Using the Excalibur engine with capacitor and parcel will be a breeze! Really any web based game engine could be substituted here if you want. Here is the [source on github](https://github.com/eonarheim/capacitor-game-v2)! 
```
> npm install excalibur --save-exact
```

Update the `main.ts` with some Excalibur

```typescript
import { Actor, DisplayMode, Engine, Input, Loader, ImageSource } from "excalibur";

const game = new Engine({
    displayMode: DisplayMode.FillScreen,
    pointerScope: Input.PointerScope.Canvas
});

const sword = new ImageSource('assets/sword.png');
const loader = new Loader([sword]);

game.start(loader).then(() => {
    game.input.pointers.primary.on('move', event => {
        const delta = event.worldPos.sub(actor.pos);
        actor.vel = delta;
        // Original asset is at a 45 degree angle need to adjust
        actor.rotation = delta.toAngle() + Math.PI/4;
    });

    const actor = new Actor({
        x: game.halfDrawWidth,
        y: game.halfDrawHeight,
        width: 40,
        height: 40
    });
    actor.graphics.use(sword.toSprite());
    game.add(actor);
});
```

Note, depending on your emulator settings you may need to tweak its graphics settings and restart Android Studio for it to build and run (this works fine out of the box on real hardware tested in BrowserStack; for some reason the emulator graphics can get confused).

![Update graphics support](https://erikonarheim.com/images/capacitorjs-game/emulator-graphics.png)

Tada! 🎉

![Animated gif of excalibur sword running with Capacitor in Android](https://erikonarheim.com/images/capacitorjs-game/excalibur-capacitor.gif)

Hope this helps you web game devs out there!

-Erik

Help support me on Github Sponsors or Patreon!
eonarheim
997,059
Importance of Website Designing
Putting aside all the other SEO considerations (which are evenly important), we can examine how...
0
2022-02-22T05:29:07
https://dev.to/draggital/importance-of-website-designing-3n2b
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bompvtymn66ouefvuogf.jpeg)

Putting aside all the other SEO considerations (which are equally important), we can examine how CONTENT specifically (text, images, videos & layout) influences your SEO. It can be summed up like this: Poor design is bad for SEO. Period.

In today's age of digital marketing, the design of your website plays a very crucial role. User experience has become a key factor in search engine ranking. The design & quality of your website determine how users can interact with it, and this directly affects your site's ranking in the search results. Website design & its performance on the search engine are intrinsically linked. So, websites that are created without keeping SEO in mind can cause ranking issues down the line.

**_[Navigation](https://www.draggital.com/blog)_**: Smooth navigation is essential for good website design. Without proper navigation, websites can look very unorganized & unstructured. Good navigation makes it simple for readers to figure out how your website works and helps provide quick answers to your visitors' searches. It should be simple to understand, and it plays an important part in ranking the website higher on search engines. It allows your visitors to search your website with ease; it is more than just a menu bar. So, easy-to-follow navigation will increase the quality of your website, as it does not let the visitor get lost along the way.

Mobile responsive web design means a design that adapts to any device or display, be it your smartphone or tablet. Your website needs to be mobile-friendly, which means the functioning and look of your website should hold up on mobile devices as well.

**_Smart Use of Graphics_**: Though you should desist from the overuse of graphics, smart use of graphics is very helpful. Use informational graphics, also known as "infographics".
This helps advanced search engines detect the keywords mentioned in your graphic. Combine the graphic with some interesting content in bullet points, which will not only make it attractive and informational but will also get your website the desired attention from search engines.

To read more blogs, visit - **_[Draggital/Blog](https://www.draggital.com/website-design-affects-seo)_**
draggital
997,186
Create a new GitHub Repository from the command line
It can be frustrating when you are working on a project on your local machine and need to commit some...
0
2022-02-22T20:03:20
https://dev.to/techielass/create-a-new-github-repository-from-the-command-line-575d
github, git, beginners
It can be frustrating when you are working on a project on your local machine and need to commit some changes to GitHub, but you haven't set up that repository.

The GitHub CLI can help in this situation: from your terminal you can create that repository and commit your project without leaving your terminal or Integrated development environment (IDE).

If you haven't got the GitHub CLI installed, do [check out my blog post](https://www.techielass.com/install-github-cli-on-windows/) that covers the best ways to do that.

## Convert a directory to a Git repository

So, you've been working on a project on your local machine for the last few hours and you want to commit that project to a GitHub repository. The first step is to [convert the directory to a Git repository](https://www.techielass.com/convert-a-folder-to-a-git-repository/). The command to use is:

```powershell
git init
```

## Stage the files

The next step would be to stage the files so they can be committed to your GitHub repository:

```powershell
git add .
```

## Commit the files

Now commit the files:

```powershell
git commit -m "initial commit"
```

## Authorise GitHub CLI

If you have never used the GitHub CLI tool before, you need to authorise it with your GitHub account. If you type in the following command it will walk you through allowing your terminal to make changes to your GitHub account.

```powershell
gh auth login
```

## Create your GitHub repository

Your directory is Git ready and your terminal is authorised to make changes to your GitHub account. The next step is to create that GitHub repository and push your files into that repository.

The command you want to use is **gh repo create**. This command has a lot of switches you can use for different reasons. The command below will create a public repository in GitHub called _"my-newrepo"_. The directory you are in will be the source for that GitHub repository, and the files in the directory will be pushed to it.
```powershell gh repo create my-newrepo --public --source=. --remote=upstream --push ``` Your directory is now Git initiated, you have a GitHub repository to store your project and the files are stored there. 😊 Enjoy this new found joy of being able to create GitHub repositories using the GitHub CLI tool! _Command Line icon by Icons8_
techielass
997,450
[Infographic] AWS SNS from a serverless perspective
The Simple Notification Service, or SNS for short, is one of the central services to build...
0
2022-02-22T13:04:11
https://dev.to/dashbird/infographic-aws-sns-from-a-serverless-perspective-24h9
aws, serverless, cloud, devops
* * * * * The **Simple Notification Service**, or SNS for short, is one of the central services to build serverless architectures in the AWS cloud. SNS itself is a serverless messaging service that can distribute massive numbers of messages to different recipients. These include mobile end-user devices, like smartphones and tablets, but also other services inside the AWS ecosystem. SNS' ability to target AWS services makes it the perfect companion for [AWS Lambda](https://dashbird.io/knowledge-base/aws-lambda/introduction-to-aws-lambda/). If you need custom logic, go for Lambda; if you need to fan out messages to multiple other services in parallel, SNS is the place to be. But you can also use it for SMS or email delivery; it's a versatile service with many possible use-cases in a modern serverless system. ![](https://cdn-images-1.medium.com/max/1600/0*GDXl81cBJINTd0bS.png) *See the original article for copyable code examples:* [*https://dashbird.io/blog/aws-sns/*](https://dashbird.io/blog/aws-sns/) ### SNS pricing SNS is a serverless service, which means it comes with pay-as-you-go billing. You pay about 50 cents per 1 million messages. If your messages are bigger than 64 KB, you have to pay for each additional 64 KB chunk as if it was a whole message. AWS also offers the first 1 million requests each month for free; this includes 100000 deliveries via HTTP subscription. ### SNS vs. SQS SNS' parallel delivery is different from [SQS](https://aws.amazon.com/sqs/), the serverless queuing service of AWS. If you need to buffer events to remove pressure from a downstream service, then SQS is a better solution. Another difference is SQS is pull-based, so you need a service actively grabbing an event from a queue, and AWS SNS is push-based so that it will call a service, like Lambda, that waits passively for an event. ### SNS vs. EventBridge [EventBridge](https://aws.amazon.com/eventbridge/) has similar use-cases as SNS but operates on a higher level. 
EventBridge can archive messages and target more services than SNS. SNS' only targets are email addresses, phone numbers, HTTP endpoints, Lambda functions, and SQS queues. This means if you want to give your data to another AWS service, you need to put some glue logic in-between. At least a Lambda function, and it will cost extra money. But SNS allows configuring a topic as FIFO, which guarantees precisely one message delivery. This lowers the throughput from about 9000 msgs/sec to about 3000 msgs/sec but can reduce the complexity of your Lambda code. ### Don't call Lambda from another Lambda One rule when building serverless systems is "Don't call a Lambda directly from another Lambda." This rule comes from the fact that events from direct calls can get lost when one of the functions crashes, or it could lead to one function waiting until the other function finishes, which means double the costs. This direct call rule means you always should put another service between your Lambda function calls. Sometimes these services follow from your use-cases, but when they don't, and you're about to make a direct call, you can grab SNS, EventBridge, or SQS to get around this issue. ### Using SNS from Lambda There are two ways SNS interacts with AWS Lambda: First, Lambda can send an event to an SNS topic, and second, a Lambda can subscribe to an SNS topic and receive events from it. ### Sending Events from Lambda to an SNS Topic To send a message to an SNS topic from your Lambda function, you need the SNS client from the AWS SDK and the ARN of your SNS topic. Let's look at an example Lambda that handles API Gateway events: ![](https://cdn-images-1.medium.com/max/1600/1*gOaJhMleNOiQ20efHqmRhQ.png) The Lambda uses the AWS SDK v3, which is better modularized than the v2, which means more space for your custom code inside a Lambda. It's a good practice to store the SNS topic ARN inside an environment variable, so you can change it without changing the code. 
Also, you should initialize the SNS client outside of the function handler, so it only happens on a cold-start. You need to call the send method with a PublishCommand object to publish messages. The object requires a Message string, which we get from our API Gateway event body, and the TargetArn we got from an environment variable. ### Receiving SNS Events with Lambda To receive an SNS event with a Lambda, you need to subscribe your Lambda to an SNS topic. This way, the event that invokes the Lambda will be an SNS message. Let's look at how to set things up with the CDK: ![](https://cdn-images-1.medium.com/max/1600/1*vmcSJj34dfSs3Z4u1T_nMw.png) The first crucial part here is that you need to wrap the Lambda function into a subscription so that the CDK can link it up with an SNS topic. The second part is that the event your Lambda function receives has its data inside a Records array, so you need to iterate it to get every record. ### Piping API Gateway Events to SNS Using AWS Lambda to glue things together is pretty straightforward but adds complexity and latency and costs extra money. That's why you should do simple integrations directly between services like API Gateway and SNS. Let's look at another CDK example: ![](https://cdn-images-1.medium.com/max/1600/1*HsVdpl9X97291pAZklBD_A.png) The example uses a third-party library that takes care of the event transformations. Usually, you would use the AwsServiceIntegration construct, which requires you to write VTL code that transforms the API Gateway event into something SNS understands. The library comes with some transformations out-of-the-box. If you send JSON via a POST request to the /emails resource of this REST API, API Gateway will directly pipe that data to the SNS topic; no Lambda needed! ### Dashbird now supports SNS Now that you learned that AWS SNS is a crucial part of many serverless systems, you should be happy to hear that with its latest update, Dashbird gives you insights into your SNS topics too! 
![](https://cdn-images-1.medium.com/max/1600/0*g8L3Ey13_qjB6-fd.png)

With its ability to run custom code, Lambda was low-hanging fruit for debugging; you could push all you wanted to know to a monitoring service. But all the other services on AWS are a bit trickier. Usually, you would learn about the issues inside other services only when Lambda was calling them.

But Lambda costs money, and some services, like API Gateway or EventBridge, are perfectly able to transform and distribute events directly to the services where they're needed. No Lambda needed, and that's how it should be! Only use Lambda if it simplifies something or if the direct integrations lack features.

With Dashbird's new AWS SNS integration, you can now discover what is happening inside your architecture without the need to sprinkle Lambda functions all around the integration points. This saves you money, latency, and complexity!

* * * * *

Further reading:

[AWS Kinesis vs SNS vs SQS (with Python examples)](https://dashbird.io/blog/kinesis-sqs-sns-comparison/)

[Dashbird now integrates with 5 new AWS services](https://dashbird.io/blog/dashbird-now-integrates-with-5-new-aws-services/)

[Triggering AWS Lambda with SNS](https://dashbird.io/blog/triggering-lambda-with-sns-messaging/)
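The integration snippets in this article are only shown as images, so here is a rough textual sketch of the receiving side described earlier: a Lambda subscribed to an SNS topic gets its messages inside the event's `Records` array. The handler name and message contents below are made up for illustration, and the mock event only mimics the shape SNS delivers.

```javascript
// Sketch of a Lambda handler subscribed to an SNS topic.
// SNS invokes the function with an event whose Records array
// holds one entry per delivered message, under record.Sns.
const handler = async (event) => {
  const messages = [];
  for (const record of event.Records) {
    // record.Sns.Message is the raw message string published to the topic
    messages.push(record.Sns.Message);
  }
  return messages;
};

// Local smoke test with a mock event mimicking the SNS -> Lambda format
const mockEvent = {
  Records: [
    { EventSource: 'aws:sns', Sns: { Message: 'first message' } },
    { EventSource: 'aws:sns', Sns: { Message: 'second message' } },
  ],
};

handler(mockEvent).then((messages) => console.log(messages.length)); // prints 2
```

On the publishing side, the same sketch would wrap the SDK's `PublishCommand` as described in the article, with the topic ARN read from an environment variable.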
taavirehemagi
997,762
Getting Started with Docker
Why is Docker a Powerful Tool? One of the main reasons Docker is a powerful tool is its...
0
2022-02-27T20:43:59
https://dev.to/taylormorini/getting-started-with-docker-36en
webdev, docker
## Why is Docker a Powerful Tool?

One of the main reasons Docker is a powerful tool is its architecture. Docker uses a client-server architecture. According to the Docker documentation,

> The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers

A visualization of this occurring:

![Docker Architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93vcr7djmemx2nmhhpjh.png)

## Starting out with Docker

To begin using the Docker Playground follow these steps:

- Go to this [link](https://www.docker.com/play-with-docker)
- Navigate to the section labelled "Play with Docker"

![docker starting out](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sfbcnjvw0qda1bqcb6k6.png)

- Log in or create an account
- Press the start button and add a new instance

## How to Create a Dockerfile for Nasa Image Search

### Setup

1. In the Docker Playground add a "New Instance". This will open a terminal on the screen.
2. On GitHub, navigate to the Nasa Image Search repository and click "Code". Copy the link, and in the Docker Playground run a `git clone` and `cd` into the cloned repo

![console img](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/020s6mni2vvicqv5l5ws.png)

### Configure

- Run this command: `touch "Dockerfile"`. This will create a new file called `Dockerfile`. This is where custom settings can be added.
- Access your new file via the "Editor" in Docker Playground.

![editor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/76ai4xr31j3okbmgs7us.png)

- The document that you have added will need to be configured.

![docker](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/422splygsghkjwzevfqh.png)

- Now we must reference an endpoint for Nasa Image Search. For this example, the endpoint would be `https://402a.github.io/ip-project/`

In the News example, this is where you would run `cp dot.env.example dot.env`.
This would give you the ability to open the `dot.env` file and populate it with specific data (a generated API Key and the NewsAPI endpoint):

![endpoint](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2knch7l4au6j24mc9r6u.png)

For the Nasa Image Search example you can first run another touch command. For this, I ran `touch "dot.env"`. Here, you would populate the document via the editor. The endpoint, `https://402a.github.io/ip-project/`, is added to this document.

### Open Port

- Once you have completed the configuration, you can now click on the "Open Port" button at the top of the Docker Playground screen.

![open port](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjzx72eplr2v78h1zy7o.png)

- You will be prompted to enter a port number. For this example, I inputted "4000"

![4000](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jkekv4f9upqnive80vp2.png)

- You will not immediately be able to open the desired page. To do so, you will need to copy the URL of the page that opens when you input "port 4000".

![4000-2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2e45rv2rykod77l3o6sp.png)

- This URL should be added to the `dot.env` file as the endpoint. Click 'Save' after this has been added.

### Starting the application

- After everything has been added, run the line `docker-compose up` in the playground. This will open port 4000.

![4000 opens](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/65tvbvpbf9qjn1dfpwxd.png)

- Click on port 80. If all was successful, you will be able to view the working application.

News Example:

![port 80](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9i9r7ievvr858cwuxva8.png)

#### Helpful Links:

1. [Docker Documentation](https://docs.docker.com/get-started/overview/)
2. [Nasa Image Search Code](https://github.com/402A/ip-project)
3. For the News API Key: [News API Site](https://newsapi.org/register/success)
taylormorini
997,869
The big STL Algorithms tutorial: wrapping up
With the last article on algorithms about dynamic memory management, we reached the end of a...
362
2022-02-23T07:53:32
https://www.sandordargo.com/blog/2022/02/23/stl-alogorithms-tutorial-part-31-wrap-up
cpp, tutorial, stl, algorithms
With the last article on algorithms about [dynamic memory management](https://www.sandordargo.com/blog/2022/02/02/stl-alogorithms-tutorial-part-30-memory-header), we reached the end of a 3-year-long journey that we started at the beginning of 2019. Since then, in about 30 different posts, we learned about the algorithms that the STL offers us.

We are not going to have a crash course on them; if you are looking for something like that, watch Jonathan Boccara's video from CppCon 2018, [105 STL Algorithms in Less Than an Hour](https://www.youtube.com/watch?v=2olsGf6JIkU). Instead, let's remind ourselves of a couple of key concepts and oddities that we learned along the way.

## You don't pay for what you don't need

Standard algorithms showcase perfectly that in C++ you don't pay for what you don't need. One example of that is bounds checks.

Most of the algorithms that need more than one range take only the first range via two iterators (`begin` and `end`); the rest are taken only by one iterator, one that denotes the beginning of the range. It's up to the caller to guarantee that the additional input containers have enough elements or that an output container has enough space to accommodate the results.

There are no checks of the sizes, no extra cost for ensuring something that is up to the caller to guarantee. While this means potentially undefined behaviour, it also makes the algorithms faster, and as the expectations are clearly documented we have nothing to complain about.

## Lack of consistency sometimes

We've also seen that sometimes the STL lacks consistency quite a bit. Even though it is something standardized, it's been under development for almost 3 decades, so I think it's normal to end up with some inconsistencies. As C++ and the standard library are widely used, it's almost impossible to alter the existing API, so we have to live with these oddities. But what do I have in mind?

- `std::find` will look for an element by value, `std::find_if` takes a predicate.
At the same time, `std::find_end` can either take a value or a predicate. There is no `std::find_end_if`. While it's true that `std::find_end_if` would be a strange name, it would also be more consistent.
- While `exclusive_scan` can optionally take an initial value and a binary operation in this order, `inclusive_scan` takes these optional values in the opposite order, first the binary operation and then the initial value. Maybe it's just a guarantee that you don't mix them up accidentally?
- I found it strange that with `transform_reduce` you first pass the reduction operation and then the transformation. I think the name is good because first the transformation is applied, then the reduction, but perhaps it should take the two operations in the reversed order.

## Algos are better than raw loops!

No more raw loops, as Sean Parent suggested in his talk [C++ Seasoning](https://www.youtube.com/watch?v=W2tWOdzgXHA) at GoingNative 2013. But why?

STL algorithms are less error-prone than raw loops as they were already written and tested - a lot. Thousands if not millions of developers are using them; if there were bugs in these algorithms, they would have been discovered and fixed already.

Unless you are going for the last drops of performance, algorithms will provide good enough efficiency for you, and often they will not just match but outperform simple loops.

The most important point is that they are more expressive. It's not always straightforward to pick the right one among the many, but with education and practice, you'll be able to easily find an algorithm that can replace a for loop in most cases.

For more details read [this article](https://www.sandordargo.com/blog/2020/05/13/loops-vs-algorithms)!

## Conclusion

Thank you for following through this series on STL algorithms where we discussed functions from the `<algorithm>`, `<numeric>` and `<memory>` headers. After about 30 parts, today we wrapped up by mentioning once again some important concepts and inconsistencies of algorithms.
We discussed how algorithms follow one of the main principles of C++: you don't pay for what you don't need. We saw three inconsistencies within the STL, such as sometimes you have to postpend an algorithm with `_if` to be able to use a unary predicate instead of a value, but sometimes it's just a different overload. Finally, we reiterated the main reasons why STL algorithms are better than raw loops. Use STL algorithms in your code, no matter if it's a personal project or at work. They'll make your code better! ## Connect deeper If you liked this article, please - hit on the like button, - [subscribe to my newsletter](http://eepurl.com/gvcv1j) - and let's connect on [Twitter](https://twitter.com/SandorDargo)!
sandordargo
997,974
Recommend me the best phone for productive mobile app development!
My phone just died on me. I am in the hunt for a new phone. I've always been an Android User but...
0
2022-02-22T20:59:29
https://dev.to/azrinsani/recommend-me-the-dest-phone-for-productive-mobile-app-development-df5
flutter, xamarin, reactnative, android
My phone just died on me, so I am on the hunt for a new one. I've always been an Android user but wouldn't mind moving to iOS. However, my main concern is development speed, particularly a phone that is able to power up and launch debug mode of an app in a few seconds.

I develop apps mostly in C# Xamarin & Flutter. Some suggest the CPU is most important. Others suggest using a stock Android phone like a Pixel. Some even say that if you are doing React or Xamarin dev, always start with iPhones, as they tend to launch faster in debug mode.

Has anyone ever looked into these dichotomies? Any suggestions?
azrinsani
998,006
Use console.table instead of console.log
An interesting way to display the contents of an array or an object is to use console.table....
17,009
2022-02-22T22:51:30
https://jfbarrios.com/usa-consoletable-en-lugar-de-consolelog
javascript, tips, spanish, logging
An interesting way to display the contents of an array or an object is to use `console.table`. This function takes one required argument, `data`, which must be an `array` or an `object`, and an additional parameter: `columns`.

## Collections of primitive types

```javascript
// Array of strings
console.table(['Manzana', 'Pera', 'Melón']);
```

![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645472655115/WDgKzSjm4.png)

```javascript
// Object with string properties
function Persona(nombres, apellidos) {
  this.nombres = nombres;
  this.apellidos = apellidos;
}

var yo = new Persona("Fernando", "Barrios");
console.table(yo);
```

![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645472681381/Q4nwYb5s3.png)

## Collections of compound types

If `data` is an `array` whose elements are arrays, or if `data` is an object whose properties are arrays, those properties or elements will be listed in the row.

```javascript
// an array of arrays
var personas = [["Fernando", "Barrios"], ["Juan", "Carlos"], ["Carmen", "María"]]
console.table(personas);
```

![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645472977487/dX1V6mO36.png)

```javascript
// an array of objects
console.table([{ nombre: "Fernando", apellido: "Barrios" }, { nombre: "John", apellido: "Doe" }])
```

![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645473295171/uvxmrnE5k.png)

## Restricting the displayed columns

```javascript
// an array of objects, showing only the apellido column
console.table([{ nombre: "Fernando", apellido: "Barrios" }, { nombre: "John", apellido: "Doe" }], ['apellido'])
```

![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645473387454/tID3Hq0Kv.png)
jfernandogt
998,015
Truthy and Falsy values in JavaScript
Introduction In this article, we shall learn about the concept of Truthy and Falsy values...
0
2022-02-22T23:19:54
https://dev.to/naftalimurgor/truthy-and-falsy-values-in-javascript-458p
javascript, webdev, beginners, tutorial
## Introduction

In this article, we shall learn about the concept of Truthy and Falsy values in JavaScript and why this concept is useful. Let's jump in!

## What are truthy values?

Truthy values are values that are treated as `true` when evaluated in a boolean context, such as an `if...else` or `switch...case` conditional. In Computer Science, a Boolean is a logical data type that can only hold the value of `true` or `false`. Booleans are often used in conditionals like `if...else`:

```typescript
if (boolean conditional) {
  // code to be executed if conditional is true
} else {
  // if boolean conditional resolves to false
}
```

> Truthy values are values that resolve to `true` when used in conditionals.

### Truthy values in JavaScript

JavaScript treats the following values as Truthy:

1. `true` - a boolean of value `true`
2. `{}` - an object with no properties, declared using an object literal
3. `[]` - an empty array
4. `51` - any non-zero number, negative or positive
5. `"0"` - any non-empty string
6. `new Date()` - any value constructed from the `Date` object

```typescript
if (true) { } // evaluates to true
if ({}) { } // evaluates to true
if ([]) { } // evaluates to true
if (51) { } // evaluates to true, any non-zero number, negative or positive
if ("0") { } // any string value except an empty string
if (new Date()) { } // value constructed from the Date object
```

### Why are Truthy values important?

Consider the following example:

```typescript
const updateKeys = (keys = []) => {
  // this will always resolve to true, for both an empty and a non-empty keys argument
  if (keys) {

  }
}
```

An array with no elements evaluates to `true`, hence `if (keys) {...}` will always evaluate to `true` and execute for both empty and non-empty arrays.

### What are falsy values

> Falsy values are values that resolve to `false` when used in conditionals.

### Falsy values in JavaScript

JavaScript treats the following values as Falsy:

1. `false` - a boolean of value `false`
2.
`""`, `''` - an empty string
3. `null` - the value `null`, no value given
4. `undefined` - the value of a variable that has not been assigned a value
5. `NaN` - a value representing `Not-A-Number`, usually the result of invalid arithmetic, such as multiplying a number by a non-numeric string: `8 * "a"`
6. `0` - the number zero

```typescript
if (0) { } // evaluates to false
if ("") { } // evaluates to false
if (null) { } // evaluates to false
if (undefined) { } // evaluates to false
if (NaN) { } // evaluates to false
```

Considering falsy values, it is cleaner, for instance, to do this:

```typescript
if (userName) {

}
```

And not this:

```typescript
if (userName === "") {

}
// or worse still
if (typeof userName !== "undefined") {

}
```

## Summary

JavaScript has a set of truthy values and falsy values that are important to know:

1. An empty array will always evaluate to `true` in a conditional
2. An object with no properties will always evaluate to `true` in a conditional

***

That's all for this week! I've been setting up a new domain for my blog, [naftalimurgor.com](https://naftalimurgor.com). You may follow me on Twitter at [@nkmurgor](https://twitter.com/nkmurgor). For any questions or queries, shoot me an email at nmurgor10@gmail.com.
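As a quick sanity check of the two lists above, you can coerce each value with `Boolean()` — a minimal sketch you can paste into Node or a browser console:

```javascript
// Every value from the truthy list coerces to true,
// every value from the falsy list coerces to false.
const truthyValues = [true, {}, [], 51, -3, "0", new Date()];
const falsyValues = [false, "", null, undefined, NaN, 0];

const allTruthy = truthyValues.every((v) => Boolean(v) === true);
const allFalsy = falsyValues.every((v) => Boolean(v) === false);

console.log(allTruthy); // true
console.log(allFalsy);  // true
```

`Boolean(v)` applies the same coercion a conditional does, so it is a handy way to test any value you are unsure about.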
naftalimurgor
998,148
I made a Hacker News reader with Flutter
Here is the GitHub repo: https://github.com/Livinglist/Hacki
0
2022-02-23T01:04:30
https://dev.to/livinglist/i-made-a-hacker-news-reader-with-flutter-3dfl
flutter, engineering, android, ios
![cover image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/albjomq6nm9sfjdxbwoq.png)

Here is the GitHub repo: https://github.com/Livinglist/Hacki
livinglist
1,002,430
Testing a Feature: A concise approach
Any change in the existing program that enhances it's functionality is called as a "Feature" For a...
0
2022-02-26T18:00:22
https://dev.to/yashmunjal/testing-a-feature-a-concise-approach-4h63
testing, beginners, productivity
Any change in the existing program that enhances its functionality is called a "Feature".

For a feature to work effectively, it has to be properly tested. Testing ensures that the software is working correctly and that it meets its requirements. It ensures that the product does what it says on the box.

### Software testing usually has two concepts to it

#### Manual vs Automation Testing:

Most software development cycles lean towards automation as it can do more testing in less time and with good efficiency. However, not everything can be automated, and manual testing is also often required as it ensures far greater test coverage.

#### Black Box vs White Box:

In black box testing, access to the program is on a "need-to-know" basis. In white box testing, however, we have additional control in order to test individual functions.

### Testing a real world application

Testing a real world application consists of the following steps (or more):

**Step 1: Understanding the scenario**

Let us assume you want to test an API endpoint.

**Endpoint: "/users/save"**

This API takes the parameters "Name" and "Age" in the form of a POST request as

```json
{
  "name": "My Name",
  "age": 22
}
```

The API then saves the data in a cloud based SQL database.

**Step 2: Defining use cases**

Generally, the convention that I follow in order to test a functionality is as follows:

- #### The Positive Case:
  First you need to understand the basic flow of the application and how it works.
  - One positive case is saving the data the way it should be saved.
- #### The Illegal Values:
  Testing illegal values is another thing you should check. Proper software should test values against possible real world scenarios.
  - One case can be entering the age as a negative number
  - Another case can be entering the age as a float value, e.g. 2.2 or 14.5
- #### The Extremities:
  Testing extreme values is also of utmost importance.
- One case can be sending a different type in age eg: ```json { "name":"My Name", "age":true } ``` - Another case can be not sending age at all - Another case can be sending extra parameters in the request ```json { "name":"My Name", "age":44, "exra_params":"Yes! Extra params" } ``` - #### User Acceptance Testing - Not useful in every case but certain feedback from a non tech perspective is of utmost importance since some real world cases can still get missed - #### Compatibility and away user functional check - Another case testing includes checking if the software is compatible with the previous versions of codes - Another cases you should check (particularly for this case) is if the data is getting saved in the SQL database as well by the software **Step 3: Defining the Expected Result** Defining the expected results are usually need to know basis depending upon the responses you get from the software. However the responses should have a uniform approach in all the negative cases. A major test is that in any of the negative test cases, the software should handle the case **gracefully** For example:- - Instead of giving a generic error like ```json { "status":false, "error":"some error!" } ``` The user response should be like ```json { "status":false, "error":"Value of age should not be negative" } ``` **Step 4: Writing Test Code** Once all the previous steps are done, it is time to write the automation test cases A normal test case in javascript would look something like ```javascript const request={ "name":"My Name", "age":15 } test("Test Positive API Flow", (request) => { expect(request.status).toBe(true); }); //expecting that API gets saved and status is returned as true ``` Hope it gives the readers a basic workflow of testing a new feature
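The use cases and expected results above can be sketched as a plain validation function (the `validateUser` name and the exact error messages are hypothetical; the real endpoint's responses may differ):

```javascript
// Hypothetical server-side validator for the "/users/save" payload.
// It returns the kind of { status, error } responses described above,
// handling each negative case gracefully with a specific message.
function validateUser(body = {}) {
  const { name, age } = body;
  if (typeof name !== 'string' || name.trim() === '') {
    return { status: false, error: 'Value of name should be a non-empty string' };
  }
  if (typeof age !== 'number' || !Number.isInteger(age)) {
    return { status: false, error: 'Value of age should be a whole number' };
  }
  if (age < 0) {
    return { status: false, error: 'Value of age should not be negative' };
  }
  return { status: true };
}
```

Each negative case (wrong type, missing field, float or negative age) maps to one branch, so a test suite can assert on a specific error message rather than a generic one.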
yashmunjal
998,325
How to use Netlify as your continuous integration
Netlify is a hosting provider that you can use for static websites or web applications. The free plan...
0
2022-02-23T05:52:56
https://how-to.dev/how-to-use-netlify-as-your-continuous-integration
javascript, netlify, ci
Netlify is a hosting provider that you can use for static websites or web applications. The free plan comes with 300 minutes of build time, which should be enough to set up continuous deployment (CD) for a project that doesn’t receive a lot of commits. I’ll show you how to use those resources to add simple continuous integration (CI) to your build. ## The example application To keep it simple, I’ll use an application generated with Create React App (CRA) as the example app. In this way, we get a nontrivial application that: * is similar to simple, real-world cases, * has some npm dependencies, and * most of what we need is already set up for. The resulting application looks like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wys44lsnfwsdpjbzbnwd.png) ## Verification steps I’ve previously written about what [steps](https://how-to.dev/continuous-integration-ci-and-how-it-can-help-you) you can run with your CI. Let’s see how you can set it up for our example application. ### Building For building, the code generated by CRA does everything we need: ```SH $ npm run build > netlify-ci@0.1.0 build > react-scripts build Creating an optimized production build... Compiled successfully. File sizes after gzip: 43.71 kB build/static/js/main.1fb16459.js 1.78 kB build/static/js/787.c84d5573.chunk.js 541 B build/static/css/main.073c9b0a.css … ``` Netlify automatically picks the `build` script from our CRA-generated repository as a build command, and it’s working perfectly: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kgpp05nw8ocwph5brvdl.png) ### Testing Code generated by CRA comes with a complete setup for unit testing and one example test. The `npm test` script is made for development; it runs in interactive mode and watches the files by default. 
For running on CI, we need a single run: ```SH $ npm test -- --watchAll=false > netlify-ci@0.1.0 test > react-scripts test "--watchAll=false" PASS src/App.test.js ✓ renders learn react link (16 ms) Test Suites: 1 passed, 1 total Tests: 1 passed, 1 total Snapshots: 0 total Time: 0.644 s, estimated 1 s Ran all test suites. ``` To have it readily available, let’s define a new script in `package.json`: ```JSON { … "scripts": { … "test": "react-scripts test", "test:ci": "react-scripts test --watchAll=false", … }, ``` ### Static analysis One thing we would like to add to the code is static analysis. The basic configuration should be pretty straightforward, but I’ll leave it outside of the scope of this article. If you want to follow up on this, I recommend you give it a try with: * ESLint – as it warns you against potential issues in code, or * Prettier – to automatically enforce code style. ## New CI script With the code we have now, we need the following steps for a successful CI/CD run: * `npm install` – gets package dependencies, done by default by Netlify * `npm run test:ci` – our modified test command * `npm run build` – the original build command * **deployment** – done by Netlify Now, we want the build to be conditional based on tests: if they fail, the execution should stop, and this is why I will use ‘&&’. At the same time, the Netlify configuration has only one input for the command to run. 
We can address those two things by creating a new script dedicated to this use case: ```JSON { … "scripts": { … "test:ci": "react-scripts test --watchAll=false", "ci": "npm run test:ci && npm run build", … }, … } ``` ### Example run In the end, the scripts behave as expected: * if build tests fail, then you get a failing run on your Netlify dashboard * if everything works as expected, then the application gets deployed ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q059g1ltteeerjsayx5j.png) ### Resource usage In the few runs I did, there was hardly any impact of tests on the build time—the resource that Netlify checks to control the system usage. Of course, this will change when your project grows, and you will add more tests to your project. At some point, it will make more sense to invest in setting up a dedicated CI solution and use Netlify only as hosting. ## Links * [deployed application](https://netlify-ci.netlify.app/) * [example repository](https://github.com/how-to-js/netlify-ci) ## What would you do next? Running CI on Netlify is just a temporary solution. I’m interested in hearing from you—what tool would you like to use next? Please let me know in this [poll](https://strawpoll.com/co8pv72pr).
marcinwosinek
998,486
Test test test
A post by Besi
0
2022-02-23T09:01:53
https://www.linkedin.com/testest
besi
998,492
Remind Solution (Australia) - It Helps You Remember All Details Well!
This is not simple that buds are more interested in Remind Solution than in Brain Booster...
0
2022-02-23T09:24:11
https://dev.to/remindsolution1/remind-solution-australia-it-helps-you-remember-all-details-well-254e
This is not simple that buds are more interested in [Remind Solution](https://ipsnews.net/business/2021/09/18/remind-solution-real-brain-booster-or-a-scam-read-ingredients-and-side-effects-reports/) than in Brain Booster Supplements. There have been several new Brain Booster Supplements endorsements. It is a worthwhile cause. When push comes to shove, these thought provoking viewpoints as that relates to my tactic. A standard Brain Booster Supplements can be substituted for Brain Booster Supplements. I daresay that most of the skillful people who are serious dealing with that proposal aren't the kind of folks who would turn to using this. That was how to have a Remind Solution of your very own. This is how that's positioned in the market so you may have to take note this. I had occasionally considered this. Which organization do you belong to? Fellow travelers that buy these budget imitations don't realize this until it's too late. I've been looking for the malarkey with precision quality. It also can get very dull. Quite a few of these tricks can be absorbed rather easily. Visit Hare=> [https://ipsnews.net/business/2021/09/18/remind-solution-real-brain-booster-or-a-scam-read-ingredients-and-side-effects-reports/](https://ipsnews.net/business/2021/09/18/remind-solution-real-brain-booster-or-a-scam-read-ingredients-and-side-effects-reports/)
remindsolution1
998,849
Module 2 | React BillsApp Project with Material Ui and Context Api
Just Released the Second Module of BillsApp Project on My Channel Do Check it out and Provide your...
0
2022-02-23T14:06:19
https://dev.to/47karimasif/module-2-react-billsapp-project-with-material-ui-and-context-api-5e2j
javascript, react, webdev, devops
Just released the second module of the BillsApp project on my channel. [Link for The Video](https://youtu.be/T6XM7uI3138) Do check it out, share it, and leave your feedback :)
47karimasif
999,182
Javascript: isFunctions
In every language, we need to validate data before modifying or displaying it. So, here in the case...
16,797
2022-02-23T16:54:29
https://dev.to/urstrulyvishwak/isfunctions-in-javascript-5d0h
javascript, webdev, programming, discuss
In every language, we need to validate data before modifying or displaying it. JavaScript is no exception: we use these functions almost daily, and we maintain all of them in a single util class for quick reuse. No further theory; let's jump into the stuff. Basically, {% embed https://gist.github.com/K-Vishwak/a96acce6736dbf7d163cf739e8bbc22c %} Use the above reference and write the `isFunctions` now: **isNumber** {% embed https://gist.github.com/K-Vishwak/7b72295c65d131c93a9f7b8da9db81cf %} (+/-)Infinity is of type `number` in JavaScript, so if you want to exclude it you can update the logic as: {% embed https://gist.github.com/K-Vishwak/d7a633d1397288a43b30d6b5e28a0068 %} **isBoolean** {% embed https://gist.github.com/K-Vishwak/fc38097d9757357cd5d46ca1c42e9f22 %} **isString** {% embed https://gist.github.com/K-Vishwak/5629b996c8c80ef4aa35baf1c1cf36d8 %} **isObject** {% embed https://gist.github.com/K-Vishwak/5e04a2280f1083a520fa1549bb8dc6a4 %} **isArray** {% embed https://gist.github.com/K-Vishwak/5e63f38433383cb962bf1c11edca2a17 %} **isFunction** {% embed https://gist.github.com/K-Vishwak/b51c2a02a6a0e1248ddb311c0358bd4f %} **isInteger** We can use `Number.isInteger(value)`. Yes, there are some exceptions depending on where we use these functions. Please do suggest if I missed any mandatory validations in the above isFunctions. Hope these help a little, instead of you going over lengthy explanations around the globe 😝. Thank you! Happy Reading :) <hr style="border-width: 2px;border-color: #87CEEB;"> <h1> 💎 Love to see your response</h1> <ol> <li style="padding: 10px;font-size: 30px;"><b>Like</b> - You reached here, which means I think I deserve a like. </li> <li style="padding: 10px;font-size: 30px;"><b>Comment</b> - We can learn together. </li> <li style="padding: 10px;font-size: 30px;"><b>Share</b> - Makes others also find this resource useful.</li> <li style="padding: 10px;font-size: 30px;"><b>Subscribe / Follow</b> - to stay up to date with my daily articles. 
</li> <li style="padding: 10px;font-size: 30px;"><b>Encourage me</b> - <a href="https://www.buymeacoffee.com/urstrulyvishwak" target="_blank">You can buy me a Coffee</a> </ol> <hr style="border-width: 2px;border-color: #87CEEB;"> <h2> Let's discuss further. </h2> <ol style="padding: 20px;"> <li style="padding: 10px;font-size: 30px;">Just DM <br> <a href="https://twitter.com/messages/compose?recipient_id=1388837509824610314&text=Hello world" class="twitter-dm-button" data-screen-name=“@urstrulyvishwak” data-size="large">@urstrulyvishwak</a> </li> <li style="padding: 10px;font-size: 30px;"> <p> Or mention <br> <a href="https://twitter.com/intent/tweet?screen_name=urstrulyvishwak&ref_src=twsrc^tfw" class="twitter-mention-button" data-size="large" data-text="Hi, Vishwak" data-related="@urstrulyvishwak" data-show-count="false">@urstrulyvishwak</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p> </li> </ol> <h2> For further updates: </h2> <a href="https://twitter.com/urstrulyvishwak?ref_src=twsrc^tfw" class="twitter-follow-button" data-size="large" data-text="Hi, Vishwak" data-related="@urstrulyvishwak" data-show-count="false">Follow @urstrulyvishwak</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
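The gists above aren't reproduced in this post, so as a rough sketch (not necessarily identical to the embedded code), typical implementations of these checks look like:

```javascript
// Common type-check helpers built on typeof and Array.isArray.
const isNumber = (v) => typeof v === 'number' && Number.isFinite(v); // rejects NaN and (+/-)Infinity
const isBoolean = (v) => typeof v === 'boolean';
const isString = (v) => typeof v === 'string';
const isArray = Array.isArray;
const isObject = (v) => typeof v === 'object' && v !== null && !Array.isArray(v);
const isFunction = (v) => typeof v === 'function';
```

Note that `typeof null` is `'object'`, which is why `isObject` needs the explicit `v !== null` check.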
urstrulyvishwak
343,230
A post about productivity
If you have a programming blog you have to write a post about programmer productivity. It's kind of l...
0
2020-05-25T05:52:38
https://dev.to/zasuh_/a-post-about-productivity-407g
programming, productivity, work
If you have a programming blog, you have to write a post about programmer productivity. It's kind of like a ritual to be accepted into the pantheon of programming bloggers (next to writing your opinion on the industry and how it has changed as a whole). Productivity in programming is weird when you are starting out. You think that you aren't making any progress when you just play around with HTML and CSS, because those aren't ReAlll programming languages, when in reality you are learning the basics for what's to come. You can't properly learn how to make web apps with just JS, because eventually you are gonna run into some problem that requires HTML or CSS knowledge at a much deeper level. So you are making progress, I assure you. The type of productivity I'm referring to is how to properly get something done that you have set out to do, and not spend hours just watching tutorials. What I'm saying is that to me **productivity is when you are actively moving towards a goal**. You don't have to move towards the goal consistently; yes, that is preferred, but it's not necessary as long as you are moving. There is this great quote that I don't know the source of anymore, but it says: **It's much easier to steer a moving train than a stationary one**. Once you have some momentum towards the goal, all you have to keep doing is correcting your path to the finish line. --- Taking development as an example: it takes ages for someone who hasn't programmed before to actually become really productive, but they are still productive in some sense. They are learning, and learning takes a very long time to turn into production of value. And that sucks, but it sucks all the time, because we as developers are always learning and consequently always feeling like shit because we don't 'know' enough. It's something that comes with the job. 
Here are a few things I've learned that make me more productive in the work environment: ### Having a list of tasks ### I was using Trello for the longest time as the default home for all the tasks I had to do at work. It quickly became clear that Trello was not working, because tasks were always changing and I was being switched to different projects all the time, making it difficult to constantly change Trello entries. Instead, I actually downgraded to a regular A4 piece of paper, where I write the project name at the top and then the tasks that need to be done. I cross them out once the tasks are done, and it makes it easier to draw out UI requirements and make diagrams for easier understanding. One of the reasons why I use paper is that we don't do development on laptops, and making notes on my phone at meetings is just annoying. ### Understanding the importance of a task ### Using the [Eisenhower method](https://appfluence.com/productivity/what-is-the-eisenhower-method/), you should go over your tasks and decide their priority. If you have 2 critical and important tasks, analyze both, figure out which one will take the longest, and do that one first. This sounds easier said than done, because a lot of the time you can't really know how long it will take you to finish a task, since programming can have a lot of unexpected bumps in the road. Plan accordingly, and if the task completion time goes into the red, talk to your manager about getting some extended time, since it's in everyone's best interest to have the work done at the end of the day. ### Knowing when you can get the most done ### For me, mornings are the most productive times of the day. I love programming in the morning with a cup of coffee by my side; the earlier the better. That comes at a cost, though. I just can't be as productive after lunch as I am in the morning: I feel like crap, have no concentration, get irritated and want to go home. 
It's just the way I'm wired, but I have to force myself to do work until about 4-5pm. Management doesn't care if you can't program after 2; stuff needs to get done. So know yourself well enough to know when to schedule the most important and hardest tasks of the day. ### Slow down ### This is something I'm working on myself, but hear me out. **Do everything 25% slower**. Why? Well, you can do **10%**; the point is to slow down. Programming and development are tricky, and small mistakes caused by rushing happen all the time. So slow down, check what that function is doing, make sure you have all the cases handled, and test the shit out of it, so you can be certain that when the push to production happens you didn't make a simple mistake because you were rushing. For me, it helps to really think about the problem and the solution before diving in and solving it. You can't predict everything that might break your system, but slowing down might cause you to see something you otherwise wouldn't if you were just rushing through. Thank you for reading!
zasuh_
999,252
How to POST and Receive JSON Data using PHP cURL
JSON is the most popular data format for exchanging data between a browser and a server. The JSON...
0
2022-02-23T19:21:09
https://dev.to/saymon/how-to-post-and-receive-json-data-using-php-curl-5pn
php, webdev, beginners, tutorial
**JSON** is the most popular data format for exchanging data between a browser and a server. The JSON data format is mostly used in web services to interchange data through APIs. When you are working with web services and APIs, sending **JSON data via POST** request is one of the most commonly required features. PHP cURL makes it easy to POST JSON data to a URL. In this tutorial, we will show you how to **POST JSON data** using PHP cURL and **get JSON data** in PHP. ## Send JSON data via POST with PHP cURL The following example makes an HTTP POST request and sends the JSON data to a URL with cURL in PHP. - Specify the URL `($url)` where the JSON data is to be sent. - Initiate a new cURL resource using `curl_init()`. - Set up the data in a PHP array and encode it into a JSON string using `json_encode()`. - Attach the JSON string to the POST fields using the `CURLOPT_POSTFIELDS` option. - Set the Content-Type of the request to `application/json` using the `CURLOPT_HTTPHEADER` option. - Return the response as a string instead of outputting it using the `CURLOPT_RETURNTRANSFER` option. - Finally, the `curl_exec()` function is used to execute the POST request. ```php // API URL $url = 'http://www.example.com/api'; // Create a new cURL resource $ch = curl_init($url); // Setup request to send json via POST $data = [ 'username' => 'saymon', 'password' => '123' ]; $payload = json_encode($data); // Attach encoded JSON string to the POST fields curl_setopt($ch, CURLOPT_POSTFIELDS, $payload); // Set the content type to application/json curl_setopt($ch, CURLOPT_HTTPHEADER, [ 'Content-Type: application/json' ]); // Return response instead of outputting curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // Execute the POST request $result = curl_exec($ch); if($result === false) { exit('Curl error: ' . curl_error($ch)); } else { echo $result; } // Close cURL resource curl_close($ch); ``` ## Receive JSON POST Data using PHP The following example shows how you can get or fetch the JSON POST data using PHP. 
- Use the `json_decode()` function to decode the JSON data in PHP. - The `file_get_contents()` function is used to read the raw request body from `php://input`. ```php $data = json_decode(file_get_contents('php://input'), true); ``` * * * I hope you enjoyed the read! Feel free to follow me on [GitHub](https://github.com/saymontavares), [LinkedIn](https://www.linkedin.com/in/saymon-tavares/) and [DEV](https://dev.to/saymontavares) for more!
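For comparison with the cURL options above, the same request can be described from JavaScript. This sketch only builds the `fetch()` options object; the URL and fields mirror the PHP example:

```javascript
// Builds fetch() options equivalent to the PHP cURL setup:
// a JSON-encoded POST body plus an application/json Content-Type header.
function buildJsonPost(data) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  };
}
```

A call would then be `fetch('http://www.example.com/api', buildJsonPost({ username: 'saymon', password: '123' }))`, and the PHP side can read the body from `php://input` exactly as shown above.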
saymon
999,444
cryptocurrency Coinmarketdo news
Coinmarketcap has been the leader in tracking cryptocurrency prices in real time. It was founded in...
0
2022-02-23T22:04:08
https://coinmarketdo.com
cryptocurrency, crypto, bitcoin, ethereum
Coinmarketcap has been the leader in tracking cryptocurrency prices in real time. It was founded in May 2013 and has been a well-known brand in the fast-growing crypto space. The website and mobile apps aim to make cryptocurrency accessible to everyone and to empower investors every day with accurate, high-quality information. Coinmarketcap is still the best source for institutional investors, retail investors, and media to compare the prices of crypto assets.
coiner
999,474
Longest Palindromic Substring - Leetcode #5
https://www.youtube.com/watch?v=Aiw3m8EK0rs&amp;t=8s Verdict: Pass Score: 5/5 Coding:...
0
2022-02-23T21:56:45
https://dev.to/dannyhabibs/longest-palindromic-substring-1aim
career, interview, programming, python
https://www.youtube.com/watch?v=Aiw3m8EK0rs&t=8s Verdict: Pass Score: 5/5 Coding: 5 Communication: 5 Problem Solving: 5 Inspection: 3 Review: 5 Leetcode #5 - Longest Palindromic Substring (https://leetcode.com/problems/longest-palindromic-substring/) The candidate did an excellent job. He had seen the problem during his practice, but was still able to explain the details of an optimal solution and bring the interviewer along for the ride. He coded the solution fluently, and reviewed thoroughly. Excellent work. ## What they did well - Able to code the optimal solution quickly - Explained his thinking - Drew out an example and walked me through it - Was able to deduce the runtime and space complexity confidently - Handled questions from the interviewer really well ## What could be better - Nothing significant that needed to be improved on. - As an interviewer, I would have loved to see more of the problem solving process involved with coming up with the solution with more questions about the inputs and outputs. But since the candidate was able to so clearly articulate the solution I disregarded this. The candidate very thoroughly tested his code. Be sure to check out that part of the vid
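The solution itself is only shown in the video; for reference, a common optimal approach to Longest Palindromic Substring is expand-around-center, with O(n²) time and O(1) extra space (this is a typical implementation, not necessarily the candidate's code):

```javascript
// Expand around each of the 2n-1 possible palindrome centers
// (every character, and every gap between two adjacent characters).
function longestPalindrome(s) {
  let start = 0;
  let maxLen = s.length > 0 ? 1 : 0;
  const expand = (l, r) => {
    while (l >= 0 && r < s.length && s[l] === s[r]) {
      l--;
      r++;
    }
    const len = r - l - 1; // the loop overshoots by one on each side
    if (len > maxLen) {
      maxLen = len;
      start = l + 1;
    }
  };
  for (let i = 0; i < s.length; i++) {
    expand(i, i);     // odd-length palindromes centered at i
    expand(i, i + 1); // even-length palindromes centered between i and i+1
  }
  return s.substring(start, start + maxLen);
}
```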
dannyhabibs
1,000,267
Web3 Tutorial: build DApp with Web3-React and SWR
In "Tutorial: Build DAPP with hardhat, React and Ethers.js", we connect to and interact with the...
0
2022-02-24T14:18:00
https://dev.to/yakult/tutorial-build-dapp-with-web3-react-and-swr-1fb0
blockchain, web3, dapp, react
In "[Tutorial: Build DAPP with hardhat, React and Ethers.js](https://dev.to/yakult/a-tutorial-build-dapp-with-hardhat-react-and-ethersjs-1gmi)", we connect to and interact with the blockchain using `Ethers.js` directly. It is ok, but there are tedious processes needed to be done by ourselves. We would rather use handy frameworks to help us in three aspects: 1. maintain context and connect with blockchain. 2. connect to different kinds of blockchain providers. 3. query blockchain more efficiently. [Web3-React](https://github.com/NoahZinsmeister/web3-react/tree/v6), a connecting framework for React and Ethereum, can help us with job 1 & 2. (We will focus on job 1.) Web3-React is an open source framework developed by Uniswap engineering Lead Noah Zinsmeister. You can also try [WAGMI: React Hooks for Ethereum](https://wagmi.sh/). SWR can help us to query blockchains efficiently. SWR (stale-while-revalidate) is a library of react hooks for data fetching. I learned how to use SWR with blockchain from Lorenzo Sicilia's tutorial [How to Fetch and Update Data From Ethereum with React and SWR](https://consensys.net/blog/developers/how-to-fetch-and-update-data-from-ethereum-with-react-and-swr/). I am still trying to find an efficient way to deal with Event. The Graph (sub-graph) is one of the good choices. [The Graph Protocol](https://thegraph.com/en/) and sub-graph are widely used by DeFi applications. In Nader Dabit's tutorial "[The Complete Guide to Full Stack Web3 Development](https://dev.to/dabit3/the-complete-guide-to-full-stack-web3-development-4g74)", he gives us a clear guide on how to use sub-graph. Special thanks to Lorenzo Sicilia and his tutorial. I adapted the SWR flow and some code snippets from him. You can find the code repos for this tutorial: Hardhat project: https://github.com/fjun99/chain-tutorial-hardhat-starter Webapp project: https://github.com/fjun99/web3app-tutrial-using-web3react Let's start building our DApp using Web3-React. 
--- ![Task #1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8kbdczsudwvjvy1htprk.png) ## Task 1: Prepare webapp project and smart contract The first half of Task 1 is the same as the ones in "[Tutorial: build DApp with Hardhat, React and Ethers.js](https://dev.to/yakult/a-tutorial-build-dapp-with-hardhat-react-and-ethersjs-1gmi)". Please refer to that tutorial. > ### Tutorial: build DApp with Hardhat, React and Ethers.js > https://dev.to/yakult/a-tutorial-build-dapp-with-hardhat-react-and-ethersjs-1gmi > > ### Task 1: setup development environment > #### Task 1.1 Install Hardhat and init a Hardhat project > #### Task 1.2 Development Circle in Hardhat > #### Task 1.3 MetaMask Switch Local testnet > #### Task 1.4 Create webapp with Next.js and Chakra UI > #### Task 1.5 Edit webapp - header, layout, _app.tsx, index.tsx We choose to download the webapp scaffold code from [our github repo](https://github.com/fjun99/webapp-tutorial-scaffold). First, we make a `hhproject/` directory for our project (`hhproject/chain/` for hardhat project, `hhproject/webapp/` for React/Node.js webapp): ``` mkdir hhproject && cd hhproject ``` Project directory structure: ``` - hhproject - chain (working dir for hardhat) - contracts - test - scripts - webapp (working dir for NextJS app) - src - pages - components ``` Download an empty webapp scaffold: ``` git clone https://github.com/fjun99/webapp-tutorial-scaffold.git webapp cd webapp yarn install yarn dev ``` We also need to prepare an ERC20 token ClassToken for our webapp to interact with. This is the second half of Task 1. 
This job can be done same as Task 3 of "Tutorial: build DApp with Hardhat, React and Ethers.js" > ### Task 3: Build ERC20 smart contract using OpenZeppelin > #### Task 3.1 Write ERC20 smart contract > #### Task 3.2 Compile smart contract > #### Task 3.3 Add unit test script > #### Task 3.4 Add deploy script > #### Task 3.5 Run stand-alone testnet again and deploy to it > #### Task 3.6 Interact with ClassToken in hardhat console > #### Task 3.7 Add token to MetaMask Again, we choose to download the hardhat chain starter project from [github repo](https://github.com/fjun99/chain-tutorial-hardhat-starter).In your `hhproject/` directory: ``` git clone git@github.com:fjun99/chain-tutorial-hardhat-starter.git chain cd chain yarn install ``` Let's run "compile, test, deploy" circle of smart contract development. In another terminal, run command line in `hhproject/chain/` directory to start a stand-alone Hardhat Network (local testnet) : ``` yarn hardhat node ``` Then compile, test and deploy smart contract: ``` yarn hardhat compile yarn hardhat test test/ClassToken.test.ts yarn hardhat run scripts/deploy_classtoken.ts --network localhost // ClassToken deployed to: 0x5FbDB2315678afecb367f032d93F642f64180aa3 // ✨ Done in 4.04s. ``` Now we have ClassToken deployed to local testnet: `0x5FbDB2315678afecb367f032d93F642f64180aa3` --- ![Task #2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1r015rmw73nni9lznycn.png) ## Task 2: Add Web3-React to our webapp - Connect button ### Task 2.1: Understanding Web3-React ![Web3-react explained](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zyxu90s949jel93g3rke.png) From my point of view, Web3-React is a web3 blockchain **connecting framework** which provides three features we need: - Web3ReactProvder, a react context we can access throughout our web app. - useWeb3React, handy react hook to interact with blockchain. 
- Connectors of several kinds of blockchain providers, such as MetaMask (browser extension), RPC connector(Alchemy and Infura), QR code connector(WalletConnect), Hardware connector (Ledger/Trezor). Currently Web3-React has [stable V6](https://github.com/NoahZinsmeister/web3-react/tree/v6) and [beta V8](https://github.com/NoahZinsmeister/web3-react/tree/main). We will use V6 in our tutorial. ### Task 2.2: Install `Web3-React`, `Ethers.js` and add `Web3ReactProvder` STEP 1: install dependencies In the `webapp` directory, run: ``` yarn add @web3-react/core yarn add @web3-react/injected-connector yarn add ethers yarn add swr ``` We will use `swr` later. STEP 2: edit `pages/_app.tsx`: ``` js // src/pages/_app.tsx import { ChakraProvider } from '@chakra-ui/react' import type { AppProps } from 'next/app' import { Layout } from 'components/layout' import { Web3ReactProvider } from '@web3-react/core' import { Web3Provider } from '@ethersproject/providers' function getLibrary(provider: any): Web3Provider { const library = new Web3Provider(provider) return library } function MyApp({ Component, pageProps }: AppProps) { return ( <Web3ReactProvider getLibrary={getLibrary}> <ChakraProvider> <Layout> <Component {...pageProps} /> </Layout> </ChakraProvider> </Web3ReactProvider> ) } export default MyApp ``` Explanations: - We add a [react context provider](https://reactjs.org/docs/context.html#contextprovider) `Web3ReactProvider` in `_app.tsx`. - Blockchain provider (library) is an Ethers.js `Web3Provider` which we can add connector and activate later using hooks. ### Task 2.3: Add an empty ConnectMetamask component ![connector, provider, signer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jhe7s2hy0f7frwwbodlu.png) The relationship between connector, provider and signer in `Ethers.js` is illustrated in the graph. In this sub-task we will add an empty ConnectMetamask component. 
- STEP 1: Add `src/components/ConnectMetamask.tsx`: ``` js import { useEffect } from 'react' import { useWeb3React } from '@web3-react/core' import { Web3Provider } from '@ethersproject/providers' import { Box, Button, Text} from '@chakra-ui/react' import { injected } from 'utils/connectors' import { UserRejectedRequestError } from '@web3-react/injected-connector' import { formatAddress } from 'utils/helpers' const ConnectMetamask = () => { const { chainId, account, activate,deactivate, setError, active,library ,connector} = useWeb3React<Web3Provider>() const onClickConnect = () => { activate(injected,(error) => { if (error instanceof UserRejectedRequestError) { // ignore user rejected error console.log("user refused") } else { setError(error) } }, false) } const onClickDisconnect = () => { deactivate() } useEffect(() => { console.log(chainId, account, active,library,connector) }) return ( <div> {active && typeof account === 'string' ? ( <Box> <Button type="button" w='100%' onClick={onClickDisconnect}> Account: {formatAddress(account,4)} </Button> <Text fontSize="sm" w='100%' my='2' align='center'>ChainID: {chainId} connected</Text> </Box> ) : ( <Box> <Button type="button" w='100%' onClick={onClickConnect}> Connect MetaMask </Button> <Text fontSize="sm" w='100%' my='2' align='center'> not connected </Text> </Box> )} </div> ) } export default ConnectMetamask ``` STEP 2: define a `injected` connector in `uitls/connectors.tsx`: ``` js import { InjectedConnector } from "@web3-react/injected-connector"; export const injected = new InjectedConnector({ supportedChainIds: [ 1, 3, 4, 5, 10, 42, 31337, 42161 ] }) ``` STEP 3: add a helper in `utils/helpers.tsx` ``` js export function formatAddress(value: string, length: number = 4) { return `${value.substring(0, length + 2)}...${value.substring(value.length - length)}` } ``` STEP 4: add `ConnectMetamask` component to `index.tsx` ``` js import ConnectMetamask from 'components/ConnectMetamask' ... 
<ConnectMetamask /> ``` STEP 5: run the web app with `yarn dev` ![connect wallet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3ls65u9i5gof7sxj1jg.png) Explanation of what we do here: - We get hooks from `useWeb3React`: chainId, account, activate, deactivate, setError, active, library, connector - When a user clicks connect, we call `activate(injected)`. `injected` is an `InjectedConnector` (mostly it means the window.ethereum injected by MetaMask) that we can configure. - When the user clicks disconnect, we call `deactivate()`. - The library is the Ethers.js `Web3Provider` we can use to connect to and read from the blockchain. If we want to send a transaction to the blockchain (write), we will need to get an [Ethers.js signer](https://docs.ethers.io/v5/api/signer/) by calling `provider.getSigner()`. --- ![Task #3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1zs6513yhaz3ji6rgqk4.png) ## Task 3: Read from blockchain - ETHBalance We will use Web3-React to read from the blockchain. ### Task 3.1: Add `ETHBalance.tsx` (first attempt) Add a component to get the ETH balance of your current account. Add `components/ETHBalance.tsx` ``` js import { useState, useEffect } from 'react' import { useWeb3React } from '@web3-react/core' import { Web3Provider } from '@ethersproject/providers' import { Text} from '@chakra-ui/react' import { formatEther } from "@ethersproject/units" const ETHBalance = () => { const [ethBalance, setEthBalance] = useState<number | undefined>(undefined) const {account, active, library,chainId} = useWeb3React<Web3Provider>() const provider = library useEffect(() => { if(active && account){ provider?.getBalance(account).then((result)=>{ setEthBalance(Number(formatEther(result))) }) } }) return ( <div> {active ? ( <Text fontSize="md" w='100%' my='2' align='left'> ETH in account: {ethBalance?.toFixed(3)} {chainId===31337? 
'Test':' '} ETH </Text> ) : ( <Text fontSize="md" w='100%' my='2' align='left'>ETH in account:</Text> )} </div> ) } export default ETHBalance ``` Edit `pages/index.tsx` to display ETHBalance: ``` js <Box mb={0} p={4} w='100%' borderWidth="1px" borderRadius="lg"> <Heading my={4} fontSize='xl'>ETH Balance</Heading> <ETHBalance /> </Box> ``` The problem with this approach is keeping the result (the ETH balance) constantly in sync with the blockchain. Lorenzo Sicilia suggests using `SWR` together with event listening to fetch data more efficiently. The [SWR project homepage](https://swr.vercel.app/) says: > SWR is a strategy to first return the data from cache (stale), then send the fetch request (revalidate), and finally come with the up-to-date data. > With SWR, components will get a stream of data updates constantly and automatically. The UI will always be fast and reactive. ### Task 3.2: Add `ETHBalanceSWR.tsx` (second attempt) Add `components/ETHBalanceSWR.tsx` ``` js import { useState, useEffect } from 'react' import { useWeb3React } from '@web3-react/core' import { Web3Provider } from '@ethersproject/providers' import { Text} from '@chakra-ui/react' import { formatEther } from "@ethersproject/units" import useSWR from 'swr' const fetcher = (library:any) => (...args:any) => { const [method, ...params] = args return library[method](...params) } const ETHBalanceSWR = () => { const { account, active, library,chainId} = useWeb3React<Web3Provider>() const { data: balance,mutate } = useSWR(['getBalance', account, 'latest'], { fetcher: fetcher(library), }) console.log("ETHBalanceSWR",balance) useEffect(() => { if(!library) return // listen for changes on an Ethereum address console.log(`listening for blocks...`) library.on('block', () => { console.log('update balance...') mutate(undefined, true) }) // remove listener when the component is unmounted return () => { library.removeAllListeners('block') } // trigger the effect only on component mount // ** re-run when the library is ready }, [library]) 
return ( <div> {active && balance ? ( <Text fontSize="md" w='100%' my='2' align='left'> ETH in account: {parseFloat(formatEther(balance)).toFixed(3)} {chainId===31337? 'Test':' '} ETH </Text> ) : ( <Text fontSize="md" w='100%' my='2' align='left'>ETH in account:</Text> )} </div> ) } export default ETHBalanceSWR ``` Add the `ETHBalanceSWR` component to `index.tsx` ``` js <Box mb={0} p={4} w='100%' borderWidth="1px" borderRadius="lg"> <Heading my={4} fontSize='xl'>ETH Balance <b>using SWR</b></Heading> <ETHBalanceSWR /> </Box> ``` Explanations: - We use SWR to fetch data, which calls `provider.getBalance( address [ , blockTag = latest ] )` ([Ethers docs link](https://docs.ethers.io/v5/api/providers/provider/#Provider--account-methods)). The `library` is a web3 provider. ``` js const { data: balance,mutate } = useSWR(['getBalance', account, 'latest'], { fetcher: fetcher(library), }) ``` - The fetcher is constructed as: ``` js const fetcher = (library:any) => (...args:any) => { const [method, ...params] = args return library[method](...params) } ``` - We use SWR's `mutate` to update its client-side cache. We mutate the balance to `undefined` on every new block, so SWR will re-query and update it for us. ``` js library.on('block', () => { console.log('update balance...') mutate(undefined, true) }) ``` - When the library (provider) changes and a provider is available, the side effect (`useEffect()`) adds a listener for new block events. Block events are emitted on every block change. Let's play with the webapp: - Send test ETH from Hardhat local testnet Account#0 (`0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266`) to Account#1 (`0x70997970C51812dc3A010C7d01b50e0d17dc79C8`). - Check that the ETH balance of the current account (Account#0) changes accordingly. 
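As a side note, the curried fetcher pattern above is easy to reason about in isolation. Here is a minimal, self-contained sketch that swaps a hypothetical mock object in place of the real Ethers.js `Web3Provider` (the balance value is made up for illustration):

``` js
// Same curried fetcher as in ETHBalanceSWR: the outer call binds the
// provider (library); the inner call receives the SWR key array
// (e.g. ['getBalance', account, 'latest']) spread out as arguments.
const fetcher = (library) => (...args) => {
  const [method, ...params] = args
  return library[method](...params)
}

// Hypothetical mock provider standing in for the real Web3Provider.
const mockLibrary = {
  getBalance: (address, blockTag) =>
    Promise.resolve('1000000000000000000'), // 1 ETH in wei, as a string
}

// SWR calls the bound fetcher with the key array spread as arguments:
fetcher(mockLibrary)('getBalance', '0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266', 'latest')
  .then((wei) => console.log(wei)) // logs the mock wei balance
```

With the real provider, `library['getBalance'](account, 'latest')` resolves to a BigNumber instead of this mock string, but the dispatch mechanics are identical.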
More explanations about SWR can be found at: - Lorenzo Sicilia's blockchain tutorial: [link](https://consensys.net/blog/developers/how-to-fetch-and-update-data-from-ethereum-with-react-and-swr/) - SWR documentation: [link](https://swr.vercel.app/docs/getting-started) --- ![Task #4](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1m8s9gvs970na6m4p2tc.png) ## Task 4: Read / Listen - Interact with smart contract In this task, we will read data from the smart contract using SWR, and use smart contract event listening to get updates. ### Task 4.1: Add `ERC20ABI.tsx` Add `abi/ERC20ABI.tsx` for the standard ERC20 ABI. ``` js export const ERC20ABI = [ // Read-Only Functions "function balanceOf(address owner) view returns (uint256)", "function totalSupply() view returns (uint256)", "function decimals() view returns (uint8)", "function symbol() view returns (string)", // Authenticated Functions "function transfer(address to, uint amount) returns (bool)", // Events "event Transfer(address indexed from, address indexed to, uint amount)" ]; ``` Add `components/ReadERC20.tsx` ``` js import React, { useEffect,useState } from 'react'; import { useWeb3React } from '@web3-react/core' import { Web3Provider } from '@ethersproject/providers' import {Contract} from "@ethersproject/contracts"; import { formatEther } from "@ethersproject/units" import { Text} from '@chakra-ui/react' import useSWR from 'swr' import {ERC20ABI as abi} from "abi/ERC20ABI" interface Props { addressContract: string } const fetcher = (library: Web3Provider | undefined, abi: any) => (...args:any) => { if (!library) return const [arg1, arg2, ...params] = args const address = arg1 const method = arg2 const contract = new Contract(address, abi, library) return contract[method](...params) } export default function ReadERC20(props:Props){ const addressContract = props.addressContract const [symbol,setSymbol]= useState<string>("") const [totalSupply,setTotalSupply]=useState<string>() const { account, active, library} = 
useWeb3React<Web3Provider>() const { data: balance, mutate } = useSWR([addressContract, 'balanceOf', account], { fetcher: fetcher(library, abi), }) useEffect( () => { if(!(active && account && library)) return const erc20:Contract = new Contract(addressContract, abi, library); library.getCode(addressContract).then((result:string)=>{ //check whether it is a contract if(result === '0x') return erc20.symbol().then((result:string)=>{ setSymbol(result) }).catch(console.error) erc20.totalSupply().then((result:string)=>{ setTotalSupply(formatEther(result)) }).catch(console.error); }) //called only when changed to active },[active]) useEffect(() => { if(!(active && account && library)) return const erc20:Contract = new Contract(addressContract, abi, library) // listen for changes on an Ethereum address console.log(`listening for Transfer...`) const fromMe = erc20.filters.Transfer(account, null) erc20.on(fromMe, (from, to, amount, event) => { console.log('Transfer|sent', { from, to, amount, event }) mutate(undefined, true) }) const toMe = erc20.filters.Transfer(null, account) erc20.on(toMe, (from, to, amount, event) => { console.log('Transfer|received', { from, to, amount, event }) mutate(undefined, true) }) // remove listener when the component is unmounted return () => { erc20.removeAllListeners(toMe) erc20.removeAllListeners(fromMe) } // trigger the effect only on component mount }, [active,account]) return ( <div> <Text >ERC20 Contract: {addressContract}</Text> <Text>token totalSupply:{totalSupply} {symbol}</Text> <Text my={4}>ClassToken in current account:{balance ? parseFloat(formatEther(balance)).toFixed(1) : " " } {symbol}</Text> </div> ) } ``` Add `ReadERC20` to `index.tsx`: ``` js const addressContract='0x5fbdb2315678afecb367f032d93f642f64180aa3' ... 
<Box my={4} p={4} w='100%' borderWidth="1px" borderRadius="lg"> <Heading my={4} fontSize='xl'>ClassToken: ERC20 Smart Contract</Heading> <ReadERC20 addressContract={addressContract} /> </Box> ``` Some explanations: - We query data from the blockchain and smart contract by calling `contract.balanceOf()`. ``` js const { data: balance, mutate } = useSWR([addressContract, 'balanceOf', account], { fetcher: fetcher(library, ERC20ABI), }) ``` - The fetcher is constructed as: ``` js const fetcher = (library: Web3Provider | undefined, abi: any) => (...args:any) => { if (!library) return const [arg1, arg2, ...params] = args const address = arg1 const method = arg2 const contract = new Contract(address, abi, library) return contract[method](...params) } ``` - When the Ethereum network connection changes to `active`, we query `symbol()` and `totalSupply()`. Since these two values do not change in our contract, we only query them once. - We add listeners when `active` or `account` changes. Two listeners are added: one for Transfer events sending ERC20 tokens from `account`, and one for those sending tokens to `account`. ``` js // listen for changes on an Ethereum address console.log(`listening for Transfer...`) const fromMe = erc20.filters.Transfer(account, null) erc20.on(fromMe, (from, to, amount, event) => { console.log('Transfer|sent', { from, to, amount, event }) mutate(undefined, true) }) const toMe = erc20.filters.Transfer(null, account) erc20.on(toMe, (from, to, amount, event) => { console.log('Transfer|received', { from, to, amount, event }) mutate(undefined, true) }) ``` Result: ![display results](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pn2zszdi326g2rufeu0n.png) --- ![Task #5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5nnbdp0h6n3wm8i4hcgd.png) ## Task 5: Write - Interact with smart contract ### Task 5.1: Add a component for Transfer In this task, we will add `TransferERC20.tsx`. 
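The transfer component in the next step converts the human-readable amount with `parseEther` before calling `transfer`. As a rough sketch of what that conversion does (the real `@ethersproject/units` implementation handles negative numbers, overflow and other edge cases), it scales a decimal ether string by 10^18 into a wei `BigInt`:

``` js
// Simplified illustration of parseEther: scale a decimal ether string
// to wei (18 decimal places) using BigInt arithmetic.
function toWei(ether) {
  const [whole, frac = ''] = ether.split('.')
  // Right-pad the fractional digits to exactly 18 places.
  const fracPadded = (frac + '0'.repeat(18)).slice(0, 18)
  return BigInt(whole) * 10n ** 18n + BigInt(fracPadded)
}

console.log(toWei('100')) // 100000000000000000000n
console.log(toWei('1.5')) // 1500000000000000000n
```

In the DApp itself, keep using the real `parseEther`; this sketch only shows why amounts are passed around as strings rather than floating-point numbers.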
Add `components/TransferERC20.tsx` ``` js import React, { useState } from 'react'; import { useWeb3React } from '@web3-react/core' import { Web3Provider } from '@ethersproject/providers' import { Contract } from "@ethersproject/contracts"; import { parseEther } from "@ethersproject/units" import { Button, Input , NumberInput, NumberInputField, FormControl, FormLabel } from '@chakra-ui/react' import { ERC20ABI } from "abi/ERC20ABI" interface Props { addressContract: string } export default function TransferERC20(props:Props){ const addressContract = props.addressContract const [toAddress, setToAddress]=useState<string>("") const [amount,setAmount]=useState<string>('100') const { account, active, library} = useWeb3React<Web3Provider>() async function transfer(event:React.FormEvent) { event.preventDefault() if(!(active && account && library)) return // new contract instance with **signer** const erc20 = new Contract(addressContract, ERC20ABI, library.getSigner()); erc20.transfer(toAddress,parseEther(amount)).catch(console.error) } const handleChange = (value:string) => setAmount(value) return ( <div> <form onSubmit={transfer}> <FormControl> <FormLabel htmlFor='amount'>Amount: </FormLabel> <NumberInput defaultValue={amount} min={10} max={1000} onChange={handleChange}> <NumberInputField /> </NumberInput> <FormLabel htmlFor='toaddress'>To address: </FormLabel> <Input id="toaddress" type="text" required onChange={(e) => setToAddress(e.target.value)} my={3}/> <Button type="submit" isDisabled={!account}>Transfer</Button> </FormControl> </form> </div> ) } ``` ### Task 5.2: Add transfer component to `index.tsx` Add `TransferERC20` in `index.tsx`: ``` js <Box my={4} p={4} w='100%' borderWidth="1px" borderRadius="lg"> <Heading my={4} fontSize='xl'>Transfer ClassToken ERC20 token</Heading> <TransferERC20 addressContract={addressContract} /> </Box> ``` Let's go to `http://localhost:3000/` in the browser and play with our DApp: 
![Webapp](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/syu2ykzllbv4r50575r5.png) --- As you can see, the webapp stays well structured and simple thanks to `Web3-React`, which gives us a context provider and hooks that are easy to use. From now on, you can begin to write your own DApps. --- ### Tutorial List: #### 1. A Concise Hardhat Tutorial(3 parts) https://dev.to/yakult/a-concise-hardhat-tutorial-part-1-7eo #### 2. Understanding Blockchain with `Ethers.js`(5 parts) https://dev.to/yakult/01-understanding-blockchain-with-ethersjs-4-tasks-of-basics-and-transfer-5d17 #### 3. Tutorial : build your first DAPP with Remix and Etherscan (7 Tasks) https://dev.to/yakult/tutorial-build-your-first-dapp-with-remix-and-etherscan-52kf #### 4. Tutorial: build DApp with Hardhat, React and Ethers.js (6 Tasks) https://dev.to/yakult/a-tutorial-build-dapp-with-hardhat-react-and-ethersjs-1gmi #### 5. Tutorial: build DAPP with Web3-React and SWR https://dev.to/yakult/tutorial-build-dapp-with-web3-react-and-swr-1fb0 #### 6. Tutorial: write upgradeable smart contract (proxy) using OpenZeppelin(7 Tasks) https://dev.to/yakult/tutorial-write-upgradeable-smart-contract-proxy-contract-with-openzeppelin-1916 #### 7. Tutorial: Build an NFT marketplace DApp like Opensea(5 Tasks) https://dev.to/yakult/tutorial-build-a-nft-marketplace-dapp-like-opensea-3ng9 --- If you find this tutorial helpful, follow me on Twitter [@fjun99](https://twitter.com/fjun99)
yakult
1,000,558
Busy Creating A rideshare app using react native
I am going through an error when running my app. What's the best online IDE for react-native, or the...
0
2022-02-24T18:03:02
https://dev.to/bigboycrypto/busy-creating-a-rideshare-app-using-react-native-5h57
I am going through an error when running my app. What's the best online IDE for React Native, or the best emulator for React?
bigboycrypto
1,000,805
Accept Web3 Crypto Donations right on GitHub Pages
This approach is a game-changer for every dev who thinks about accepting donations/support for his or...
0
2022-02-25T00:13:08
https://dev.to/web3-payments/accept-web3-crypto-donations-right-on-github-pages-2oj8
web3, tutorial, javascript, opensource
This approach is a game-changer for every dev who thinks about accepting donations/support for his or her projects or currently does so. I will show you how to accept donations with any ERC-20 or BEP-20 token with automatic conversion right on GitHub Pages. **The coolest part:** - your supporters pay with any token available in their wallet on multiple blockchains (number of supported blockchains is growing) - you always receive the one asset you define in the source code (e.g., DAI or USDT) All this with just a single button, implemented for free with a small code snippet. ---- **The used solution is decentralized, therefore trustless and permissionless (no email signup required). Watch it live in action:** {% embed https://www.youtube.com/watch?v=LdnE2n6sM-c %} (Live demo: [https://lxpzurich.github.io](https://lxpzurich.github.io)) ---- ## Example from a donor’s perspective I have set up this scenario with real tokens to show you what is possible. **Let’s imagine**: Your supporter Christina (the donor) wants to say thank you for your great repository. She holds the following tokens in her wallet: ![Tokens in Christinas wallet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8qi9elblugce9axqkqzc.gif) --- At the time of her donation, the assets in the shown wallet have the following USD values: --- ![Screenshot was taken from Unmarshal’s amazing multi-chain explorer Xscan. ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvfg8m4aqrvtfc1u8tas.png) --- As you can see, Christina has multiple assets with different USD values at her disposal. The donation widget automatically recognizes the blockchain network with a wallet address containing some value (via Metamask) — on top of this, it also suggests the token with sufficient funds to pay or donate the chosen amount with. If there are multiple options for a certain amount, the wallet will display the one with the least conversion cost. 
**In a nutshell**: The donation widget will display different tokens as means of payment, depending on the donation amount. The donor can still select another Token to pay with, as long as it has a sufficient balance. {% embed https://www.youtube.com/watch?v=Rr4SpHiuZ6g %} ## More examples > - Someone donates **10$** worth of **DAI** 👉 you receive ~**10 USDT.** > - Someone donates **1000$** worth of **ETH** (currently about 0.39 ETH) 👉 you receive **~1000 USDT.** > - Someone donates **20$** worth of **SHIB** (currently about 881K SHIB)👉 you receive ~**20 USDT.** You have to initially define one particular asset that you want to receive on your end (I took the stable coin USDT as an example, but you could take any other token!). **As mentioned, the used solution is Open Source, permissionless & trustless.** It will take any dev less than 5 minutes to implement. My dev skills probably suck compared to yours, but even I managed to make this work 🚀. ## 🔎 Under the hood: Open Source Web3 Payment Protocol developed by DePay 👇 (Skip this part with a [click](#stepbystep-tutorial) if you just want to know how to implement this…) ☝️ ![DePay — Web3 Payment Protocol](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pl6u8ky1es9y0ufsk4si.png) DePay was born after my friend Sebastian Pape (@spape) had the idea of a permissionless & trustless Open Source payment protocol after the _DeFi Summer_ in 2020. The new hype around decentralized finance brought the TVL (total value locked) in DeFi protocols to new dimensions. Sebastian figured out that the immense amount of DeFi liquidity in DEx’es (such as Uniswap or PancakeSwap) can be harnessed to make crypto payments finally decentralized, easy to implement & simple to use. He participated in the ETHOnline hackathon with his MVP and [became a finalist in October 2020](https://www.youtube.com/watch?v=n8M_GwbKKWs&t=2140s). 
**Fast forward**: We quit our jobs at Swisscom & founded the DePay company in Crypto Valley (Zug) together with our friend Aleks. Our ecosystem token $DEPAY serves as a utility & governance token. $DEPAY is **not required** to use the protocol. It can (optionally) be used to unlock data insight dashboards & other PRO features, which will become more and more interesting for heavy users. That’s the “why” for this article. **Now let’s roll straight away!** 🪨🤘🪨 --------------- ## 📙 Step-by-step tutorial All you need: - A **GitHub account** & GitHub Desktop (if you don’t use the terminal). - A static **HTML page** (template). - **Your receiving wallet address per blockchain.** The Ethereum wallet address can be used on the Binance Smart Chain (et vice versa). - **The contract address of the token** you want to receive. No matter which token your supporter pays with, it will be converted to this one. > _💡_ **Note:** The same token can have _different_ token contract addresses per blockchain. > > For example, **ETH** has the (technically not really an) address “0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE” on the **Ethereum Network** because it is the native blockchain token. > > You would need to provide this _tons-of-E’s-address_ in case you want to receive ETH in your Ethereum wallet as destination token. > > At the same time, **ETH** has the address 0x2170ed0880ac9a755fd29b2688956bd959f933f8 **on the Binance Smart Chain**. > > Please make sure to research your destination token addresses, before you continue. > > What I personally prefer to go for is to receive USDT on Ethereum and BUSD on the Binance Smart Chain. - **The DePay base snippet** for the DePay Donation button: [Find it here](https://depay.fi/documentation/donations#donation-button). - Check the source code of my demo page if you want. --- ### 🛠️ Step 1: Build a Donation Page - I used the Bulma CSS framework to build the demo page. - Name your file **index.html** (important). - Fill it with content. 
Leave some space for the Donation button. --- ### 🛠️ Step 2: Donation Button Configuration - The HTML/JS snippet contains the configuration for the blockchains you want to support (as of writing this, BSC & Ethereum are supported — more on this below) ![Donation button source code - view-source:https://lxpzurich.github.io/](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a27w9wvvia02zfmkwl1l.png) ![The configuration attribute contains JSON — you see the prettified config. In this config, payments/donations are converted to USDT on Ethereum & to BUSD on BSCscan. Make sure to double-check all token addresses. ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2gmyepuegipu9b8fpd6.png) - Insert your receiver wallet address per blockchain. You can use the same wallet address on Ethereum and the Binance Smart Chain. - Insert the addresses of the token you want your donations to be converted to on each blockchain. As mentioned before: The same asset can have different token addresses on other blockchains. - After having your payment config ready, insert the snippet in your HTML wherever it looks amazing & save the file. --- ### 🛠️ Step 3: Set up Github Pages & Upload your page - You should have your page ready to be uploaded by now. - Create a repository for your Github Pages site. The repository should be the same as your Github handle. - Follow the steps described in Github’s official tutorial on this part of the setup: https://docs.github.com/en/pages/getting-started-with-github-pages/creating-a-github-pages-site ⭐ That’s it ⭐ push your page live & insert the link in your profile or elsewhere! --- > **Official Documentation**: [https://depay.fi/documentation/donations#introduction](https://depay.fi/documentation/donations#introduction) ----------- ## Some FAQ ### Multi-chain support? DePay currently supports: - Ethereum Network - Binance Smart Chain - (very soon): Major L2 solutions & networks We can not wait to see L2’s added to DePay. 
Concrete projects are not added to the roadmap yet, but the DePay team is in close contact with multiple teams of amazing projects. Please follow the DePay news channel on Telegram in order to be notified about relevant updates! ### What are the benefits of decentralized altcoin donations? _Financial censorship_ seems to be more present than ever. No doubt — there are always several perspectives on one and the same story. The fact is that centralization always allows for the possibility of limiting opinions, speech and activities. _Permissionlessness_ is a strong indicator of a high degree of decentralization. Most payment solutions require individuals to sign up or even apply in order to use their solution. These companies have the power to stop your payments at any time. ### Integration examples for payments? We only disclose integrators who approach us for an official partnership. Our most recent official partner is BlackEyeGalaxy (Metaverse/NFT Gaming). You can buy their token with DePay straight on their website: {% embed https://www.youtube.com/watch?v=YsbCL1yHSsk %} ### What about decentralized Web3 subscriptions? It's not live yet, but we will release Web3 Subscriptions in the next few months. Making this work will enable tons of new use cases and we also look forward to celebrating the release. ### Will there be a setup configurator or wizard? Yes, it will actually be released within the next few days! We are super excited about it as the configurator will enable literally everyone to get this working in no time. ### What if my project requires a custom integration? Just hit us up, we will always take the time to help you with any question. There are indeed custom setups for payments that require some more effort, but we were always able to provide quick support. ### Can my visitors pay or donate with mobile wallets? Yes! We integrated support for most major mobile wallets, too. ----- ## Do you like this? 
👍 **DePay believes in freedom and growth through decentralisation & Open Source.** That's why the source code of our [altcoin payment solution](https://depay.fi) is open for you: ⭐ [GitHub.com/DePayFi](https://github.com/DePayFi) If you like our solution, please implement it and share this article, our documentation or the GitHub repository with like-minded devs or communities. This is the most appreciated way to say thank you 🙏 Cheers Alex, DePay CMO
lxp
1,000,961
How to add Rive animations in Flutter?
Simple animations in Flutter are boring and obsolete, so let’s spice up the game and learn about a...
0
2022-02-25T05:18:37
https://dev.to/wolfizsolutions/how-to-add-rive-animations-in-flutter-5ib
gamedev, programming, rive, flutter
Simple animations in Flutter are boring and obsolete, so let's spice up the game and learn about a new tool that will change the whole landscape of your animations in any project. Yes, I am talking about Rive. So, let's get started! ## What is Rive? Rive, formerly known as Flare, is a real-time interactive design and animation tool by 2 Dimensions that enables us to integrate awe-inspiring animations like a pro. Rive animations are platform-independent, which means you build once and use them on multiple platforms. You can use this collaborative editor to construct motion graphics that react to different states and user inputs. Then upload your animations into Flutter apps, games, & websites with their lightweight open-source runtimes. ## Creating Animations In Rive Rive offers an online editor/studio for designing animations. You can gain access to the Rive editor by visiting the official website. The Rive community has great potential and its members always create something new. Plenty of pre-built animations are available for us to start diving into the captivating and interesting animated Rive world. You can use these as they are or modify them to fit your needs. We've experienced Rive in one of our Flutter projects, "**The Werewolves Escape**", a sliding puzzle game in which we have a character named Andrew. According to the plot, Andrew got stuck in a deadly forest surrounded by werewolves and we have to play the game to make an escape for him. In this project, we have shown Andrew's different emotions and physical movements through Rive. His facial expressions are angry, sad, depressed, fearful, and weary, but as soon as he reaches the end, they turn into those of a happy and excited person. With Rive, we also made his hand movements possible. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/di61quc32xh308jzmw7j.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bokz4npb72f9zt5ax397.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b65flade67nswgp1v5vx.png) ## **Getting Started With Rive** Getting started with Rive is straightforward. If you want to start from the base, just use the Rive console, which is made for creating things from scratch. So, firstly, let's have a look at the primary controls of the Rive console. **Bones** Bone components are used for moving your character, much like a skeleton. Just build Bones, attach them to the path, and watch the wondrous magic. **Nodes** Nodes are also one of the simple components of the console; they enable transformations, for instance rotation, scaling, and translation. You just need to select the Node and click on a specific place on your artboard to connect the node with any element. **Solo** If you need frame-by-frame animation, the Solo will be your must-have. Solos are pretty similar to Nodes, but they add the ability to quickly toggle their children's visibility. Just like a Node, click on the Solo element and put it anywhere in your artboard. Then select all the elements you want to place in the Solo and, from the animate window, select which child to show in which frame. That's all. Rive animations are effortless, robust, and maintainable, and the integration is so easy that it truly makes it possible to create a variety of intricate animations while sitting on a couch.
wolfizsolutions
1,001,322
Create a Hyperlink UI in .NET MAUI Preview 13
On Feb. 15, 2022, Microsoft released .NET MAUI Preview 13. In this preview release, .NET MAUI...
0
2022-02-25T12:39:44
https://www.syncfusion.com/blogs/post/create-a-hyperlink-ui-in-net-maui-preview-13.aspx
csharp, dotnet, maui, mobile
--- title: Create a Hyperlink UI in .NET MAUI Preview 13 published: true date: 2022-02-25 11:51:30 UTC tags: csharp, dotnet, maui, mobile canonical_url: https://www.syncfusion.com/blogs/post/create-a-hyperlink-ui-in-net-maui-preview-13.aspx cover_image: https://www.syncfusion.com/blogs/wp-content/uploads/2022/02/Create-a-Hyperlink-UI-in-.NET-MAUI-Preview-13-672x372.png --- On Feb. 15, 2022, [Microsoft released .NET MAUI Preview 13](https://devblogs.microsoft.com/dotnet/announcing-net-maui-preview-13/ "Announcing .NET MAUI Preview 13 blog"). In this preview release, .NET MAUI supports formatted text in the label control. I enjoyed working with this feature in .NET MAUI apps. In this blog, let's see how the formatted text feature helps create labels with a hyperlink UI in .NET MAUI Preview 13 apps. **Note:** Syncfusion is releasing .NET MAUI Preview updates for its controls in the middle of every month. ## Formatted Text in Labels As you know, a label is a view that displays text with or without text wrapping. With the formatted text feature, now in a single label, you can choose multiple options for each setting using different span elements. For example, you can apply separate colors to the words in a single label. This will make the label even more decorative. The span element supports the following options: - **CharacterSpacing**: Applies spacing between characters to the corresponding span of the label. - **FontAttributes**: Applies font attributes to the text in the span of the label. - **FontFamily**: Applies a font family to the text in the span of the label. - **FontSize**: Applies the font size to the text in the span. - **TextColor**: Applies color to the text of the span. - **TextTransform.Lowercase**: Transforms all uppercase characters of the text into lowercase. - **TextTransform.Uppercase**: Transforms all lowercase characters of the text into uppercase. - **TextDecorations.Underline**: Underlines the text in the span of the label. 
- **TextDecorations.Strikethrough**: Strikes through the text in the span of the label. Refer to the following code example. ```html <Label Margin="10" LineHeight="2"> <Label.FormattedText> <FormattedString> <Span Text=".NET MAUI Label with Text Formatting in Preview 13 " FontSize="20" /> <Span Text="Character Spacing - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="14" CharacterSpacing="12" /> <Span Text="Font Attributes - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="14" FontAttributes="Bold"/> <Span Text="Font Size - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="18"/> <Span Text="Font Family - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="14" FontFamily="Matura MT Script Capitals" /> <Span Text="Text Color - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="14" TextColor="Red"/> <Span Text="Lowercase - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="14" TextTransform="Lowercase"/> <Span Text="Uppercase - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="14" TextTransform="Uppercase" /> <Span Text="Strikethrough - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="14" TextDecorations="Strikethrough"/> <Span Text="Underline - " FontSize="14" TextColor="Black"/> <Span Text=" Hello World! " FontSize="14" TextDecorations="Underline" /> </FormattedString> </Label.FormattedText> </Label> ``` ![Labels—Formatted Text in .NET MAUI Preview 13](https://www.syncfusion.com/blogs/wp-content/uploads/2022/02/Labels%E2%80%94Formatted-Text-in-.NET-MAUI-Preview-13.png) <figcaption>Labels—Formatted Text in .NET MAUI Preview 13</figcaption> ## Create a Hyperlink UI Using the Formatted Text Feature of Labels I am going to use two options, **TextColor** and **TextDecorations.Underline** , to create a label with a hyperlink UI. 
### Create a reusable hyperlink class

I have created a class named **HyperlinkUI** that is derived from Span, to which I have added a bindable property named **LinkUrl**. Since Span inherits GestureElement, you can add a gesture recognizer that navigates using the **LinkUrl** property. Refer to the following code example.

```csharp
public class HyperlinkUI : Span
{
    public static readonly BindableProperty LinkUrlProperty =
        BindableProperty.Create(nameof(LinkUrl), typeof(string), typeof(HyperlinkUI), null);

    public string LinkUrl
    {
        get { return (string)GetValue(LinkUrlProperty); }
        set { SetValue(LinkUrlProperty, value); }
    }

    public HyperlinkUI()
    {
        ApplyHyperlinkAppearance();
    }

    void ApplyHyperlinkAppearance()
    {
        this.TextColor = Color.FromArgb("#0000EE");
        this.TextDecorations = TextDecorations.Underline;
    }

    void CreateNavigationCommand()
    {
        // ... Since Span inherits GestureElement, you can add a gesture recognizer
        // to navigate using LinkUrl.
    }
}
```

Now you can use this HyperlinkUI as a span element in labels. We can show the whole text or part of the text as hyperlink text. Refer to the following code example.

```html
<Label Margin="10" LineHeight="2" InputTransparent="False" TextColor="Black">
    <Label.FormattedText>
        <FormattedString>
            <Span Text="Click "/>
            <local:HyperlinkUI Text="here" LinkUrl="https://docs.microsoft.com/xamarin/"/>
            <Span Text=" to learn more about Syncfusion .NET MAUI Controls."/>
        </FormattedString>
    </Label.FormattedText>
</Label>
```

![Label with Hyperlink UI in .NET MAUI Preview 13](https://www.syncfusion.com/blogs/wp-content/uploads/2022/02/Label-with-Hyperlink-UI-in-.NET-MAUI-Preview-13-1.png)
<figcaption>Label with Hyperlink UI in .NET MAUI Preview 13</figcaption>

## Syncfusion .NET MAUI Controls Are Compatible with .NET MAUI Preview 13

Syncfusion [.NET MAUI controls](https://www.syncfusion.com/maui-controls ".NET MAUI controls") are compatible with .NET MAUI Preview 13.
You can install our control package (latest version: 19.4.53-preview) from [NuGet Gallery](https://www.nuget.org/packages?q=Syncfusion.maui "Syncfusion .NET MAUI controls in NuGet Gallery") and use it in your application.

Currently, Syncfusion offers the following controls: [Cartesian Charts](https://help.syncfusion.com/maui/cartesian-charts/getting-started ".NET MAUI Cartesian Charts documentation"), [Circular Charts](https://help.syncfusion.com/maui/circular-charts/getting-started ".NET MAUI Circular Charts documentation"), [Scheduler](https://www.syncfusion.com/maui-controls/maui-scheduler ".NET MAUI Scheduler"), [ListView](https://www.syncfusion.com/maui-controls/maui-listview ".NET MAUI ListView"), [Tab View](https://www.syncfusion.com/maui-controls/maui-tab-view ".NET MAUI Tab View"), [Radial Gauge](https://www.syncfusion.com/maui-controls/maui-radial-gauge ".NET MAUI Radial Gauge"), [Slider](https://www.syncfusion.com/maui-controls/maui-slider ".NET MAUI Slider"), [Range Slider](https://www.syncfusion.com/maui-controls/maui-range-slider ".NET MAUI Range Slider"), [Badge View](https://www.syncfusion.com/maui-controls/maui-badge-view ".NET MAUI Badge View"), and [Effects View](https://www.syncfusion.com/maui-controls/maui-effects-view ".NET MAUI Effects View").

The suite also includes file-format libraries for [Excel](https://www.syncfusion.com/excel-framework/maui/excel-library ".NET MAUI Excel Library"), [PDF](https://www.syncfusion.com/pdf-framework/maui/pdf-library ".NET MAUI PDF Library"), [Word](https://www.syncfusion.com/word-framework/maui/word-library ".NET MAUI Word Library"), and [PowerPoint](https://www.syncfusion.com/powerpoint-framework/maui/powerpoint-library ".NET MAUI PowerPoint Library") files.
Check out our [.NET MAUI controls road map](https://www.syncfusion.com/products/roadmap/maui-controls "Road Map for Essential Studio 2022 Volume 1 for .NET MAUI") for plans for our upcoming 2022 Volume 1 release and find more on our controls in their [documentation](https://help.syncfusion.com/maui/introduction/overview "Syncfusion .NET MAUI controls documentation").

## GitHub reference

For more details, refer to the [example for creating a hyperlink UI in .NET MAUI Preview 13](https://github.com/SyncfusionExamples/maui-general-samples/tree/main/FormattedText "Creating Hyperlink UI in .NET MAUI GitHub demo").

## Conclusion

I hope you enjoyed this blog, and thanks for reading! For more details, refer to the article [.NET MAUI Preview 13](https://devblogs.microsoft.com/dotnet/announcing-net-maui-preview-13/ "Announcing .NET MAUI Preview 13 blog") and check out the [.NET MAUI Preview 13 release notes](https://github.com/dotnet/maui/releases/tag/6.0.200-preview.13.2 ".NET MAUI Preview 13 release notes on GitHub"). As I said, our Syncfusion .NET MAUI controls are compatible with the Preview 13 version, so you can easily use them in your app.

If you have any feedback, special requirements, or controls that you'd like to see in our .NET MAUI suite, please let us know in the comments section below. You can also contact us through our [support forum](https://www.syncfusion.com/forums "Syncfusion support forum"), [support portal](https://support.syncfusion.com/ "Syncfusion support portal"), or [feedback portal](https://www.syncfusion.com/feedback/xamarin-forms "Syncfusion Feedback Portal"). We are always happy to assist you!
## Related blogs

- [Reuse Xamarin.Forms Custom Renderers in .NET MAUI](https://www.syncfusion.com/blogs/post/how-to-reuse-xamarin-forms-custom-renderers-in-net-maui.aspx "How to Reuse Xamarin.Forms Custom Renderers in .NET MAUI blog")
- [5 Advantages of .NET MAUI Over Xamarin](https://www.syncfusion.com/blogs/post/advantages-net-maui-over-xamarin.aspx "5 Advantages of .NET MAUI Over Xamarin blog")
- [Learn How to Use Dependency Injection in .NET MAUI](https://www.syncfusion.com/blogs/post/learn-how-to-use-dependency-injection-in-net-maui.aspx "Learn How to Use Dependency Injection in .NET MAUI blog")
- [Create Your First .NET MAUI App with Microsoft MVP Codrina Merigo [Webinar Show Notes]](https://www.syncfusion.com/blogs/post/create-your-first-net-maui-app-with-microsoft-mvp-codrina-merigo-webinar-show-notes.aspx "Create Your First .NET MAUI App with Microsoft MVP Codrina Merigo [Webinar Show Notes]")
sureshmohan
1,001,601
Build a Next.js Website in 4 Steps
Carson Gibbons is the Co-Founder &amp; CMO of Cosmic JS, an API-first Cloud-based Content Management...
0
2022-02-25T17:21:52
https://dev.to/mdmahirfaisal/build-a-nextjs-website-in-4-steps-3m4c
Carson Gibbons is the Co-Founder & CMO of Cosmic JS, an API-first cloud-based content management platform that decouples content from code, allowing devs to build slick apps and websites in any programming language they want.

In this blog I will show you how to pick up an existing website example and tailor it into your own, beautiful Next.js website for publishing content and easy updates. This React universal website is built using the Next.js framework. Code is shared between the client and server, making development a breeze. Add Cosmic JS-powered content and you're taking your website to the Next.js level.

Next.js is an amazing addition to the React open-source ecosystem. It is "a minimalistic framework for universal server-rendered React applications" that makes the process of building these types of applications much faster and easier.

If you haven't already, get started by signing up for Cosmic JS. Helpful resources are provided below to streamline your development operations.

### 1. Create a new bucket

Your bucket's name is the name of your website, project, client, or web application that you are building.

### 2. Install the Next.js app

Once you've signed up and named your bucket, you'll be prompted to start from scratch or "see some apps". For this blog I simply clicked the right button to "see some apps" so that I could begin the installation process for the Next.js website.

### 3. Deploy to web

I clicked "Deploy to Web". I can then start editing my Global Objects, Object Types, and Pages while my web application is deploying. You will receive an email confirming the deployment of your web application. If you encounter any issues during deployment, you may be routed to the Cosmic JS Troubleshooting Page.

### 4. Edit Global Objects, Object Types, and Objects

Editing is a dream come true in the Cosmic JS Dashboard. To read more about how Cosmic JS was built with editing content in mind, read Building With the Content Editor in Mind.
mdmahirfaisal
1,001,824
Postgres Connection Pooling and Proxies
One essential concept that every backend engineer should know is connection pooling. This technique...
0
2022-02-25T23:07:04
https://arctype.com/blog/connnection-pooling-postgres
technology, programming, tutorial, productivity
One essential concept that every backend engineer should know is connection pooling. This technique can improve the performance of an application by reducing the number of open connections to a database. Another related term is "proxies," which help us implement connection pools.

In this article, we'll discuss connection pooling, implementing it in Postgres, and how proxies fit in. We'll do this while also examining some platform-specific considerations.

## Challenges of managing user requests

Most applications have users constantly sending requests for different purposes and activities. Some applications might have many requests, while others might have fewer. Furthermore, the frequency and the intervals of these user requests also vary. Whenever a user sends a request, the backend database or servers need to perform several activities to open a connection, maintain it, and close it. Several resources are used to perform these functions.

![](https://arctype.com/blog/content/images/2022/02/image-17.png)

When multiple requests come in simultaneously, performing these functions for [every request becomes challenging](https://blog.crunchydata.com/blog/your-guide-to-connection-management-in-postgres). The database overhead is just too much to handle. As a result, when there are too many requests, the number of transactions per second plummets and latency swells. In essence, the time to fulfill every request increases, and the performance of the database servers becomes relatively poor. Here's where the concept of connection pooling comes into play.

## What is connection pooling?

[Connection pooling](https://www.cockroachlabs.com/blog/what-is-connection-pooling/) is the practice of keeping a pool of active connections open on the backend servers. These can be used any time a user sends a request. Instead of opening, maintaining, and closing a connection when a user sends a request, the server will assign an active connection to the user.
Once the user has completed their process, the active connection is returned to the pool. This way, the number of overhead operations drops significantly, optimizing the servers' performance and solving the problem of low transaction frequency and high latency.

## When should you use connection pooling?

The connection pooling mechanism can be used in the following scenarios:

- When the overhead of opening, maintaining, and closing connections is too much for servers to handle.
- When an application requires [JDBC or JTA connection objects](https://www.developer.com/database/understanding-jdbc-connection-pooling/).
- When connections need to be shared among multiple users for the same transaction.
- When the application doesn't manage the pooling of its own connections for operations such as creating a connection or searching for a username.

The obvious benefit of connection pooling is that it improves server performance. Interestingly, a server using pooling can operate at [speeds almost double](https://stackoverflow.blog/2020/10/14/improve-database-performance-with-connection-pooling/) those of a server that doesn't leverage this strategy.

## Connection Pooling and Postgres

Postgres is an open-source relational database management system (RDBMS) that has been actively developed by volunteers from its [community for more than 15 years](https://github.com/postgres). Used by various applications on both web and mobile platforms, Postgres has various tools and offers features such as foreign keys, SQL sub-selects, hot standby (as of 9.0), views, and triggers.

Postgres also has tools for connection pooling. There are several tools in the ecosystem:

- [pgpool](https://www.pgpool.net/mediawiki/index.php/Main_Page) provides connection pooling, load balancing, high availability, and replication abilities.
- [pgbouncer](https://www.pgbouncer.org/) is the go-to tool made for connection pooling only.
- [pgcat](https://github.com/levkk/pgcat) is pgbouncer rewritten in Rust, with support for load balancing between replicas, failover when a replica fails a health check, and sharding at the pooler level.

You can use any of these tools based on your project requirements. However, pgbouncer is the more lightweight connection pooler. There are [three main types of pooling](https://www.pgbouncer.org/features.html) supported by pgbouncer:

- **Session pooling:** This is the default method, where a connection is assigned to a client application for the lifetime of the client connection. Once the client application disconnects, the connection is added back to the pool.
- **Transaction pooling:** In this method, the connection is assigned to the client application for every transaction. Once the transaction ends, the connection is added back to the pool.
- **Statement pooling:** The connection is assigned for every statement. Once the statement is complete, the connection is released. In this method, multi-statement transactions are not supported.

Crunchy Data, [Supabase](https://arctype.com/postgres/connect/supabase-postgres), and [Heroku](https://arctype.com/postgres/setup/heroku-postgres) all offer [built-in solutions for using pgbouncer](https://supabase.com/blog/2021/04/02/supabase-pgbouncer).

## What is Yandex/Odyssey?

[Odyssey](https://github.com/yandex/odyssey/blob/master/documentation/internals.md), developed by [Yandex](https://yandex.com/), is a newer solution for pooling. While pgbouncer is a single-threaded pooler, Odyssey supports multi-core, multi-threaded processing. The tool has the following features:

### Multi-threaded processing

This feature lets you set up multiple worker threads to process connections. With single-threaded processing, only one thread of instructions and requests can be processed at a time. Here, however, many threads can be processed simultaneously.
Because of this, the processing capability of the backend databases scales by multiples.

### Evolved pooling mechanisms

Odyssey's transactional pooling method is more evolved than pgbouncer's. If the client disconnects unexpectedly in the middle of a transaction after being assigned a connection, Odyssey automatically rolls back the transaction and cancels the connection. Furthermore, Odyssey remembers the last client that owned a server connection, which means there's less overhead for the database when setting up client options for every client-to-server assignment.

Besides these two features, there are numerous others, such as full-featured [SSL/TLS](https://www.csoonline.com/article/3246212/what-is-ssl-tls-and-how-this-encryption-protocol-works.html) support, the ability to define connection pools per database and user, individual authentication for every pool, and more. These more advanced features are only possible thanks to Odyssey's architecture. Let's see how it contributes.

### Architecture

[Odyssey](https://github.com/yandex/odyssey) has an instance that manages the incoming client connection requests. Beneath it is the worker pool, with multiple worker threads that can process incoming connections. Parallel to this worker pool is the system, with routers, servers, consoles, and cron.

- The instance is the entry point for application requests and handles the initialization processes.
- The system starts the routers, cron, and console. It listens for incoming requests, and when a connection arrives, it notifies the worker pool.
- The router is responsible for the attachment and detachment of client-to-server connections and client pool queueing, among other operations.
- Each worker thread creates coroutines for clients when an incoming connection request is received on the queue and handles the complete lifecycle of the client connection. The worker pool, in turn, is responsible for managing all the threads.
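The core pooling-plus-worker-threads idea described above can be sketched in a few lines of Python. This is only a conceptual illustration, not pgbouncer's or Odyssey's actual implementation, and a hypothetical `connect()` stands in for a real database driver:

```python
# Conceptual sketch of a connection pool: connections are opened once,
# up front, and then shared by many concurrent "client" requests.
import queue
import threading

def connect():
    # Stand-in for an expensive real connection (e.g., one libpq socket).
    return object()

class ConnectionPool:
    def __init__(self, size):
        # Pre-open a fixed number of connections and park them in a queue.
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        # Blocks until a pooled connection is free -- no new connection is opened.
        return self._pool.get()

    def release(self, conn):
        # Return the connection so the next client can reuse it.
        self._pool.put(conn)

pool = ConnectionPool(size=2)
served = []

def handle_request(n):
    conn = pool.acquire()
    try:
        served.append(n)  # pretend to run a query on `conn`
    finally:
        pool.release(conn)

# Ten concurrent "clients" share only two physical connections.
threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(served))  # 10
```

All ten requests are served, yet the database only ever saw two connections; that is the overhead reduction the poolers above provide, just with far more robustness around transactions, health checks, and authentication.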
Sometimes the multi-threaded structure carries too much communication overhead, such as when there are few requests and each request is relatively short. For this case there is a single-worker mode where, instead of creating separate threads and coroutines for every worker, only one worker coroutine is created inside the system thread.

While Odyssey and pgbouncer function well as connection poolers for applications that have servers, there's another solution for implementing connection pools in serverless environments: proxy servers. Before giving you an overview of this solution, we'll need to make a detour to discuss how serverless applications function.

## Serverless applications

Not all applications require servers, since maintaining them is quite a hassle, their costs are much higher, and there's a lot of waste. As a result, some applications go serverless. In [serverless applications](https://vercel.com/docs/concepts/solutions/databases), when a user makes a request, an instance is created, the request is fulfilled within that instance, and the instance is destroyed. In essence, these instances are tiny containers that run only the code needed to fulfill the client's request, with no other communication overhead.

[Serverless environments](https://en.wikipedia.org/wiki/Serverless_computing) present certain challenges for developers who want to use tools like Odyssey or pgbouncer. Odyssey's architecture is such that every instance manages the entire connection pool. In a serverless environment, an instance is created and destroyed for each request, so multiple concurrent requests mean multiple instances, and therefore multiple connection pools with no communication between each other, since they don't all live under one server. Hence, they cannot coordinate if and when the database is overloaded.
Thus, Odyssey cannot offer its connection pooling services in a serverless environment. Here's where [Prisma Data Proxy](https://www.prisma.io/docs/concepts/components/prisma-data-platform) comes into play.

## What are proxies? How do they work?

Proxy servers are a solution where a data management platform acts as an intermediary between the client and the database. This proxy server can manage the traffic coming the database's way. In this article, we'll learn about proxies by studying [Prisma Data Proxy](https://www.prisma.io/docs/concepts/data-platform/data-proxy).

![](https://arctype.com/blog/content/images/2022/02/image-18.png)

This is a recent solution that is still in its early stages. However, it looks promising and is easy to handle, with a clean layout and a navigable UI. By using the connection string, or clicking **Create a new connection string**, you can set up the Prisma Data Proxy for serverless applications and manage the connection pools.

## Conclusion

Connection pooling is a helpful way to improve the performance of your application. This article showed what challenges server-based and serverless applications face when managing client requests and how they can be solved with Postgres tools like pgbouncer, Odyssey, and Prisma Data Proxy. Consider implementing some of these solutions in your next project!
rettx
1,002,986
Python for everyone: Mastering Python The Right Way
We all know that food, shelter and clothing are the basic needs in live and are essential for...
0
2022-02-27T09:01:08
https://dev.to/mainashem/python-for-everyone-mastering-python-the-right-way-523l
We all know that food, shelter, and clothing are basic needs in life, essential for survival. Similarly, for a developer, especially now in web3, Python is close to a basic need. With the ever-evolving advancements in technology, Python is the present and the future.

Many will argue that there are many languages on the market (which is a fact) and thus we cannot single out Python as the god of them all. This is a valid argument, but what really makes Python stand out is that it is a language for everyone and can be used almost everywhere in technology.

Why am I saying Python is a language for everyone? First, Python is very easy to learn, even for people with very little programming experience, thanks to its easy syntax. Second, Python is a multipurpose programming language used in fields such as:

- Game development
- Machine learning and artificial intelligence
- Data science and data visualization
- Web development
- Web scraping
- Desktop GUIs

and many more. Third, most organizations and employers are switching to Python, making it a very marketable language and skill.

## Mastering Python

Some people prefer to learn from documentation, while others understand better with video tutorials. Use whatever works for you, since the end goal is to master the language and make a living out of it. There are a lot of resources on the internet for you to choose from. You can also attend one of the many boot camps, which are mostly instructor-led.

How you master Python is not an easy question to answer, since we all have different approaches when learning a new skill, and how fast one masters a skill also varies. The most helpful answer one can give is a roadmap that will guide a learner as they try to master the language, like the one below:

1. Python basics
2. Data structures in Python
3. Object-oriented vs. functional programming
4. Modules and packages
5. File and exception handling
6. Important libraries

You need an IDE (integrated development environment) where you will write and run your code, after having installed the Python bundle on your PC. You can then start working with Python by writing short programs based on your roadmap and the tutorials you are using. With time you can advance to frameworks, which help you minimise the amount of code you write and also save you quite a lot of time.

One key thing that many beginners take for granted is practice. "Practice makes perfect" is not a myth; it has been proven true by professionals from all walks of life. The more you code, the more you master the skill, the more conversant you become with the language, the better you know how to avoid errors, and the more shortcuts and ways you discover to make your coding easier and more enjoyable.

If I were to restart my programming journey, Python would definitely be the first object-oriented language I would learn.
mainashem
1,003,055
Coding Interview – Converting Roman Numerals in Python
A common assignment in Python Coding Interviews is about converting numbers to Roman Numerals and...
17,096
2022-02-28T07:41:05
https://bas.codes/posts/python-roman-numerals/
python, career, beginners, algorithms
A common assignment in Python coding interviews is converting numbers to Roman numerals and vice versa. Today, we'll look at two possible implementations in Python.

## Roman Numerals

Roman numerals consist of these symbols:

| Symbol | Numerical Value |
|--------|-----------------|
| `I` | 1 |
| `V` | 5 |
| `X` | 10 |
| `L` | 50 |
| `C` | 100 |
| `D` | 500 |
| `M` | 1000 |

The numbers are constructed by combining these symbols. For example, the number `22` is represented by `XXII`: the symbol with the highest value we can use is `X` (=10), and we need it twice. Then, we are left with 2, for which we use the symbol `I` with the value of `1` twice. We end up with `XXII` for `22`.

That's mostly it, but there is one exception: symbols must not repeat more than three times. So `24` cannot be written as `XXIIII`, but must be written as `XXIV`: when there would normally be 4 repeating symbols, these are replaced by the next higher symbol with a subtraction symbol in front of it. Our `XXIV` reads like so:

- Two `X` (20)
- One `I` before one `V`, i.e. `IV` (4)

## Converting Numbers to Roman Numerals in Python

```python
def to_roman_numeral(value):
    roman_map = {  # 1
        1: "I",
        5: "V",
        10: "X",
        50: "L",
        100: "C",
        500: "D",
        1000: "M",
    }
    result = ""
    remainder = value
    for i in sorted(roman_map.keys(), reverse=True):  # 2
        if remainder > 0:
            multiplier = i
            roman_digit = roman_map[i]
            times = remainder // multiplier  # 3
            remainder = remainder % multiplier  # 4
            result += roman_digit * times  # 5
    return result
```

- `# 1`: We start with a `dict` containing a translation for each symbol (`roman_map`).
- `# 2`: We sort the numerical values in descending order and iterate over these values.
- `# 3`: Inside the loop, we use an integer division to determine how often this particular symbol needs to repeat.
- `# 4`: We calculate the remaining magnitude of our value with the modulo operation.
- `# 5`: We append that number of symbols to our `result` string.
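Here is a quick, self-contained check of the function as written so far (note that, as discussed, this basic map still happily emits four repeats, e.g. `IIII` for 4):

```python
def to_roman_numeral(value):
    # Basic map only -- no subtractive pairs yet.
    roman_map = {1: "I", 5: "V", 10: "X", 50: "L", 100: "C", 500: "D", 1000: "M"}
    result = ""
    remainder = value
    for i in sorted(roman_map.keys(), reverse=True):
        if remainder > 0:
            times = remainder // i       # how often this symbol repeats
            remainder = remainder % i    # what is still left to encode
            result += roman_map[i] * times
    return result

print(to_roman_numeral(22))  # XXII
print(to_roman_numeral(6))   # VI
print(to_roman_numeral(4))   # IIII  <- violates the no-more-than-three rule
```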
Let's see what happens if we convert the number `6`:

- First iteration: `i = 1000`
  - `6 // 1000 = 0`
- Second iteration: `i = 500`
  - `6 // 500 = 0`
- Third iteration: `i = 100`
  - `6 // 100 = 0`
- Fourth iteration: `i = 50`
  - `6 // 50 = 0`
- Fifth iteration: `i = 10`
  - `6 // 10 = 0`
- Sixth iteration: `i = 5`
  - `6 // 5 = 1`
  - `result = "V"`
  - `remainder = 1`
- Seventh iteration: `i = 1`
  - `1 // 1 = 1`
  - `result = "VI"`
  - `remainder = 0`

That looks good so far. However, this algorithm does not yet obey the rule of not repeating one symbol more than three times. If `times` is greater than `3`, we need to introduce a special case.

Let's think about that for a second. A repetition of a symbol more than three times can only occur for the symbols `I`, `X`, and `C`. This is because `VVVV`, `LLLL`, and `DDDD` (20, 200, and 2000) would be covered by our algorithm as `XX`, `CC`, and `MM` anyway. As a result, we can just attach the subtractive pairs to our map instead of introducing a condition in our code:

```python
...
roman_map = {
    1: "I",
    4: "IV",
    5: "V",
    9: "IX",
    10: "X",
    40: "XL",
    50: "L",
    90: "XC",
    100: "C",
    400: "CD",
    500: "D",
    900: "CM",
    1000: "M",
}
...
```

This map covers all cases of a symbol repeating more than three times.

## Converting Roman Numerals to Numbers in Python

```python
def from_roman_numeral(numeral):
    value_map = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    value = 0
    last_digit_value = 0
    for roman_digit in numeral[::-1]:  # 1
        digit_value = value_map[roman_digit]
        if digit_value >= last_digit_value:  # 2
            value += digit_value
            last_digit_value = digit_value
        else:  # 3
            value -= digit_value
    return value
```

- `# 1`: We iterate over the Roman numeral string *backwards*. If you're not familiar with the `[::-1]` notation, have a look at my [guide on slicing in Python](https://bas.codes/posts/python-slicing) where I cover it in detail.
- `# 2`: We check whether the digit we are currently looking at is at least as large as the digit we looked at before. If it is, we can just add the value we read from our map.
- `# 3`: If the current digit has a smaller value than the last one, we know we are dealing with a subtractive pair (the special case of not repeating a symbol more than three times). In this case we must subtract the value from our result and must not update `last_digit_value`.

## Your Coding Interview

I hope this little tutorial helped you prepare for your coding interview in Python. I will add more algorithm examples like this in the future. Stay tuned and [check my blog for updates](https://bas.surf/codinginterview) or subscribe to my newsletter! You can also [follow me on Twitter](https://twitter.com/bascodes).
bascodes
1,003,743
What is React-Redux and Why it is used?
Today we will discuss react-redux and its use in web development projects. Also, Linearloop is a...
0
2022-02-28T06:27:14
https://www.linearloop.io/blog/what-is-react-redux-and-why-it-is-used/
react, redux, webdev, beginners
---
canonical_url: https://www.linearloop.io/blog/what-is-react-redux-and-why-it-is-used/
---

Today we will discuss react-redux and its use in web development projects. Linearloop is a prominent [React JS web development company in India & USA](https://www.linearloop.io/web-development/), and we have prepared this content from our years of experience. Technology is constantly evolving, and we need to keep up with it; because of this, we always keep developers informed through our blogs. So, if you want to know what the use of redux in react is and [how redux works](https://www.linearloop.io/blog/what-is-react-redux-and-why-it-is-used/), this article will be worth your while. Moreover, we are your reliable service partner, and from here you can hire redux developers effortlessly.

## What is react-redux?

Let's elaborate on React first. Developers working with React know that it is a component-based front-end library. React helps connect distinct segments of a web page. The term "props" refers to the values a component receives; props let a parent component pass non-static variables down to its child components. With React, an attractive UI becomes straightforward: for each state of an application, it renders and updates exactly the right components as the data changes. With declarative views, the code becomes more predictable and simpler to debug.

Now, coming to redux: it is a state management tool for applications developed in JavaScript. Basically, redux is storage — you could say a cache — that all components access in an organized way, via actions and reducers. Redux acts as a store that contains the state of all the variables used in an application, and it also defines the process for interacting with the store, so components cannot read or update state at random.
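The store/action/reducer cycle just described can be sketched in a few lines. Real Redux is a JavaScript library; the code below is only a conceptual illustration of the data flow, written in Python for brevity, with all names (`Store`, `counter_reducer`) being hypothetical:

```python
# Conceptual sketch of the Redux cycle: state lives in one store,
# and the ONLY way to change it is to dispatch an action to a reducer.

def counter_reducer(state, action):
    """A reducer: a pure function (state, action) -> new state."""
    if action["type"] == "INCREMENT":
        return {**state, "count": state["count"] + action.get("by", 1)}
    if action["type"] == "DECREMENT":
        return {**state, "count": state["count"] - action.get("by", 1)}
    return state  # unknown actions leave the state unchanged

class Store:
    """A minimal store: holds state, exposes dispatch/subscribe."""
    def __init__(self, reducer, initial_state):
        self._reducer = reducer
        self._state = initial_state
        self._listeners = []

    def get_state(self):
        return self._state

    def dispatch(self, action):
        # All updates funnel through the reducer -- no random writes.
        self._state = self._reducer(self._state, action)
        for listener in self._listeners:
            listener(self._state)

    def subscribe(self, listener):
        # Components subscribe to be re-rendered when state changes.
        self._listeners.append(listener)

store = Store(counter_reducer, {"count": 0})
store.dispatch({"type": "INCREMENT", "by": 5})
store.dispatch({"type": "DECREMENT"})
print(store.get_state())  # {'count': 4}
```

Notice that components never touch `_state` directly; they dispatch actions and the reducer decides what changes, which is exactly the controlled access described above.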
You can understand it with an analogy: almost all of us have deposits in banks, but to access that money, we need to follow a protocol. The same applies here. Redux is frequently used with React but is also compatible with other technologies like Preact, Vue, [Angular](https://linearloop.io/blog/react-vs-angular-which-better-for-front-end-development/), etc. Although Redux and React are used together a lot, you should know they have no dependency on each other.

### How does redux work?

As stated above, the key role of redux revolves around the store, actions, reducers, and subscriptions.

- **Initial state:** This is where the starting data (for example, a to-do list) is placed.
- **Dispatches/Actions:** Here, we dispatch an action to the reducer.
- **Reducer:** The reducer decides what to do with the given action. It is the core of redux.

Now let's focus on what redux does for React. React applications hold various components, each potentially depending on the state of other components, and it becomes challenging to decide where to place state among these components so that maintenance stays easy. To eliminate this issue, react-redux offers a dedicated store: state lives in the store rather than scattered across components, where maintenance is quick and simple. Actions and reducers work in synchronization with the store, which further enhances the predictability of the state.

### What is the use of redux in react?

With redux, a React application becomes more interactive, innovative, and productive, and development time shrinks. Let's discuss the advantages of using redux in React:

#### 1. React-redux offers centralized state management

With the concept of the "store," redux delivers controlled and centralized state management. Redux is also strict about how code is structured.
That organized code makes maintenance and management effortless. Moreover, it helps separate the business logic from the component tree, and components across the whole application get quick access to the data: with react-redux, a component can easily obtain exactly the state it needs.

#### 2. Performance:

When a component updates, react re-renders it by default. When the component's data has not actually changed, this re-render is wasted work. The redux store skips such unnecessary re-renders and improves overall performance, ensuring a component is re-rendered only when its data actually changes.

#### 3. Long-term data storage:

In redux, data stays on the page until it is refreshed. This makes redux widely used for storing long-lived data that end-users need while navigating an application, while react state remains the best option for short-term data that changes frequently. Together, react-redux brings performance optimization and increases the overall productivity of an application.

#### 4. Fast debugging:

As a react developer, you may know that tracking the state of an application while debugging is challenging. Redux makes debugging easier: because it represents the whole state of an application in one place, it enables time-travel debugging, and it can even send an entire bug report to the server.

#### 5. Superior community support:

Redux has a large community and thus offers great support. You can easily learn the best practices of react-redux and get quick help while coding. Numerous extensions are also available that simplify code logic and, as a result, enhance performance.

### Conclusion

We have explained all the key aspects of react-redux.
Further, as your reliable partner for [redux development services in India](https://www.linearloop.io/web-development/) & USA, we are always available to answer your queries; if you have any doubts, please reach out. And if you are looking to [hire React JS developers from India](https://www.linearloop.io/web-development/), you are just a click away. Our team has successfully executed many projects using react-redux, and now it's your turn. We also have a solid presence in the USA, with an extensive range of clients there. So, if you want to [hire a React JS developer in India & USA](https://www.linearloop.io/web-development/), Linearloop is always there. Don't delay, because every second counts.
linearloophq
1,003,766
How do I keep my devs busy while waiting on code review?
I was recently involved in a discussion with a Scrum master who asked this question. “Last sprint,...
0
2022-08-13T11:41:25
https://jhall.io/archive/2022/02/28/how-do-i-keep-my-devs-busy-while-waiting-on-code-review/
agile, flow, flowengineering, scrum
--- title: How do I keep my devs busy while waiting on code review? published: true date: 2022-02-28 00:00:00 UTC tags: agile,flow,flowengineering,scrum canonical_url: https://jhall.io/archive/2022/02/28/how-do-i-keep-my-devs-busy-while-waiting-on-code-review/ --- I was recently involved in a discussion with a Scrum master who asked this question. “Last sprint, most of our stories were stuck in queue, waiting for seniors on another team to review them. How can I keep my devs busy while they’re waiting?” This type of question is extremely common in my experience. Whatever’s causing a delay, we look for ways to stay busy while waiting for delayed items. Unfortunately, this is _the wrong approach_. This is what I told the distressed Scrum master: **Don’t worry about the devs not having enough work.** If every dev is always busy, you’ll have a traffic jam. A highway at full capacity is called a parking lot. Instead, worry about the smooth flow of work through the system. If you solve the flow problems, all the devs will have enough work to do anyway. If you focus on giving them work, though, you’ll end up with even more work stuck in queue, causing more blockage, and even more problems than you’re facing right now. * * * _If you enjoyed this message, [subscribe](https://jhall.io/daily) to <u>The Daily Commit</u> to get future messages to your inbox._
jhall
1,004,145
Next.js + Tailwind CSS
Create your project Start by creating a new Next.js project if you don’t have one set up...
0
2022-02-28T14:41:10
https://dev.to/reactwindd/nextjs-tailwind-css-1ci1
javascript, nextjs, react, tailwindcss
## Create your project

Start by creating a new Next.js project if you don’t have one set up already. The most common approach is to use [Create Next App](https://nextjs.org/docs/api-reference/create-next-app).

```
# Terminal
$ npx create-next-app my-project
$ cd my-project
```

## Install Tailwind CSS

Install `tailwindcss` and its peer dependencies via npm, and then run the init command to generate both `tailwind.config.js` and `postcss.config.js`.

```
# Terminal
$ npm install -D tailwindcss postcss autoprefixer
$ npx tailwindcss init -p
```

## Configure your template paths

Add the paths to all of your template files in your `tailwind.config.js` file.

```
// tailwind.config.js
module.exports = {
  content: [
    "./pages/**/*.{js,ts,jsx,tsx}",
    "./components/**/*.{js,ts,jsx,tsx}",
  ],
  theme: {
    extend: {},
  },
  plugins: [],
}
```

## Add the Tailwind directives to your CSS

Add the `@tailwind` directives for each of Tailwind’s layers to your `./styles/globals.css` file.

```
/* globals.css */
@tailwind base;
@tailwind components;
@tailwind utilities;
```

## Start your build process

Run your build process with `npm run dev`.

```
# Terminal
$ npm run dev
```

## Start using Tailwind in your project

Start using Tailwind’s utility classes to style your content.

```
// index.js
export default function Home() {
  return (
    <h1 className="text-3xl font-bold underline">
      Hello world!
    </h1>
  )
}
```
reactwindd