| column | type | min | max |
|---|---|---|---|
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
71,282
Checking if an input is empty with JavaScript
This article shows you how to check if an input is empty with JavaScript
0
2018-12-28T22:52:43
https://zellwk.com/blog/check-empty-input-js
css, javascript
---
title: Checking if an input is empty with JavaScript
canonical_url: https://zellwk.com/blog/check-empty-input-js
tags: css, js
description: This article shows you how to check if an input is empty with JavaScript
published: true
cover_image: https://zellwk.com/images/2018/empty-input-validation-js/check.gif
---

Last week, I shared how to [check if an input is empty with CSS](https://zellwk.com/blog/check-empty-input-css "Checking if an input is empty with CSS"). Today, let's talk about the same thing, but with JavaScript. It's much simpler.

Here's what we're building:

<figure>
<img src="https://zellwk.com/images/2018/empty-input-validation-js/check.gif" alt="When input is filled, borders should turn green">
<figcaption></figcaption>
</figure>

## Events to validate the input

If you want to validate the input when a user types into the field, you can use the `input` event.

```js
const input = document.querySelector('input')
input.addEventListener('input', evt => {
  // Validate input
})
```

If you want to validate the input when a user submits a form, you can use the `submit` event. Make sure you prevent the default behavior with `preventDefault`. If you don't prevent the default behavior, browsers will navigate the user to the URL stated in the action attribute.

```js
const form = document.querySelector('form')
form.addEventListener('submit', evt => {
  evt.preventDefault()
  // Validate input
})
```

## Validating the input

We want to know whether an input is empty. For our purpose, empty means:

1. The user hasn't typed anything into the field
2. The user has typed one or more empty spaces, but not other characters

In JavaScript, the pass/fail conditions can be represented as:

```js
// Empty
''
' '
'  '

// Filled
'one-word'
'one-word '
' one-word'
' one-word '
'one phrase with whitespace'
'one phrase with whitespace '
' one phrase with whitespace'
' one phrase with whitespace '
```

Checking this is easy. We just need to use the `trim` method. `trim` removes any whitespace from the front and back of a string.

```js
const value = input.value.trim()
```

If the input is valid, you can set `data-state` to `valid`. If the input is invalid, you can set the `data-state` to `invalid`.

```js
input.addEventListener('input', evt => {
  const value = input.value.trim()
  if (value) {
    input.dataset.state = 'valid'
  } else {
    input.dataset.state = 'invalid'
  }
})
```

```css
/* Show red borders when filled, but invalid */
input[data-state="invalid"] {
  border-color: hsl(0, 76%, 50%);
}

/* Show green borders when valid */
input[data-state="valid"] {
  border-color: hsl(120, 76%, 50%);
}
```

This isn't the end yet. We have a problem. When a user enters text into the field, input validation begins. However, if the user removes all text from the field, the input remains invalid.

We don't want to invalidate the input if the user removes all text. They may need a moment to think, and the invalidated state sets off an unnecessary alarm.

<figure>
<img src="https://zellwk.com/images/2018/empty-input-validation-js/problem.gif" alt="Form becomes invalid when empty after user types into it">
<figcaption></figcaption>
</figure>

To fix this, we can check whether the user has entered any text into the input before we `trim` it.

```js
input.addEventListener('input', evt => {
  const value = input.value
  if (!value) {
    input.dataset.state = ''
    return
  }

  const trimmed = value.trim()
  if (trimmed) {
    input.dataset.state = 'valid'
  } else {
    input.dataset.state = 'invalid'
  }
})
```

Here's a Codepen for you to play with:

<p data-height="476" data-theme-id="7929" data-slug-hash="EObQpr" data-default-tab="result" data-user="zellwk" data-pen-title="Empty validation with JavaScript" class="codepen">See the Pen <a href="https://codepen.io/zellwk/pen/EObQpr/">Empty validation with JavaScript</a> by Zell Liew (<a href="https://codepen.io/zellwk">@zellwk</a>) on <a href="https://codepen.io">CodePen</a>.</p>
<script async src="https://static.codepen.io/assets/embed/ei.js"></script>

<hr>

Thanks for reading. This article was originally posted on [my blog](https://zellwk.com/blog/check-empty-input-js). Sign up for [my newsletter](https://zellwk.com) if you want more articles to help you become a better frontend developer.
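The branching logic in this post can also be pulled out into a small pure function, which makes it easy to unit-test. This is just a sketch; the name `inputState` is mine, not from the article:

```js
// Returns the data-state an input should get for a given value:
// '' (untouched), 'valid' (has non-whitespace), 'invalid' (whitespace only).
function inputState(value) {
  if (!value) return ''
  return value.trim() ? 'valid' : 'invalid'
}

console.log(inputState(''))           // → '' (untouched, no alarm)
console.log(inputState('   '))        // → 'invalid'
console.log(inputState(' one-word ')) // → 'valid'
```

The event listener then only needs to assign `input.dataset.state = inputState(input.value)`.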
zellwk
71,394
Announcing SIESGSTarena's Open Source Site
Description below
0
2018-12-30T04:17:03
https://dev.to/siesgstarena/announcing-siesgstarenas-open-source-site--4324
opensource, siesgstarena, github, studentdevelopers
---
title: Announcing SIESGSTarena's Open Source Site
published: true
tags: opensource, siesgstarena, github, studentdevelopers
description: Description below
---

<center><img src="https://thepracticaldev.s3.amazonaws.com/i/xolbfpjfj3lyr64qgyp6.png" width="100%"></center>

<a href="https://arena.siesgst.ac.in">SIESGSTarena</a>, the <a href="https://codechef.com/">CodeChef</a> SIESGST Campus Chapter, is an online platform for conducting programming competitions. It is a one-stop spot for friendly competition, built on top of a competitive programming platform, and was created to help programmers learn, share and improve themselves in the world of algorithms. This platform is built and maintained by students of <a href="http://siesgst.edu.in/">SIES Graduate School of Technology</a>.

At SIESGSTarena, we focus on improving the level of competitive programming in the college, and we engineer applications at an aggressive pace. To do so, the software and tools we develop and use must be simple to understand, production-hardened, and easily accessible. That's why we support open source software. We make use of the best the community has to offer, and contribute our best work back to the community.

Our Developer Team is excited to announce <a href="https://siesgstarena.github.io/"><b>SIESGSTarena's Open Source Site</b></a>, a page highlighting some of the best software and tools available on our <a href="https://github.com/siesgstarena">public GitHub page</a>. We have some exciting projects in the works, and they'll be highlighted here first.

We hope that SIESGSTarena's open source software sparks adoption, discussion, contributions and, ultimately, an even richer open source community.
siesgstarena
71,455
Nokogiri installation errors on macos
Quick answer: bundle config build.nokogiri --use-system-libraries &amp;&amp; bundle install...
0
2019-01-06T02:56:45
https://markentier.tech/posts/2018/12/ruby-bundler-nokogiri-macos/
nokogiri, ruby, bundler, rails
---
title: Nokogiri installation errors on macos
published: true
tags: nokogiri, ruby, bundler, rails
canonical_url: https://markentier.tech/posts/2018/12/ruby-bundler-nokogiri-macos/
cover_image: https://thepracticaldev.s3.amazonaws.com/i/uxh9hxetameb2z8d94hx.png
---

Quick answer:

```sh
bundle config build.nokogiri --use-system-libraries && bundle install
```

-----

_This was originally posted [on my blog](https://markentier.tech/posts/2018/12/ruby-bundler-nokogiri-macos/) here._

Every other year or so I want (or need) to install dependencies for a Ruby application on my Macbook, directly on the host instead of in a VM or container. Mostly it's a Rails app. And every time our most "loved" dependency bails on me: [`nokogiri`][n]. I think it always fails on a Mac. (At least once.)

Because I never go directly to the documentation of whatever refuses to work, I usually google my way to a solution. In my case this was harder than it should have been, so I'm writing it down here as a reminder to myself. The next time I google it, I hope to find my own blog post and will make the same expression at the end. Again.

## How does it fail

Try to get and build the dependencies:

```sh
# maybe a fancy Rails application
bundle install
```

And after a while …

```sh
# snippet
An error occurred while installing nokogiri (1.8.5), and Bundler cannot continue.
Make sure that `gem install nokogiri -v '1.8.5' --source 'https://rubygems.org/'`
succeeds before bundling.

In Gemfile:
  rails was resolved to 5.2.1.1, which depends on
    actioncable was resolved to 5.2.1.1, which depends on
      actionpack was resolved to 5.2.1.1, which depends on
        actionview was resolved to 5.2.1.1, which depends on
          rails-dom-testing was resolved to 2.0.3, which depends on
            nokogiri
# /snippet
```

Now if you run what is suggested …

```sh
gem install nokogiri -v '1.8.5'
```

```txt
Building native extensions. This could take a while...
ERROR:  Error installing nokogiri:
    ERROR: Failed to build gem native extension.

# ... snip ...

Undefined symbols for architecture x86_64:
  "_iconv", referenced from:
      _main in conftest-451598.o
  "_iconv_open", referenced from:
      _main in conftest-451598.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
checked program was:
/* begin */
 1: #include "ruby.h"
 2:
 3: #include <stdlib.h>
 4: #include <iconv.h>
 5:
 6: int main(void)
 7: {
 8:   iconv_t cd = iconv_open("", "");
 9:   iconv(cd, NULL, NULL, NULL, NULL);
10:   return EXIT_SUCCESS;
11: }
/* end */
```

You can also check the logs for later reference:

<div class="with-wrap">

```txt
/Users/chris/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/extensions/x86_64-darwin-17/2.5.0-static/nokogiri-1.8.5/gem_make.out
/Users/chris/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/extensions/x86_64-darwin-17/2.5.0-static/nokogiri-1.8.5/mkmf.log
```

</div>

The problem is that it cannot link against the **iconv** library currently present.

Alternatively, another library can cause trouble: `libxml2`. Then the output might look like …

```txt
Running 'compile' for libxml2 2.9.8... ERROR, review
'/Users/chris/.rbenv/versions/2.5.3/lib/ruby/gems/2.5.0/gems/nokogiri-1.8.5/ext/nokogiri/tmp/x86_64-apple-darwin17.7.0/ports/libxml2/2.9.8/compile.log'
to see what happened. Last lines are:
========================================================================
  _parseAndPrintFile in xmllint.o
  "_xmlXPathEval", referenced from:
      _doXPathQuery in xmllint.o
  "_xmlXPathFreeContext", referenced from:
      _doXPathQuery in xmllint.o
  "_xmlXPathFreeObject", referenced from:
      _doXPathQuery in xmllint.o
  "_xmlXPathIsInf", referenced from:
      _doXPathDump in xmllint.o
  "_xmlXPathIsNaN", referenced from:
      _doXPathDump in xmllint.o
  "_xmlXPathNewContext", referenced from:
      _doXPathQuery in xmllint.o
  "_xmlXPathOrderDocElems", referenced from:
      _parseAndPrintFile in xmllint.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```

## How to fix it

Well, if one pays attention, the output sometimes already tells you what could be done.

_For the libxml case:_

```txt
IMPORTANT NOTICE:

Building Nokogiri with a packaged version of libxml2-2.9.8
with the following patches applied:
    - 0001-Revert-Do-not-URI-escape-in-server-side-includes.patch
    - 0002-Fix-nullptr-deref-with-XPath-logic-ops.patch
    - 0003-Fix-infinite-loop-in-LZMA-decompression.patch

Team Nokogiri will keep on doing their best to provide security
updates in a timely manner, but if this is a concern for you and want to
use the system library instead; abort this installation process and
reinstall nokogiri as follows:

    gem install nokogiri -- --use-system-libraries
        [--with-xml2-config=/path/to/xml2-config]
        [--with-xslt-config=/path/to/xslt-config]

If you are using Bundler, tell it to use the option:

    bundle config build.nokogiri --use-system-libraries
    bundle install

Note, however, that nokogiri is not fully compatible with arbitrary
versions of libxml2 provided by OS/package vendors.
```

So, the last commands are the ones you should consider for a Bundler-based project:

```sh
bundle config build.nokogiri --use-system-libraries
bundle install
```

You can check the config:

```sh
bundle config
```

```txt
Settings are listed in order of priority. The top value will be used.

gem.test
Set for the current user (/Users/chris/.bundle/config): "rspec"

gem.mit
Set for the current user (/Users/chris/.bundle/config): true

gem.coc
Set for the current user (/Users/chris/.bundle/config): true

build.nokogiri
Set for the current user (/Users/chris/.bundle/config): "--use-system-libraries"
```

The file `~/.bundle/config` for completeness (shown as diff):

```diff
---
BUNDLE_GEM__TEST: "rspec"
BUNDLE_GEM__MIT: "true"
BUNDLE_GEM__COC: "true"
+ BUNDLE_BUILD__NOKOGIRI: "--use-system-libraries"
```

And that's it. Fixed!

## Nokogiri documentation

If I had consulted [the nokogiri documentation][ndoc], I would have gotten the hint earlier:

> Nokogiri will refuse to build against certain versions of libxml2, libxslt supplied with your operating system, and certain versions will cause mysterious problems. The compile scripts will warn you if you try to do this.

It focuses on libxml2, but the steps also help with the **libiconv** issue. 🤦

[n]: https://www.nokogiri.org/
[ndoc]: https://www.nokogiri.org/tutorials/installing_nokogiri.html#install-with-system-libraries
asaaki
72,483
Tea Break Challenge, Day #2
Our choices are the sustenance of our behavior.
0
2019-01-02T16:23:53
https://dev.to/jcutrell/tea-break-challenge-day-2-3of7
teabreakchallenge, career
---
title: Tea Break Challenge, Day #2
published: true
description: Our choices are the sustenance of our behavior.
tags: teabreakchallenge, career
---

Our choices are the sustenance of our behavior. It may not be immediately obvious, but choices are not only about what we choose to do, but perhaps even more about what we choose *not* to do. This is especially true when we have multiple good options to choose from.

Take a moment to write a list of three tempting things you are actively choosing *not* to do today.

https://www.teabreakchallenge.com/messages/2
jcutrell
72,901
14 useful tools for network engineers
It's important that a network professional know some tools which can increase his productivity and make his work easier.
0
2019-01-03T06:27:13
https://dev.to/rafaelmonteiro/14-useful-tools-for-network-engineers-2d0l
networking, informationsecurity, tools
---
title: 14 useful tools for network engineers
published: true
description: It's important that a network professional know some tools which can increase his productivity and make his work easier.
tags: networking, informationsecurity, tools
cover_image: https://thepracticaldev.s3.amazonaws.com/i/b47dre9db4fs2z6v06bb.jpg
---

It's important that network professionals know some tools which can increase their productivity and make their work easier. Although some tools - especially ARPSpoof, Nmap, TCPTraceroute and AirCrack - can be used in malicious contexts, such as performing reconnaissance and probing for weaknesses in preparation for attacks, they also have value for legitimate purposes.

**AirCrack** - Can reveal who's using the wireless network and can be used to troubleshoot issues. It's also a great tool for discovering nearby wireless networks.

**ARPSpoof** - Hackers use it to send spoofed ARP requests, trying to pair MAC and IP addresses of networked devices. But it can also be used to create man-in-the-middle monitoring of activity without having to install a device on the span port of a router, for instance.

**Cacti** - Gathers and graphs SNMP values over time, giving a picture of device utilisation.

**cURL** - Basically, this tool moves data to and from servers, and it is really useful for measuring the response time of web sites.

**Elasticsearch** - A search server that can be paired with Logstash and Kibana (ELK) to gather log data and create dashboards. Elasticsearch provides the search capabilities and Kibana renders the data to create the dashboards.

**fprobe** - Listening on specified interfaces and gathering NetFlow data about the traffic going through them is the primary purpose of fprobe. It can be used to detect undesired traffic types, such as video streaming services in a corporate environment.

**iperf** - Measures throughput, packet loss and jitter, and supports both UDP and TCP packets to determine the quality of connections between devices on a network. It can graph the data it gathers to show how network conditions vary over time.

**nfdump** - Flow information gathered by fprobe can be exported to nfdump, which stores it in a file system that it can read and use to display the data based on protocols and rank top users. One application is to discover time-of-day congestion issues, for example.

**Nmap** - A powerful tool for network, device and service discovery, useful for network scanning and for performing security audits. It can scan for specific ports and determine, for example, whether they are open or not. It can scan devices by subnet and deliver valuable information such as what type of traffic devices are putting out. In addition to discovering what devices are on the network, Nmap can scan for services that are active and perform pointer-record lookups and reverse DNS lookups, which may help identify what kind of device it has found.

**OpenNMS** - Monitors devices and services, issues alerts when they go down, and can produce availability reports on devices.

**Smokeping** - Measures latency and packet loss that can be analysed over time to reveal changes in latency, useful for troubleshooting or network planning. It does this by firing off ping packets at regular intervals and recording the response times. Spikes that show up on graphs of the gathered data indicate when response-time troubles arise and can help narrow down investigations into their causes.

**Snort** - An intrusion detection (IDS) tool that can live-monitor networks, but it can also apply rules to a set of captured trace files. It can be paired with logging tools like Elasticsearch and Logstash, and the gathered data can be analysed, with rules set to look for specific conditions and send alerts.

**TCPTraceroute** - Traces paths through networks using TCP rather than ICMP. It's good for finding what's blocking traffic in transit, such as firewalls configured to block the ports the traffic needs to use.

**Wireshark** - Captures and analyses packets to find malformed frames, mis-ordered packets and the like. Users can write capture rules for only certain protocols, such as wireless, TCP or HTTP, to troubleshoot slow server response times, for example.

---

Originally published at [rafaelmonteiro.github.io](http://rafaelmonteiro.github.io/).
rafaelmonteiro
73,520
FlatBuffers (in Rust)
Following up on the last post closing the chapter on Apache Thrift, we’re looking at another...
0
2019-01-07T06:57:46
https://dev.to/jeikabu/flatbuffers-in-rust-59f9
rust, flatbuffers, showdev
---
title: FlatBuffers (in Rust)
published: true
tags: rust,flatbuffers,showdev
canonical_url:
---

Following up on the [last post closing the chapter on Apache Thrift](./thrift-012-1mi9), we're looking at another serialization library, [FlatBuffers](https://google.github.io/flatbuffers/).

## FlatBuffers

A lesson learned from using Thrift was that we wanted performant, schema'd serialization and robust messaging, but _not_ tightly coupled together. We ended up using the message framing and other features, but [using ZeroMQ as a transport](./apache-thrift-over-zeromq-66e) and implementing message identification for pub/sub (as opposed to Thrift's request/reply model).

Other strikes against Thrift we didn't discover until later. The C++ client has a [boost](https://www.boost.org/) dependency and requires both exceptions and [RTTI](https://en.wikipedia.org/wiki/Run-time_type_information). Boost you either love or hate, but the last two are basically verboten in the video game industry. Case in point, getting our [SDK working](https://github.com/subor/sample_ue4_platformer) in [UE4](https://www.unrealengine.com/) was [a hassle](https://github.com/subor/sdk/blob/master/docs/topics/ue4.md).

Potentially better performance and reduced memory churn drove us to evaluate options like [Cap'n Proto](https://capnproto.org/), and we're leaning towards [FlatBuffers](https://google.github.io/flatbuffers/).

Notes:

- FlatBuffers is backed by Google
- Cap'n Proto's [C# support is dubious](https://capnproto.org/otherlang.html) at best
- Cap'n Proto likely has an edge in performance ([[0]](https://github.com/thekvs/cpp-serializers), [[1]](https://github.com/felixguendling/cpp-serialization-benchmark))
- Both have first-class support for [Rust](https://www.rust-lang.org/) ([FlatBuffers](https://github.com/google/flatbuffers#supported-programming-languages), [Cap'n Proto](https://github.com/capnproto/capnproto-rust))
- [Unity3D](https://unity3d.com/) support: FlatBuffers' C# library [targets .NET 3.5](https://github.com/google/flatbuffers/blob/master/net/FlatBuffers/FlatBuffers.csproj#L12) and ["just works"](http://exiin.com/blog/flatbuffers-for-unity-sample-code/). There's a [dormant GitHub project](https://github.com/ThomasBrixLarsen/capnproto-dotnet) for Cap'n Proto.
- FlatBuffers has better adoption in the video game industry ([google](https://developers.google.com/games/#Tools), [cocos2d-x](https://cocos2d-x.org/reference/native-cpp/V3.5/d7/d2d/namespaceflatbuffers.html))

The last two are significant deciding factors for us. There's a better chance a developer is already using FlatBuffers, and that's one less dependency our SDK will introduce.

## FlatBuffers in Rust

[Documentation](https://docs.rs/flatbuffers/0.5.0/flatbuffers/) is a bit light, although there is a ["Use in Rust" guide](https://google.github.io/flatbuffers/flatbuffers_guide_use_rust.html) in the FlatBuffers documentation. Instead of following the directions (I know, I know) and using `HEAD`, we're using the [latest stable release, 1.10.0](https://github.com/google/flatbuffers/releases).

FlatBuffers schema `protos/bench.fbs`:

```
namespace bench;

table Basic {
    id:ulong;
}

table Complex {
    name:string (required);
    basic:bench.Basic (required);
    reference:string (required);
}
```

Run `flatc --rust -o src protos/bench.fbs` to generate Rust source `src/bench_generated.rs` containing two structs: `bench::Basic` and `bench::Complex`.

We can then serialize/deserialize the `bench::Basic` struct:

```rust
use flatbuffers::{FlatBufferBuilder, WIPOffset};
use crate::bench_generated as bench_fbs;

#[test]
fn it_deserializes() {
    const ID: u64 = 12;
    let mut builder = FlatBufferBuilder::new();
    // bench::Basic values that will be serialized
    let basic_args = bench_fbs::bench::BasicArgs { id: ID, ..Default::default() };
    // Serialize bench::Basic struct
    let basic: WIPOffset<_> = bench_fbs::bench::Basic::create(&mut builder, &basic_args);
    // Must "finish" the builder before calling `finished_data()`
    builder.finish_minimal(basic);
    // Deserialize the bench::Basic
    let parsed = flatbuffers::get_root::<bench_fbs::bench::Basic>(builder.finished_data());
    assert_eq!(parsed.id(), ID);
}
```

Notes:

- `..Default::default()` isn't needed here, but shows [`impl Default`](https://doc.rust-lang.org/std/default/trait.Default.html) is also generated
- `create()` serializes the struct and returns a `WIPOffset<bench::Basic>`, which is an offset to the root of the data
- The documents deserialize with `get_root_as_XXX()` functions, which aren't generated by flatc 1.10 (need HEAD?) [but appear to be](https://github.com/google/flatbuffers/issues/5000) wrappers around `get_root()`

For the `bench::Complex` struct:

```rust
use crate::bench_generated as bench_fbs;

#[test]
fn it_deserializes() {
    const ID: u64 = 12;
    const NAME: &str = "name";
    const REFERENCE: &str = "reference";
    let mut builder = flatbuffers::FlatBufferBuilder::new();
    {
        let args = bench_fbs::bench::BasicArgs { id: ID };
        let basic = Some(bench_fbs::bench::Basic::create(&mut builder, &args));
        let name = Some(builder.create_string(NAME));
        let reference = Some(builder.create_string(REFERENCE));
        let args = bench_fbs::bench::ComplexArgs { basic, name, reference };
        let complex = bench_fbs::bench::Complex::create(&mut builder, &args);
        builder.finish_minimal(complex);
    }
    let parsed = flatbuffers::get_root::<bench_fbs::bench::Complex>(builder.finished_data());
    assert_eq!(parsed.basic().id(), ID);
    assert_eq!(parsed.name(), NAME);
    assert_eq!(parsed.reference(), REFERENCE);
}
```

Notes:

- Needing to manually serialize each member of `bench::Complex` is cumbersome and error-prone. There doesn't seem to be a way to handle it automatically…

## Benchmarks

The above schema is from the [proto\_benchmarks](https://github.com/ChrisMacNaughton/proto_benchmarks) repository, which compares protobuf with Cap'n Proto. I stumbled upon it and swooned over the pretty [criterion](https://github.com/bheisler/criterion.rs) plots. [I forked it](https://github.com/jeikabu/proto_benchmarks) to add FlatBuffers and migrate to [Rust 2018](https://blog.rust-lang.org/2018/07/27/what-is-rust-2018.html).

The FlatBuffers schema is actually auto-magically generated from the protobuf schema:

```protobuf
syntax = "proto2";
option optimize_for = SPEED;

package bench;

message Basic {
    required uint64 id = 1;
}

message Complex {
    required string name = 1;
    required Basic basic = 2;
    required string reference = 3;
}
```

In `build.rs`:

```rust
// Convert protobuf .proto to FlatBuffers .fbs
std::process::Command::new("flatc")
    .args(&["--proto", "-o", "protos", "protos/bench.proto"])
    .spawn()
    .expect("flatc");

// Generate rust source
std::process::Command::new("flatc")
    .args(&["--rust", "-o", "src", "protos/bench.fbs"])
    .spawn()
    .expect("flatc");
```

We use [`std::process::Command`](https://doc.rust-lang.org/std/process/struct.Command.html) to execute `flatc`: first to convert the protobuf schema into FlatBuffers, then to output Rust source.

### Results

I'm not entirely sure I believe the results. Not knowing much about any of the three libraries, we're likely comparing slightly different things.

Basic write:

![](https://rendered-obsolete.github.io/assets/flatbuffers_rust_basic_write.png)

Complex build:

![](https://rendered-obsolete.github.io/assets/flatbuffers_rust_complex_build.png)

Also found [rust-serialization-benchmarks](https://github.com/erickt/rust-serialization-benchmarks), which hasn't been updated in 3 years and seems to use [Bencher](https://doc.rust-lang.org/1.1.0/test/struct.Bencher.html) for benchmarking.
jeikabu
73,702
Public API's
public api
0
2019-01-08T05:36:32
https://dev.to/sicksand/public-apis-16dh
api, android
---
title: Public API's
published: true
description: public api
tags: api, android
---

Yesterday, from my Telegram channel, I found a website that lists all public APIs. I'm a sucker for APIs. I try to make at least one app (on Android) from the lists.

One that caught my attention is Advice Slip (http://api.adviceslip.com/). It returns simple JSON:

```json
{
  "slip": {
    "advice": "You can fail at what you don't want. So you might as well take a chance on doing what you love.",
    "slip_id": "184"
  }
}
```

I want to use the Unsplash API (https://unsplash.com) as the background, and the text on top of the image will be the advice from Advice Slip.

So here is the site: https://public-apis.xyz/ for you developers out there to make some MVPs. Happy coding.
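As a sketch of how a response in that shape could be consumed once fetched — the `extractAdvice` helper is my own made-up name, and the sample body mirrors the JSON sample in the post:

```js
// Parse an Advice Slip-style response body and pull out the advice text.
function extractAdvice(json) {
  const { slip } = JSON.parse(json)
  return { id: slip.slip_id, advice: slip.advice }
}

const body = '{"slip": {"advice": "Take a chance on doing what you love.", "slip_id": "184"}}'
console.log(extractAdvice(body).advice) // → 'Take a chance on doing what you love.'
```

In an Android WebView or a Node prototype, the same function would sit behind whatever HTTP client fetches the slip.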
sicksand
73,984
Snippet or not indexed posts
Sometimes I just want to add quick notes but it's not qualify as proper post. I...
0
2019-01-09T10:54:47
https://dev.to/k4ml/snippet-or-not-indexed-posts-1dil
meta
Sometimes I just want to add quick notes that don't qualify as a proper post. It's something similar to coderwall.com. Maybe we can add another flag like `indexed: false` so the post will not appear in any feed. It would still be public with a proper URL, or maybe you could view them all at a specific URL like dev.to/snippets/, and people could still react or comment on the post. It's just to reduce some clutter on the main feeds from half-finished notes.
k4ml
74,122
Fancy Function Parameters of JS
The code is from this post. It makes the params more clear. function renderList(list, { ordere...
0
2019-01-10T01:00:41
https://dev.to/chenge/fancy-function-parameters-of-js-52b6
javascript, clean
---
title: Fancy Function Parameters of JS
published: true
description:
tags: js, clean
---

The code is from [this post](https://www.javascriptjanuary.com/blog/fancy-function-parameters). It makes the params more clear.

```js
function renderList(list, {
  ordered = true,
  color = '#1e2a2d',
  bgColor = 'transparent'
} = {}) {
  /* ... */
}

// simple use
renderList(['love', 'patience', 'pain'])

// with all arguments
renderList(['one', 'two'], {
  ordered: true,
  color: '#c0ffee',
  bgColor: '#5ad'
})

// with only one optional argument (bgColor)
renderList(['one', 'two'], { bgColor: '#5ad' })
```
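To make the fallback behaviour of that pattern visible, the same destructuring-with-defaults signature can simply return the options it resolved. This is a sketch; `resolveOptions` is a made-up helper, not from the linked post:

```js
// Same destructured-object-with-defaults pattern as renderList,
// but returning the resolved options so the defaults are observable.
function resolveOptions({
  ordered = true,
  color = '#1e2a2d',
  bgColor = 'transparent'
} = {}) {
  return { ordered, color, bgColor }
}

console.log(resolveOptions())
// → { ordered: true, color: '#1e2a2d', bgColor: 'transparent' }

console.log(resolveOptions({ bgColor: '#5ad' }))
// → { ordered: true, color: '#1e2a2d', bgColor: '#5ad' }
```

Note the trailing `= {}`: without it, calling the function with no second argument would try to destructure `undefined` and throw.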
chenge
74,142
Web Page Usability Matters
Users appreciate pages being usable and interactive soon after they're visually ready. UI interactions (scrolls, taps, clicks) can be delayed by script and other browser work so minimizing their impact can really help your users.
0
2019-01-10T17:44:23
https://addyosmani.com/blog/usability/
webperf, lighthouse, webdev, timetointeractive
--- title: "Web Page Usability Matters" published: true description: Users appreciate pages being usable and interactive soon after they're visually ready. UI interactions (scrolls, taps, clicks) can be delayed by script and other browser work so minimizing their impact can really help your users. tags: - web performance - lighthouse - webdev - time to interactive canonical_url: https://addyosmani.com/blog/usability/ --- **Users appreciate pages being usable and interactive soon after they're visually ready. UI interactions (scrolls, taps, clicks) can be delayed by script and other browser work so minimizing their impact can really help your users.** You may have heard there isn't a single metric that fully captures the "loading experience" of a web page. Loading a page is a progressive journey with four key moments to it: is it happening? Is it useful? is it usable? and is it delightful?. ![When did the user feel they could interact? When could they interact? Speed metrics illustrate First Paint, First Contentful Paint, Time to Interactive for a page](https://thepracticaldev.s3.amazonaws.com/i/9enrd4e7p4slrlgns6t1.png) In terms of measurable metrics, these moments breakdown as follows: * **Is it happening?**: Has the navigation started successfully? has the server started responding? Metric: [First Paint](https://www.w3.org/TR/paint-timing/) * **Is it useful?**: when you’ve painted text, an image or content that allows the user to derive value from the experience and engage with it. Metrics: [First Contentful Paint](https://developers.google.com/web/tools/lighthouse/audits/first-contentful-paint), [First Meaningful Paint](https://developers.google.com/web/tools/lighthouse/audits/first-meaningful-paint), [Speed Index](https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index) (lab) * **Is it usable?**: when a user can start meaningfully interacting with the experience and have something happen (e.g tapping on a button). 
This can be critical as users can get disappointed if they try using UI that looks ready but isn't. Metrics: [Time to Interactive](https://developers.google.com/web/tools/lighthouse/audits/time-to-interactive) (lab), [First CPU Idle](https://developers.google.com/web/tools/lighthouse/audits/first-cpu-idle), [First Input Delay](https://developers.google.com/web/updates/2018/05/first-input-delay) (field) * **Is it delightful?**: delightfulness is about ensuring performance of the user experience remains consistent after page load. Can you smoothly scroll without janking? are animations smooth and running at 60fps? do other [Long Tasks](https://calendar.perfplanet.com/2017/tracking-cpu-with-long-tasks-api/) block any of these from happening?. <img alt="Time to Interactive as highlighted by Chrome DevTools and Lighthouse. The user sees content early on but the page is not interactive until much later." src="https://thepracticaldev.s3.amazonaws.com/i/2g63ozqm2x3wyce97c8a.jpg"> Page usability matters. Single-page Apps are unlikely to be usable (able to quickly respond to user input) if they haven't completed loading the JavaScript needed to attach event handlers for the experience or are hogging the browser's main thread (similar to above). This is a reason why monitoring usability metrics can be important. ## Do users care about page usability? In 2018, Akamai conducted a [UX study](https://speakerdeck.com/bluesmoon/ux-and-performance-metrics-that-matter-a062d37f-e6c7-4b8a-8399-472ec76bb75e) into the impact of interactivity on [Rage Clicks](https://blog.fullstory.com/rage-clicks-turn-analytics-into-actionable-insights/) using [mPulse](https://www.akamai.com/us/en/products/web-performance/mpulse-real-user-monitoring.jsp). Rage Clicks happen when users rapid-click (or tap) on a site out of frustration. 
Akamai discovered that **Rage Click likelihood depends on the latency of page usability**:

* Rage Clicks are consistent if a user's first interaction is before the page becomes interactive (before interactive or `onload`). This may be because event handlers are hogging CPU.
* In 30%+ of cases, pages were interactive after `onload` fired and in 15%, users tried interacting sometime between `onload` and interactive.

<img alt="Rage Clicks by first interaction and visually ready. There is a clear correlation." src="https://thepracticaldev.s3.amazonaws.com/i/6ok1usi72wmur9ssgqjy.jpg">

**What is the optimum time to reduce Rage Clicking?**

<img alt="What is the optimum time to reduce Rage Clicking? Pages can be visually complete. Users expect they can interact soon after (1.3x). There is the potential for a rage-click soon after." src="https://thepracticaldev.s3.amazonaws.com/i/vxcakzxga0zm55znq8rt.jpg">

Akamai observed:

* The majority of users who Rage Click attempt to interact with a page between 1.25 and 1.5x the visually ready time.
* They suggested making sure your page is interactive and loaded before 1.3x the visually ready time.

For more on this study, see [UX and performance metrics that matter](https://speakerdeck.com/bluesmoon/ux-and-performance-metrics-that-matter-a062d37f-e6c7-4b8a-8399-472ec76bb75e) by Philip Tellis.

**Time to Interactive can have a high correlation with overall conversion rate**

In a [2017 study](https://www.slideshare.net/nicjansma/reliably-measuring-responsiveness), Akamai and Chrome found that across three real-world sites (in retail, travel and gaming) Time to Interactive had a high correlation with overall conversion rate. Conversions can be tapping a button to complete a purchase flow or any number of other types of responses to an interaction.
<img src="https://thepracticaldev.s3.amazonaws.com/i/gy6g572iop59oigb86m8.png" alt="Time to Interactive impact on conversion rates">

They discovered:

* Long Tasks directly delayed Time to Interactive.
* As first-page Long Task time increased, overall conversion rates decreased.
* Mobile devices could see a 12x Long Task time vs Desktop.
* Older devices could be spending half of their load-time on Long Tasks.

Note: This is obviously a small sample size and every site can be different.

## The phases of loading in more detail

### Is it happening? Is it useful?

When the user gets timely feedback that "It is happening" they feel much better, and they perceive the site as faster. At the same time, you don't want to render a page that looks "useful" but which users cannot interact with because it isn't yet ready. This would leave them feeling it's not [usable](https://www.slideshare.net/fwdays/improve-your-web-application-using-progressive-web-metrics).

<img src="https://thepracticaldev.s3.amazonaws.com/i/9vhm0ypfgg1g5qobo7x5.jpg" alt="Perceptual load timings. The timestamp of a shipped frame that contains any of these - first paint of text, first paint of SVG" class="lazyload">

#### First Paint

First Paint marks when the browser can render anything visually different from what was displayed before navigation. It confirms that rendering has started. This can be an important metric, as the duration of ‘blank screen’ is probably the most significant indicator of page abandonment.

#### First Contentful Paint

First Contentful Paint is when the browser renders the first content from the DOM - this could be article text, an image or SVG. The hope is this paint communicates that a navigation successfully began.

### Is it usable?

As I've covered in [The Cost Of JavaScript](https://medium.com/@addyosmani/the-cost-of-javascript-in-2018-7d8950fbb5d4), the web has a problem.
Many sites are optimizing for content visibility but ignoring interactivity, as the JavaScript needed for interactivity takes time to be processed. This means large numbers of very popular sites have [multi-second](https://httparchive.org/reports/loading-speed#ttci) delays between having painted something useful and being "usable" or interactive. During this time, the web feels slow and unresponsive.

#### Time to Interactive and First Input Delay

![First Contentful Paint - FCP - when the main thread is idle, styles are loaded and browser can paint content. First Interactive - when the browser can respond to the first user input](https://thepracticaldev.s3.amazonaws.com/i/1h5f9t29ggc0mh5v2vqy.png)

[Time to Interactive](https://github.com/WICG/time-to-interactive) (TTI) is a metric that measures how long it takes for a web page to become interactive. This is defined as a point when:

* The page has displayed useful content
* Event handlers are registered for most visible page elements
* When a user interacts with the page, the page consistently responds within 50ms - the user does not experience jank.

When a page has a great TTI, a user can tap around the interface with high confidence it will respond to input. This strictly meets the Idle guideline of the [RAIL performance model](https://developers.google.com/web/fundamentals/performance/rail): the page yields control back to the main thread at least once every 50ms. The network is also idle - specifically, there are no more than two open network requests remaining.

First Input Delay (FID) is TTI's complementary metric in the field - it measures the time from a user first interacting with a page (e.g. tapping a button) to when the browser can actually respond to the interaction.

## Optimizing user-centric metrics

A focus on optimizing user-centric metrics will ultimately improve the user experience. If you would like to reduce Time to Interactive or First Input Delay:

* Do less work
* Split up large JavaScript bundles using [code-splitting](https://web.dev/fast/reduce-javascript-payloads-with-code-splitting)
* Split up long tasks. Consider moving intensive work off the main thread to workers.
* Postpone non-critical work until after page load

To reduce First Paint and First Contentful Paint:

* Remove render-blocking scripts from `<head>`
* Identify the [critical CSS](https://www.smashingmagazine.com/2015/08/understanding-critical-css/) needed and inline it in `<head>`
* App Shell pattern - improve perceived performance by rendering the UI skeleton early

## Monitoring metrics

Performance tools like [PageSpeed Insights](https://developers.google.com/speed/pagespeed/insights/), [WPT](https://webpagetest.org/easy) and [Lighthouse](https://developers.google.com/web/tools/lighthouse/) capture user-centric loading metrics:

<img src="https://thepracticaldev.s3.amazonaws.com/i/mt5uxwewjkgy3bntc8av.png" alt="First Contentful Paint, Speed Index, Time to Interactive, First Meaningful Paint, First CPU Idle in Lighthouse">

For CLI users, they are also available via [pwmetrics](https://github.com/paulirish/pwmetrics) by Paul Irish and Artem Denysov.

For monitoring metrics in the field (via RUM), I recommend the [Paint Timing API](https://developer.mozilla.org/en-US/docs/Web/API/PerformancePaintTiming) which provides First Paint and First Contentful Paint. A polyfill for [First Input Delay](https://github.com/GoogleChromeLabs/first-input-delay) is also available. Paired with an analytics service, these enable logging progressive web metrics for your real users.
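As a rough sketch of what that field logging can look like (the `formatPaintEntry` helper is my own placeholder, and the `console.log` stands in for a beacon to your analytics service):

```javascript
// Hedged sketch: observe Paint Timing entries and log them.
function formatPaintEntry(entry) {
  // Paint entries expose a name ('first-paint' or 'first-contentful-paint')
  // and a startTime in milliseconds relative to navigation start.
  return `${entry.name}: ${Math.round(entry.startTime)}ms`;
}

if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Replace console.log with a beacon to your analytics service.
      console.log(formatPaintEntry(entry));
    }
  });
  observer.observe({ entryTypes: ['paint'] });
}
```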
The [Chrome User Experience Report](https://developers.google.com/web/tools/chrome-user-experience-report/) ([also in PageSpeed Insights](https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fchrome.com)) provides access to some of these metrics (like First Contentful Paint and First Input Delay) for your real users using Chrome:

<img alt="Chrome User Experience Report distributions for metrics" src="https://thepracticaldev.s3.amazonaws.com/i/2zyvauug3fdk6kqudgi6.png">

I also like [SpeedCurve](https://speedcurve.com) or [Calibre](https://calibreapp.com) for tracking metrics like FCP and TTI over time. Both let me set performance budgets for these metrics, which can help detect regressions:

<img src="https://thepracticaldev.s3.amazonaws.com/i/4mw8beesk2j3lc3qna4e.png" alt="Speedcurve metrics tracking">

## Debugging metrics

Chrome DevTools now highlights performance metrics in the Performance panel. These can be found under Timings:

<img alt="DevTools Metrics in performance panel" src="https://thepracticaldev.s3.amazonaws.com/i/hb2e04oc9ej4h8b4gqhn.png">

In addition to the other surfaces where these metrics are exposed, this can be valuable when attempting to improve times in your iteration workflow.
## References and learn more

* [The Latest in Metrics & Measurement - Paul Irish](https://www.youtube.com/watch?v=XvZ7-Uh0R4Q)
* [UX and performance metrics that matter - Philip Tellis](https://speakerdeck.com/bluesmoon/ux-and-performance-metrics-that-matter-a062d37f-e6c7-4b8a-8399-472ec76bb75e)
* [User-centric Performance Metrics - Philip Walton](https://developers.google.com/web/fundamentals/performance/user-centric-performance-metrics)
* [Reliably Measuring Responsiveness In The Wild - Shubhie Panicker and Nick Jansma](https://www.slideshare.net/nicjansma/reliably-measuring-responsiveness)
* [Speed Tooling - State Of the Union: Elizabeth Sweeny and Paul Irish](https://www.youtube.com/watch?v=ymxs8OSXiUA)
* [Focusing on the human-centric metrics - Radimir Bitsov](https://calibreapp.com/blog/time-to-interactive/)
* [Improve your web application using Progressive Web Metrics - Artem Denysov](https://www.slideshare.net/fwdays/improve-your-web-application-using-progressive-web-metrics)
* [Why Web Developers Need to Care about Interactivity - Philip Walton](https://philipwalton.com/articles/why-web-developers-need-to-care-about-interactivity/)
* [First Input Delay - Philip Walton](https://developers.google.com/web/updates/2018/05/first-input-delay)
* [First Input Delay - State Of The Web - Rick Viscomi](https://www.youtube.com/watch?v=ULU-4-ApcjM)
* [The Guide to Understanding Frustration Online](https://www.fullstory.com/resources/guide-to-understanding-frustrating-user-experiences-online/)
* [First Input Delay: Correlation with TTI - Tim Dresser](https://docs.google.com/document/d/1g0J5JIcICFyXYM6ifBF6vNd_qUPIfpf6YAfFHE1TYRE/edit#heading=h.c79uz11ezek4)
* [An Event Timing API - Tim Dresser and Nicolás Peña Moreno](https://wicg.github.io/event-timing/)
addyosmani
74,398
Two Ways To Get Synergy Out Of An iMac + Macbook Combo
How to use an unloved iMac as both a second screen and a TimeMachine server
0
2019-01-11T16:47:01
https://dev.to/bhison/two-ways-to-get-synergy-out-of-an-imac--macbook-combo-1f02
mac, hardware, battlestations
---
title: Two Ways To Get Synergy Out Of An iMac + Macbook Combo
published: true
description: How to use an unloved iMac as both a second screen and a TimeMachine server
tags: mac, hardware, battlestations
---

![That burrito is my pencilcase, but that's another story](https://thepracticaldev.s3.amazonaws.com/i/eq5bwewmlr2vsedfgdgv.JPG)

Let's start with the elephant in the room - an iMac plus a Macbook is decadent, over the top and somewhat unnecessary, I agree! I think the intention was for me to use the iMac in the studio and the laptop everywhere else, but I kind of got tired of reinstalling whatever latest package I'm using on my iMac once I come back from an evening's hack session. Either way, I have these devices through whatever means and actually, after some time, I kind of like them as a setup.

The question I asked myself upon being given these to work with was - where's the famed Apple interoperability? Surely requiring something mobile and something powerful is a classic mix? Actually, there's very little to reward someone for regularly using two different Macs. I dream of the day of MacOS roaming profiles - not just your desktop being mirrored between devices (which kind of works with iCloud), but all your applications and your application settings and all your files, sites, themes...

So whilst there's some way to go in reaching Apple's own standard of magical user experience, you can, depending on your precise models, do the following:

### 1. Use your iMac as an external monitor with Target Display Mode

* Compatible with late 2009 - mid 2014 iMacs

So, I had this massive screen on a machine I didn't have much use for, and a tiny screen on a machine I used all of the time. Also, I'd seen my brother use his iMac to play Xbox on a few years back so I knew something along these lines was possible. I was delighted to see that the iMac I had been passed custody of was old enough to be included within the compatibility list.
For some reason, retina iMacs do not have Target Display Mode as an option. I read somewhere this was because there was an issue scaling up lower resolution outputs. Unfortunately, it's just another example of Apple slowly peeling away interesting uses of their products.

But you're lucky and have a compatible device! So how do we do it? First, you need a mini displayport cable. And *no* I don't mean a thunderbolt cable and *yes* I spent a lot of time trying to get this to work before I realised these might look the same but are functionally different for some reason. Shove this cable into your iMac's display port and your Macbook's display port and hit `Cmd + F2` on your iMac. Note, newer devices can use their thunderbolt ports for display whereas older devices have dedicated ports with a little screen symbol. Also note, this also works with things other than Macbooks - if you have an HDMI to mini displayport adapter you can use your iMac like a TV with audio and everything.

If you have any issues, view [more details here](https://support.apple.com/en-us/HT204592).

### 2. Use your iMac as a Time Machine server

* *Doesn't* work on El Capitan, *does* work on High Sierra, *might* work on Sierra - I haven't checked!

On to the other comparable difference between the two machines - one sits on my studio desk all of the time, the other sits in my bag, sits on my dinner table and is generally out in the wild a lot more. My iMac is safe whilst my Macbook is not. Thankfully, there's a way to leverage this. By setting your Macbook to backup wirelessly to your iMac whenever they're on the same network you know you'll have all your files safely stored should you pass out drunk on a train and get robbed again.

To set this up is relatively straightforward once you know it's a possibility.
It just requires setting up your iMac so it can be used as a Time Machine target [[article]](https://support.apple.com/en-gb/HT202784#mac) and then selecting that machine as your target when setting up Time Machine [[article]](https://support.apple.com/en-gb/HT201250).

In reality, the speed at which data transfers wirelessly will mean it takes quite a while to do the initial backup, so I used an ethernet cable for this. Thankfully the way Time Machine works is kind of like git, in that it tracks actual changes and doesn't store a whole disk image for each backup. This is actually the first time I've used Time Machine seriously and I have to say I'm impressed. And it feels good to finally be properly backing up everything I do on a daily basis.

Have any issues setting either of these up? Or do you have any other clever ways to use this overpriced, luxury combo? Let me know in the comments.
bhison
74,428
Make Your Specs Faster With Sleep Study
Sleep Study is an RSpec formatter that shows you where your specs are blocking on `sleep` statements in your code.
0
2019-01-11T21:50:01
https://dev.to/joeyschoblaska/make-your-specs-faster-with-sleep-study-1ff
ruby, rails, testing
---
title: Make Your Specs Faster With Sleep Study
published: true
description: Sleep Study is an RSpec formatter that shows you where your specs are blocking on `sleep` statements in your code.
tags: ruby, rails, testing
---

There are a lot of reasons why test suites for large applications become slow. One of them - errant `sleep` statements in your code - is easy to fix and, with the `rspec-sleep_study` gem, easy to find.

Any time you find a spec that takes a nice, round number of seconds to complete, you should immediately be suspicious. Especially when the code under test integrates with another service or makes any kind of network calls.

```text
Top 4 slowest examples (40.14 seconds, 96.0% of total time):
  IntercomClient::post when an error is raised when it's a retryable error retries
    20.04 seconds ./spec/services/data/intercom_client_spec.rb:45
  IntercomClient::get when an error is raised when it's a retryable error retries
    20.01 seconds ./spec/services/data/intercom_client_spec.rb:23
  IntercomClient::get when an error is raised raises the error
    0.07472 seconds ./spec/services/data/intercom_client_spec.rb:17
  IntercomClient::post when an error is raised raises the error
    0.01893 seconds ./spec/services/data/intercom_client_spec.rb:39

Finished in 41.82 seconds (files took 8.88 seconds to load)
4 examples, 0 failures
```

This is usually a sign that you have a `sleep` somewhere in your code, and your specs are blocking while they wait for the sleeps to complete. The solution to this is generally easy: you need to stub out the sleep statements or the sleep interval in your specs. But how do you find out which specs are blocking, and where, so that you can fix them?

That's where [Sleep Study](http://github.com/KennaSecurity/rspec-sleep_study) comes in. Sleep Study uses Ruby's [TracePoint](https://ruby-doc.org/core-2.0.0/TracePoint.html) class to wrap every C function call and return executed by your code.
If it detects a `sleep` function, it records the time spent sleeping and then prints a report showing you both the spec and the line(s) of code responsible.

```text
$ gem install rspec-sleep_study
$ rspec --format RSpec::SleepStudy spec/services/data/intercom_client_spec.rb

...

The following examples spent the most time in `sleep`:
  20.013 seconds: ./spec/services/data/intercom_client_spec.rb:23
    - 20.013 seconds: /opt/apps/vendor/ruby/2.5.0/gems/retryable-2.0.4/lib/retryable/configuration.rb:36
  20.001 seconds: ./spec/services/data/intercom_client_spec.rb:45
    - 20.001 seconds: /opt/apps/vendor/ruby/2.5.0/gems/retryable-2.0.4/lib/retryable/configuration.rb:36
```

Now we know that our specs are indeed blocking on sleep, and we have an idea of where to look in our code: somewhere in our client class we're calling `Retryable`.

```ruby
def self.get(endpoint)
  Retryable.retryable(:tries => 5, :sleep => RETRY_SLEEP) do
    JSON.parse RestClient.get(endpoint, HEADERS)
  end
end

def self.post(endpoint, data)
  Retryable.retryable(:tries => 5, :sleep => RETRY_SLEEP) do
    JSON.parse RestClient.post(endpoint, data.to_json, HEADERS)
  end
end
```

And now that we've found that, we can make our specs run way faster by stubbing that `RETRY_SLEEP` constant:

```ruby
before do
  stub_const("IntercomClient::RETRY_SLEEP", 0)
end
```

```text
Top 4 slowest examples (0.08328 seconds, 4.3% of total time):
  IntercomClient::post when an error is raised raises the error
    0.0358 seconds ./spec/services/data/intercom_client_spec.rb:39
  IntercomClient::get when an error is raised when it's a retryable error retries
    0.02302 seconds ./spec/services/data/intercom_client_spec.rb:23
  IntercomClient::get when an error is raised raises the error
    0.0135 seconds ./spec/services/data/intercom_client_spec.rb:17
  IntercomClient::post when an error is raised when it's a retryable error retries
    0.01096 seconds ./spec/services/data/intercom_client_spec.rb:45

Finished in 1.94 seconds (files took 7.54 seconds to load)
4 examples, 0 failures
```

40 seconds shaved off our build time with a one-line change!

It's important to note that putting trace points around every single C function isn't free, and Sleep Study WILL make your specs run more slowly. Only use it once in a while to find new sleeps that have snuck into your code. Stub those sleeps out, open a PR showing how much faster you made your tests, and then do a victory lap around the office and collect your high fives.
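If you're curious how the `TracePoint` approach works under the hood, here's a minimal sketch of the idea (not the gem's actual implementation): trace C-level calls and returns, and total up the time spent inside `Kernel#sleep`.

```ruby
# Hedged sketch: measure time spent in `sleep` with TracePoint.
sleep_total = 0.0
entered_at = nil

trace = TracePoint.new(:c_call, :c_return) do |tp|
  next unless tp.method_id == :sleep

  if tp.event == :c_call
    entered_at = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  elsif entered_at
    sleep_total += Process.clock_gettime(Process::CLOCK_MONOTONIC) - entered_at
    entered_at = nil
  end
end

trace.enable
sleep 0.05 # pretend this is buried deep inside a retry helper
sleep 0.05
trace.disable

puts format('%.2f seconds spent in sleep', sleep_total)
```

The real gem does considerably more (attributing sleeps to specs and call sites), but this is the core trick.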
joeyschoblaska
74,602
Explaining Scrum story points
After many years using Scrum and lots of companies later, I have seen that peop...
0
2019-02-23T10:18:52
https://dev.to/anortef/explaining-scrum-story-points-3mn1
scrum, points, agile
After many years of using Scrum across many companies, I have seen that people tend to assign timings to story points, like 3 points = 8 hours of work. This is bad for planning and prevents you from using one of the key properties of Scrum.

Let's start by quickly recalling the point of story points and what we seek to gain by using them.

Software development teams aren't usually a uniform group of similar expertise and experience: some people are better at the backend, some have more years of experience, and so on. These differences mean each member thinks they can solve a particular problem in X hours, but that X varies greatly from member to member; add to that the fact that we software developers are far too optimistic when estimating time.

How can we solve these problems? Fortunately, software developers are extremely good at assessing the complexity of a problem, and the gap between a senior's and a junior's assessment of complexity is much smaller than the gap between their time estimates. So, by relating story points to complexity, we gain a more general consensus about the workload of a given problem, allowing the team to predict how much complexity it can tackle in a given stretch of time.

## Important things to take into account:

- Always remember that this is an iterative process that yields better results the more iterations it runs, so don't expect the first sprint to nail the points.
- Never, ever change the points assigned to a story. One of the things you expect to gain is predictability, and for that you need to maintain your team's consistent error pattern. If you change the value of a story after a sprint, that pattern will be blurred or lost entirely.
- It's the product owner's job to translate the amount of complexity a team can tackle into timings for upper management; that should never be a concern for the developers.
- If you reduce the number of story points to squeeze more complexity into the sprint, you are only cheating yourself.
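To make that product-owner translation concrete, here's a small illustrative sketch (the numbers and helper names are made up): average the points the team actually completed in past sprints, then convert a backlog into a sprint count instead of hours.

```python
import math

def average_velocity(completed_points):
    """Mean story points the team completed per sprint."""
    return sum(completed_points) / len(completed_points)

def sprints_needed(backlog_points, completed_points):
    """Estimate how many sprints a backlog will take, rounding up."""
    return math.ceil(backlog_points / average_velocity(completed_points))

# Three past sprints completed 21, 34 and 29 points: a velocity of 28.
# A 120-point backlog then needs about 5 sprints, not "N hours".
print(sprints_needed(120, [21, 34, 29]))
```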
anortef
75,191
Competing with a "Stanford grad just dying to work all nighters on Red Bull"
A reader of my newsletter wrote in, talking about the problem of finding a job with work/life balance...
0
2019-01-14T17:15:12
https://codewithoutrules.com/2019/01/09/worklife-balance-silicon-valley/
career, productivity
---
title: Competing with a "Stanford grad just dying to work all nighters on Red Bull"
published: true
tags: [career, productivity]
canonical_url: https://codewithoutrules.com/2019/01/09/worklife-balance-silicon-valley/
---

A reader of [my newsletter](https://codewithoutrules.com/softwareclown/?ref=dev.to) wrote in, talking about the problem of finding a job with work/life balance in Silicon Valley:

> It seems like us software engineers are in a tough spot: companies demand a lot of hard work and long hours, and due to the competitiveness here in Silicon Valley, you have to go along with it (or else there’s some bright young Stanford grad just dying to work all nighters on Red Bull to take your place).
>
> But they throw you aside once the company has become established and they no longer need the “creative” types.

In short, how do you get a job with work/life balance when you’re competing against people willing to work long hours?

## All nighters make for bad software

The starting point is realizing that working long hours makes you a much less productive employee, to the point that your total output will actually decrease (see [Evan Robinson on crunch time](http://www.igda.org/?page=crunchsixlessons)). **If you want to become an effective and productive worker, you’re actually much better off working normal hours and having a personal life than working longer hours.**

Since working shorter hours makes you [more productive](https://codewithoutrules.com/2018/02/11/working-long-hours/), that gives you a starting point for why _you_ should be hired. Instead of focusing on demonstrative effort by working long hours, you can focus on [identifying and solving valuable problems](https://codewithoutrules.com/2018/10/10/beyond-senior-software-engineer/), especially the bigger and longer term problems that require thought and planning to solve.
## Picking the right company

Just because you’re a valuable, productive programmer doesn’t mean you’re going to get hired, of course. So next you need to find the right company. You can imagine three levels of managerial skill:

- **Level 1:** These managers have no idea how to recognize effective workers, so they only judge people by hours worked.
- **Level 2:** These managers can recognize effective workers, but don’t quite know how to create a productive culture. That means if you choose to work long hours they won’t stop you, however pointless these long hours may be. But, they won’t force you to work long hours so long as you’re doing a decent job.
- **Level 3:** These managers can recognize effective workers and will encourage a productive culture. Which is to say, they will explicitly discourage working long hours except in emergencies, they will take steps to prevent future emergencies, etc.

**When you look for a job you will want to [avoid Level 1 managers](https://codewithoutrules.com/2016/10/14/job-you-dont-hate/).** However good your work, they will be happy to replace you with someone incompetent so long as they can get more hours out of them. So you’ll be forced to work long hours _and_ work on broken code.

**Level 3 managers are ideal, and they do exist.** So if you can find a job working for them then you’re all set.

**Level 2 managers are probably more common though, and you can get work/life balance working for them—if you set strong boundaries.** Since they can recognize actual competence and skills, they won’t judge you during your interview based on how many hours you’re willing to work. You just need to convey your skill and value, and a reasonable amount of dedication to your job. And once you’ve started work, if you can actually be productive (and if you work 40 hours/week you will be more productive!) they won’t care if you come in at 9 and leave at 5, because they’ll be happy with your work.
Unlike Level 3 managers, however, you need to be explicit about boundaries during the job interview, and even more so after you start. Elsewhere I wrote up some suggestions about [how to convey your value](https://codewithoutrules.com/2018/07/10/boss-cares-about-hours/), and [how to say “no” to your boss](https://codewithoutrules.com/2018/08/16/how-to-say-no/).

## Employment is a negotiated relationship

To put all this another way: employment is a negotiated relationship. Like it or not, you are negotiating from the moment you start interviewing, while you’re on the job, and until the day you leave. You are trying to trade valuable work for money, learning opportunities, and whatever else your goals are (you can, for example, [negotiate for a shorter workweek](https://codewithoutrules.com/3dayweekend/)).

In this case, we’re talking about negotiating for work/life balance:

1. Level 1 managers you can’t negotiate with, because what they want (long hours) directly conflicts with what you want.
2. Level 2 managers you can negotiate with, by giving them one of the things they want: valuable work.
3. Level 3 managers will give you what you want without your having to do anything, because they know it’s in the best interest of everyone.

Of course, even for Level 3 managers you will still need to negotiate other things, like [a higher salary](https://codewithoutrules.com/2018/05/01/negotiation-and-the-cap-theorem/).

**So how do you get a job with work/life balance?** Focus on providing and demonstrating valuable long-term work, avoid bad companies, and make sure you set boundaries from the very start.

* * *

### We all make mistakes, and I’ve got 20 years’ worth: from code that crashed production every night at 4AM, to accepting a preposterously bad job offer.

### Every painful failure taught me a lesson—but only after it was too late.

### You can do better!
Join 3300 other programmers, and every week you’ll learn how to [avoid another of my mistakes](https://codewithoutrules.com/softwareclown/?ref=dev.to).
itamarst
75,220
Spring Security with JWT
Simple tutorial to show you how to use Spring Security with JWT
0
2019-01-14T17:43:08
https://dev.to/kubadlo/spring-security-with-jwt-3j76
java, spring, security, jwt
---
title: Spring Security with JWT
published: true
description: Simple tutorial to show you how to use Spring Security with JWT
cover_image: https://jakublesko.com/wp-content/uploads/2019/01/key-in-mans-hand.jpg
tags: java, spring, security, jwt
---

Spring Security’s default behavior is easy to use for a standard web application. It uses cookie-based authentication and sessions. Also, it automatically handles CSRF tokens for you (to prevent man in the middle attacks). In most cases you just need to set authorization rights for specific routes, a method to retrieve the user from the database and that’s it.

On the other hand, you probably don’t need a full session if you’re building just a REST API which will be consumed by external services or your SPA/mobile application. Here comes the JWT (JSON Web Token) – a small digitally signed token. All needed information can be stored in the token, so your server can be session-less.

JWT needs to be attached to every HTTP request so the server can authorize your users. There are some options on how to send the token. For example, as a URL parameter or in the HTTP Authorization header using the Bearer schema:

```bash
Authorization: Bearer <token string>
```

JSON Web Token contains three main parts:

1. Header – typically includes the type of the token and the hashing algorithm.
2. Payload – typically includes data about the user and for whom the token is issued.
3. Signature – it’s used to verify that the message wasn’t changed along the way.

## Example token

A JWT token from the authorization header will probably look like this:

```bash
Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpc3MiOiJzZWN1cmUtYXBpIiwiYXVkIjoic2VjdXJlLWFwcCIsInN1YiI6InVzZXIiLCJleHAiOjE1NDgyNDI1ODksInJvbCI6WyJST0xFX1VTRVIiXX0.GzUPUWStRofrWI9Ctfv2h-XofGZwcOog9swtuqg1vSkA8kDWLcY3InVgmct7rq4ZU3lxI6CGupNgSazypHoFOA
```

As you can see, there are three parts separated by dots – header, claims, and signature. Header and payload are Base64 encoded JSON objects.
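You can see this for yourself by decoding the first segment of the token above (JWT actually uses the URL-safe Base64 variant, which matters once a segment contains `-` or `_` characters; the header here happens to decode fine with plain `base64`):

```bash
# Decode the header segment of the example token (GNU coreutils base64).
echo 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9' | base64 -d
# {"typ":"JWT","alg":"HS512"}
```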
#### Header:

```json
{
  "typ": "JWT",
  "alg": "HS512"
}
```

#### Claims/Payload:

```json
{
  "iss": "secure-api",
  "aud": "secure-app",
  "sub": "user",
  "exp": 1548242589,
  "rol": [
    "ROLE_USER"
  ]
}
```

## Example application

In the following example, we will create a simple API with 2 routes – one publicly available and one only for authorized users. We will use the page [start.spring.io](https://start.spring.io/) to create our application skeleton and select the Security and Web dependencies. The rest of the options are up to your preferences.

![Spring Initializr screenshot](https://jakublesko.com/images/blog/2019-01-14-spring-initializr.png "Screenshot of Spring Initializr with new application details")

JWT support for Java is provided by the library [JJWT](https://github.com/jwtk/jjwt), so we also need to add the following dependencies to the pom.xml file:

```xml
<dependency>
    <groupId>io.jsonwebtoken</groupId>
    <artifactId>jjwt-api</artifactId>
    <version>0.10.5</version>
</dependency>
<dependency>
    <groupId>io.jsonwebtoken</groupId>
    <artifactId>jjwt-impl</artifactId>
    <version>0.10.5</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.jsonwebtoken</groupId>
    <artifactId>jjwt-jackson</artifactId>
    <version>0.10.5</version>
    <scope>runtime</scope>
</dependency>
```

## Controllers

Controllers in our example application will be as simple as possible. They will just return a message, or an HTTP 403 error code in case the user is not authorized.

```java
@RestController
@RequestMapping("/api/public")
public class PublicController {

    @GetMapping
    public String getMessage() {
        return "Hello from public API controller";
    }
}
```

```java
@RestController
@RequestMapping("/api/private")
public class PrivateController {

    @GetMapping
    public String getMessage() {
        return "Hello from private API controller";
    }
}
```

## Filters

First, we will define some reusable constants and defaults for generation and validation of JWTs.
_Note: You should not hardcode the JWT signing key into your application code (we will ignore that for now in the example). You should use an environment variable or a .properties file. Also, keys need to have an appropriate length. For example, the HS512 algorithm needs a key of at least 512 bits (64 bytes)._

```java
public final class SecurityConstants {

    public static final String AUTH_LOGIN_URL = "/api/authenticate";

    // Signing key for HS512 algorithm
    // You can use the page http://www.allkeysgenerator.com/ to generate all kinds of keys
    public static final String JWT_SECRET = "n2r5u8x/A%D*G-KaPdSgVkYp3s6v9y$B&E(H+MbQeThWmZq4t7w!z%C*F-J@NcRf";

    // JWT token defaults
    public static final String TOKEN_HEADER = "Authorization";
    public static final String TOKEN_PREFIX = "Bearer ";
    public static final String TOKEN_TYPE = "JWT";
    public static final String TOKEN_ISSUER = "secure-api";
    public static final String TOKEN_AUDIENCE = "secure-app";

    private SecurityConstants() {
        throw new IllegalStateException("Cannot create instance of static util class");
    }
}
```

The first filter is used directly for user authentication. It checks the username and password request parameters and calls Spring’s authentication manager to verify them. If the username and password are correct, the filter creates a JWT and returns it in the HTTP Authorization header.
```java public class JwtAuthenticationFilter extends UsernamePasswordAuthenticationFilter { private final AuthenticationManager authenticationManager; public JwtAuthenticationFilter(AuthenticationManager authenticationManager) { this.authenticationManager = authenticationManager; setFilterProcessesUrl(SecurityConstants.AUTH_LOGIN_URL); } @Override public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response) { var username = request.getParameter("username"); var password = request.getParameter("password"); var authenticationToken = new UsernamePasswordAuthenticationToken(username, password); return authenticationManager.authenticate(authenticationToken); } @Override protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain, Authentication authentication) { var user = ((User) authentication.getPrincipal()); var roles = user.getAuthorities() .stream() .map(GrantedAuthority::getAuthority) .collect(Collectors.toList()); var signingKey = SecurityConstants.JWT_SECRET.getBytes(); var token = Jwts.builder() .signWith(Keys.hmacShaKeyFor(signingKey), SignatureAlgorithm.HS512) .setHeaderParam("typ", SecurityConstants.TOKEN_TYPE) .setIssuer(SecurityConstants.TOKEN_ISSUER) .setAudience(SecurityConstants.TOKEN_AUDIENCE) .setSubject(user.getUsername()) .setExpiration(new Date(System.currentTimeMillis() + 864000000)) .claim("rol", roles) .compact(); response.addHeader(SecurityConstants.TOKEN_HEADER, SecurityConstants.TOKEN_PREFIX + token); } } ``` The second filter handles all HTTP requests and checks if there is an Authorization header with the correct token. For example, if the token is not expired or if the signature key is correct. If the token is valid then the filter will add authentication data into Spring’s security context. 
```java public class JwtAuthorizationFilter extends BasicAuthenticationFilter { private static final Logger log = LoggerFactory.getLogger(JwtAuthorizationFilter.class); public JwtAuthorizationFilter(AuthenticationManager authenticationManager) { super(authenticationManager); } @Override protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws IOException, ServletException { var authentication = getAuthentication(request); if (authentication == null) { filterChain.doFilter(request, response); return; } SecurityContextHolder.getContext().setAuthentication(authentication); filterChain.doFilter(request, response); } private UsernamePasswordAuthenticationToken getAuthentication(HttpServletRequest request) { var token = request.getHeader(SecurityConstants.TOKEN_HEADER); if (StringUtils.isNotEmpty(token) && token.startsWith(SecurityConstants.TOKEN_PREFIX)) { try { var signingKey = SecurityConstants.JWT_SECRET.getBytes(); var parsedToken = Jwts.parser() .setSigningKey(signingKey) .parseClaimsJws(token.replace("Bearer ", "")); var username = parsedToken .getBody() .getSubject(); var authorities = ((List<?>) parsedToken.getBody() .get("rol")).stream() .map(authority -> new SimpleGrantedAuthority((String) authority)) .collect(Collectors.toList()); if (StringUtils.isNotEmpty(username)) { return new UsernamePasswordAuthenticationToken(username, null, authorities); } } catch (ExpiredJwtException exception) { log.warn("Request to parse expired JWT : {} failed : {}", token, exception.getMessage()); } catch (UnsupportedJwtException exception) { log.warn("Request to parse unsupported JWT : {} failed : {}", token, exception.getMessage()); } catch (MalformedJwtException exception) { log.warn("Request to parse invalid JWT : {} failed : {}", token, exception.getMessage()); } catch (SignatureException exception) { log.warn("Request to parse JWT with invalid signature : {} failed : {}", token, exception.getMessage()); } 
catch (IllegalArgumentException exception) { log.warn("Request to parse empty or null JWT : {} failed : {}", token, exception.getMessage()); } } return null; } } ``` ## Security configuration The last part we need to configure is Spring Security itself. The configuration is simple, we need to set just a few details: - Password encoder – in our case bcrypt - [CORS](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) configuration - Authentication manager – in our case simple in-memory authentication but in real life, you’ll need something like [UserDetailsService](https://www.baeldung.com/spring-security-authentication-with-a-database) - Set which endpoints are secure and which are publicly available - Add our 2 filters into the security context - Disable session management – we don’t need sessions so this will prevent the creation of session cookies ```java @EnableWebSecurity @EnableGlobalMethodSecurity(securedEnabled = true) public class SecurityConfiguration extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http.cors().and() .csrf().disable() .authorizeRequests() .antMatchers("/api/public").permitAll() .anyRequest().authenticated() .and() .addFilter(new JwtAuthenticationFilter(authenticationManager())) .addFilter(new JwtAuthorizationFilter(authenticationManager())) .sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS); } @Override public void configure(AuthenticationManagerBuilder auth) throws Exception { auth.inMemoryAuthentication() .withUser("user") .password(passwordEncoder().encode("password")) .authorities("ROLE_USER"); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } @Bean public CorsConfigurationSource corsConfigurationSource() { final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration("/**", new CorsConfiguration().applyPermitDefaultValues()); return source; } } 
``` ## Test #### Request to public API ```bash GET http://localhost:8080/api/public ``` ```bash HTTP/1.1 200 X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache Expires: 0 X-Frame-Options: DENY Content-Type: text/plain;charset=UTF-8 Content-Length: 32 Date: Sun, 13 Jan 2019 12:22:14 GMT Hello from public API controller Response code: 200; Time: 18ms; Content length: 32 bytes ``` #### Authenticate user ```bash POST http://localhost:8080/api/authenticate?username=user&password=password ``` ```bash HTTP/1.1 200 Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpc3MiOiJzZWN1cmUtYXBpIiwiYXVkIjoic2VjdXJlLWFwcCIsInN1YiI6InVzZXIiLCJleHAiOjE1NDgyNDYwNzUsInJvbCI6WyJST0xFX1VTRVIiXX0.yhskhWyi-PgIluYY21rL0saAG92TfTVVVgVT1afWd_NnmOMg__2kK5lcna3lXzYI4-0qi9uGpI6Ul33-b9KTnA X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache Expires: 0 X-Frame-Options: DENY Content-Length: 0 Date: Sun, 13 Jan 2019 12:21:15 GMT <Response body is empty> Response code: 200; Time: 167ms; Content length: 0 bytes ``` #### Request to private API with token ```bash GET http://localhost:8080/api/private Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpc3MiOiJzZWN1cmUtYXBpIiwiYXVkIjoic2VjdXJlLWFwcCIsInN1YiI6InVzZXIiLCJleHAiOjE1NDgyNDI1ODksInJvbCI6WyJST0xFX1VTRVIiXX0.GzUPUWStRofrWI9Ctfv2h-XofGZwcOog9swtuqg1vSkA8kDWLcY3InVgmct7rq4ZU3lxI6CGupNgSazypHoFOA ``` ```bash HTTP/1.1 200 X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache Expires: 0 X-Frame-Options: DENY Content-Type: text/plain;charset=UTF-8 Content-Length: 33 Date: Sun, 13 Jan 2019 12:22:48 GMT Hello from private API controller Response code: 200; Time: 12ms; Content length: 33 bytes ``` #### Request to private API without token You’ll get HTTP 403 message 
when you call a secured endpoint without a valid JWT.

```bash
GET http://localhost:8080/api/private
```

```bash
HTTP/1.1 403
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Sun, 13 Jan 2019 12:27:25 GMT

{
  "timestamp": "2019-01-13T12:27:25.020+0000",
  "status": 403,
  "error": "Forbidden",
  "message": "Access Denied",
  "path": "/api/private"
}

Response code: 403; Time: 28ms; Content length: 125 bytes
```

## Conclusion

The goal of this article is not to show the one correct way to use JWTs in Spring Security; it’s an example of how you can do it in a real-life application. I also did not want to go too deep into the topic, so a few things like token refreshing, invalidation, etc. are missing, but I’ll probably cover these topics in the future.

**tl;dr** You can find the full source code of this example API in my [GitHub repository](https://github.com/kubadlo/jwt-security).
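One practical note on the expired-token case above: because the `exp` claim sits in the unsigned-but-readable payload, a client can inspect it before making a request and re-authenticate proactively instead of waiting for a 403. A small sketch (Python, stdlib only; `token_expiry` is a helper name invented for this illustration, and the token is the example one from earlier in the article):

```python
import base64
import json

def token_expiry(token: str) -> int:
    """Read the exp claim from a JWT without verifying it (inspection only)."""
    payload_b64 = token.split(".")[1]
    # Base64url without padding: re-add '=' padding before decoding
    raw = base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4))
    return json.loads(raw)["exp"]

example = (
    "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9."
    "eyJpc3MiOiJzZWN1cmUtYXBpIiwiYXVkIjoic2VjdXJlLWFwcCIsInN1YiI6InVzZXIiLCJleHAiOjE1NDgyNDI1ODksInJvbCI6WyJST0xFX1VTRVIiXX0."
    "GzUPUWStRofrWI9Ctfv2h-XofGZwcOog9swtuqg1vSkA8kDWLcY3InVgmct7rq4ZU3lxI6CGupNgSazypHoFOA"
)
expiry = token_expiry(example)  # epoch seconds from the article's example claims
```

This is only a convenience check; the server must still validate the signature and expiry itself, as the `JwtAuthorizationFilter` above does.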
kubadlo
75,242
ClassicPress development version on your VVV environment
I love VVV or Varying Vagrant Vagrants. For developers is very useful also to s...
0
2019-01-28T09:54:58
https://daniele.tech/2019/01/classicpress-development-version-on-your-vvv-environment/
classicpress, opensource, php
---
title: ClassicPress development version on your VVV environment
published: true
tags: ClassicPress,Open Source,php
canonical_url: https://daniele.tech/2019/01/classicpress-development-version-on-your-vvv-environment/
---

I love VVV, or Varying Vagrant Vagrants. For developers it is also very useful for getting started with WordPress and discovering new tools.

The post [ClassicPress development version on your VVV environment](https://daniele.tech/2019/01/classicpress-development-version-on-your-vvv-environment/) appeared first on [Daniele Mte90 Scasciafratte](https://daniele.tech/eng).
mte90
75,402
Using Redis to Store Information in Python Applications
We're hacking into the new year strong over here at Hackers and Slackers, and...
0
2019-01-18T17:37:01
https://hackersandslackers.com/using-redis-with-python/
nosql, python, softwaredevelopment
--- title: Using Redis to Store Information in Python Applications published: true tags: NoSQL,Python,Software Development canonical_url: https://hackersandslackers.com/using-redis-with-python/ --- ![Using Redis to Store Information in Python Applications](https://res-2.cloudinary.com/hackers-and-slackers/image/upload/q_auto/v1/images/redis.jpg) We're hacking into the new year strong over here at Hackers and Slackers, and we've got plenty of new gifts to play with in the process. Nevermind how Santa manages to fit physically non-existent SaaS products under the Christmas tree. We ask for abstract enterprise software every year, and this time we happened to get a little red box. If you've never personally used Redis, the name probably sounds familiar as you've been bombarded with obscure technology brand names in places like the Heroku marketplace, or your unacceptably nerdy Twitter account (I assure you, mine is worse). So what is Redis, you ask? Well, it's a NoSQL datasto- wait, where are you... NO! Don't leave! It's not like THAT, I swear! ## What Redis is and When to Use It Redis stores information in the familiar key/value pair format, but the term ‘NoSQL’ is more of a technicality than an indicator of use cases synonymous with NoSQL databases of the past. Redis looks the part for the very purpose it serves: a box that you fill with crap which may or may not be important down the line. It’s the perfect place to put a Starbucks gift card or the clothes you’ve already worn which aren’t quite ready to be washed yet. ### All Users go to Heaven: Cloud Storage for User Sessions Perhaps the most common use case is a glorified **session cache**. Similar to the way users might store temporary app information in cookies, Redis holds on to information which is fleeting. The difference is we now own this information inside our very own box, thus the Redis motto: “_your box, your rules_.”\* \* I made this up: it holds zero truth. 
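That "temporary, but with a lifetime we control" idea is exactly what Redis expiring keys (`SETEX`/`EXPIRE`) give you. As a rough stand-in sketch of the semantics – plain Python, no Redis server required, with an injectable clock so the expiry logic is visible (the `ExpiringStore` class is invented purely for illustration):

```python
import time

class ExpiringStore:
    """Toy stand-in for Redis SETEX/GET semantics (illustration only)."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._data = {}  # key -> (value, expires_at)

    def setex(self, key, ttl_seconds, value):
        # Store the value together with its absolute expiry time
        self._data[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if self._clock() >= expires_at:
            del self._data[key]  # lazily expire on access
            return None
        return value

# Manual clock so the example is deterministic
now = [0.0]
store = ExpiringStore(clock=lambda: now[0])
store.setex("session:42", 30, "cart=[rubber-duck]")
assert store.get("session:42") == "cart=[rubber-duck]"
now[0] = 31.0  # simulate 31 seconds passing
assert store.get("session:42") is None
```

With real Redis you get the same behavior per key, except the box survives your process restarting – which is the whole point of the "somewhere in between" niche described next.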
Because temporary user information is in our hands as opposed to a fickle browser, we can decide just _how_ temporary our “cache” is, having it persist across sessions or even devices. While local memory storage may as well be a place for throwaway information, and databases for persistent or eternal information, Redis is somewhere in between. As users interact and the information they create within our app evolves, we may choose at any point to promote information stored in Redis to a database, or perhaps have it stick around a little while longer. They’ll be thrilled to see their shopping cart still filled with the stupid things they almost bought while they were drunk yesterday. ### When Variables are on a Bagel, You can Have Variables Any Time In other words, Redis is great for solving the need of globally accessible variables throughout an entire application, on a per-user basis. Users who accidentally quit your app, move to a new context, or merely use your app for longer than your QA team are easier to manage when their temporary information is in a safe and global environment. Compare this to saving a user’s Pac-Man score to a global variable: the moment an app like Pac-Man crashes or restarts, that session is gone forever. Thus dies another three-letter app obscenity belonging to a leaderboard. ### Speaking of Leaderboards... Redis is great at counting in increments. This is probably made evident by the fact that it is a computer, and these are the things computers do. Something else that’s made great by counting: queues! Cues of tasks, notifications, chats, disappearing naked photos, etc: all of these things are ideally suited for our red box. ## Getting a Red Box of Your Own Playing around with a cloud-hosted Redis box will cost you maybe 5 bucks (monthly if you forget to cancel). Redis is open source so there are plenty of vendors to choose from with little differentiation between them. 
I’ll consider recommending whichever vendor offers to bribe me the most, but in the meantime I’ll leave the window shopping to you. Setting up Redis should feel like setting up a cloud SQL database, except smaller and cuter. You’ll be able to pick adorable features for your box of possibly-but-not-surely-worthless stuff, such as hosted region, etc.

Once you're set up you should have a host URL for reaching your instance:

```
redis-1738.c62.us-east-1-4.ec2.cloud.fakeredisprovider.com:42069
```

Now we’re cooking with gas.

## Using the redis-py Python Library

The main Python Redis library is simply named `redis`, as in `pip install redis`. The easiest way to connect in any case is via a URI connection string, like such:

```
r = redis.from_url('rediss://:password@hostname:port/0')
```

Note the unique structure of the URI above:

- **rediss://:** precedes the URI (`redis://` for plain connections, `rediss://` for SSL); _NOTE THE TRAILING COLON._
- **password** comes next, with the interesting choice to bypass usernames (the colon before it is where a username would go).
- **hostname** is the instance's URL... almost always a thinly veiled repurposed EC2 instance. That's right, we're being sold simple open source software hosted on AWS. Don't think about it.
- **port** is your preferred port of call after pillaging British trade ships. Just making sure you're still here.
- **/database** brings up the rear, which is the numeric index of your database (0 by default).

As with regular databases, other connection methods exist such as via SSL certificates, etc.

### Storing and Getting Values

This is your bread and butter for interacting with Redis:

- **.set():** Set a key/value pair by either overwriting or creating a new value
- **.get():** Retrieve a value by naming the associated key
- **.hmget():** Retrieve the values of several fields within a single hash
- **.hmset():** Set multiple field/value pairs within a single hash
- **.hgetall():** Get all field/value pairs of a hash

It’s important to note that Redis by default returns bytes as opposed to strings.
As a result, it is important to remember the encoding/decoding of values in order to retrieve them properly. For example:

```
# Setting a Value
r.set('uri', str(app.config['SQLALCHEMY_DATABASE_URI']).encode('utf-8'))

# Getting a Value
r.get('uri').decode('utf-8')
```

If you happen to be remotely sane, you probably don't want to deal with encoding and decoding values over and again. Luckily we can ensure that responses are always decoded for us by setting the `decode_responses` parameter to `True` when setting up our Redis instance:

```
redis.StrictRedis(host="localhost", port=6379, charset="utf-8", decode_responses=True)
```

The [**redis-py** documentation](https://redis-py.readthedocs.io/en/latest/) actually goes wayyy deeper than the 5 methods listed above. If you ever somehow manage to cover all of it, I have many questions about the type of person you are.

## More Redis Libraries for Python

If the above encoding/decoding seems annoying, you aren’t the first. That’s why libraries like [**Redisworks**](https://github.com/seperman/redisworks) exist. Redisworks allows for the seamless exchange of Python data types to and from Redis. Want to shove a Python dict down your box’s throat? Go ahead! You won’t even have to think about it very hard. There are plenty of similar libraries all aimed to make sad lives easier.

Want more? How about Asyncio’s very own [asynchronous Redis library](https://asyncio-redis.readthedocs.io/en/latest/)? Or how about the similar [**aioredis**](https://aioredis.readthedocs.io/), another Asyncio Redis plug-in, which also includes pure Python parsing, clustering support, and things I don’t even understand! There are truly [more Python libraries for Redis](https://redis.io/clients#python) than you could need.

Finally, how could we ever forget **Flask-Redis**?
We’ve [already covered this](https://dev.to/hackersandslackers/globally-accessible-variables-in-flask-demystifying-the-application-context-1aeg-temp-slug-2115303), but is easily the first and last Redis library any Flask developer will use. ## Your Box, Your Treasure, Your World **™** Now that we’ve uncovered this niche between cached data and stored data, the possibilities are endless. The world is your oyster full of things which you may or may not choose to shove in your box. Ok, fine. Perhaps this whole concept feels like a bit of an obscure niche hardly worthy of the words on this page. Just remember that feeling when the time comes that you too need a little red cube, and it will be waiting with love and compassion. A companion cube, if you will.
toddbirchard
75,438
How to Choose a Startup to Work For by Thinking Like An Investor
Advice for choosing a startup by thinking like an investor.
0
2019-01-16T00:08:42
https://triplebyte.com/blog/how-to-choose-a-startup-to-work-for
career
--- title: How to Choose a Startup to Work For by Thinking Like An Investor published: true description: Advice for choosing a startup by thinking like an investor. tags: career cover_image: https://cl.ly/a77d1d7edca5/Image%202019-01-14%20at%206.24.16%20PM.png canonical_url: https://triplebyte.com/blog/how-to-choose-a-startup-to-work-for --- I believe that most advice on choosing a startup to work for is wrong. Early employees at wildly successful startups suggest you [assume the value of your equity is zero](https://twitter.com/rabois/status/679722946919677952) and instead optimize for [how much you can learn](https://triplebyte.com/blog/interview-with-gmail-creator-and-y-combinator-partner-paul-buchheit). In this post I'll argue that evaluating how likely a startup is to succeed should actually be the most important factor in your decision to join one. As a former partner at Y Combinator, I know a lot about how investors do this. Now, as a founder and CEO of Triplebyte, I see how much less rigor the average job seeker applies to their decision and what they miss that investors would notice. First you should be sure you really want to work at a startup. This is not the right choice for everyone. Paul Buchheit, an early engineer at Google, says, _“If you're happy working where you are, and you don't have any ambition to do anything else, you're probably going to get paid less and work more if you leave \[for a startup\]. If getting paid less and working more is unappealing to you, then I would recommend staying where you are!”_ Joining a startup means giving up greater liquid compensation today for the chance of gaining other things that accumulate value over time. Things like personal networks, learning opportunities and equity. **You can learn new things and meet people working at any startup. 
However you will learn significantly more, build a stronger network, and accelerate your career trajectory much faster by joining a successful startup than an average one.** Successful startups attract the best resources in every category as they grow. They'll get the best investors, press coverage, and hire the best talent. The latter is especially important for your career. Initially, founders can only recruit talent from the limited pool of people who have the risk tolerance to join something early. As the startup succeeds, this pool grows and it becomes a magnet for the best talent. By joining early, you get to work alongside the smartest people and solve problems together without several layers of management between you. It's also only by joining a successful startup early that you can get remarkably steep career trajectories, like Jeff Dean, Marissa Mayer or Chris Cox. Successful startups grow faster than they can hire experienced and qualified people to keep up with growth. This creates vacuums of responsibility, vacuums that existing employees who might be “under qualified” on paper can step in and fill. If you fill the vacuum well, you can keep repeating until you rise up into the highest levels of leadership of what will eventually become a public company. Even if you leave before reaching those levels, there's a halo effect around you now. Companies will roll out the red carpet to hire you, and investors will be keen to fund your startup ideas. How To Choose a Startup ----------------------- Once you've decided to join a startup, the first obvious thing to look for is any evidence that it is already succeeding. This is what investors do. Unfortunately, at the early stages there's usually not much data like this. When there is, one thing we learned at YC was not to be fooled by large absolute numbers. What matters most is the growth rate. 
A startup with $1 million in monthly revenue which has been flat for the past year is less exciting than one with $100,000 in monthly revenue that was at $0 six months ago. You care about the trajectory a startup is on, not just where it is at this moment in time. Joining Facebook in 2006 would have been a better choice than Myspace even though the latter had more users at the time. Next, you need to evaluate the strength of the team and market. At the earliest stage, the team is just the founders. It's difficult to evaluate startup founders using traditional signals like college or work history. In fact, the kind of experience that looks good on a resume can be a negative signal for being suited to founding a company. Rising up the ranks at Google might mean you can only thrive in a well-structured environment working on a product with millions of users. Neither of those conditions apply to startups. However, it may also signal an outlying level of competence and competitiveness. Those are both good traits for a startup founder. That's what makes this so hard. There's not a single clearly defined shape of person you're trying to identify. My advice would be to focus on how impressive the accomplishments of the founders are relative to their peer group and environment. Starting a company and convincing investors to invest is an accomplishment that contains signal. It contains more signal if achieved by a 19 year old college dropout from Milwaukee than a Stanford computer science graduate. You also need to evaluate how well a founder understands the problem they're working on. Being an expert in the domain yourself makes this easy. You know exactly which questions to ask someone to call bullshit. If you're not, then take the approach of asking the founders to teach you about the problem they're working on. Be intellectually curious and see how well they can explain product choices they've made, how the market works, and who their competitors are. 
I particularly like asking what has surprised them since starting the company and which early assumptions turned out to be wrong. I would be suspicious if they claim every assumption turned out to be completely right. Evaluating the relationship between founders is as important as evaluating the founders themselves. The top cause of startup death during a Y Combinator batch was cofounder disputes. We evaluated this very closely during interviews. We were looking for red flags such as talking over each other and disagreeing on important issues. It was a particularly bad sign if it wasn't clear who the CEO was. Likewise it was a bad sign if, when asked, one person meekly answered they were the CEO while the other glared at them. I'd recommend you request to meet the cofounders together to answer your questions so you can feel out this dynamic. You can also get valuable data about the startup from the team that has been hired. They may know about any red flags or major issues the company is facing which the founders haven't told you about. This data is especially valuable because it is the hardest to get. Once you receive an offer, ask anyone you met during the interview to meet you for coffee. Then ask them hard questions such as, what are the biggest weaknesses of the founders? How have conflicts been handled? What major problems has the company faced thus far? If you are an engineer, I'd also ask specific questions around how product ideas are generated and work is assigned. Do not look for a perfectly smooth and egalitarian process here. I believe early stage startups work better as benevolent dictatorships. You want at least one of the founders to have strong product opinions and be the final product decision maker. Early on, speed of action matters most to a startup, and decisiveness is a good trait in startup founders because it creates motion. 
Don't miss out on joining a great early stage startup because the founder seemed too opinionated and involved in product decisions early on. You should also judge the quality of the product but don't always look for a perfect product. An imperfect product with bugs is fine if the team can explain why it still delivers a better experience than competitors. Moving quickly to launch software and sacrificing some amount of quality to gain users in a new or emerging market is a good strategy for a startup. This same tradeoff does not apply if the startup is in a crowded space with large incumbents who already have pretty decent products. [Airtable](http://airtable.com/) is a good recent example of a product that has seen success launching in a crowded market by taking time to build and iterate on the product. Next you need to evaluate how promising the market is. I won't cover how to do deep analysis of market size here. Estimating the true market size is an exercise composed of some parts rigorous analysis, guesswork, and persuasion. What is important is identifying the type of market the startup is in. There are two types of market for a startup. The first is a new market that is growing quickly e.g. blockchain. The second is an already known and large market with existing large incumbents e.g. real estate. The type of market a startup is in should also impact how you evaluate the strength of the team and product. Being the first mover in a new and growing market has huge advantages which can cover up a lot of deficiencies in both the team and the product. If it is early enough, investors won't have funded many competitors yet. If you believe strongly in the growth potential of a market and find a startup that is currently in the lead you can afford to be more forgiving on the quality of the team and product. If the startup grows, both will be upgraded anyhow. Operating in a market which is already known to be large and valuable will be less forgiving. 
There will be many well-funded competitors, and you'll need an exceptional team with access to capital and a correct vision for differentiation. [Opendoor](http://opendoor.com/), for example, was not an average founding team. Co-founder Keith Rabois was an early team member at several startups that went public, such as LinkedIn, Paypal and Square. CEO Eric Wu had deep domain expertise after founding and selling a real estate startup. Conclusion ---------- Simply joining any startup won’t provide you with great learning opportunities or a strong personal network by default. A startup on a flat growth trajectory with a weak team offers neither. A consequence of this is being willing to move startups if the one you are currently at stops growing. I think startup employees should re-evaluate the growth trajectory of their startup every year. I’m conflicted in offering this advice. As the founder of a hiring marketplace, more people looking for jobs each year increases our market size. As the founder and CEO of a startup, I want my employees to stay with us for a long time and have faith during the inevitable rough patches. There's a balance here. Every startup faces challenges, and if you're quick to jump ship you might give up a lot of upside. As a personal anecdote, my first startup was given an acquisition offer by Facebook in 2007. We declined in favor of taking another deal because Facebook seemed like a chaotic organization at the time with a lot of executive turnover and the conventional wisdom said it was overvalued. In hindsight, it may not have been a bad decision to join and stick through a temporarily turbulent time. **Predicting startup success is hard, and even professional investors fail at it most of the time. My advice here is intended to make you think more like them, in the hope that you'll build a portfolio of experiences over time that will maximize both your learning and your earnings. 
Good luck!** _Thanks to Elad Gil, Ammon Bartram and Charlie Treichler for reading early drafts of this essay._
harjeet
75,781
What did you learn today?
Hello dev folks, hope the source is with you. Ok so I was thinking since a career in software enginee...
0
2019-01-16T08:01:32
https://dev.to/luqman10/what-did-you-learn-today-4b43
whatilearnttoday
---
title: What did you learn today?
published: true
description:
tags: whatilearnttoday
---

Hello dev folks, hope the source is with you.

Ok so I was thinking: since a career in software engineering requires lifelong learning, why don't we share the new stuff we learn every day using #whatilearnttoday? It can be anything from a feature in a language to non-technical skills that can be applied to our field. What you share might save someone's life, you know?
luqman10
76,012
Learning React and Typescript - asking for advice
I want to try React and Typescript. Is there a chatroom I can ask question about both or do I need tw...
0
2019-01-17T04:16:09
https://dev.to/oren/learning-react-and-typescript---asking-for-advice-1hie
react, typescript
---
title: Learning React and Typescript - asking for advice
published: true
description:
tags: react, typescript
---

I want to try React and Typescript. Is there a chatroom where I can ask questions about both, or do I need two separate chats?

I tried using Discord but it asks me to claim my account in order to post a message, and I don't know which account I should claim (:

Also, can someone refer me to a good example of React and TypeScript? Maybe this repo?
https://github.com/Lemoncode/react-typescript-samples

Thanks!
oren
76,073
Combining multiple forms in Flask-WTForms but validating independently
How to create a Flask form made up of subforms and then to validate and save just those subforms.
0
2019-01-17T12:07:04
https://blog.whiteoctober.co.uk/2019/01/17/combining-multiple-forms-in-flask-wtforms-but-validating-independently/
flask, wtforms, python
---
title: Combining multiple forms in Flask-WTForms but validating independently
published: true
description: How to create a Flask form made up of subforms and then to validate and save just those subforms.
tags: flask, wtforms, python
canonical_url: https://blog.whiteoctober.co.uk/2019/01/17/combining-multiple-forms-in-flask-wtforms-but-validating-independently/
---

# Introduction

We have a Flask project coming up at [White October](https://www.whiteoctober.co.uk/) which will include a user profile form. This form can be considered as one big form or as multiple smaller forms, and can also be submitted as a whole or section-by-section. As part of planning for this work, we did a proof-of-concept around combining multiple subforms together in Flask-WTForms and validating them.

Note that this is a slightly different pattern to "nested forms". Nested forms are often used for dynamic repeated elements - like adding multiple addresses to a profile using a single nested address form repeatedly. But our use case was several forms combined together in a non-dynamic way and potentially processed independently. This article also doesn't consider the situation of having multiple _separate_ HTML forms on one page.

This document explains the key things you need to know to combine forms together in Flask-WTF, whether you're using AJAX or plain postback.

# Subforms

The first thing to know is how to combine multiple WTForms forms into one. For that, use the [FormField](http://wtforms.simplecodes.com/docs/1.0.1/fields.html#field-enclosures) field type (aka "field enclosure").
Here's an example:

```
from flask_wtf import FlaskForm
import wtforms


class AboutYouForm(FlaskForm):
    first_name = wtforms.StringField(
        label="First name",
        validators=[wtforms.validators.DataRequired()]
    )
    last_name = wtforms.StringField(label="Last name")


class ContactDetailsForm(FlaskForm):
    address_1 = wtforms.StringField(
        label="Address 1",
        validators=[wtforms.validators.DataRequired()]
    )
    address_2 = wtforms.StringField(label="Address 2")


class GiantForm(FlaskForm):
    about_you = wtforms.FormField(AboutYouForm)
    contact_details = wtforms.FormField(ContactDetailsForm)
```

As you can see, the third form here is made by combining the first two.

You can render these subforms just like any other form field:

```
{{ form.about_you }}
```

(Form rendering is discussed in more detail below.)

# Validating a subform

Once we'd combined our forms, the second thing we wanted to prove was that they could be validated independently.

Normally, you'd validate a (whole) form like this:

```
if form.validate_on_submit():
    # do something
```

(`validate_on_submit` returns `True` if the form has been submitted and is valid.)

It turns out that you can validate an individual form field quite easily. For our `about_you` field (which is a subform), it just looks like this:

```
form.about_you.validate(form)
```

## Determining what to validate

We added multiple submit buttons to the form so that either individual subforms or the whole thing could be processed.
If you give the submit buttons different names, you can easily check which one was pressed and validate and save appropriately (make sure you only save the data you've validated):

```
<input type="submit" name="submit-about-you" value="Just submit About You subform">
<input type="submit" name="submit-whole-form" value="Submit whole form">
```

And then:

```
if "submit-about-you" in request.form and form.about_you.validate(form):
    # save About You data here
elif "submit-whole-form" in request.form and form.validate():
    # save all data here
```

If you have one route method handling both HTTP GET and POST methods, there's no need to explicitly check whether this is a postback before running the above checks - neither button will be in `request.form` if it's not a POST.

### Alternative approaches

You could alternatively give both submit buttons the same name and differentiate on value. However, this means that changes to the user-facing wording on your buttons (as this is their value property) may break the if-statements in your code, which isn't ideal - which is why different names are our recommended approach.

If you want to include your submit buttons in your WTForms form classes themselves rather than hard-coding the HTML, you can check which one was submitted by checking the relevant field's `data` property - see [here](https://stackoverflow.com/a/35776874/328817) for a small worked example of that.

### Gotcha: Browser-based validation and multiple submit buttons

There's one snag you'll hit if you're using multiple submit buttons to validate/save data from just one subform of a larger form. If your form fields have the `required` property set (which WTForms will do if you use the `DataRequired` validator, for example), then the browser will stop you submitting the form until _all_ required fields are filled in - it doesn't know that you're only concerned with part of the form (since this partial-submission is implemented server-side).
Therefore, assuming that you want to keep using the `required` property (which you should), you'll need to add some JavaScript to dynamically alter the form field properties on submission. This is not a problem if you're using AJAX rather than postbacks for your form submissions; see below for how to do that.

# Rendering subforms in a template

_The examples in this section use explicit field names. In practice, you'll want to create a [field-rendering macro](http://flask.pocoo.org/docs/1.0/patterns/wtforms/#forms-in-templates) to which you can pass each form field rather than repeating this code for every form field you have. That link also shows how to render a field's label and widget separately, which gives you more control over your markup._

As mentioned above, the subforms can be rendered with a single line, just like any other field:

```
{{ form.about_you }}
```

If you want to render fields from your subforms individually, it'll look something like this:

```
<label for="{{ form.about_you.first_name.id }}">{{ form.about_you.first_name.label }}</label>
{{ form.about_you.first_name }}
```

As you see, you can't do single-line rendering of form fields and their labels for individual fields within subforms - you have to explicitly render the label.

## Displaying subform errors

For a normal form field, you can display associated errors by iterating over the `errors` property like this:

```
{% if form.your_field_name.errors %}
    <ul class=errors>
    {% for error in form.your_field_name.errors %}
        <li>{{ error }}</li>
    {% endfor %}
    </ul>
{% endif %}
```

In this case, `errors` is just a **list** of error strings for the field.

For a subform where you're using the FormField field type, however, `errors` is a **dictionary** mapping field names to lists of errors. For example:

```
{
    'first_name': ['This field is required.'],
    'last_name': ['This field is required.'],
}
```

Therefore, iterating over it in your template is more complicated.
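To make the list-vs-dictionary difference concrete, here's the same error data handled in plain Python rather than in a template (the field names match the example form above):

```python
# A normal field's errors: a flat list of message strings.
field_errors = ['This field is required.']

# A FormField subform's errors: a dict mapping field names to lists of strings.
subform_errors = {
    'first_name': ['This field is required.'],
    'last_name': ['This field is required.'],
}

# Flattening the dict into display lines, as a template loop over .items() does:
lines = [f"{name}: {', '.join(msgs)}" for name, msgs in subform_errors.items()]
print(lines)
# → ['first_name: This field is required.', 'last_name: This field is required.']
```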
Here's an example which displays errors for all fields in a subform (notice the use of the <code>[items](https://docs.python.org/3/tutorial/datastructures.html#looping-techniques)</code> method):

```
{% if form.about_you.errors %}
    <ul class="errors">
    {% for error_field, errors in form.about_you.errors.items() %}
        <li>{{ error_field }}: {{ errors|join(', ') }}</li>
    {% endfor %}
    </ul>
{% endif %}
```

# Doing it with AJAX

So far, we've considered the case of validating subforms when data is submitted via a full-page postback. However, you're likely to want to do subform validation using AJAX, asynchronously posting form data back to the Flask application using JavaScript.

There's already an excellent article by Anthony Plunkett entitled "[Posting a WTForm via AJAX with Flask](https://medium.com/@doobeh/posting-a-wtform-via-ajax-with-flask-b977782edeee)", which contains almost everything you need to know in order to do this. In this article, therefore, I'll just finish by elaborating on the one problem you'll have if doing this with multiple submit buttons - determining the submit button pressed.

## Determining the submit button pressed

When posting data back to the server with JavaScript, you're likely to use a method like jQuery's [serialize](https://api.jquery.com/serialize/). However, the data produced by this method doesn't include details of the button clicked to submit the form.

There are [various ways](https://stackoverflow.com/questions/4007942/jquery-serializearray-doesnt-include-the-submit-button-that-was-clicked) you can work around this limitation. The approach I found most helpful was to dynamically add a hidden field to the form with the same name and value as the submit button (see [here](https://stackoverflow.com/a/11271850/328817)). That way, Python code like `if "submit-about-you" in request.form` (see above) can remain unchanged whether you're using AJAX or postbacks.
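To close, the server-side dispatch used throughout this article is framework-independent at heart: check which button name appears in the posted data, then run the matching validator. Here's a minimal sketch in plain Python (the `dispatch` function and the lambda validators are hypothetical stand-ins for the Flask route and the WTForms objects):

```python
def dispatch(form_data, validate_about_you, validate_whole_form):
    """Pick which validation/save path to run based on the submit button present."""
    if "submit-about-you" in form_data and validate_about_you():
        return "saved about-you"
    elif "submit-whole-form" in form_data and validate_whole_form():
        return "saved whole form"
    return "nothing saved"


# Each simulated postback contains the name of the button that was clicked,
# just like request.form does after a real submission.
print(dispatch({"submit-about-you": "1"}, lambda: True, lambda: False))   # → saved about-you
print(dispatch({"submit-whole-form": "1"}, lambda: False, lambda: True))  # → saved whole form
print(dispatch({}, lambda: True, lambda: True))  # → nothing saved (a GET has no button)
```

Note that a failed validation falls through to "nothing saved" too, which mirrors the article's pattern of only saving data that has been validated.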
sampart
76,173
Changelog: JSFiddle Liquid Tags Now Live
We have another new tag, the JSFiddle tag! JSFiddle is another place where you...
0
2019-01-17T21:16:10
https://dev.to/link2twenty/changelog-jsfiddle-liquid-tags-now-live-1d1b
meta, changelog
We have another new tag, the JSFiddle tag!

[JSFiddle](https://jsfiddle.net) is another place where you can save and share your code online. It's a great place to make quick demos to then show off in your blog posts.

The standard look is like so

```
{% jsfiddle https://jsfiddle.net/link2twenty/7m6ej0m9/ %}
```

{% jsfiddle https://jsfiddle.net/link2twenty/7m6ej0m9/ %}

but you can also choose which tabs to include and in which order they should appear. The tabs you can have are `js,html,css,result`.

```
{% jsfiddle https://jsfiddle.net/link2twenty/v2kx9jcd/ result,html,css %}
```

{% jsfiddle https://jsfiddle.net/link2twenty/v2kx9jcd/ result,html,css %}

Or even just have the results panel

```
{% jsfiddle https://jsfiddle.net/link2twenty/xyycj5ta/ result %}
```

{% jsfiddle https://jsfiddle.net/link2twenty/xyycj5ta/ result %}
link2twenty
77,963
Feature Flag pattern in java
In this article, we will briefly introduce the feature flag pattern in java to al...
0
2019-05-09T08:10:27
https://apiumhub.com/tech-blog-barcelona/feature-flag-pattern/
agilewebandappde, programming, softwaredeveloper
---
title: Feature Flag pattern in java
published: true
tags: Agile web and app de,programming,Software developer
canonical_url: https://apiumhub.com/tech-blog-barcelona/feature-flag-pattern/
---

In this article, we will briefly introduce the feature flag pattern in Java, which allows shipping features to production that are not yet finalized.

It is very common to see projects that use the Feature Branch mechanism in order to keep all the development of a feature isolated; once it is finalized, it can be integrated into the release flow and a version can be generated.

Without going into a debate about the advantages and disadvantages of Feature Branch versus Master Only, the Feature Branch strategy has a very expensive handicap: feedback. If the feature takes, for example, a month to be finalized, for that whole time it cannot be merged with the rest of the code or tested alongside the rest of the functionality.

Yes, it is true that with Feature Branch we can also get "fast" feedback, but for this we would need to adjust our pipeline so that it is aware of any branch that is created and can spin up a completely new environment per branch, with the cost that this implies - not only in money, but in preparing environments, data, etc. And even then, we would not be testing the integration of that feature with the rest.

Imagine instead that we can integrate our code and put it in the release cycle of our application even without having finished the feature. One of the most obvious advantages is quick feedback, both in integrating our code with the rest of the application and in minimizing the dreaded future merge conflicts.

## Feature flag pattern to the rescue!

Feature Flag, also called Feature Toggle, is the perfect pattern for applying [continuous integration](https://dev.to/apium_hub/back-to-the-roots-towards-true-continuous-integration-part-one-14m7) to our long-running features.
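The essence of the pattern fits in a few lines; here is a deliberately language-agnostic sketch in Python (the class and names are purely illustrative - they are not FF4J's API, which the rest of the article covers):

```python
# A minimal feature-flag store: feature name -> enabled state.
# In a real setup this would live in a database or a service such as FF4J.
class FeatureStore:
    def __init__(self):
        self.flags = {}

    def create_feature(self, name, enabled=False):
        self.flags[name] = enabled

    def is_enabled(self, name):
        # An unknown feature is treated as disabled rather than as an error.
        return self.flags.get(name, False)


store = FeatureStore()
store.create_feature("AwesomeFeature", enabled=True)

# The runtime guard around the long-running feature's code path:
if store.is_enabled("AwesomeFeature"):
    print("This is the awesome Feature")
else:
    print("Not available")
```

Flipping the stored state flips the behavior at runtime, with no redeploy of the guarded code.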
Through a condition in our code and a configuration file, we can enable or disable functionality of our application at runtime, and once the feature is finished we can activate it for good. The simplest implementation would be a table in our database with the feature name and its status (active or not), and in our code an `if (active) show() else notShow()`.

But this pattern goes much further, allowing a feature to be activated for a group of users, a country, a user's browser, a behavior, or randomly. Each of these uses has a different name - A/B testing, Canary Release... - but they are all about the same thing: exposing the feature or not depending on conditions.

## Feature Flag pattern for Java

If our technological stack is made up of Java or any JVM language, we are lucky. [FF4J](https://ff4j.github.io/) is an implementation of the Feature Toggle pattern in all its glory.

With hardly any effort we get integration with different types of data sources to store our features and their status. It also gives us an audit of how many times the features have been applied - by total number, by user, by origin. It has different decision strategies - WhiteList, BlackList, ClientFilter, ServerFilter, Weighting, etc. - and it allows us to create our own strategy.

And all this is configurable through a web console or through an API, which greatly facilitates the creation and maintenance of the features.

## Show me the code!

Although in the project's repository there are many examples connecting to different feature stores (JDBC, Cassandra, Mongo, etc.), I found it interesting to create an example that does not exist there. Let's suppose we have a service for the configuration of all the features, and other services connect to it over HTTP to consult the active features.

In this example, I created a multi-module project with two services:

## Ffconsole

This service only has the administration console to create the features and a plugin to expose it through REST.
To simplify the example, storage is configured to be in memory and without any security (do not do this in production environments ![😜](https://s.w.org/images/core/emoji/2.2.1/72x72/1f61c.png)).

The only configuration we have is in the class FF4jServletConfiguration.kt, in which we register the FF4j bean, the servlet and the web console, and we annotate the class so that it imports the beans necessary to expose the API. Also to simplify the example, I have the feature already registered; normally this would be done through the administration panel.

```
@Bean
fun getFF4j(): FF4j = FF4j().apply {
    createFeature("AwesomeFeature", true)
}

@Bean
fun getFF4JServlet(): FF4jDispatcherServlet = FF4jDispatcherServlet().apply {
    ff4j = getFF4j()
}

@Bean
fun ff4jDispatcherServletRegistrationBean(): ServletRegistrationBean =
    ServletRegistrationBean(getFF4JServlet(), "/ff4j-web-console/*")
```

## My-awesome-web

A Spring Boot application with a controller that applies one logic or another depending on whether the feature is active or not. This application connects to ffconsole using HTTP. The configuration is as simple as telling it what the storage is:

```
@Bean
fun ff4j(): FF4j = FF4j().apply {
    featureStore = FeatureStoreHttp("http://ff4j-console:8080/api/ff4j")
}
```

And in the controller we have an endpoint which will return an HTTP 200 in case the feature is active and an HTTP 404 in case it is not:

```
@GetMapping("/")
fun home(): ResponseEntity<String> = ff.operateWith("AwesomeFeature",
    ResponseEntity("Not available", HttpStatus.NOT_FOUND),
    ResponseEntity("This is the awesome Feature", HttpStatus.OK))
```

This is the only place where, once the feature has been implemented and tested long enough in the production environment, the code should be cleaned up so the feature applies always, without depending on whether the flag is active or not.
Here is my only snag with this library: the API for working with the features lacks some functionality - for example, if a feature does not exist it throws a runtime exception - and dealing with the features is rather imperative. So I have taken the liberty, for this example, of making a small wrapper to make it a little more "functional", exposing three methods to work with the features:

```
fun enabled(featureID: String): Boolean = getFeature(featureID)
    .map { it.isEnable }
    .orElse(false)

fun getFeature(featureID: String): Optional<Feature> = try {
    Optional.of(ff4j.getFeature(featureID))
} catch (nf: FeatureNotFoundException) {
    Optional.empty()
}

fun <T> operateWith(featureId: String, nonExistsValue: T, existsValue: T): T =
    getFeature(featureId)
        .filter { it.isEnable }
        .map { existsValue }
        .orElse(nonExistsValue)
```

The project is assembled with Docker Compose: a `make all` generates the artifacts and starts two containers for testing. The URL of the administration part is http://apiumhub.com:9090/ff4j-web-console/ and the URL for the feature is http://apiumhub.com:8080/.

The whole project is on [github](https://github.com/jcaromiq/ff4j-sample).

## Conclusion: Feature Flag pattern in java

This pattern should not be used as a hammer for all cases. The best strategy is always to split the functionalities/user stories into smaller ones so they can be integrated as soon as possible, but there are always cases where a feature cannot be delivered until it is 100% finalized, and that is where this pattern applies.
Cleaning up the code once the feature has been deployed and tested for the agreed time is very important for the correct evolution and maintenance of the application.

## Bibliography: Feature Flag pattern in java

- [FeatureToggle](https://martinfowler.com/bliki/FeatureToggle.html)
- [FF4J wiki](https://github.com/ff4j/ff4j/wiki)

If you would like to get more information about the Feature Flag pattern in Java, I highly recommend you subscribe to our monthly newsletter by clicking [here](http://eepurl.com/cC96MY).

## And if you found this article about Feature Flag pattern in Java useful, you might like…

- [Be more functional in Java with Vavr](https://dev.to/apium_hub/be-more-functional-in-java-with-vavr-5b4i)
- [Scala generics I: Scala type bounds](https://dev.to/apium_hub/scala-generics-i--scala-type-bounds-38)
- [Scala generics II: covariance and contravariance](https://dev.to/apium_hub/scala-generics-ii-covariance-and-contravariance-in-generics-5dib)
- [Scala generics III: Generalized type constraints](https://dev.to/apium_hub/scala-generics-iii-generalized-type-constraints-58km)
- [BDD: user interface testing](https://apiumhub.com/tech-blog-barcelona/user-interface-testing/)
- [F-bound over a generic type in Scala](https://apiumhub.com/tech-blog-barcelona/f-bound-scala-generics/)
- [Microservices vs Monolithic architecture](https://apiumhub.com/tech-blog-barcelona/microservices-vs-monolithic-architecture/)
- [“Almost-infinite” scalability](https://apiumhub.com/tech-blog-barcelona/almost-infinite-scalability/)
- [iOS Objective-C app: successful case study](https://dev.to/apium_hub/protected-ios-objective-c-app-cornerjob-successfull-case-study-89e)
- [Mobile app development trends of the year](https://dev.to/apium_hub/mobile-app-development-trends-of-the-year)
- [Banco Falabella wearable case study](https://apiumhub.com/tech-blog-barcelona/banco-falabella-wearable-ios-android/)
- [Mobile development projects](https://apiumhub.com/software-projects-barcelona/)
- [Viper architecture advantages for iOS apps](https://apiumhub.com/tech-blog-barcelona/viper-architecture/)
- [Why Kotlin?](https://dev.to/apium_hub/why-kotlin-language-android-why-did-google-choose-kotlin--639)
- [Software architecture meetups](https://dev.to/apium_hub/apiumhub-software-architecture-meetups-in-barcelona-31df)
- [Pure MVP in Android](https://dev.to/apium_hub/pure-model-view-presenter-in-android-1736)

The post [Feature Flag pattern in java](https://apiumhub.com/tech-blog-barcelona/feature-flag-pattern/) appeared first on [Apiumhub](https://apiumhub.com).
apium_hub
76,477
Styling Form With Different States And Storybook
There are a lot of different and efficient ways how to improve web app development speed while implementing and testing new features. One of them is to be able to reuse UI components. To develop UI elements in isolation and then apply them to the project, I have tried and learned Storybook.
0
2019-01-19T13:19:54
https://dev.to/ilonacodes/styling-form-with-different-states-and-storybook-2foj
webdev, showdev, javascript, beginners
---
title: Styling Form With Different States And Storybook
published: true
cover_image: https://thepracticaldev.s3.amazonaws.com/i/lw91bbnydr1bfoqyaoh7.jpg
description: There are a lot of different and efficient ways how to improve web app development speed while implementing and testing new features. One of them is to be able to reuse UI components. To develop UI elements in isolation and then apply them to the project, I have tried and learned Storybook.
tags: webdev, showdev, javascript, beginners
---

There are a lot of different and efficient ways to improve web app development speed while implementing and testing new features. One of them is being able to reuse UI components. To develop UI elements in isolation and then apply them to the project, I have tried and learned [Storybook](https://storybook.js.org/).

The nice sides of this library are that:

- There are integrations for different JavaScript libraries and frameworks
- It doesn’t change the core functionality and structure of the web application
- It’s testable
- It also supports further add-ons (to integrate Storybook with the development flow) and decorators (to customize components so that they work correctly in the app)

How to add and run the Storybook playground in your project, depending on the development platform, is described in its official documentation [here](https://storybook.js.org/basics/introduction/).

If you have read some of my blog posts, you may have noticed that my specialization is React web applications. The next example is also implemented with React ⚛️.

After you have finished the setup, let’s add a few stories to the Storybook. For example, we have a simple sign up form with a title, a status message, two input fields with corresponding labels, and a submit button. Let’s create this simple sign up form, then mark up and style its elements in different states.
First we need to add the `<SignUpForm />` component and import **sign-up-form.css** with the corresponding styles:

```css
.form {
  font-family: Roboto, sans-serif;
  margin: 0;
  padding: 0;
}

.form__title {
  letter-spacing: 1px;
  font-weight: 600;
}

.form__status {
  color: #666;
  margin-bottom: 20px;
}

.form__label {
  font-size: .8em;
  font-weight: bold;
}

.form__input {
  width: 200px;
  height: 25px;
  padding-left: 10px;
  border-radius: 10px;
  border: 1px solid darkgrey;
}

.form__button {
  width: 100px;
  height: 25px;
  border-radius: 10px;
  color: white;
  background-color: limegreen;
  border: 0;
  cursor: pointer;
  font-weight: bold;
}

.form__button--submitting {
  background-color: darkgrey;
}

.form__button--submitted {
  background-color: limegreen;
}
```

Our form has three states:

1. initial: the form is displayed by default, awaiting user input
2. submitting: the HTTP request is performed after submitting the form
3. submitted: the API call has finished and the server responded with success

Depending on the form status, some form elements will be shown/hidden or styled differently:

- while submitting, the submit button is disabled and grey
- once the form is submitted, the user is notified about the successful sign up through a message suggesting they sign in

Here is the full implementation of the `<SignUpForm />` with the injected props from `<SignUpContainer />`:

```jsx
// SignUpForm.js
import React from 'react';
import './sign-up-form.css';

export const SignUpForm = ({onSubmit, state}) => {
  const submitting = state === 'submitting';
  const submitted = state === 'submitted';
  const buttonState = submitting ? 'form__button--submitting' : 'form__button--submitted';

  return (
    <form className="form" onSubmit={onSubmit}>
      <h1 className="form__title">Sign Up</h1>
      {
        submitted ?
          <div className="form__status">
            You have been signed up successfully. Sign in?
          </div> : null
      }
      <label htmlFor="name" className="form__label">Name</label>
      <p>
        <input type="text" id="name" placeholder="Name" disabled={submitting} className="form__input" />
      </p>
      <label htmlFor="email" className="form__label">Email</label>
      <p>
        <input type="email" id="email" disabled={submitting} placeholder="Email" className="form__input" />
      </p>
      <p>
        <button disabled={submitting} className={`form__button ${buttonState}`}>
          Submit
        </button>
      </p>
    </form>
  );
};
```

The `<SignUpContainer />` component is the parent container component that manipulates the sign up form through its states and the methods acting on them. We will omit this component, as it is not related to the Storybook-based style guide.

The next step is to write stories for Storybook. That means writing specific functions that each describe a specific state of the form UI:

```jsx
// ./stories/index.js
import React from 'react';
import {storiesOf} from '@storybook/react';
import {SignUpForm} from "../SignUpForm";

const noOp = (e) => {
  e.preventDefault();
};

storiesOf('Registration Form', module)
  .add('default', () => (
    <div><SignUpForm state="idle" onSubmit={noOp}/></div>
  ))
  .add('submitting', () => (
    <div><SignUpForm state="submitting" onSubmit={noOp}/></div>
  ))
  .add('submitted', () => (
    <div><SignUpForm state="submitted" onSubmit={noOp}/></div>
  ));
```

And lastly, load all the stories in Storybook:

```jsx
// .storybook/config.js
import { configure } from '@storybook/react';

function loadStories() {
  require('../src/stories');
}

configure(loadStories, module);
```

And now the sign up form is entirely “storybooked.” Run your local server to check the result in Storybook.
My variant is below:

![Storybook: default](https://thepracticaldev.s3.amazonaws.com/i/i7l5rqesfdn4fkn0bv9r.png)

![Storybook: submitting](https://thepracticaldev.s3.amazonaws.com/i/lv6lofwzllov7iagzb1c.png)

![Storybook: submitted](https://thepracticaldev.s3.amazonaws.com/i/wdg3bxopjwpsjn80pje8.png)

I hope you are now curious to try out Storybook with React or another library to create a style guide for your app. Just leave a comment and share how you implement a style guide for your app.

**Thank you for reading!**

Code your best 👩‍💻👨‍💻

---

_Photo by Goran Ivos on Unsplash_
ilonacodes
76,646
Friday Blast #54
Timeouts and cancellations for humans (2018) - a precursor to the Trio article from last week, this o...
5,293
2019-01-20T14:43:28
https://horia141.com/friday-blast-54.html
fridayblast, links, post
---
title: Friday Blast #54
published: true
tags: friday_blast,links,post
canonical_url: https://horia141.com/friday-blast-54.html
series: "Friday Blast"
---

[Timeouts and cancellations for humans (2018)](https://vorpus.org/blog/timeouts-and-cancellation-for-humans/) - a precursor to the Trio article from [last week](https://dev.to/horia141/friday-blast-53-3ffc-temp-slug-3521549), this one deals with a more structured way to handle timeouts and cancellations in general. Like in that one, the notion of scope is married with the _cancel token_ of C# fame to get something really interesting. This one and the previous article have actually made me yearn to try some of that scope-based concurrency. I've also long had my suspicions that some of the ways in which timeouts are handled are not so great, and end up with additive timeouts if not downright incorrect code.

[50 years in tech: the Exxon delusion (2018)](https://mondaynote.com/50-years-intech-part-4-the-exxon-delusion-d92ada84f6e) - war stories from somebody who has almost twice as much tech experience as I have lived. Interesting that even in the 70s there was the notion of _data being the new oil_. Things are so fast in the tech industry; even our cycles are quick.

[Reinforcement learning - bandit problems (2018)](https://oneraynyday.github.io/ml/2018/05/03/Reinforcement-Learning-Bandit/) - I don't think folks'll get that much from this without some prior "bandits context", but for me it was a refresher. A nice tidbit I've wanted to write about for some time, which this article mentions: one can think of "A/B testing", "bandit problems" and "reinforcement learning" as very related techniques, in increasing order of complexity. For the first one, everything is static and predetermined wrt the actions that must be taken. For the bandit case, we're allowed to learn globally about which actions are good and which are not and adjust things accordingly. For the reinforcement learning case, we can learn _locally_ (that is, consider the state in which the system is) and work from there.

[Computing SVDs and pseudoinverses (2018)](https://www.johndcook.com/blog/2018/05/05/svd/) - the SVD is a workhorse of linear algebra, and a nice theoretical thing to boot - how many "things" apply to all matrices, for example? The pseudoinverse has the same relationship to the inverse as the SVD has to diagonalization. It pops up a lot in practice as well - in the same setups: approximations of best solutions, etc.

[Why we built Docker? (2013)](https://www.youtube.com/watch?v=3N3n9FzebAA) - back when Docker was just 3 months old, its creator gave this very cogent talk about what sort of issues Docker would solve and what they hoped it would do to the tech ecosystem. In the end it did a bunch more than that, but somehow it feels like _it's still not there yet_. Just 5 years ago.
horia141
76,966
Setting Up Vagrant for a Rails Application: Part 2
Configuring Vagrant for Rails Applications
0
2019-01-24T04:54:17
https://dev.to/denisepen/setting-up-vagrant-for-a-rails-application-part-2--2gkm
vagrant, devops, rails, codenewbie
---
title: Setting Up Vagrant for a Rails Application: Part 2
published: true
description: Configuring Vagrant for Rails Applications
tags: #vagrant, #devops, #rails, #codenewbie
---

In a previous [post](https://dev.to/denisepen/setting-up-vagrant-for-a-rails-application-kgc) we configured the Vagrant File in preparation for using the vagrant box to host a rails application. In this post I will walk through how to configure the vagrant box and add your rails application to it.

After the vagrant file has been created you can simply run *vagrant up* to start it. To ssh into it simply use the command *vagrant ssh* and you will see that you are now in your vagrant box. We are now in a fully functioning yet isolated operating system.

### Configuring The Vagrant Box

Since I plan to run a rails application in this environment I need to add everything needed to run rails: git, rvm, ruby, rails, nodejs, and Postgresql. I took my instructions for setting up git, rvm, ruby, and rails from this [article](http://tutorials.jumpstartlab.com/topics/vagrant_setup.html).
Install git: ``` sudo apt-get update sudo apt-get install git ``` Install curl: ``` sudo apt-get install curl \curl -sSL https://get.rvm.io | bash ``` Load rvm: ``` source /home/vagrant/.rvm/scripts/rvm rvm requirements ``` Install whatever version of Ruby you choose: ``` rvm install 2.1 ``` Set your Ruby default version: ``` rvm use 2.1 --default ``` Verify your Ruby version: ``` ruby -v ``` Install rails: ``` gem install rails ``` Check your rails version: ``` rails -v ``` Install bundler: ``` gem install bundler ``` Bundle your gems: ``` bundle install ``` Install Node.js: ``` sudo apt-get install nodejs ``` Install Your Database(s) - I used Postgresql ``` $ sudo apt-get install postgresql libpq-dev $ sudo su - postgres $ psql postgres=# CREATE USER vagrant; postgres=# CREATE DATABASE your_database_name; postgres=# GRANT ALL PRIVILEGES ON DATABASE "your_database_name" to vagrant; postgres=# \q $ exit ``` The database(s) that you created above should have the same name as the database(s) used in your rails application. Database names are set in the database.yml file. For example, my application's database.yml file has the following setup: ``` default: &default adapter: postgresql pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %> development: adapter: postgresql username: vagrant database: main_rest_development test: adapter: postgresql username: vagrant database: main_rest_test production: <<: *default database: main_rest_production ``` Please keep in mind that the **alignment (indentation) of your text** in this file must be perfect or it won't be read correctly - YAML is whitespace-sensitive! Once all of this is done your vagrant environment is configured and ready to run your application. ##Creating An Application From Scratch If you will be creating an application from scratch you will need to do that on your local machine. ###Performed on your local machine - Not Vagrant You should open a new tab in your terminal to the same directory where you created your Vagrant file. 
To create a new application simply run *rails new your-app-name --database=postgresql*. This will create a new application in this directory and the database should be configured with the names and permissions you assigned above. ###Performed in Vagrant To run this application you must go into the other terminal tab where the vagrant box is running. Move (cd) into the '/vagrant_file' directory (or whichever directory you designated as the synced folder in the Vagrant file). Move (cd) into the new app's directory. Run *'bundle install'*. Then start your rails server using *rails s*. Your application should now be running on the port or url you designated. ##Using An Existing Application This must also be performed on the local machine. ###Performed on your local machine - Not Vagrant You should open a new tab in your terminal to the same directory where you created your Vagrant file. If you have an existing application that you want to use you can simply clone the code from github to your local machine: ``` git clone your_existing_app ``` You will need to update the database.yml file to have the correct database username and database names. ###Performed in Vagrant To run this application you must go into the other terminal tab where the vagrant box is running. Move (cd) into the '/vagrant_file' directory (or whichever directory you designated as the synced folder in the Vagrant file). Move (cd) into the new app's directory. Run *'bundle install'*. Then start your rails server using *rails s*. Your application should now be running on the port or url you designated. When you are finished running your application you can stop it. Move into the Vagrant box's root directory and type *exit*. This will put you back into your local machine. On your local machine type *vagrant destroy* and your box is now destroyed. It is quite a lot of work to configure the Vagrant box to run your application. 
However, once configured, this box can be used for any other rails application that needs the same settings in the future. It can also be copied and moved to other systems if necessary. Vagrant provides total isolation, which will give your apps much-needed stability.
denisepen
77,209
Collecting behavioural data with Segment, Mixpanel and Google Analytics
The logistics of collecting and analysing behavioural event data can be tricky ...
0
2019-01-22T12:09:02
http://blog.tdwright.co.uk/2019/01/22/collecting-behavioural-data-with-segment-mixpanel-and-google-analytics/
behaviouraldata, googleanalytics, mixpanel, segment
--- title: Collecting behavioural data with Segment, Mixpanel and Google Analytics published: true tags: behavioural data,Google Analytics,Mixpanel,Segment canonical_url: http://blog.tdwright.co.uk/2019/01/22/collecting-behavioural-data-with-segment-mixpanel-and-google-analytics/ --- The logistics of collecting and analysing behavioural event data can be tricky to get right. There are loads of services that offer to help, but now you’ve got two problems, right? In the second instalment of my series on behaviour data, I explain what tools I use and why. In [my first post on this subject](https://dev.to/tdwright/understanding-your-users--an-introduction-to-behavioural-data-2f1g), I tried to keep things agnostic towards technologies and platforms. In this post I’m going to start drilling into some specifics, using my experiences at work as a case study. For context, I work at [HeadUp Labs](https://www.headuplabs.com/). We make a mobile app that connects to wearable devices and helps users to make sense of their health data. It’s proving to be a popular app (go check it out!), so I feel like we’ve had the opportunity to put the tools we use through their paces. ## Our behavioural data toolchain Our app is a React Native project that talks to a .NET (Core) back end. Most behavioural data comes in via the front end, but because our users connect wearable devices that sync in the background, we also trigger behavioural events from the back end. To collate and distribute data from both parts of our application, we use [Segment](https://segment.com/). And this is all Segment does – receive data from one or more “sources” and send it on to one or more “destinations”. Fiendishly simple! In our case, there are two sources: 1. Our front end uses an npm package written for React Native. There are several out there, but [we used this one](https://www.npmjs.com/package/analytics-react-native). 
(NB: I think if we were to implement this from scratch, we’d probably opt to use [the official library](https://github.com/segmentio/analytics-react-native).) 2. Our back end (specifically, some of our Azure Functions, written in .NET Core) uses the [official .NET SDK](https://segment.com/docs/sources/server/net/). And we currently have two destinations: 1. [Mixpanel](https://mixpanel.com/), which we’ve used since just before we launched our app 2. [Google Analytics](https://analytics.google.com/), which we’ve added more recently Using Segment as an intermediary means we don’t have to worry about coding against Mixpanel’s SDK or the API for Google Analytics. We just have one service to worry about. Importantly for a single point of failure, the Segment API is [very reliable](https://status.segment.com/uptime). As a bonus, the Segment web app allows you to easily view your event stream in real-time and to debug individual events, which is super useful when you’re trying to figure out why something isn’t working. In theory, running everything through Segment also means that our sources and destinations are fully decoupled. Since all the configuration happens in Segment’s web app, adding a destination service should not require any code changes. (As we’ll see in one of the later posts in this series, the practical reality doesn’t _quite_ live up to the theoretical promise, but it does come close.) ![](https://i1.wp.com/blog.tdwright.co.uk/wp-content/uploads/2019/01/BehaviouralDataToolchain.png?resize=620%2C349)<figcaption>We use Segment to collect data from the front- and back-ends of our app and distribute it to Mixpanel and Google Analytics.</figcaption> We use Mixpanel and Google Analytics to achieve slightly different sorts of things. Some questions are more easily answered in one or the other, whilst Mixpanel also has features that stray into the interventional (e.g. triggered emails). Let’s look at them each in turn. 
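The source→hub→destination model in the diagram above is simple enough to sketch in a few lines. This is a toy illustration of the pattern only (nothing like Segment's actual implementation): events arriving from any source are fanned out to every configured destination.

```python
# Toy sketch of the fan-in/fan-out pattern: one hub receives events from
# any source and forwards each event to every configured destination.
class EventHub:
    def __init__(self):
        self.destinations = []

    def add_destination(self, handler):
        self.destinations.append(handler)

    def track(self, event):
        # Fan out: every destination sees every event.
        for handler in self.destinations:
            handler(event)

mixpanel_events, ga_events = [], []
hub = EventHub()
hub.add_destination(mixpanel_events.append)
hub.add_destination(ga_events.append)
hub.track({"event": "Device Connected"})
print(mixpanel_events == ga_events == [{"event": "Device Connected"}])  # True
```

The decoupling falls out of the structure: adding a destination is one more `add_destination` call, and no source needs to change.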
### Mixpanel The biggest strengths of Mixpanel are its ease of use and (although we didn’t fully realise it initially) its flexibility. Very quickly after setting everything up, we were able to spot some interesting trends in the way people were interacting with our app. The features we use most in Mixpanel are “Funnels” and “Insights”, which are both reporting tools. As the name might suggest, funnels allow you to track what proportion of users progress through a predetermined sequence of events. Insights are more generic reports, which allow you to plot metrics as tables or charts. To give you some idea of what you can do in Mixpanel, we… - Monitor weekly and monthly active user metrics, using the number of unique instances of a collection of events as our measure. - Have a funnel for our user onboarding process, to help us identify the sources of “friction”, where we lose new users. - Track successful and failed attempts by users to connect their wearable device so we can identify if any of those processes need work. _If you read my last post on this topic, you’ll know that I like to consider whether events fall into one of four buckets when I’m deciding whether to track them. Using those criteria, we’d say that the first example uses “pulse” events, the second a series of “KPI” events, and the third a “grit” event. It’s not worth getting hung up on – it’s just something I find useful when I’m thinking about events._ A distinguishing feature of Mixpanel is the ability to trigger interventions based on the data coming in. This could be an email, push notification, or anything that can be triggered by a call to a webhook. We use this a little bit, but (for reasons I’ll discuss in a later post) not as much as we thought we would. ### Google Analytics I’m sure the venerable Google Analytics won’t need as thorough an introduction. If you’ve ever run a website, no doubt you’ll have hooked it up to GA at some point. 
For those of you that haven’t used GA, I’ll do my best to explain. _NB: As I’ll explain in a later post, we’re not using the approach Google recommend for mobile apps. It’s possible that the official way of doing this results in a different set of features, so take this with a pinch of salt!_ GA is oriented primarily towards websites and specifically those that advertise and/or sell things. The distinction between a “page view” and an “event” matters here more than it does in Mixpanel. Additionally, all users are strictly anonymous, being represented instead by their membership of several predefined segmentations. We use Google Analytics for some questions precisely because of these differences. The fact that GA understands geography and can plot our user base on a map is actually a useful thing. And one that’s hard to do in Mixpanel. Likewise, the rigid distinction between page views and events means GA is able to work out “flows”. These map where users come from and go to. Mixpanel’s funnels are similar, but can only ever show a single sequence, which makes them less useful for exploratory work. Another thing GA does really well is the real-time view. Having a live view of how many users are active and what they’re doing is hypnotic and addictive. When we fire off our daily invitation for users to complete a mood check-in, we have been known to gather round and watch the user count skyrocket. It’s a great feeling! ### Summary As I said, we use Mixpanel and Google Analytics in tandem because each has its own strengths and weaknesses. At the heart of things, Mixpanel emphasises flexibility at the expense of having whizzy pre-built reports. Google Analytics makes a lot more assumptions about your app and the data it will produce, which means they are able to do some reports that are brilliant (e.g. user location plotted on a map), but also that some just won’t be a good fit. 
Luckily, using Segment as an intermediary point to collect and distribute our behaviour event data makes it easy to add services like Google Analytics without affecting existing services. If you’re not sure what to use for your analytics, you could do worse than browsing [the list of destinations supported by Segment](https://segment.com/docs/destinations/). For a start, the presence of a service on that list acts, I would argue, as a vote of confidence from a group of very experienced developers (i.e. the Segment team). The other reason why Segment supporting a service as a destination is good is (please excuse the tautology) that it means you can use it with Segment. Obvious, yes, but worth reiterating because Segment definitely deserves your consideration. It’s nifty with just one destination, but the real value is in the flexibility it affords you when the time comes to change your setup. The next few articles in this series dive even deeper into technical territory and are pretty specific to Segment, Mixpanel and Google Analytics. In the next post, I’ll share some tips and observations I’ve picked up in my time working with these tools. In the post after that, I’ll cover some of the more advanced techniques they offer, as well as other related tools in their ecosystems. If you’re not so fussed about these tools in particular, stay tuned for future posts that will return to being technology-agnostic and will cover the sorts of analysis you can do with behavioural data. Whatever tools you decide to use, I hope today’s post has been useful. The aim of the game is to collect the right data, so we can answer the right questions, to make the right decisions about how to make your project the best it can be. It’s an exciting journey and I hope to see you next time. _Do you use Segment? Share your experiences in the comments._
tdwright
77,274
Top 5 DEV Comments from the Past Week
Highlighting some of the best DEV comments from the past week.
0
2019-01-22T15:44:16
https://dev.to/devteam/top-5-dev-comments-from-the-past-week-20o2
bestofdev
--- title: Top 5 DEV Comments from the Past Week published: true description: Highlighting some of the best DEV comments from the past week. tags: bestofdev cover_image: https://res.cloudinary.com/practicaldev/image/fetch/s--7VrAA2ln--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://thepracticaldev.s3.amazonaws.com/i/qmb5wkoeywj06pd7p8ku.png --- This is a weekly roundup of awesome DEV comments that you may have missed. You are welcome and encouraged to boost posts and comments yourself using the **[#bestofdev](/t/bestofdev)** tag. I love a good cathartic rant, and the **[Sh*tpost: can we stop saying "syntactic sugar"?](https://dev.to/jenninat0r/shtpost-can-we-stop-saying-syntactic-sugar-3i4j)** was no exception. @marek hopped in with some useful thoughts on the subject: {% devcomment 8871 %} There was a great conversation where people answered: **[What do you do while waiting for tests to finish running?](https://dev.to/jess/what-do-you-do-while-waiting-for-tests-to-finish-running-5898)**. At the top of the thread was a reply from @sergio who described the difference in test speeds for Rails/Elixir: {% devcomment 86ga %} In a truly great article, **[25 years of coding, and I'm just beginning](https://dev.to/dechamp/25-years-of-coding-and-im-just-beginning-442n)**, @tylerlwsmith empathized and described how they related to the author: {% devcomment 8751 %} In a discussion of Slack's new logo — **[Slack Has A New Face](https://dev.to/forstmeier/slack-has-a-new-face-e1f)** — @objque basically won the thread 🦆🦆🦆🦆: {% devcomment 87gn %} @derekjhopper rounds things off this week with a wonderful comment about how to think about making a contribution to open source. The entire **[Open source contribution for beginners?!](https://dev.to/coreyrodgers95/open-source-contribution-for-beginners-pf7)** article and discussion is worth a read: {% devcomment 87d1 %} See you next week for more great comments ✌
peter
82,839
Things you could do with Mix
Things you could do with Mix
0
2019-02-13T23:50:08
https://dev.to/drumusician/things-you-could-do-with-mix-504i
elixir, mix
--- title: Things you could do with Mix published: true description: Things you could do with Mix tags: elixir, mix --- Originally posted on [the guild](https://www.theguild.nl/things-you-could-do-with-mix/) # Things you could do with Mix ## Part 1: Creating Tasks Recently I gave my very first talk at a conference, Code Beam Lite Amsterdam, and had a great time doing so. So why not share my thoughts in the form of a blog post as well… The subject, Mix, is something that everybody that uses Elixir has definitely used. But you might not have discovered the power (and fun!) this little tool can bring to your workflow. In this first part I’d like to start out simple. We’ll start by creating our very first mix task. Now, starting out with Hello World gets old pretty fast, so we’ll do something a little different. We’ll create a task that will show us the current time. I know, it might not be the greatest wonder in the world, because we could also just look at the time on our computer… But hey, do you want to learn how to create a mix task or not? Ok, let’s get cracking! Most tutorials will show you how to create the task itself as that is very straightforward, but let’s not do that. Let’s write a test first! In order to write our first test, let’s use one of Mix’s best known tasks, mix new, to create our project. ``` mix new clock ``` After that we can create our test file: ``` mkdir -p test/mix/tasks vim test/mix/tasks/time_test.exs ``` And let’s create the boilerplate for our test: ``` defmodule Mix.Tasks.TimeTest do use ExUnit.Case, async: true describe "run/1" do test "it outputs the current time" do ... end end end ``` If we look at the docs for Mix Tasks it has all the info we need to write our test right at the top: _A Mix task can be defined by simply using Mix.Task in a module starting with Mix.Tasks. 
and defining the [`run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1) function_ Here is the boilerplate for our first function: ``` defmodule Mix.Tasks.Time do use Mix.Task def run(_argv) do end end ``` Now it’s time to actually create a test that we can use. So what do we expect from this task? At first let’s start by just outputting the current time as a string to stdout. In our tests it is not very useful to have the task output to stdout. We could of course use capture_io to catch stdout and test the output if it is what we expect. But I recently came across a nice blogpost by Jesse Cooke that points out a much nicer way to test shell output of mix. So basically you can replace the shell that Mix uses with the current process with the [`Mix.shell/1`](https://hexdocs.pm/mix/Mix.html#shell/1) function. So we’ll do just that and put that at the top of our test_helper.exs ``` # Get Mix output sent to the current # process to avoid polluting tests. Mix.shell(Mix.Shell.Process) ExUnit.start() ``` Once we do that we can use assert_received to test what ended up in the process mailbox. Nice! I know I know, we’re getting there. Just a little more. Here is the updated test: ``` defmodule Mix.Tasks.TimeTest do use ExUnit.Case, async: true describe "run/1" do test "it outputs the current time" do Mix.Tasks.Time.run([]) assert_received {:mix_shell, :info, [time]} assert time == # we need to define the output end end end ``` So we use some nice pattern matching to catch the output of the command. Now the only thing we need is to define the output we expect. We don’t want to bring in any dependencies like Timex at this point, and getting the current local time is a little tricky with the current standard library. There is some nice stuff coming in 1.8, but that hasn’t landed just yet. Luckily we can reach out to erlang to solve this problem for us and use the `:calendar.local_time` function. 
We can then use Elixir’s NaiveDateTime module to convert into a string very easily. So this is our final test: ``` defmodule Mix.Tasks.TimeTest do use ExUnit.Case, async: true describe "run/1" do test "it outputs the current time" do Mix.Tasks.Time.run([]) assert_received {:mix_shell, :info, [time]} current_time = :calendar.local_time |> NaiveDateTime.from_erl! |> NaiveDateTime.to_time |> to_string assert time == current_time end end end ``` With that in place we can run our test and make it pass by adding this to our task. Note that this test against current_time only works when using seconds granularity. :) For this example that is good enough. ``` defmodule Mix.Tasks.Time do use Mix.Task def run(_argv) do time = :calendar.local_time |> NaiveDateTime.from_erl! |> NaiveDateTime.to_time |> to_string Mix.shell.info(time) end end ``` And now inside your project you can run mix time to output the current time to stdout :). Now the task does not show up in the list of tasks when you run mix help. In the next part of this series we’ll explore how to do just that and also see if we can add some more features to our awesome clock. You can find the repository for this code on github: https://github.com/drumusician/clock Until next time!
drumusician
77,319
Google Chrome Licensing API returning 500 error. Need help!
Hi. Any Chrome Extensions developer here? I've developed a chrome extension and I'm trying to provid...
0
2019-01-22T17:35:45
https://dev.to/sunilc_/google-chrome-licensing-api-returning-500-error-need-help-3efn
help
--- title: Google Chrome Licensing API returning 500 error. Need help! published: true tags: help --- Hi. Any Chrome Extensions developer here? I've developed a chrome extension and I'm trying to provide a free trial to users using the Google License API. I'm taking the code here (https://stackoverflow.com/questions/25707443/chrome-web-store-payments-free-trial-for-extension) as a reference. Getting the OAuth token is working fine. But after that, when I'm making a call to the license API, I'm getting the 500 error below (see link). I'm not sure what I'm doing wrong. Any help is appreciated! https://thepracticaldev.s3.amazonaws.com/i/yviatoopipigebonhlup.png
sunilc_
77,356
OPS - Rethinking Cloud Infrastructure
Unikernels used to be inaccessible to the average developer due to their low-level nature. OPS is a free open source tool created to fix that problem and let even non-developers run them locally on their laptop or in the cloud.
0
2019-01-22T21:46:39
https://dev.to/eyberg/ops---rethinking-cloud-infrastructure-c40
unikernels, serverless, containers, virtualization
--- title: OPS - Rethinking Cloud Infrastructure published: true description: Unikernels used to be inaccessible to the average developer due to their low-level nature. OPS is a free open source tool created to fix that problem and let even non-developers run them locally on their laptop or in the cloud. tags: unikernels, serverless, containers, virtualization --- Unikernels have been a technology that has been predicted for the past few years to be the future of software infrastructure. Wait - hold up - what's a unikernel!? One way to think of a unikernel is what would happen if you converted your application into its own operating system. It is a single-purpose operating system versus a general-purpose one like Linux. Another really large difference is that a unikernel is meant to only run one application per server. Since practically everything that is deployed today is deployed onto virtual machines (eg: *all* of public cloud) unikernels propose that you don't need a full blown operating system for each application you want to deploy. This makes them run faster and more secure, and they become way tinier. However, unikernels have not been without their problems. One of these problems that unikernels have traditionally suffered from is that to truly utilize them in the past you had to have been a systems developer or a low-level C programmer. Not anymore. [OPS](https://ops.city) is a tool to help anyone build and run unikernels in whatever language or as whatever application they want. OPS simply loads ELF binaries just like Linux does but with unikernel benefits. Before 'unikernels' we had what were known as 'library operating systems'. OPS and the kernel that it uses, 'Nanos', embrace that concept. They are both under heavy active development and have a full time team of paid kernel engineers working on it. (This has also been historically a big problem for the ecosystem.) For local development OPS works on Linux or Mac currently. 
To get started you'll want to install the CLI tool - if you aren't a Go user you can just download the binary from the site, but if you are a Go user then you can build it yourself as well once you obtain the [source](https://github.com/nanovms/ops). ### Install ``` $ curl https://ops.city/get.sh -sSfL | sh ``` Let's try out a quick Node.js hello world. Put this into a file: ``` console.log("Hello World!"); ``` Then you can run it. ``` $ ops load node_v11.15.0 -a ex.js ``` Easy, huh? Want to try something a bit more complicated? Let's try a Go webserver. Put this into main.go: ``` package main import ( "log" "net/http" ) func main() { fs := http.FileServer(http.Dir("static")) http.Handle("/", fs) log.Println("Listening...on 8080") http.ListenAndServe(":8080", nil) } ``` If you are on a Mac specify the OS as 'linux'. It's important to note that it's not running Linux underneath - at all. Unikernels aren't containers. The only reason we specify GOOS as linux is that Nanos implements the POSIX standard (to some degree) and loads our programs as ELFs. ``` $ GOOS=linux go build main.go ``` You can verify this by looking at the filetype: ``` ➜ twe file main main: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped ``` Note that I built this on a Mac whose native file type is Mach-O. We mainly support ELF since that is what is deployed to production. ``` ➜ twe file main main: Mach-O 64-bit executable x86_64 ``` Now let's create a folder to put our static assets in: ``` $ mkdir static $ cd static ``` ``` <!doctype html> <html> <head> <meta charset="utf-8"> <title>A static page</title> </head> <body> <h1>Hello from a static page</h1> </body> </html> ``` Then let's specify a config.json for this: ``` { "Dirs" : ["static"] } ``` This simply states that we wish to put the directory 'static' into /static as a folder in our filesystem. 
There are many advanced options that you can explore with OPS config.json but please be aware that it is under active development and a lot of it can and will change. Now we are going to run it. OPS implements a very small wrapper around QEMU. QEMU allows us to run the resulting built image as a virtual machine. The virtual machine is the output of ops building your app as its own unique little operating system. Have you ever wanted to build your own operating system? Well, you just did. By default OPS will run your unikernel in what's known as 'user-mode' networking without KVM acceleration. Usermode networking is a bit slower than a proper bridged network (what you'd get on AWS or GCE) but we are playing around on your laptop so this is easy to get started with. Also, keep in mind that we run this as a non-root user by default and by default KVM will want root privileges. As a more advanced lesson later we'll show you how to configure that properly. Also keep in mind that on Mac we don't have access to KVM; however, there is an equivalent - Intel HAX - which we can use if we want. ``` $ ops run -p 8080 -c config.json server ``` Now that it is running you should be able to run curl against your server and voilà! ``` curl http://localhost:8080/hello.html ``` You've just built and run your first Go webserver as a unikernel! How cool is that? Check out the github repo, star it and let's see what else can be created! {% github nanovms/ops %}
eyberg
77,436
Event Storage in Postgres, Multi-tenant
Buzzword Bingo sold separately.
0
2019-01-23T03:16:59
https://dev.to/kspeakman/event-storage-in-postgres-multi-tenant-25hc
postgres, sql, eventsourcing, showdev
--- title: Event Storage in Postgres, Multi-tenant published: true description: Buzzword Bingo sold separately. tags: #postgres #sql #eventsourcing #showdev --- I previously wrote about [Event Storage in Postgres](https://dev.to/kspeakman/event-storage-in-postgres-4dk2). One thing I did not address at that time was multi-tenant scenarios. Multiple tenants adds the potential for a lot of data. In particular, indexes can eventually get so large that they start affecting performance. (That's the hope, right? 🤑). The way we dealt with that in our previous product was by isolating the tenant's data within its own schema. However, this causes issues when we want to process events across tenants. For example, to provide rollup reports when multiple tenants belong to the same parent organization. Or to gauge activity on the system right now. When separating tenants by schemas, it is a pain to generate a query which grabs events across all tenants. ## Partitioning One way to have logically separated tenant data, but to have the convenience of a single Event table is to use Postgres's table partitioning. They have added a lot to this feature in recent releases. And in Postgres 11, this feature is especially strong. Here is a version of the Event table partitioned by Tenant ID. It looks almost the same! ### Main Table ```sql CREATE TABLE IF NOT EXISTS Event ( SequenceNum bigserial NOT NULL, TenantId uuid NOT NULL, StreamId uuid NOT NULL, Version int NOT NULL, Type text NOT NULL, Meta jsonb NOT NULL, Data jsonb, LogDate timestamptz NOT NULL DEFAULT now(), CONSTRAINT pk_event_sequencenum PRIMARY KEY (TenantId, SequenceNum), CONSTRAINT uk_event_streamid_version UNIQUE (TenantId, StreamId, Version) ) PARTITION BY LIST (TenantId); ``` _Constraints on the main table are required to include the partition key, which is `TenantId` here._ ### Tenant Partitions Unfortunately Postgres does not yet have auto creation of partitions. 
So before you insert data for a tenant, you first have to create a partition for it. Typically there is a provisioning process when adding a new tenant, so creation of the partition can simply be part of that process. Here is an example of creating a tenant with TenantId = 847ED1889E8B4D238EB49126EBD77A4D. ```sql CREATE TABLE IF NOT EXISTS Event_847ED1889E8B4D238EB49126EBD77A4D PARTITION OF Event FOR VALUES IN ('847ED1889E8B4D238EB49126EBD77A4D') ; ``` Under the covers, each partition has its own index. So you don't end up with a single giant index. When you query the data across tenants, Postgres can run queries on the partitions in parallel, then aggregate them. ### Inserting events You insert data just like you would insert it into a single table, and Postgres automatically routes it to the appropriate partition. ```sql INSERT INTO Event ( TenantId , StreamId , Version , Type , Meta , Data ) VALUES ( '847ED1889E8B4D238EB49126EBD77A4D' , 'A88F94DB6E7A439E9861485F63CC8A13' , 1 , 'EmptyEvent' , '{}' , NULL ) ; ``` ### Query by sequence To support reading events across tenants in order of occurrence, you can run a query like this. ```sql SELECT * FROM Event WHERE SequenceNum > 0 ORDER BY SequenceNum LIMIT 1000 ; ``` This query supports reading batches of up to 1000. I avoided using `OFFSET` since it is inefficient once the offset value gets large. And each listener usually keeps track of the last SequenceNum that it processed anyway. I could have added another condition to the WHERE clause like `AND SequenceNum <= 1000` instead of using `LIMIT`. But there could be skipped sequence numbers due to concurrency (see below). Although this is a minor point. ### Query by stream Query by stream is the same as before except we now also need to provide the TenantId. 
```sql SELECT * FROM Event WHERE TenantId = '847ED1889E8B4D238EB49126EBD77A4D' AND StreamId = 'A88F94DB6E7A439E9861485F63CC8A13' ORDER BY Version ; ``` ## Other Goodies ### Notification of events You can trigger Postgres notifications whenever an event is added to any partition of the Event table. Here is what I use for that currently. ```sql DROP TRIGGER IF EXISTS trg_EventRecorded ON Event; DROP FUNCTION IF EXISTS NotifyEvent(); CREATE FUNCTION NotifyEvent() RETURNS trigger AS $$ DECLARE payload text; BEGIN -- { sequencenum }/{ tenant id }/{ stream id }/{ version }/{ event type } SELECT CONCAT_WS( '/' , NEW.SequenceNum , REPLACE(CAST(NEW.TenantId AS text), '-', '') , REPLACE(CAST(NEW.StreamId AS text), '-', '') , NEW.Version , NEW.Type ) INTO payload ; PERFORM pg_notify('eventrecorded', payload); RETURN NULL; END; $$ LANGUAGE plpgsql; CREATE TRIGGER trg_EventRecorded AFTER INSERT ON Event FOR EACH ROW EXECUTE PROCEDURE NotifyEvent() ; ``` When an event is inserted, it will trigger a notification to the channel `eventrecorded`. Here is an example payload. ```text # sequence num # / tenant id # / stream id # / version # / type 2/847ed1889e8b4d238eb49126ebd77a4d/a88f94db6e7a439e9861485f63cc8a13/2/EmptyEvent ``` _This payload format is very much inspired by MQTT topic paths._ The event itself might be too large to fit in the notification payload. So instead, I give enough data about the event for the listener to know if they want to go to the trouble of loading it. Usually listeners only care about the type of event and the sequence number. Then they typically load batches of events starting from the last sequence number they processed. So the middle bits might be a YAGNI violation. But it felt right to include in case a listener wants to load specific events instead of batching. To listen for event notifications, the SQL part is simple: ```sql LISTEN eventrecorded; ``` But the code part will vary depending on your language. 
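Whatever the language, the listener side mostly amounts to splitting the payload back apart. Here's a hedged sketch in plain JavaScript — the payload shape comes from the `NotifyEvent()` trigger above, but the function name is my own, not from any library:

```javascript
// Parse the '/'-delimited notification payload produced by NotifyEvent().
// Format: { sequencenum }/{ tenant id }/{ stream id }/{ version }/{ event type }
function parseEventNotification(payload) {
  const [sequenceNum, tenantId, streamId, version, type] = payload.split('/');
  return {
    sequenceNum: Number(sequenceNum),
    tenantId,
    streamId,
    version: Number(version),
    type,
  };
}

const note = parseEventNotification(
  '2/847ed1889e8b4d238eb49126ebd77a4d/a88f94db6e7a439e9861485f63cc8a13/2/EmptyEvent'
);
// note.sequenceNum === 2, note.type === 'EmptyEvent'
```

With the fields parsed, a listener can cheaply decide from `type` and `sequenceNum` whether to bother loading the full event.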
Personally, the coding pattern I need to use with the Npgsql library feels a bit painful. I think it has more to do with the database details leaking through the library's abstractions. I ended up with a very imperative producer/consumer queue. It was not a great look in F# after all the error handling was added. But it'll do. ### Concurrent Events The unique constraint we added to the Event table, called `uk_event_streamid_version`, will actually catch concurrency violations for us via Optimistic Concurrency. For example, let's say a user tries to perform an operation on a stream. The logic loads the stream and it has 4 events. The last event's Version is 4. We run our code and decide, based on the current state of this stream, we should generate 2 new events. Since the last Version was 4, those events should have Version = 5 and Version = 6 respectively. Then we insert those events into the Event table. Simultaneously, another user tries to perform a different operation on the same stream. This request is being processed on another thread (or another computer) and is unaware of the first user's request. It reads the same 4 events, and decides to generate 1 new event with Version = 5. However, the other user's operation committed their DB transaction just before ours. So when our code tries to save the event with Version = 5, it will fail. Postgres will trigger a unique index violation. As long as we appropriately calculate the expected Version for each new event before saving, the unique constraint will prevent concurrency conflicts. #### Sequence Gap When there is a concurrency violation, it will create a gap in SequenceNum. That's just the way auto increment sequences work in Postgres. Even if the transaction is aborted, the sequence is still incremented. Otherwise, managing rollbacks for sequences would complicate inserts and slow down performance. Don't obsess over sequence gaps -- the sequence is only used for ordering. 
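The expected-Version bookkeeping behind the Concurrent Events scenario above is simple enough to sketch. Here's a hypothetical helper in JavaScript (the name is mine, not from any library): given the last stored Version and a list of new events, it stamps each event with the next consecutive Version, so a concurrent writer that read the same stream state will collide on the unique constraint.

```javascript
// Stamp new events with consecutive Versions following the last stored one.
// If another writer committed Version 5 first, our insert of Version 5 will
// violate uk_event_streamid_version and our transaction will fail.
function stampVersions(lastVersion, newEvents) {
  return newEvents.map((event, i) => ({ ...event, version: lastVersion + 1 + i }));
}

const stamped = stampVersions(4, [{ type: 'EmptyEvent' }, { type: 'EmptyEvent' }]);
// the two new events get version 5 and version 6
```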
> ⚠️ **Concurrent Writers** > > As pointed out in comments below (Thanks Alexander Langer!), this implementation is suitable for a "single writer". But due to the way Postgres sequences work, multiple concurrent writers can introduce a subtle bug: events can sometimes be committed in a different order from the SequenceNum order. Since listeners depend on loading events in SequenceNum order, this can cause listeners to miss events. > > See [below](#comment-node-148394) for an implementation which supports concurrent writers. Consequently, it will also generate gapless sequence numbers. It will likely have an impact on insert performance, but I haven't measured. ### Data Management Tenant data is now also pretty easy to individually remove or back up since it is actually in its own separate table. For example, assuming the tenant with TenantId = 847ED1889E8B4D238EB49126EBD77A4D requested that we remove their data, it is pretty easy to get rid of it. ```sql -- DANGER DANGER DROP TABLE Event_847ED1889E8B4D238EB49126EBD77A4D CASCADE; ``` _Assuming you have some sort of rolling backup in place, eventually this will roll off of backups into oblivion too._ # Conclusion This method of partitioning the event streams by tenant can help performance as the dataset grows large. At some point we may have to go a step further and partition our data onto separate database nodes. This basic table structure also seems suited for node-based partitioning using something like Citus Data (recently acquired by Microsoft). But the above method should cover that awkward first scaling hurdle when one overarching index becomes a performance bottleneck. And it isn't much more effort than a single table! /∞
kspeakman
77,466
These are your biggest enemies
Obstacles that will prevent anyone from success
0
2019-01-23T08:37:01
https://dev.to/hussein_cheayto/these-are-your-biggest-enemies-41ip
education, motivation, inspiration
--- title: These are your biggest enemies published: true description: Obstacles that will prevent anyone from success tags: education, motivation, inspiration --- Through my humble experience, these are the biggest enemies of any ambitious person: <b>1- Luck</b> Luck can drastically change a person’s life. Let’s say that someone has won the lottery: he/she will become very famous and rich. However, keeping the money is the hardest part. If you want to build a successful company, you need to put luck aside, because if you want your success to last forever then you should be able to stand up and overcome all the challenges, especially when everything is against you (life, luck, crisis). If you overcome all the obstacles while luck is against you, then imagine what life would be like if luck turned back to your side. If you rely on luck, then you might get lucky for a short period of time, but then you will begin to fall apart. <b>2- Smoking</b> Smokers should stop smoking if they want to become successful. Just look at all the famous people like Steve Jobs, Bill Gates, Dan Lok, Wes Brown etc… Do they smoke? Smoking prevents you from being successful because the money spent to buy cigarettes could be invested in yourself or in your company. Assume you’re buying an average of one packet of cigarettes that costs $3 every day. By a simple calculation, you can find that you’re spending almost $11,000 in 10 years (3 × 365 × 10). You might think that this is a small number, but if you invested this sum in yourself, you would be able to double or triple it, since you could master a skill that is worth many thousands of dollars. My advice is, before buying a cigarette, think about how you could use that money to your advantage. If you can’t think of anything right now, then put aside $3 every day and forget about it. When you find out what skill you can learn or master, then spend that money on it. 
<b>Related Article:</b> <a href="https://dev.to/hussein_cheayto/no-more-stress-using-this-trick-2o4k">No More Stress Using This Trick</a> <b>3- Alcohol</b> Too much alcohol will give you a hangover all day; you won’t be able to concentrate or work, which costs you a precious amount of time. You can learn more about time and how to be efficient <a href="https://dev.to/hussein_cheayto/7-things-that-i-wish-i-knew-when-i-was-younger-135g">here</a>. <b>4- Overnights</b> Staying up all night is a bad habit. Try to become a day worker. Wake up early every day and begin building your dreams. Sleeping all day will prevent you from being productive since you will feel lazy. Finally, I hope this article was helpful for at least one person; if so, the time I spent writing it was effective and not wasted. <b>Related Article:</b> <a href="https://dev.to/hussein_cheayto/best-ways-to-get-great-ideas-884">Best Ways To Get Great Ideas</a>
hussein_cheayto
77,627
Architecting the Next Generation of Communication
With the shift to mobile and the statistics of the “younger” generation (hi t...
0
2019-01-23T23:32:22
https://blog.joshghent.com/architecting-the-next-generation-of-communication-1fb98545e32a
api, pubnub, softwaredevelopment, aws
--- title: Architecting the Next Generation of Communication published: true tags: api,pubnub,software-development,aws canonical_url: https://blog.joshghent.com/architecting-the-next-generation-of-communication-1fb98545e32a --- ![Image result for dilbert architecture](https://cdn-images-1.medium.com/proxy/1*c0rxJV86eRjbkhQ-f4BMww.gif) With the shift to mobile and the statistics of the “younger” generation (hi there) not using phone calls as a means of communication, there is a constant push towards reaching people in a platform agnostic way — via email, linkedin, twitter dm, you name it. The challenge arises when you need to create a platform that scales on demand and is flexible enough to hack in any other new communication streams later down the line — maybe we suddenly want support for MySpace messaging. The architecture I discuss below comes out of experience with this problem first hand, and the solutions we came up with — all to be delivered by a deadline. ### Way of Websockets! Since this needed to be real time in the case of IM or near-real-time in the case of SMS, websockets are the best way to go and a de facto standard for real-time operations on the web. For this, PubNub is a great choice since it already had a lot of the functionality baked in, such as different channels and mechanisms to subscribe, send and receive on those channels. PubNub also had a mechanism called “PubNub functions” whereby any new websocket message on a channel matching a certain pattern would be handled by a function written in plain ol’ javascript. This meant you could fire SMS messages off to other systems that would handle the actual sending of the SMS message, and have another route that sends whatsapp messages to Twilio’s API, for example. It provided immense flexibility, especially as you expand to different communication methods and channel types. 
Although PubNub has its own datastore in the background, you can only query it through their API, making it difficult to just dive into the DB and find, say, all channels containing accountId 123. Additionally, you can only bring back 99 records at a time with PubNub, which makes producing accurate reporting a challenge. The solution was to introduce a second data source. This potentially opens up the problem of many sources of truth. This can be avoided by having an API standing in front of a non-relational database (I would recommend ElasticSearch) which all read operations would go through. ### Typescript Time With a project that is going to span many different API’s and services, Typescript would prove invaluable because it allowed us to reuse a lot of code whilst increasing developer productivity and reducing bugs. Sounds too good to be true, right? Well, there were still bugs, sure, and productivity only took an upwards swing after all the developers got comfortable with it, but overall it was a fantastic move. One of the first things you should do is create a common “type” library that you can share across all of your services and systems that need it. In this types library are all the interfaces and enums that are going to be used throughout the system. You can store everything there from error codes, to channel types and an interface for how a message is structured. You can then include this library in all your services to ensure consistency. ### Different channels To differentiate the communication types you have — group instant message, direct message, sms message, mass sms message, carrier pigeon etc. — you can build this up as part of the channel name. Again, PubNub (who I promise aren’t sponsoring this post) gives great flexibility by allowing channel names to be whatever you want them to be. I would recommend they are built up with the platform, channel type and then a unique identifier, e.g. `production.sms.123456`. 
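A tiny sketch of that naming convention in plain JavaScript (the helper names are my own, purely illustrative — PubNub itself imposes no structure on channel names):

```javascript
// Build and parse channel names of the form {platform}.{channelType}.{id}.
// Helper names are illustrative assumptions, not part of PubNub's API.
function buildChannelName(platform, channelType, id) {
  return [platform, channelType, id].join('.');
}

function parseChannelName(channel) {
  const match = /^([^.]+)\.([^.]+)\.(.+)$/.exec(channel);
  if (!match) throw new Error(`Unrecognised channel name: ${channel}`);
  const [, platform, channelType, id] = match;
  return { platform, channelType, id };
}

buildChannelName('production', 'sms', '123456'); // → 'production.sms.123456'
parseChannelName('production.sms.123456').channelType; // → 'sms'
```

A check like `parseChannelName(channel).channelType === 'sms'` can then decide which downstream system receives a given message.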
In your pubnub function, you can then check the channel type within the channel name, using a regular expression, and handle the message accordingly. Channels should be created per group of participants per channel type. For example, creating a new sms to a contact creates a new channel; sending an sms to the same contact again will not create a new channel. But, creating a group with Bob, June and Sally called “Sales Call” and then another with the same people but called “Another sales call”, would create two different channels. This is how many other chat applications are built, which, in accordance with [Jakob’s Law](https://lawsofux.com/jakobs-law.html), is what you want to do. ![](https://cdn-images-1.medium.com/max/1024/0*uy2HVNILokIO_fsG) Now that we have a basic instant message and SMS system, we have a new problem to solve — how do we get notifications to the user? Emitting a message of a different type on the existing channels sounds like an obvious solution, but it assumes the user is subscribed to the channel. Fortunately, one way you can solve this is with a “notification” channel. Each account should be assigned a notification channel. Every time a message is sent, it is also sent to the participants’ notification channels. For example, if Bob creates a new group chat with June and Sally, it will send a new message on June and Sally’s notification channels informing our application “hey there is a new channel you need to subscribe to!”. This will then trigger a process in the app to subscribe to that channel in the background. When Bob then sends a message on that channel, it sends another message on both participants’ (June and Sally’s) notification channels. When this message is received by the application, you can then pop a desktop or mobile notification depending on the platform. 
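That fan-out is easy to sketch. Here is a hypothetical helper in plain JavaScript (the function and the notification-channel naming scheme are illustrative assumptions, not from any library) that, given a message's sender and a channel's participants, lists the notification channels a copy should be published to:

```javascript
// Every account gets one notification channel; publishing a copy of each
// message there lets other clients subscribe to new channels and pop
// notifications. The channel naming scheme below is an illustrative assumption.
function notificationChannels(senderId, participantIds) {
  return participantIds
    .filter(id => id !== senderId) // the sender already knows about the message
    .map(id => `production.notification.${id}`);
}

notificationChannels('bob', ['bob', 'june', 'sally']);
// → ['production.notification.june', 'production.notification.sally']
```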
![](https://cdn-images-1.medium.com/max/647/0*DilygOA_B31jva_5) Additionally, you can use this notification channel to send other kinds of messages, like when a channel has been read, or when the user mutes, leaves or hides a channel. Utilizing the PubNub function again, these notifications can be captured and forwarded on to a CRUD API, which saves them in DynamoDB. This allows us to provide a consistent experience across any device that the account uses. Some may be wondering why we don’t just call the API directly, but instead go through PubNub: this is to cater for the case where a user has both our mobile and desktop applications open at the same time. Sending the message via the notification channel means if you hide a channel on the desktop, it will immediately be hidden on the mobile app. ### Authentication ![](https://cdn-images-1.medium.com/max/861/0*b4LSzT0YAF4jZf6O) Authentication can be a big hurdle when breaking up a monolithic architecture into microservices; this is the situation my company found itself in. Prior to developing these SMS/IM systems, users were authenticated to our backend using a username, password and license key. All requests to the API used these parameters. This was not an option when authenticating with PubNub, first because we did not want to give them access to our accounts database, and second because it’s not an option on their system. A token-based system was the only way. We considered a number of different options for token-based authentication but eventually settled on JWT because of its flexibility, ease of implementation and security. Combined with this, we found Kong, along with its JWT plugin, to be fantastic at handling all the traffic we threw at it. An enormous amount of work went into not only overhauling the API to accept JWT authentication but also changing all our apps to handle JWTs. 
Additionally, we required a refresh strategy for these tokens: for example, if a person remains logged in for a number of days, the JWT we have cached may have expired. This means we need to refresh the token. On any request from our application, we check how long the JWT has until it expires. If it is a day or less away, then we refresh the token first. We leverage the JWT to store information we need for requests. For example, when a request comes into an API we will most likely need the accountId; we can find this in the JWT without having to pass anything through in the body of the request. There is more to tell about the architecture of this deceptively simple system. If you have ever had to architect your own communication platform, how did you do it? I’d be interested to find out and build a knowledge base. * * *
joshghent
78,220
Customising Reveal.js beyond creating a personalised theme
As I start preparing for a couple talks I will be giving for 2019, I realised that my first talk for...
0
2019-01-28T05:26:06
https://www.chenhuijing.com/blog/customising-revealjs-beyond-theming/
frameworks, css
--- title: Customising Reveal.js beyond creating a personalised theme published: true tags: frameworks,css canonical_url: https://www.chenhuijing.com/blog/customising-revealjs-beyond-theming/ --- As I start preparing for a couple of talks I will be giving for 2019, I realised that my first talk for 2019 (at CSSConf China) will end up being my 50<sup>th</sup> slidedeck built with [Reveal.js](https://github.com/hakimel/reveal.js/). I’d used this presentation framework since I first started giving talks, and [wrote about the basics back then](https://www.chenhuijing.com/blog/revealjs-and-github-pages/). After 50 iterations, my slides these days are pretty heavily customised so I thought it’d be nice to write up some of the things I do to them. With them. Whatever. In that earlier article, I included a very brief paragraph on creating your own theme. Although Reveal.js has been updated multiple times over the past 3 years, the basic gist of things remains the same. I still stand by my original suggestion to delete the bundled themes, and customise off `simple.scss`. As of v3.7.0, the `css` folder looks something like this: ```bash css/ |-- print/ | `-- … |-- theme/ | |-- source/ | | `-- … | `-- template/ | |-- mixins.scss | |-- settings.scss | `-- theme.scss `-- reveal.scss ``` There are a number of preset theme files and I suggest using `simple.scss` as the base for your custom theme. Within the `template` folder there are 3 `scss` files that contain base styles and variables, which are imported into the theme stylesheets in the `source` folder. Note that what I’m describing is the structure that this framework provides out of the box. You are completely free to change things up but you _may_ need to modify the `Gruntfile.js` because that’s what handles the compilation of source files. This is what my custom theme file looks like: ```css /** * Custom theme for Reveal.js presentations. 
* Copyright (C) 2018 Chen Hui Jing, https://www.chenhuijing.com/ */ // Default mixins and settings ----------------- @import '../template/mixins'; @import '../template/settings'; // --------------------------------------------- // Include theme-specific fonts @import url('../../lib/font/magnetic-pro/magnetic-pro.css'); @import url('../../lib/font/magnetic-pro-black/magnetic-pro-black.css'); // Theme template ------------------------------ @import '../template/theme'; // --------------------------------------------- // Customised styles for this presentation .reveal { … } ``` The presentation itself by default loads 2 stylesheets, `reveal.css` and `THEME.css`, the latter being a customised theme file generated from `.scss` files from the `source` folder. These stylesheets are referenced in the `index.html` file like so: ```html <!doctype html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"> <title>Your awesome presentation</title> <meta name="description" content="A short description of what the talk is about"> <link rel="stylesheet" href="css/reveal.css"> <link rel="stylesheet" href="css/theme/jing.css"> <link rel="stylesheet" href="lib/css/zenburn.css"> ``` If Sass is not your thing, then you can add your own custom CSS file and reference it directly as well. Reveal.js is highly customisable. ## Creating custom layouts Now that the basic setup is somewhat covered, I’m moving on to some of the fun stuff I tend to do with my presentations. By default, the content on the slides is centred on the page, but now that we have Grid, placing items on the page has become much easier. Taking a cue from presentation software which generally offers a number of templates to choose from for laying out different types of slides, we can create several layouts that we apply via CSS classes instead. 
For example, I have a layout class for a 2-column layout that looks like this: ```css .l-double { display: grid; grid-template-columns: 1fr 1fr; } ``` I have a couple other layout classes for patterns that occur multiple times throughout my slides, and there are also instances where I make one-off adjustments with inline CSS on the individual slides as well. For me, my slides serve as a tool to help me get my points across when I’m giving a talk, so I often think of them as single-use. Could I have written up proper CSS classes for one-off layouts? Sure. But I’d rather spend more time writing the content itself. But that’s just me. You do you. Another layout class I have is for multiple items in a row: ```css .l-multiple { display: flex; justify-content: space-around; } ``` I want to take this chance to highlight a bug in Blink that has been left open for the longest time, [Chrome doesn’t preserve aspect ratio for flexed images in auto-height flex container](https://bugs.chromium.org/p/chromium/issues/detail?id=721123), which does annoy me quite a bit. I had started to learn Flexbox a couple years back when Chrome was still my main browser and **learnt the wrong thing** because I thought this bug was expected behaviour. It’s not. <figcaption>Firefox vs. Chrome for <code>flex: 1</code></figcaption> ![](https://www.chenhuijing.com/assets/images/posts/revealjs/flex-bug-640.jpg) Anyway, I’ve since learnt to double check odd behaviour by going straight to the specification, then checking across browsers to see who’s implementing it accordingly. My main browser is now Firefox. Just putting it out there. 🤷 <figcaption>Firefox vs. 
Chrome for <code>flex: auto</code></figcaption> ![](https://www.chenhuijing.com/assets/images/posts/revealjs/flex-bug2-640.jpg) The best way to see the aforementioned bug in action is to load up [Jen Simmons’ flex shorthand examples page](https://labs.jensimmons.com/2017/02-008.html) in Firefox (correct), Chrome (incorrect), Safari (incorrect) and Edge (correct) for comparison. ## Hacking the `<style>` tag I also end up wanting to show code examples in my presentations, though I try my best to keep the code short, displayed in a large font size with appropriate syntax highlighting. Sometimes, to better illustrate the effect of the code, I’ll pair the code with an image or better still, a video clip of what the code does. One of the advantages of using an HTML framework like Reveal.js is the ability to add some live-coding to the slides themselves. I “borrowed” this technique from [Una Kravets](https://una.im/) after seeing her slides from [The Power of CSS Talk](https://codepen.io/una/pen/Wjvdqm). There’s nothing quite like seeing the effects of a code change happening live on the slides themselves, at least for me. And after poking through Una’s code, which she so kindly put up on Codepen, I could see that it involved using the `contenteditable` attribute on the `style` tag. ```html <style contenteditable="true"> /* Put CSS here and watch your page update as you type */ </style> ``` With some additional styles, we could make the markup above into an editable code block that applies valid CSS rules onto the page. Some people might question the validity of doing this, but I consider it a bit of a grey area. The current HTML specification states that the `style` element can only be used in 2 contexts: where metadata content is expected, or in a `<noscript>` element that is the child of a `<head>` element. In other words, `<style>` tags should only show up in the `<head>` of a page. 
BUT, in the [HTML5.2 specification](https://www.w3.org/TR/html52/document-metadata.html#the-style-element), one more context is specified: in the body, where flow content is expected. So officially, this approach is invalid HTML now, but it will be valid in future. The only style required is a `display` value that generates a box (basically, any valid value except `none`). But it’s probably a good idea to pretty it up a bit more, so let’s throw in some color and padding. ```css style { display: block; background-color: #2d2d2d; color: #ccc; padding: 1em; white-space: pre; width: 100%; overflow: auto; } ``` ## Live-coding CSS on presentation slides So far, I’ve used 2 layouts for live-coding on my slides, a 2-panel and a 3-panel. The 2-panel will consist of the markup I want to target on the left, and the CSS to be applied on the right. The markup for this layout looks like this: ```html <div class="livecode livecode-2p"> <div class="result"></div> <div class="code"><style class="code-editor" contenteditable="true"></style></div> </div> ``` And for the 3-panel: ```html <div class="livecode livecode-3p"> <pre class="markup"><code></code></pre> <div class="result"></div> <div class="code"><style class="code-editor" contenteditable="true"></style></div> </div> ``` Reveal.js comes with the syntax highlighting library, [highlight.js](https://highlightjs.org/), which is enabled by default. So as long as you loaded `zenburn.css`, anything with `<pre>` and `<code>` will have styles applied to them. Most of my styles are layout-related. I always run my slides in Firefox Nightly on stage, so I don’t mind using many of the newer CSS properties. Though it is a good idea to run through the whole deck before going on stage, just to make sure things aren’t broken. You never know. 
```css .livecode { display: grid; grid-gap: 0.5em; margin: 0; padding: 0; .result { max-height: 100%; overflow-y: scroll; width: 100%; border: 1px dashed $headingColor; } .code { text-align: left; width: 100%; font-family: 'Dank Mono', monospace; font-size: 75%; color: #efdcbc; background-color: #3f3f3f; padding: 0.5em; border-radius: 0.25em; overflow-y: scroll; } } .code-editor { display: block; height: 100%; white-space: pre-wrap; } ``` I’m using Grid, so the `.livecode` class sets that up, then I have 2 different classes that specify the grid and item placement for each layout respectively. ```css .livecode-2p { grid-template-columns: 50% 50%; grid-template-rows: 1fr; max-height: 65vh; } .livecode-3p { grid-template-columns: 50% 50%; grid-template-rows: max-content 1fr; grid-template-areas: 'a b' 'a c'; max-height: 70vh; .markup { grid-area: b; } .result { grid-area: a; } .code { grid-area: c; } } ``` Depending on the example you’re trying to demonstrate, there may be additional styling tweaks and overrides to ensure your example works as expected, because Reveal.js itself has quite a lot of styles. But if you’re going to be live-coding CSS on stage, I’m going to make the assumption the cascade is not an issue for you. ## Wrapping up I’d received some feedback after my talks that being able to see the effects of CSS I’m explaining in real-time helps in understanding, so I wanted to share exactly how I did up my slides. Another thing I’ve started to experiment with is to ditch slides altogether and do the presentation with Firefox DevTools. We’ll see how that turns out 😬. Happy CSS-ing, my friends!
huijing
78,341
A Parable of the Polar Vortex
The best time to shovel your driveway is immediately after it snows, but failin...
0
2019-01-28T17:48:23
http://www.heidiwaterhouse.com/2019/01/27/a-parable-of-the-polar-vortex/
bestpractices, life, minnesota, releasepreparation
--- title: A Parable of the Polar Vortex published: true tags: Best practices,Life,minnesota,release preparation canonical_url: http://www.heidiwaterhouse.com/2019/01/27/a-parable-of-the-polar-vortex/ --- The best time to shovel your driveway is immediately after it snows, but failing that, any time before you have driven on it. Driving on fresh-fallen snow compacts it into something between “ice” and “pure evil”, depending on the temperature, humidity of the air, humidity of the snow, etc. I just went out and shoveled and chipped our driveway down to bare concrete because we’re about to get another 6 inches of snow, and the very last thing you want is to add fresh snow to existing ice. That’s how you get glaciers. I was thinking about technical debt, because physical labor induces contemplation. Of course, the best time to clear up technical debt is right after you create it, when everything is crisp and fluffy and even the heaviest snow isn’t _stuck_ to anything. ![About a foot of snow covering a porch with an outdoor table and chairs](http://www.heidiwaterhouse.com/wp-content/uploads/2019/01/signal-2018-01-22-145937-300x225.jpg)<figcaption>Unsquished snow</figcaption> If we could all fix technical debt as soon as we incur it, that would be amazing. But more likely, someone had to go get milk, or go to the airport, or deploy for testing or a major client, and the snow has been driven on and something has been changed in the code. ![A snowy driveway with tire tracks and footprints on it.](http://www.heidiwaterhouse.com/wp-content/uploads/2019/01/20180112_144734-300x225.jpg)<figcaption>Driveway in the process of glaciation</figcaption> Now that compressed snow/ice technical debt is something that’s going to take real effort to deal with, and the longer you leave it, the more likely it is to get driven on again and coded against again, and get even worse. 
In snow-shovelling, we have a tool for that: ![Wooden-handled tool with a hoe-shaped metal cutting edge, scraping compacted snow off a driveway](http://www.heidiwaterhouse.com/wp-content/uploads/2019/01/20190127_140225-1-300x225.jpg) I call this an ice-chopper. If it has a proper name, I don’t know it. At the end of a long wooden handle is a metal head that would look like a hoe, if a hoe didn’t have any bend in it. The way you use it is you either get under the ice and scrape it off, or, if it’s gotten very bad, you drive the edge down into the ice to break it up, bit by bit. It’s quite a lot of work, and that amount of work relates directly to how often you do it and how often the snow gets away from you, like a codebase. Some things are beyond our control, like _plow berms_, which are the compacted ridges at the end of the driveway caused by the street plows pushing the excess snow to the edges of the road, which are, by definition, the end of our driveways. That’s where the worst of the ice is. ![Ice chopper lifting up 3 cm slabs of broken ice.](http://www.heidiwaterhouse.com/wp-content/uploads/2019/01/20190127_141046-1-300x225.jpg) There are more and less favorable conditions for removing technical debt. Like it’s hard and risky to do it right before a big launch, because you don’t know what all the load pressures will be. For shoveling, cold days are actually best, because the ice and snow are so dry that they are more brittle. I try to maximize for that, and for driving, and for one other factor…. I was shoveling this afternoon because it’s going to snow 6-9 inches tonight. It is better for _the rest of my winter_ to clear the driveway down to bare concrete today, because then, when I go to take the next snowfall off, I won’t have any glaciers, or slippery spots, or high areas that can catch my shovel. 
Shoveling off a fresh fall will be work, but it won’t have hidden, dangerous, and complicated parts, because I just removed them all, while I could still see them. Much like software projects, winter in Minnesota is long, and we can’t count on favorable conditions. It’s entirely possible we won’t have a day above freezing for the next month. So even though it was 5 degrees Fahrenheit, addressing my technical debt will pay off for me. ![Mostly looks like flat snow.](http://www.heidiwaterhouse.com/wp-content/uploads/2019/01/20180403_221140-300x225.jpg)<figcaption>An almost-pristine driveway</figcaption> ![One shoveled path through 6+ inches of snow](http://www.heidiwaterhouse.com/wp-content/uploads/2019/01/20180403_221353-e1548629703875-225x300.jpg)<figcaption>Start where you are</figcaption> ![A shoveled driveway, by the light of the garage door](http://www.heidiwaterhouse.com/wp-content/uploads/2019/01/20180403_233919-300x225.jpg)<figcaption>Do what you can</figcaption> Thank you for indulging my extended metaphor, and do think about when you want to clear your technical debt. You can’t help incurring it – it’s the cost of making things, much like student debt is the cost of higher education for most Americans. But you can think about when it’s useful to remove it, and be mindful of your tools and needs. Speaking of tools, a side note. ![Orange/deerskin colored suede mittens with knitted black and white mittens that fit in them](http://www.heidiwaterhouse.com/wp-content/uploads/2019/01/20190127_143211-1-300x225.jpg)<figcaption>Utility is my favorite beauty</figcaption> These are choppers that came from the local farm-supply store. They are a very old form-factor, and very efficient. The knitted mittens go next to your skin. The suede-y deerskin mittens go over them. They are both, ideally, a little oversized so there is lots of trapped air insulating your hands.
The deerskin (or moosehide or caribou, depending on where you are) protects your hands from the wind and traps the air inside. The woolen knitted mittens trap more air and provide warmth and insulation. When you’ve been working hard, and have broken the Arctic prohibition (DO NOT SWEAT), you can come inside and take the pieces apart and they will dry/air out much better than any combined/technical/nylon wondermitten would.
wiredferret
78,457
EF Core Auto Migration with No Data Loss
Live Migration support for EF Core
0
2019-01-28T06:51:51
https://dev.to/akashkava/ef-core-auto-migration-with-no-data-loss-1i35
efcore, entityframeworkcore, showdev, help
--- title: EF Core Auto Migration with No Data Loss published: true description: Live Migration support for EF Core tags: ef-core,entity-framework-core,showdev,help --- EF Core has dropped support for Auto Migration, which is very useful in prototyping applications. Migration support is too laborious in EF Core. # Introducing EF-Core-Live-Migration ## Features 1. Creates missing tables 2. Creates missing columns 3. If a column already exists with a different data type, the old column is renamed and a new column is created, with data transferred from the old one (the transfer can be lossy) 4. Renames old indexes and creates new ones 5. Creates indexes based on foreign keys ## Installation ```ps PM>Install-Package NeroSpeech.EFCoreLiveMigration ``` ## Configure ```c# public void Configure(IApplicationBuilder app, IHostingEnvironment env){ if(env.IsDevelopment()){ app.UseDeveloperExceptionPage(); using (var scope = app.ApplicationServices.GetService<IServiceScopeFactory>().CreateScope()) { using (var db = scope.ServiceProvider.GetRequiredService<AppModelContext>()) { MigrationHelper.ForSqlServer(db).Migrate(); } } } } ``` ## Limitations Works only with SQL Server right now. ## Old Name Attribute If you decide to rename a column, you can mark the property with `[OldName("name")]`, so that during migration the existing column is renamed if there is no data loss. In case of data loss, the existing column is kept under another name and its data is transferred to the new column. ## Help Wanted The API was written before the Migration API was introduced in EF Core. I need help rewriting it on top of the Migration API so it can be used with any backend.
akashkava
78,498
Create your own eCommerce Shop in Laravel
Bagisto is an Open Source Laravel eCommerce package that has totally reinvented the wheel of eCommerce ecosystem providing multi warehouse feature out of the box.
0
2019-01-28T07:48:37
https://dev.to/pathaksaurav/create-your-own-ecommerce-shop-in-laravel-3jhc
githunt, laravel, opensource, webdev
--- title: Create your own eCommerce Shop in Laravel published: true description: Bagisto is an Open Source Laravel eCommerce package that has totally reinvented the wheel of eCommerce ecosystem providing multi warehouse feature out of the box. tags: githunt, laravel, opensource, webdev --- The new entry in Open Source eCommerce frameworks, [Bagisto - Laravel eCommerce](https://bagisto.com/en/), has recently launched the [beta version](https://github.com/bagisto/bagisto/releases/) of their new release. Folks at Bagisto consider this a major update. In addition, the new [multi-vendor extension](https://bagisto.com/en/laravel-multi-vendor-marketplace/) allows you to attract vendors to sell on your marketplace. And with multi-warehouse inventory, they have totally redefined the way you manage multiple inventory sources in eCommerce. Check it out on GitHub: {% github bagisto/bagisto %}
pathaksaurav
78,570
The 7 Most Popular DEV Posts from the Past Week
A round up of the most-read and most-loved contributions from the community this past week.
0
2019-01-28T14:23:37
https://dev.to/devteam/the-7-most-popular-dev-posts-from-the-past-week-4kn7
top7, bestofdev
--- title: The 7 Most Popular DEV Posts from the Past Week published: true description: A round up of the most-read and most-loved contributions from the community this past week. tags: icymi, bestofdev cover_image: https://thepracticaldev.s3.amazonaws.com/i/sfwcvweirpf2qka2lg2b.png --- Every Monday we round up the previous week's top posts based on traffic, engagement, and a hint of editorial curation. The typical week starts on Monday and ends on Sunday, but don't worry, we take into account posts that are published later in the week. ❤️ #1. Effective Technology Transitioning to Docker isn't always the easiest. Alex shares a list of helpful commands, and their use cases. {% link https://dev.to/alex_barashkov/20-docker-commands-use-cases-for-developers-2d9g %} #2. The TLDR Nicole shares a brief history on color theory and gives us some advice on picking a color or color palette when designing a project. {% link https://dev.to/nzonnenberg/basic-color-theory-for-web-developers-15a0 %} #3. A real programmer would be like.... Andy reflects on what it means to be a programmer and how job titles aren't a prerequisite to be defined as one. {% link https://dev.to/andygeorge/i-am-not-a-real-programmer-1ogo %} #4. !important You know you've done it. Emma helps us understand CSS specificity so we _stop_ slapping on `!important` and actually understand why our styles aren't getting applied. {% link https://dev.to/emmawedekind/css-specificity-1kca %} #5. 76% ^^^ that's the amount of web pages loaded with HTTPS. This post walks us through _why_ we should use HTTPS in development, as well as the technical considerations: generating certificates, reverse proxies, etc. {% link https://dev.to/kmaschta/https-in-development-a-practical-guide-175m %} #6. Your Discretion When considering the scope of a project, you may realize that a css framework is unnecessary. 
Sarthak shares resources to use instead of frameworks (from powerful CSS properties to code snippets) as well as ways to remove unused CSS. {% link https://dev.to/teamxenox/do-we-really-need-a-css-framework-4ma6 %} #7. Solid Advice Whether or not you agree with Darragh's view on JWT, it's hard to argue with their main takeaway: be sure you understand why you're using a piece of technology (and its limitations) before you use it! {% link https://dev.to/darraghor/be-careful-of-the-jwt-hype-train-3e81 %} _That's it for our weekly wrap up! Keep an eye on dev.to this week for daily content and discussions...and if you miss anything, we'll be sure to recap it next Monday!_
jess
78,702
Help serving assets over HTTP/2 for a Gatsby Netlify hosted site
Hi all. I threw out this Tweet into the Twitterverse, but thought it would be wise to ask for help...
0
2019-01-29T03:50:49
https://www.iamdeveloper.com/posts/help-serving-assets-over-http2-for-a-gatsby-netlify-hosted-site--nc3/
help, gatsby, netlify
--- title: Help serving assets over HTTP/2 for a Gatsby Netlify hosted site published: true tags: help, gatsby, netlify cover_image: https://thepracticaldev.s3.amazonaws.com/i/244k7ct9qj9jnvd4otjd.jpg --- Hi all. I threw out this Tweet into the Twitterverse, but thought it would be wise to ask for help here as well. {% twitter 1090084616189423616 %} I have a Gatsby site deployed to Netlify, and some of my assets are being served over HTTP/1.1. I know that Netlify [supports HTTP/2 by default for sites enabled with HTTP2](https://www.netlify.com/blog/2017/07/18/http/2-server-push-on-netlify/). I know that I need to add entries into my `_headers` file, e.g. ``` / Link: </js/example-script.js>; rel=preload; as=script Link: </css/example-style.css>; rel=preload; as=style ``` but it would be a pain to update this after every deploy. Is anyone aware of a gatsby plugin that might do this, or how do you go about handling this with your Gatsby site when hosted on Netlify? I can probably generate the `_headers` file as part of my build process, but my gut tells me someone has already done this 😉 The source code is here if anyone is interested. {% github https://github.com/nickytonline/iamdeveloper.com %} Photo by [Lukas Juhas](https://unsplash.com/photos/2JJz3u_R_Ik?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/search/photos/help?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
nickytonline
78,714
Voice Assistant using Dialogflow with Kotlin
Voice Assistant
0
2019-01-29T05:28:59
https://dev.to/fevziomurtekin/voice-assistant-using-dialogflow-with-kotlin-3p98
kotlin, dialogflow
--- title: Voice Assistant using Dialogflow with Kotlin published: true description: Voice Assistant tags: kotlin, dialogflow --- ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/kl1jo37h27jl8x4jjw1f.png) <br><br> <p>Hello to everyone,</p> In this article, I'm going to talk about the step-by-step implementation of a simple assistant application with Kotlin using the Dialogflow platform. ## What is Dialogflow? <p>Dialogflow (formerly Api.ai), acquired by Google at the end of 2016, is a platform used to develop chatbots with AI-powered voice and speech interfaces for multiple devices.</p> <p>Let's get to know the components to better understand Dialogflow.</p> ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/anz1og6h1cwcyxhau0pt.png) <br><br> ### Agent <p>An agent processes the data you configure to return the appropriate response to users' input.</p> The Agent consists of 3 different components: - <b>Training Phrases : </b> Defines sample expressions that users say. Dialogflow uses these expressions and expands on them with similar expressions to create a matching model. - <b>Action and Parameters : </b> (We'll cover this under Intent below.) - <b> Responses </b> ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/0w3jwva1vtfjb7eh1g9s.png) <br><br> ### Intent <p>An intent represents the match between what the user says and what should happen. You can create different intents according to your purposes.</p> To put it better: this is the component where we create different types of questions and the answers to those questions. ### Fulfillment <p>Fulfillment is where the necessary actions are taken to fetch the answer to the user's question from one of our web services.</p> ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/u6h5cxmvvixk1f1yesgn.png) <br><br> Now that we have some idea of the components, let's start the project step by step. ## Development Stages <p>The scenario of the project we will develop:</p> > <b>User:</b> Hello / Hey / Hi <br> > <b>Assistant:</b> Welcome!
how can I help you? <br> > <b>User:</b> Who are you? / Can you explain yourself? <br> > <b>Assistant:</b> My name is Assistant. I'm a week old. Tell me the name of any city within Turkey, and I'll help you learn about the weather :)<br> > <b>User:</b> How is the weather in Bursa? / How is Bursa? / Bursa (any other province can be used instead of Bursa.) <br> > <b>Assistant:</b> In Bursa, the weather is 8 degrees and partly cloudy. <br> #### <b>Creating our new agent </b> [First, we go to the Dialogflow console to create our agent.](https://console.dialogflow.com/api-client/) <br><br> ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/nuailex32jhbu0xx0aj2.png) <br><br> #### <b>Create Intent</b> <p>After creating an agent, we lay out our scenario and decide what answers our assistant will give.</p> We go to Intents -> Create Intent and create our new intent. ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/xcksbxwzs6kzjd04a9xr.png) <br><br> ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/xkuuc9i53u4trxordpie.png) <br><br> Then we define and save the questions and answers. As you can see, I entered the questions that should get the same answer and defined a single response. #### <b>Answering our question from a web service</b> <p>This stage is optional. You can also create and use intents without a web service.</p> Select Fulfillment; answers can come either from the inline editor or from our web service. >I used a webhook and took advantage of the current-weather API at <a href="http://i.apixu.com">i.apixu.com</a>. You can reach the code from the repository linked below. ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/1atjadl0s5bul3wxiyd7.png ) <br><br> Then, with the intent we just created, we determine which questions will return data from the web service.
![Crepe](https://thepracticaldev.s3.amazonaws.com/i/p6pur019bevrkuiao3yx.png ) <br><br> ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/qxdmpcs556qv91vw56kz.png ) <br><br> The important part here is making the selected part (Bursa) a variable and giving it a key; I've assigned geo-city. My web service uses this key to do the necessary lookup. For more information about the webhook, [please visit](https://github.com/fevziomurtekin/dialogflow-voice-assistant/tree/master/webhook) #### Integration of Kotlin and the Dialogflow platform <p>We open app -> build.gradle and add the Dialogflow and Java Client v2 libraries.</p> ~~~
// DialogFlow SDK dependencies
implementation 'ai.api:sdk:2.0.7@aar'
implementation 'ai.api:libai:1.6.12'
// Java v2
implementation 'com.google.cloud:google-cloud-dialogflow:0.67.0-alpha'
implementation 'io.grpc:grpc-okhttp:1.15.1'
~~~ > Java Client v2 is the Java client for Dialogflow. (You can also use the v1 version, but v1 will be shut down on October 23, 2019.) To use the Java Client library, we need to create a JSON key from the Google IAM console. First, we open the project we created in the IAM console. ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/mcqhlt9d4qf30xogr1cx.png ) <br><br> ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/opbtyxggk4ffe5ww6hit.png ) <br><br> After clicking edit, we create a new key. ![Crepe](https://thepracticaldev.s3.amazonaws.com/i/t5562c82ztdl9v0gs46q.png) <br><br> The JSON file is downloaded to our computer automatically after creation. In our application, create a raw directory and put the .json file into it.
~~~ private fun initAsisstant() { try { val stream = resources.openRawResource(R.raw.asistan) val credentials = GoogleCredentials.fromStream(stream) val projectId = (credentials as ServiceAccountCredentials).projectId val settingsBuilder = SessionsSettings.newBuilder() val sessionsSettings = settingsBuilder.setCredentialsProvider(FixedCredentialsProvider.create(credentials)).build() client = SessionsClient.create(sessionsSettings) session = SessionName.of(projectId, uuid) } catch (e: Exception) { e.printStackTrace() } } ~~~ <p>In our activity, we read the JSON file we created and define our client.</p> We then create a class that sends our message to Dialogflow. ~~~ class RequestTask : AsyncTask<Void, Void, DetectIntentResponse> { var activity: Activity? = null private var session: SessionName? = null private var sessionsClient: SessionsClient? = null private var queryInput: QueryInput? = null constructor(activity: Activity,session:SessionName,sessionsClient: SessionsClient,queryInput: QueryInput){ this.activity=activity this.session=session this.queryInput=queryInput this.sessionsClient=sessionsClient } override fun doInBackground(vararg params: Void?): DetectIntentResponse? { try { val detectIntentRequest = DetectIntentRequest.newBuilder() .setSession(session.toString()) .setQueryInput(queryInput) .build() return sessionsClient?.detectIntent(detectIntentRequest) } catch (e: Exception) { e.printStackTrace() } return null } override fun onPostExecute(result: DetectIntentResponse?) { (activity as MainActivity).onResult(result) } } ~~~ The response is delivered back to the activity's onResult function.
#### Integrating TextToSpeech and SpeechToText ~~~ private fun sendMicrophoneMessage(view:View){ val intent: Intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH) intent.putExtra( RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM ) intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault()) intent.putExtra( RecognizerIntent.EXTRA_PROMPT, getString(R.string.speech_prompt) ) try { startActivityForResult(intent, SPEECH_INPUT) } catch (a: ActivityNotFoundException) { Toast.makeText( applicationContext, getString(R.string.speech_not_supported), Toast.LENGTH_SHORT ).show() } } ~~~ The Speech to Text function. It stops listening shortly after it detects the text. ~~~ private fun initAsisstantVoice() { asistan_voice= TextToSpeech(applicationContext,object : TextToSpeech.OnInitListener { override fun onInit(status: Int) { if (status!=TextToSpeech.ERROR){ asistan_voice?.language=Locale("tr") } } }) } ~~~ The Text to Speech initialization. #### Reading messages and adding layout <p>The RequestTask class delivers its return value to our activity's onResult function, where we add the response value to our layout.</p> >If the message type is BOT, we run TextToSpeech. ~~~ fun onResult(response: DetectIntentResponse?)
{ try { if (response != null) { var botReply:String="" if(response.queryResult.fulfillmentText==" ") botReply= response.queryResult.fulfillmentMessagesList[0].text.textList[0].toString() else botReply= response.queryResult.fulfillmentText appendText(botReply, BOT) } else { appendText(getString(R.string.anlasilmadi), BOT) } }catch (e:Exception){ appendText(getString(R.string.anlasilmadi), BOT) } } ~~~ ~~~ private fun appendText(message: String, type: Int) { val layout: FrameLayout when (type) { USER -> layout = appendUserText() BOT -> layout = appendBotText() else -> layout = appendBotText() } layout.isFocusableInTouchMode = true linear_chat.addView(layout) val tv = layout.findViewById<TextView>(R.id.chatMsg) tv.setText(message) Util.hideKeyboard(this) layout.requestFocus() edittext.requestFocus() // change focus back to edit text to continue typing if(type!= USER) asistan_voice?.speak(message,TextToSpeech.QUEUE_FLUSH,null) } ~~~ # Application <a href="https://medium.com/@fevziomurtekin/kotlin-ile-dialogflow-kullanarak-sesli-asistan-yap%C4%B1m%C4%B1-205ef81e78d0?_branch_match_id=580614660615400010"> ![Voice assistant with Kotlin using Dialogflow](http://img.youtube.com/vi/SkkNB5XDq8I/0.jpg) </a> # Medium <a href="https://medium.com/@fevziomurtekin/kotlin-ile-dialogflow-kullanarak-sesli-asistan-yap%C4%B1m%C4%B1-205ef81e78d0?_branch_match_id=580614660615400010"> ![Voice assistant with Kotlin using Dialogflow](https://cdn-images-1.medium.com/max/720/1*HDxwPlSqkUnpCef4sUk4zg.png) </a> # Github <a href="https://github.com/fevziomurtekin/dialogflow-voice-assistant"> ![Github project](https://thepracticaldev.s3.amazonaws.com/i/adfe65awjzgvn4urxlgb.png) </a> With Dialogflow, I tried to explain building a voice assistant as clearly as I could. I hope I've been successful. In the later stages of the project, I aim to add Google Assistant and Slack integrations. After completing those integrations, I'll discuss them in new articles.
fevziomurtekin
78,746
SigSlot
A C++ library for signalling and slotting
0
2019-01-29T09:57:08
https://dev.to/dwd/sigslot-4iff
cpp
--- title: SigSlot published: true description: A C++ library for signalling and slotting tags: cpp --- ## SigSlot Introduction SigSlot is a library I adopted several years ago as a solid implementation of the "[Observer Pattern](https://en.wikipedia.org/wiki/Observer_pattern)" in C++. It's available on [GitHub](https://github.com/dwd/sigslot), is free to use, and consists of a single small header file. It supports classical observer patterns with lambda and pointer-to-member connections, and also has support for use as awaitable objects in coroutines. ## A little history Originally, such things were difficult if not impossible to write in C++, and the most common implementations were either non-standard or relied on specialist preprocessors. The essential concept is that you have a signal object: ```c++ sigslot::signal0<> my_signal; ``` ... which other things can listen to: ```c++ my_signal.connect(this, &MyListener::slot); ``` ... and the signal can then emit: ```c++ my_signal.emit(); // Calls its listeners. ``` Sarah Thompson wrote the original SigSlot, having decided that the new "ISO C++" standard had sufficient flexibility in the templating language to support the concept - and indeed, it did. She was, as I understand it, explicitly trying to mirror Qt's signal and slot preprocessor. But ISO C++ - that's C++03, in modern terms - lacked a number of useful features that would make the programmer's life smoother. Sarah's design gave signals parameters - a useful and common feature - but while she could use templates so the type of the parameters didn't matter, she had to use a different template for the different numbers of parameters. Lambdas - which would have made for a much more natural fit - didn't exist either. Finally, while Sarah had made the library thread-safe, thread primitives were not covered in C++03, and so several different models needed to be provided for.
Nevertheless, when I came across the library I was pretty impressed with the simplicity - a single header file contains the entire library, and it's simple to use. So I updated things a bit - in no small part in order to learn the new (to me) C++11. ## SigSlot C++11 When C++11 was widely available (in 2014), I took Sarah's SigSlot library and updated it with several features from the newer standard. First, I used "variadic templates", which meant that there was now a single "signal" template no matter what the number or type of arguments the signal took. Second, I gave it support for lambdas as well as the (somewhat obscure) pointer-to-member syntax. I didn't do threading, and I later noticed other things could have been improved too. Still, it was handy, and the library shrank a lot. While the library was always a single header file, by using a bit of C++11, I shrank it from 2,500 to just over 500 lines of code. ## And Now, C++17... My last changes have been to introduce features from C++14 and C++17, as well as go over and polish some of the C++11 bits I'd missed. The first thing to go was the different threading models - I'm now using C++11 thread primitives, and as a result, the last remaining wart in the usage has gone. So, too, did some of the templates - without the different threading models, there was no need for some of the template hierarchy (and in fact I'd missed some simplifications back in 2014). I also added some coroutine support - this runs on both Windows using MSVC, and UNIXes using Clang. Despite adding about 100 lines of coroutine support, the library is still smaller - just 430 lines now. Using the library hasn't really changed much since Sarah's version - indeed, this version is closer to hers than to my C++11 version in usage in some ways. In others, though, it's markedly different.
```c++ // Only one template, parameters are signal arguments: sigslot::signal<int> my_signal; // ...using lambdas, for example: my_signal.connect(this, [this](int i){ std::cout << "I was signalled with " << i << std::endl; }); // Emitting is by either emit(...) or simply calling the signal: my_signal(4); // Or, do a coroutine: coroutine_type<int> coroutine() { int i = co_await my_signal; co_return i; } auto c = coroutine(); // This awaits, so stops immediately. my_signal.emit(5); // Now it's continued and is ready to return its value: std::cout << "I was signalled with " << c.get() << std::endl; ``` There are more extensive examples in the source tree, including examples for both classical and coroutine based signals which compile clean on both UNIX and Windows. ## Why? Other signal/slot libraries exist - Boost has one, for example. So why do I persist with this one? Partly, it's the small, neat implementation - a design which predates any of my involvement. But a small library like this is great for me to learn the newer features of C++. It's a fun playground for me, and one that yields useful code. I use SigSlot extensively in my XMPP Thing, Metre. Feel free to play with it - or use it, even better - it has copyright disclaimed (ie, it's as close to "public domain" as it's possible to get in modern copyright law), and it's available at [GitHub](https://github.com/dwd/sigslot).
dwd
79,279
How To Handle This Type Error
TypeScript Puzzle with explanation.
0
2023-01-14T02:52:40
https://dev.to/danjfletcher/how-to-handle-this-type-error-2nc1
typescript, puzzle, programming
--- title: How To Handle This Type Error published: true description: TypeScript Puzzle with explanation. cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eshuco2erledei22bpqu.png tags: TypeScript, Puzzle, Programming --- # Can you spot the Type Error? ```ts function getProperty<T>(obj: T, propertyName: string) { return obj[propertyName]; } ``` I posed this question on Twitter recently and it received some interesting responses: {% embed https://twitter.com/danfletcherdev/status/1613536297850929153?s=20&t=sjv7dFDz0XYa3y00bzQ4-A %} The solution was intended to be rather simple, and in a way it is. However there is some interesting nuance to this problem that I hadn't considered until people started commenting. Which then took me on a journey for a much more complicated solution. For example, this comment, pointed out a flaw in my simplified version: {% embed https://twitter.com/Nartc1410/status/1613601410586648577?s=20&t=pl1DBnR4PYHuwzN2qQ7Ntw %} I thought it would be easier to unpack all of this in longform rather than a series of tweets. # What is the Type Error? I hope you tried to figure it out for yourself first, but here's the answer: > Element implicitly has an 'any' type because expression of type 'string' can't be used to index type 'unknown'. No index signature with a parameter of type 'string' was found on type 'unknown'. What this error is saying, is that you can't index the `obj` parameter using `propertyName`. Essentially TypeScript is helping you avoid a few mistakes. ## Mistake 1 `obj` might not be an object (remember arrays are objects in JavaScript too) which would cause a runtime error when we try to do `obj[propertyName]`. For example, what would prevent `getProperty(123, 'name')`? Nothing. But TS prevents it with a Type Error, thankfully 🙏 ## Mistake 2 Even if `obj` is an object, it might not have the key that we pass as the `propertyName` argument. 
For example: ```ts const prop = getProperty({name: 'Dan'}, 'age') ``` In this case, `prop` would end up `undefined`, since `'age'` isn't a property of `{name: 'Dan'}` (and TS would infer `any` as the return type from the function by the way -- also not good!) # The Fix Now that we can spot the error, and why it's an issue, how can we fix this? One approach might be to: ## Narrow the type of T ```ts function getProperty<T extends {}>(obj: T, propertyName: string) { return obj[propertyName]; } ``` What we've done is *narrowed* the type of `T` so that it's more specific. Now it has to be of type `object`. This solves the first issue mentioned above, however it only changes the message of the Type Error to be: > Element implicitly has an 'any' type because expression of type 'string' can't be used to index type '{}'. No index signature with a parameter of type 'string' was found on type '{}'. So instead what you could do is: ```ts function getProperty<T extends Record<string, any>>(obj: T, propertyName: string) { return obj[propertyName]; } ``` `Record<string, any>` is basically saying that `T` is an object, where the keys are strings, but the values can be anything. This will eliminate the Type Error and now the code will compile! But this has a major problem... ```ts function getProperty<T extends Record<string, any>>(obj: T, propertyName: string) { return obj[propertyName]; } // prop is now `any` const prop = getProperty({name: 'Dan'}, 'age') // Oops! prop is undefined, but no Type Error 😬 const shouldBeString: string = prop ``` As you can see we still have the second issue mentioned above, which is that the `getProperty` method doesn't guarantee that the property we pass exists on the object.
A much better approach is to use `unknown` instead of `any`: ```ts function getProperty<T extends Record<string, unknown>>(obj: T, propertyName: string) { return obj[propertyName]; } // prop is now `unknown` const prop = getProperty({name: 'Dan'}, 'age') // Prop is still undefined // But now, TypeScript will force us // to narrow its type before we use it. // So this is a Type Error! const shouldBeString: string = prop ``` By using `unknown` instead of `any` TS will force us to narrow the type before we use it. The snippet above will be a Type Error since `unknown` can't be assigned to type `string`, so now you have to write a predicate or assertion to check that `prop` actually is of type `string`. > 💡**Tip**: In general if you feel the need to use `any` in TypeScript, you should almost *always* use `unknown` instead. This is useful to have, and I want to come back to this approach but there's another, simpler way to fix the issue. ## Use keyof The solution that I was trying to tease out of people on Twitter was to use `keyof T` as the type constraint for `propertyName` instead of using `string`. Here is what that looks like: ```ts function getProperty<T>(obj: T, propertyName: keyof T) { return obj[propertyName]; } ``` Quick refresher on the `keyof` operator: In layman's terms, `keyof` is simply saying that a type must have a **property of** the type on the right hand side of the operator. So for example if `T` is `{name: 'Dan', age: 32}` then `keyof T` is really just a union of string literals like this: ```ts 'name' | 'age' ``` Remember that a string literal such as `'name'` can be a type in TypeScript. So by constraining `propertyName` to be `keyof T` we're saying that `propertyName` must match one of the keys on the given `obj`. This solves both of the issues mentioned above!
```ts function getProperty<T>(obj: T, propertyName: keyof T) { return obj[propertyName]; } // Now the Type Error is here, where we want it // 'age' does not exist on type `{name: 'Dan'}` const prop = getProperty({name: 'Dan'}, 'age') ``` The nice thing about the solution above, is that it also allows the return type to be inferred properly. Here's a couple examples: ```ts // type will be 'string' | 'number' getProperty({name: 'Dan', age: 32}, 'name') // type will be 'string' | 'number' getProperty({name: 'Dan', age: 32}, 'age') ``` That's pretty neat right!? <img src="https://media.giphy.com/media/n3p6JiIG0TzCU/giphy-downsized-large.gif"> ### This has problems too! First of all, it would be nice if we received a more specific type back. Rather than `'string' | 'number'` it would be nice if we got the actual type of the property we're pulling out of the object, right? But that's not the only problem. There's another issue here too. This response from Twitter explains it well: {% embed https://twitter.com/Nartc1410/status/1613601410586648577?s=20&t=pl1DBnR4PYHuwzN2qQ7Ntw %} So with `keyof T` alone, this function doesn't always cause Type Errors when we might expect it to. Here are a few examples where it works as expected, and others where it doesn't: ![A screenshot of TypeScript code detailing the examples discussed above](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4q6d0dcoctm0nga8wq92.png) Why? Because JS is weird and everything that's not an object still kind of acts like one. For example, try this stuff in your JS console: ```js '123'['charAt'] // f charAt() { [native code] } 123['toExponential'] // f toExponential() { [native code] } true['valueOf'] // f valueOf() { [native code] } ``` So this implementation still has some weird behaviour. But what if we combined the two approaches? ## Use both extends and keyof We're almost at the final solution! Let's try combining both approaches from above and see what happens. 
But first let's clean this function up a bit. The types are becoming a bit unruly: ```ts // Start by extracting the type definition to a new type alias type GetProperty = <T>(obj: T, propertyName: keyof T) => typeof obj[keyof T] // Then update the function declaration to use the type const getProperty: GetProperty = (obj, propertyName) => { return obj[propertyName]; } ``` Ok great, now we can focus on just the types without the noise of the function itself getting in the way. So let's add that `<T extends Record<string, unknown>>` bit to this implementation and see what we get: ```ts type GetProperty = <T extends Record<string, unknown>> (obj: T, propertyName: keyof T) => typeof obj[keyof T] const getProperty: GetProperty = (obj, propertyName) => { return obj[propertyName]; } // These now cause type errors! getProperty('123', 'charAt') getProperty(123, 'toExponential') getProperty(true, 'valueOf') // works as it did above but type is still 'string' | 'number' const age = getProperty({name: 'Dan', age: 32}, 'age') ``` Now we've addressed every issue that's been pointed out except for one! It would be nice if when we used the `getProperty` function that the result would be the type of the property on the original object. For example I would expect that: ```ts let age = getProperty({name: 'Dan', age: 32, admin: true}, 'age') ``` Would actually return a type `number` and not `'string' | 'number' | 'boolean'`. In order to do that, we can add a second type parameter to our generic called `K` which `extends keyof T` - meaning that `K` must be one of the keys of `T`. Like this: ```ts type GetProperty = <T extends Record<string, unknown>, K extends keyof T> (obj: T, propertyName: keyof T) => typeof obj[keyof T] ``` Ok, so this hasn't really changed anything yet. We have to actually make use of `K`! 
We want the second argument to this function to be `K` and the return to be `T[K]`, like this: ```ts type GetProperty = <T extends Record<string, unknown>, K extends keyof T> (obj: T, propertyName: K) => T[K] ``` So now when we use this function it has none of the issues discussed above, plus we get a return type that's been narrowed to its most specific type (like `number` for example): ```ts type GetProperty = <T extends Record<string, unknown>, K extends keyof T> (obj: T, propertyName: K) => T[K] const getProperty: GetProperty = (obj, propertyName) => { return obj[propertyName]; } // works as it did above but now, type is a number!! let age = getProperty({name: 'Dan', age: 32, admin: true}, 'age') ``` # Conclusion I've been on a mission to level up my competence in TypeScript lately. So I learn and practice something new every morning before work (doing the #100DaysOfCode challenge). But sharing this journey in public and posing questions like the original Tweet has really helped bring a lot of depth to my learning. I honestly thought that the answer to my question was an easy one, and it turned out to have a way deeper answer than I anticipated. I hope you enjoyed this little journey that I went on, and maybe even deepened your understanding of TypeScript too! I couldn't have written this article alone so kudos to these gems: - https://twitter.com/Nartc1410 (for the insights and poking holes in my logic) - https://twitter.com/bitknight (for proofreading, insights and suggestions) - https://twitter.com/rgolea (suggesting solutions) --- Like what you read? Want to support me? A cup of coffee goes a long way 🙏 Why not buy me one? https://ko-fi.com/danjfletcher
danjfletcher
79,386
Skipping tests
Sometimes you might want to skip a particular test while executing others for s...
352
2019-01-31T06:11:05
https://wangonya.com/blog/skipping-tests/
python, testing, beginners
Sometimes you might want to skip a particular test while executing others for some reason. Maybe the database guy isn't done setting up and that particular test requires a database connection. Instead of having to wait, you can just write the test and instruct pytest to skip it, giving the appropriate reason so it doesn't look like you just skipped a failing test to keep your test suite green. There's a couple of ways to do this. The simplest is to use the `@pytest.mark.skip` decorator like so: ```python import pytest def test_stuff(): # this test will be executed pass @pytest.mark.skip(reason="just testing if skip works") def test_other_stuff(): # this one will be skipped pass ``` Running your tests should produce the output below: ```shell collected 2 items skip.py .s [100%] ===================== 1 passed, 1 skipped in 0.05 seconds =============== ``` Notice the little `s`. It shows the test that was skipped. Pytest also tells us that `1 passed, 1 skipped`. If you need a more verbose output, you can use the `-rs` flag `pytest skip.py -rs`: ```shell collected 2 items skip.py .s [100%] ==================== short test summary info =============== SKIP [1] skip.py:14: just testing if skip works =================== 1 passed, 1 skipped in 0.02 seconds ========== ``` The test above was skipped even before it started. This isn't always ideal. 
You can have more control over how the test is skipped by using the `pytest.skip(reason)` function: ```python import pytest def setup_stuff(): return False def test_stuff(): # this test will be executed pass def test_other_stuff(): # this one will be skipped if setup_stuff() returns false if not setup_stuff(): pytest.skip("setup failed") else: pass ``` ```shell collected 2 items skip.py .s [100%] =================== short test summary info ============================= SKIP [1] skip.py:12: setup failed ==================== 1 passed, 1 skipped in 0.05 seconds ================= ``` If you prefer to check that the condition is satisfied before the test starts, then you can use `skipif`: ```python import pytest def setup_stuff(): return False def test_stuff(): # this test will be executed pass @pytest.mark.skipif(not setup_stuff(), reason="setup failed") def test_other_stuff(): # this one will be skipped if setup_stuff() returns false pass ``` ```shell collected 2 items skip.py .s [100%] =================== short test summary info ============================= SKIP [1] skip.py:12: setup failed ==================== 1 passed, 1 skipped in 0.05 seconds ================= ``` There are many other ways you can customize your tests to skip depending on certain conditions as explained in the [docs](https://docs.pytest.org/en/latest/skipping.html).
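One of those worth knowing is that a `skipif` marker can be defined once and reused across many tests. A small sketch (the platform check is just an illustrative condition, not from the article):

```python
import sys

import pytest

# A reusable marker: define the condition once, apply it to any test
requires_linux = pytest.mark.skipif(
    not sys.platform.startswith("linux"),
    reason="this test only runs on Linux",
)

@requires_linux
def test_linux_only_stuff():
    # skipped (with the reason above) everywhere except Linux
    pass
```

This keeps the skip condition and its reason in one place instead of repeating them on every test.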
wangonya
79,442
Why Field Service Management Business Must Have Mobile Apps
5 Reasons Why your Field Service Management Needs Mobile Apps. This blog highlights major five of those reasons
0
2019-01-31T10:30:12
https://dev.to/appdevelopmentagency/why-field-service-management-business-must-have-mobile-apps-186e
mobileapps
--- title: Why Field Service Management Business Must Have Mobile Apps published: true description: 5 Reasons Why your Field Service Management Needs Mobile Apps. This blog highlights five major reasons tags: Mobile Apps --- ![](https://i.imgur.com/IW05FAz.png) You must have heard the popular phrase “There’s an app for that!”- coined and later trademarked by Apple in the early days of the iPhone. And while the catchphrase has now been around for almost a decade, it is today more relevant than ever. After all, mobile apps, initially conceived as consumer products, have long seeped into the business environment and have become a part of the business structure. You can hardly find any enterprise today that doesn’t use mobility solutions at one point or the other. But before we get to the benefits of mobile apps, let’s first clarify what kind of apps we are talking about. To keep things generic, consider a business that sends employees to remote locations for various purposes - meeting with clients, surveys, etc. - who then report back to the office at the end of the day. This kind of field service structure is used by a large number of businesses across varying industries but remains highly inefficient and, as you will see, has much room for improvement at all levels. ## Better communication Generally, there are a large number of workers in the field at any given time with little or no coordination. This often leads to overlaps, and coordinating manually can waste a lot of time. Mobile apps offer a common platform through which they can access their tasks, bringing transparency that avoids any overlaps. Also, when two or more workers need to coordinate, these applications can be instrumental in keeping all the tasks in the loop. ## Less paperwork Paperwork is cumbersome, time-consuming and tough to manage. For instance, what if a worker gets on-site and realizes he/she doesn’t have a particular form? 
Mobile apps, with their digital forms, offer a seamless workflow and in the process deliver an added layer of security. Be it collecting data from customers, generating invoices or authenticating through signatures, workers can accomplish a lot more with a single device than with a bag full of papers. ## Operational efficiency Digitization helps not only workers in the field but also back-office operations. Generally, once workers submit documents, it takes a lot of manual work to integrate such data into the system, which is highly inefficient. With mobile applications, such businesses can automate a lot of tasks and cut operational bottlenecks at various levels. ## Management and human resources Keeping track of on-site employees remains a major challenge for businesses; it is closely related to productivity and puts them at risk of fraudulent entries. With mobile apps offering location-based services, businesses can track in real time where each of their employees is and make the most accurate assessment. For employees, it becomes a convenient channel to reach out to their managers. Be it requesting on-field help, applying for leave or simply signing off for the day, all such tasks can be done from within the app without having to go to the office in person. ## Customer satisfaction It's not just the workers who hate paperwork but also the customers. Be it requesting/tracking services, making payments or simply furnishing details, digital channels are known to deliver hassle-free service and thus drastically improve customer satisfaction. That said, many businesses consider enterprise app solutions to be a tool only for large corporations with deep pockets. And to be fair, mobile applications aren't cheap to build. But when you do a detailed ROI analysis, you would find that such applications deliver higher efficiency and save costs - gains that ultimately more than compensate for the upfront investment in mobile app development. 
And if you add to that the benefits of better employee management and customer satisfaction, applications built by [top app developers](https://www.appdevelopmentagency.com/top-mobile-app-development-companies/) look like a driver of business progress rather than a mere convenience tool.
appdevelopmentagency
79,458
Do you know there is a version of VSCode available without MS branding, telemetry or licensing?
What do you think of VSCodium? Have you used it? Is the branding,...
0
2019-01-31T11:51:09
https://dev.to/spences10/do-you-know-there-is-a-version-of-vscode-available-without-ms-branding-telemetry-or-licensing-32a1
discuss
--- title: Do you know there is a version of VSCode available without MS branding, telemetry or licensing? published: true tags: discuss cover_image: https://thepracticaldev.s3.amazonaws.com/i/k6zca948lsmhdnf0cdgo.png --- ## What do you think of [VSCodium]? ## Have you used it? ## Is the branding, licensing and telemetry a big deal? I have been using VSCodium for the last week now and I can honestly say that, apart from the branding, I can't find any differences between VSCode and VSCodium <!-- LINKS --> [vscodium]: https://github.com/VSCodium/vscodium
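For context, stock VSCode also lets you switch most telemetry off yourself via `settings.json` (VSCodium just ships with it off and unbranded); at the time of writing the relevant keys were:

```json
{
  "telemetry.enableTelemetry": false,
  "telemetry.enableCrashReporter": false
}
```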
spences10
79,473
Everything is connected - or why I'm focusing on Relational Applications
My point of view in and out of software about the inter-connectivity of things
0
2019-01-31T13:14:24
https://dev.to/tomavelev/everything-is-connected---or-why-im-focusing-on-relational-applications-2ino
rdbs, connectionsinlife, datastructure
--- title: Everything is connected - or why I'm focusing on Relational Applications published: true description: My point of view in and out of software about the inter-connectivity of things tags: RDBS, connections-in-life, datastructure --- <!-- wp:paragraph --> <p>Everything is connected with everything else one way or another. You don't have to be a fan of conspiracy theories, a believer in some specific religion, exceptionally intelligent, or the carrier of some "super" gene to become aware of it, to see it, to comprehend it, to live it. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>If you've watched Discovery Channel or Animal Planet shows about the cycles in nature, you'll have seen that the number of any living being inside the food chain can cause variations in the other layers. If there are too few plants, the grass-eating animals will decline in number (or migrate), and that influences the hunting animals. If there are too many predatory animals, the number of grass-eating animals will diminish, which in turn benefits the grass and the plants. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Connections and cause-and-effect relations between events, habits and activities can also be found in human society at the micro level - in small scale, even in a single human life - and at the macro level - where we are going as a species and as a civilization, in terms of numbers, technology and our relationships with each other. And actually, if one studies history, one can see numerous repetitions. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Money can be used as an indicator of connectivity in human society. Blockchain arose 10 years ago, and right now there are A LOT of crypto-currencies whose prices in terms of fiat currency are more than 99% linked together. When one crypto price goes up, others follow; when one goes down, others sooner or later go down too. 
2008 was a year of crisis that showed how connected the banks around the world are. The world is so connected that politics influences even the smallest economies - in the smallest countries located near the storm of influence - and this is about lifelong livelihoods, industries that have sustained families, villages and even medium to big-sized cities.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>In the technology sector, Oracle placed a bet on relational databases and technologies around 20-25 years ago and won. Today, thanks to the success of relational databases, relations between all kinds of data are embedded at the lowest level all around the Internet and can be seen and perceived in all the big sites: Facebook - connecting friends, people, organizations, pictures, events, interests, comments, etc.; LinkedIn - linking people with the technological skills they have, with each other and with human resources; e-commerce sites - linking categories, products, user reviews and prices; blogging platforms (used for more than just blogging) - linking topics, articles, posts, pages and writers, and powering more than one-third of the Internet; and so on.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>I myself have made a small contribution (I am probably not the only one doing this) by implementing some custom relations. One of my sites/products, <a href="http://kakvoiadesh.com/app/index.jsp">kakvoiadesh.com</a>, is focused on food products and items - the good ingredients, the bad ingredients, the organs of the human body, and the allergies and diseases that the food items are beneficial or harmful to, and so on. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>What I also found is that, at least for now, not everything can be modeled with computer models and relations (not that I know a lot of mathematics, but still). 
That's why I wrote a book - <a href="https://www.amazon.com/My-Path-Health-Toma-Velev-ebook/dp/B06XNP3B8C">My Path to the Health</a>. There are things that the human mind cannot fully comprehend, like the power of the human mind itself - intention, internal motivation and strength (psychology). </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>I have written and recorded myself before (audio and video) about the different ways to handle data: <br> <a rel="noreferrer noopener" href="https://tomavelev.com/blog/Developer%E2%80%99s+Life+21.03.2018+-+ways+of+handling+data?l=en_US" target="_blank">https://tomavelev.com/blog/Developer%E2%80%99s+Life+21.03.2018+-+ways+of+handling+data?l=en_US</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>A product that I am focusing on a lot now is <a href="https://tomavelev.com/GeneratorApp/">https://tomavelev.com/GeneratorApp/</a> It generates SQL, a database layer in some language (PHP, Java, Kotlin), some more layers here and there, and a User Interface with the basic operations over your data definitions. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The last few years I have focused a little bit more on Android. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>I am fully aware that this OS may fade out of existence, but the other part - the core code for processing data - will most likely still be usable, no matter what user interface arises: voice services like Assistant or Alexa, or messenger bots like Facebook Messenger, Viber or Telegram, or something totally new. They all will need some layer for processing data in a structured way, and currently the RDBS are one of the finest ways to structure your information.</p> <!-- /wp:paragraph -->
tomavelev
79,583
Build a Fluent Interface in Java in Less Than 5 Minutes
(Skip to the good part!) Building a Basic Class Building classes in...
0
2019-01-31T22:56:14
https://dev.to/awwsmm/build-a-fluent-interface-in-java-in-less-than-5-minutes-m7e
java, beginners, tutorial, oop
[(Skip to the good part!)](#good-part) ## Building a Basic Class Building classes in Java is _**EASY!**_ All you need is a class declaration inside a file with the same name as the class (but with a `.java` at the end of it). A minimal example would be something like ```java // MyClassName.java // class must be public and created in a file called <classname>.java public class MyClassName { // no other code needed! } ``` You can put the above code into a file named `MyClassName.java` and load it in the `jshell` with the `/open` command: ```java jshell> /open MyClassName.java jshell> MyClassName j = new MyClassName() j ==> MyClassName@6c3708b3 ``` ...that's it! Because `MyClassName` (like all classes) extends `Object`, we get some built-in methods if we type `j.` in the `jshell` and hit the `tab` key: ```java jshell> j. equals( getClass() hashCode() notify() notifyAll() toString() wait( ``` ## Building a Custom Constructor Of course, this isn't very exciting. Often, we'll create our own _constructors_ to give objects a certain _state_ (instance variables, etc.). 
Let's add a few private variables which we can set via a constructor: ```java // MyClassName2.java // class must be public and created in a file called <classname>.java public class MyClassName2 { private int instanceInt; private double instanceDouble; private String instanceString; public MyClassName2 (int myInt, double myDouble, String myString) { this.instanceInt = myInt; this.instanceDouble = myDouble; this.instanceString = myString; } } ``` Now if we try to create an instance of `MyClassName2` in the `jshell` using the default no-argument constructor, we get an error: ```java jshell> /open MyClassName2.java jshell> MyClassName2 j2 = new MyClassName2() | Error: | constructor MyClassName2 in class MyClassName2 cannot be applied to given types; | required: int,double,java.lang.String | found: no arguments | reason: actual and formal argument lists differ in length | MyClassName2 j2 = new MyClassName2(); | ^----------------^ ``` ...this is because the default zero-argument constructor is _only_ provided by Java if no other constructors have been defined. Since we defined our three-argument (`int`, `double`, `String`) constructor above, that's the only one we have access to at the moment. So let's create a `MyClassName2` object: ```java jshell> MyClassName2 j2 = new MyClassName2(42, 3.14, "bonjour le monde!") j2 ==> MyClassName2@6c3708b3 ``` ...great! It worked! ## Getters and Setters ### Getters To get anything _out of_ this object, though, we'll need some _getters_. 
Let's add those: ```java // MyClassName3.java // class must be public and created in a file called <classname>.java public class MyClassName3 { private int instanceInt; private double instanceDouble; private String instanceString; public MyClassName3 (int myInt, double myDouble, String myString) { this.instanceInt = myInt; this.instanceDouble = myDouble; this.instanceString = myString; } // getters are methods, so they have a return type public int getMyInt() { return this.instanceInt; } // we use 'this' to refer to the object calling the method public double getMyDouble() { return this.instanceDouble; } // ('this' isn't strictly necessary here, but it can make the code clearer) public String getMyString() { return this.instanceString; } } ``` Now, we can create an object with some parameters and extract those parameters later! ```java jshell> /open MyClassName3.java jshell> MyClassName3 j3 = new MyClassName3(17, 1.618, "hola mundo!") j3 ==> MyClassName3@335eadca jshell> j3.getMyInt() $5 ==> 17 jshell> j3.getMyDouble() $6 ==> 1.618 jshell> j3.getMyString() $7 ==> "hola mundo!" ``` ### Setters _Setters_ let us _change the state_ of an object. In other words, they let us change the values held in an object's instance variables. Setters are just methods that take a parameter and set an instance variable equal to that parameter. 
Let's add some: ```java // class must be public and created in a file called <classname>.java public class MyClassName4 { private int instanceInt; private double instanceDouble; private String instanceString; public MyClassName4 (int myInt, double myDouble, String myString) { this.instanceInt = myInt; this.instanceDouble = myDouble; this.instanceString = myString; } // getters are methods, so they have a return type public int getMyInt() { return this.instanceInt; } // we use 'this' to refer to the object calling the method public double getMyDouble() { return this.instanceDouble; } // ('this' isn't strictly necessary here, but it can make the code clearer) public String getMyString() { return this.instanceString; } public void setMyInt (int newInt) { this.instanceInt = newInt; } public boolean setMyDouble (double newDouble) { this.instanceDouble = newDouble; return true; } public String setMyString (String newString) { this.instanceString = newString; return this.instanceString; } } ``` Setters should describe (in the method name) what instance variable they're setting, and they usually only take a single parameter (which will be assigned to the instance variable indicated by the name of the method). But the philosophy on return values from setters falls into several camps: - If your setter _will always, without fail_ successfully set the value, some people will return `void` from a setter method. - If there's a chance your setter could fail, you could also return `boolean`, with `true` indicating that the instance variable was successfully assigned the provided value - Or, you could just return the instance variable in question at the end of the method. If it's successfully been changed, the return value will reflect that. These three philosophies have been applied, in order, to the setters given above. 
Let's see them in action: ```java jshell> MyClassName4 j4 = new MyClassName4(19, 2.71828, "hello world!") j4 ==> MyClassName4@72b6cbcc jshell> j4.getMyInt(); j4.setMyInt(23); j4.getMyInt() $10 ==> 19 $12 ==> 23 jshell> j4.getMyDouble(); j4.setMyDouble(1.2345); j4.getMyDouble() $13 ==> 2.71828 $14 ==> true $15 ==> 1.2345 jshell> j4.getMyString(); j4.setMyString("aloha honua"); j4.getMyString() $16 ==> "hello world!" $17 ==> "aloha honua" $18 ==> "aloha honua" ``` We can see (in the non-existent step `$11`) that `setMyInt()` returns void. No value is shown in the `jshell`. `setMyDouble()` returns `true`, though, as the value was successfully changed. We can see that in the before (`$13`) and after (`$15`) steps. Finally, `setMyString()` returns the value of `this.instanceString` at the end of the method. Since the instance variable has successfully been changed, the return value reflects that. <a name="good-part"></a> ### Fluent Interfaces But we can return any sort of value we want from a method! There's no reason why we couldn't return `-14` or `"r2349jp3giohtnr"` or `null` from any one of those setters. There's no restriction to the kind of value we can return. __So what happens if we return `this`?__ Let's try it! Let's change our `setMyInt()` method to return `this`, which is a reference to the object which is calling the method (in this case, the `setMyInt()` method): ```java public MyClassName5 setMyInt (int newInt) { this.instanceInt = newInt; return this; } ``` I removed the rest of the class for brevity, but it should be the same as `MyClassName4` (with `4` changed to `5` everywhere). So how does this method work? Well, it returns the object which called the method, which in this case is a `MyClassName5` object, so we need to make that the return value. 
Let's try this method and see what happens: ```java jshell> MyClassName5 j5 = new MyClassName5(0, 0.0, "zero") j5 ==> MyClassName5@5d76b067 jshell> j5.setMyInt(-1) $21 ==> MyClassName5@5d76b067 ``` Look at the return values from these two statements in the `jshell` -- they're the same! They both return the object `MyClassName5@5d76b067`. If that's true, shouldn't we be able to call another method on the value returned from `setMyInt()`? Let's try it! ```java jshell> j5.getMyInt() $22 ==> -1 jshell> j5.setMyInt(2).setMyDouble(2.2) $23 ==> true jshell> j5.getMyInt(); j5.getMyDouble() $24 ==> 2 $25 ==> 2.2 ``` Look what happened there! Chaining the methods worked! But the return value (in `$23`) was `true` because `setMyDouble()` still returns a `boolean`. If we change all of our setters to return a `MyClassName5` object, we should be able to chain them together in any order! Let's make a `MyClassName6` with setters that return `MyClassName6` objects, similar to what we did above. Again, I'll only write the setters here to save space: ```java public MyClassName6 setMyInt (int newInt) { this.instanceInt = newInt; return this; } public MyClassName6 setMyDouble (double newDouble) { this.instanceDouble = newDouble; return this; } public MyClassName6 setMyString (String newString) { this.instanceString = newString; return this; } ``` Let's try it! ```java jshell> MyClassName6 j6 = new MyClassName6(6, 6.6, "six") j6 ==> MyClassName6@131276c2 jshell> j6.setMyString("seven").setMyDouble(77.77).setMyInt(777) $28 ==> MyClassName6@131276c2 jshell> j6.getMyString(); j6.getMyDouble(); j6.getMyInt() $29 ==> "seven" $30 ==> 77.77 $31 ==> 777 ``` That's it! All we had to do to be able to chain methods together was to return `this`. Returning `this` lets us call method after method in series, and all of those methods will be applied to the same (original) object! 
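To see the whole idea in one place, here's a condensed, standalone version of the fluent class (the same pattern as `MyClassName6`, just renamed so it can live in its own file and run outside the `jshell`):

```java
// FluentThing.java -- a condensed fluent-setter class (names are illustrative)
public class FluentThing {

    private int myInt;
    private double myDouble;
    private String myString;

    // each setter mutates an instance variable, then returns `this`
    public FluentThing setMyInt(int newInt) { this.myInt = newInt; return this; }
    public FluentThing setMyDouble(double newDouble) { this.myDouble = newDouble; return this; }
    public FluentThing setMyString(String newString) { this.myString = newString; return this; }

    public int getMyInt() { return this.myInt; }
    public double getMyDouble() { return this.myDouble; }
    public String getMyString() { return this.myString; }

    public static void main(String[] args) {
        // the chain works because every setter hands back the same object
        FluentThing t = new FluentThing().setMyString("seven").setMyDouble(77.77).setMyInt(777);
        System.out.println(t.getMyString() + " " + t.getMyDouble() + " " + t.getMyInt());
    }
}
```

Compile and run it with `javac FluentThing.java && java FluentThing` and you'll see all three chained values printed.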
An idea which is quite similar to fluent interfaces is the _Builder pattern_, which works almost the same as the setters above, but that is a topic for another article. Be sure to let me know in the comments if you have any questions or critiques!
awwsmm
79,729
let, async, await as variable
if i assign let some other value, how interpreter still understand its original use.
0
2019-02-01T13:41:34
https://dev.to/krshna/let-async-await-as-variable-57fp
javascript
--- title: let, async, await as variable published: true description: tags: javascript --- ![](https://thepracticaldev.s3.amazonaws.com/i/jrf51xu87wlkpwpn47ac.png) If I assign `let` some other value, how does the interpreter still understand its original use?
krshna
134,969
Create a Simple Dark Mode with CSS Filters
So there's this thing you probably haven't heard of called "dark mode". Yeah, you're right, they're...
0
2019-07-10T06:53:26
https://dev.to/boyum/create-a-simple-dark-mode-with-css-filters-4ji4
css, html, frontend
--- title: Create a Simple Dark Mode with CSS Filters published: true description: tags: CSS, HTML, Frontend --- So there's this thing you probably haven't heard of called "dark mode". Yeah, you're right, they're everywhere. Let me teach you how to grace the interwebs with even more instances of light on dark! 🌓 **Caveat**: This tutorial uses the power of [CSS Filters](https://developer.mozilla.org/en-US/docs/Web/CSS/filter), and thus isn't supported by every browser. As of July 10 2019, [94.36% of users support it worldwide](https://caniuse.com/#search=css%20filter). ## How do filters work? The CSS `filter` property lets us add effects with functions such as grayscale, blur, saturate and a few others. Each of these functions takes parameters, and because of this, the possibilities are practically endless. [The CSS filter MDN page](https://developer.mozilla.org/en-US/docs/Web/CSS/filter) has a few examples and describes them quite well. ```css .invert-me { filter: invert(100%); } ``` Another filter function is the ✨`invert()`✨ function. It takes a percentage (or a decimal number between 0 and 1). MDN tells us this: _[The filter function] inverts the samples in the input image. The value of amount defines the proportion of the conversion. A value of 100% is completely inverted. A value of 0% leaves the input unchanged. Values between 0% and 100% are linear multipliers on the effect. The lacuna value for interpolation is 0._ This sounds perfect for creating a simple dark mode! After all, black inverted is white (and vice versa), right? Sure! Let's try just inverting the entire page and see how that looks: {% codepen https://codepen.io/sindre/pen/QXzPdj %} It looks kind of ok already! Apart from the inverted image, the page looks alright. How could we stop the image from inverting? 
We might be able to pull some magic trick by moving the images out of the container and then positioning them correctly with absolute positioning; however, that sounds terrible for UX, DX and a11y, and possibly more abbreviations, so we need to find another way. We could also invert only parts of the page, say every `<p>`, `<hx>`, `<strong>`, `<ul>` and so on, but this might create new problems when trying to invert the background, as there might be margins between the elements. ## The solution But wait! The invert of white is black, and the invert of black is white... Could this mean that any color inverted an even number of times is just the same color? Of course that's the case! Let's just invert all images back to their original form when inverting the wrapper! {% codepen https://codepen.io/sindre/pen/WqJxGZ %} That seems to work! Now every image will be inverted back. We can also use the class `no-dark-mode` to invert back other elements, such as videos, elements with CSS background images, or parts of the page that are already light on dark in color.
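In case you don't want to open the CodePens, the double-inversion idea boils down to something like this sketch. The `no-dark-mode` class name matches the one mentioned above; the `.dark-mode` wrapper class is an assumption for illustration:

```css
/* Invert the whole page: dark text on light becomes light on dark. */
.dark-mode {
  filter: invert(100%);
}

/* Invert images (and anything else that opts out) a second time:
   two inversions cancel out, restoring the original colors. */
.dark-mode img,
.dark-mode .no-dark-mode {
  filter: invert(100%);
}
```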
boyum
79,815
Build an image classification app with NativeScript-Vue and Azure Custom Vision API
Disclaimer: This is my first post, please feel free to leave any comments and suggestions in the co...
0
2019-02-01T23:01:07
https://dev.to/edlgg/build-an-image-classification-app-with-nativescript-vue-and-azure-custom-vision-api-n2c
javascript, nativescript, azure
--- title: Build an image classification app with NativeScript-Vue and Azure Custom Vision API published: true description: tags: javascript, nativescript, azure --- ![banner](https://github.com/edlgg/NativeScript-Vue-MedicineClassifier/blob/master/postImages/banner.jpg?raw=true) Disclaimer: This is my first post, please feel free to leave any comments and suggestions in the comments. Prerequisites: Knowing Vue, knowing what an API is. [Github repo with everything](https://github.com/edlgg/NativeScript-Vue-MedicineClassifier) ## Introduction I've been working with Vue.js for a couple of months now. Since I heard about NativeScript-Vue I've been looking for an opportunity to try it out. This week I got that opportunity. I'm currently taking a business class and at some point we were asked for business ideas. A girl in my team said that it would be cool to have an app that lets you take a picture of some medicine and see what it is for, its characteristics and similar medicines. To me it sounded interesting, since it would be easy to do a proof of concept with Azure's Custom Vision API. ## Planning I've been told that I should think about the specific problem I have before choosing which technologies I'm going to use. However, for this project I knew I wanted to try NativeScript-Vue and Azure's Custom Vision API, so the decision was made. Objective: Build a simple app that takes a picture of a medicine and tells you which one it is. Since this is a proof of concept, made basically just out of curiosity in 1 day, it won't be very elegant and will only work with 3 types of medicine (at least for me; you can train your model on anything you want). ## Design This app is divided into 2 main parts: 1. Back End (Azure's Custom Vision API) Using this API is free and extremely easy. The hardest part is getting the pictures of the things you want to classify. I found 3 different medicines that I ended up using and took about 300 pictures of each one.
I uploaded them and trained the model. The only thing we need from the API is the URL and the Prediction Key. [Azure's Custom Vision API](https://customvision.ai "Azure's Custom Vision API") ![API-Data](https://github.com/edlgg/NativeScript-Vue-MedicineClassifier/blob/master/postImages/API.jpg?raw=true) 2. Front End (NativeScript-Vue) This is where the meat of the application is, although in reality it's not going to be a lot of work. We basically need to do four things: 1. Create a basic UI 2. Set up the data model with the picture and medicine name 3. Make use of the camera 4. Send the picture to the API for classification and display the result The UI will allow you to press a button and take a picture. After that it will display the image you took and the name of the medicine. Something like this: <img src="https://github.com/edlgg/NativeScript-Vue-MedicineClassifier/blob/master/postImages/1.jpg?raw=true" alt="drawing" width="200"/> <img src="https://github.com/edlgg/NativeScript-Vue-MedicineClassifier/blob/master/postImages/2.jpg?raw=true" alt="drawing" width="200"/> ## Code To code the app we will use NativeScript's web-based IDE. You can access it [here](https://play.nativescript.org/) or at play.nativescript.org Before we start you need to do the following: 1. Create an account 2. Create a new Vue project by clicking New at the top left 3. Change the name of the project to something you like 4. Get rid of the unnecessary HTML, CSS and JS until it looks like this HTML We got rid of some labels we weren't going to use ``` <template> <Page class="page"> <ActionBar title="Home" class="action-bar" /> <StackLayout class="home-panel"> <Label textWrap="true" text="Play with NativeScript!" /> </StackLayout> </Page> </template> ``` JS We left it the same ```javascript <script> export default { data () { return { }; }, } </script> ``` CSS We got rid of one class.
``` <style scoped> .home-panel { vertical-align: center; font-size: 20; margin: 15; } </style> ``` To try the app, press the QR code button at the top and scan the code with the app it tells you to download there. It should look like this. <img src="https://github.com/edlgg/NativeScript-Vue-MedicineClassifier/blob/master/postImages/step1.jpg?raw=true" alt="drawing" width="200"/> ### UI First we need to remove the label we had and add the image, the button and a label to display the name of the medicine. This is pretty straightforward since NS has the needed elements pre-made. You can look at the docs [here](https://nativescript-vue.org/en/docs/introduction/). We will have placeholders in the elements for now. Also, I changed the title in the ActionBar to something relevant. The template should now look like this: ``` <template> <Page class="page"> <ActionBar title="Medicine Classifier" class="action-bar" /> <StackLayout class="home-panel"> <Image class="mainImage" src="https://github.com/edlgg/NativeScript-Vue-MedicineClassifier/blob/master/postImages/example.jpg?raw=true" /> <Button class="button" text="Take Picture" height="80" width="300" /> <Label class="data" text="7 Azahares" height="50" width="350" backgroundColor="#8fad88" /> </StackLayout> </Page> </template> ``` We will also add some CSS so it doesn't look so ugly. I won't explain the CSS since it's out of the scope of this post, but it is very basic.
``` <style lang="scss" scoped> .home-panel { vertical-align: center; font-size: 20; margin: 15; } .page { background-image: linear-gradient(to right, #4D7C8A, #7F9C96); } .actionBar { background-color: #1B4079; color: #ffffff; } .mainImage { margin: 200px; margin-bottom: 25px; margin-top: 25px; border-radius: 15px; padding: 5rem; object-fit: contain; } .button { margin-bottom: 50px; } .data { border-radius: 15px; font-size: 22; font-weight: bold; text-align: center; } </style> ``` Result: <img src="https://github.com/edlgg/NativeScript-Vue-MedicineClassifier/blob/master/postImages/step2.jpg?raw=true" alt="drawing" width="200"/> ### Data model What we need to do right now is make the static data we have dynamic. To do that we need to create the variables we are going to use and bind them to the relevant elements. We basically just have 2 things that change: the image and the predicted name. We will also add some v-if's so the elements only show if there is something set. Make sure to add the : in front of src and text since we are now binding them to variables. JS ``` data() { return { pictureFromCamera: "https://github.com/edlgg/NativeScript-Vue-MedicineClassifier/blob/master/postImages/example.jpg?raw=true", predictedName: "testName" }; } ``` Template ``` <StackLayout class="home-panel" orientation="vertical"> <Image v-if="pictureFromCamera" class="mainImage" :src="pictureFromCamera" /> <Button class="button" text="Take Picture" height="80" width="301" /> <Label v-if="predictedName" class="data" :text="predictedName" height="50" width="350" backgroundColor="#8fad88" /> </StackLayout> ``` The app should look exactly the same as before, but now we can change the values of our variables via a method call. ### Set up the camera This is where it starts to get interesting. We need to be able to take a picture and store it in our pictureFromCamera. We need to add methods to the Vue instance and add the takePicture method.
Then, we need to add an @tap to the button so it runs the method when we press on it. We can also set the pictureFromCamera and predictedName to null so it doesn't load anything at the beginning. IMPORTANT: For the camera to work you need to add the nativescript-camera package. To do that just click on the + sign at the top right of your file explorer. Then click add NPM package and search for 'nativescript-camera'. After that select the latest version and click add. To include it you need to add it to the top of the script as shown below. I used [this article](https://www.raymondcamden.com/2018/11/15/working-with-the-camera-in-a-nativescript-vue-app) to learn how to use the camera. JS ``` import * as camera from "../nativescript-camera"; export default { data() { return { pictureFromCamera: null, predictedName: null }; }, methods: { takePicture() { camera.requestPermissions(); camera.takePicture({ width: 108, height: 162, keepAspectRatio: true }).then(picture => { this.pictureFromCamera = picture; }); } } } ``` What this method does is take a picture and then save it in our data model. Feel free to change the width and height so it fits your phone. Template ``` <Button class="button" text="Take Picture" height="80" width="301" @tap="takePicture" /> ``` After that you should be able to take an image and display it. ### Custom Vision API call For this I'm assuming that you already set up your API [here](https://customvision.ai "Azure's Custom Vision API") and you have the URL and Key mentioned at the beginning of the article. This is probably the most complicated part of the whole project. Since we are sending an image, we can't use the normal http module that NS uses for basic http calls. Instead, we are going to use nativescript-background-http. Please add it in the same way we added the last package. Other than that, we are going to use the imageSourceModule and fileSystemModule to save images and access our phone's file system. We need to include them in the script.
JS ``` import * as camera from "../nativescript-camera"; import * as bghttp from "../nativescript-background-http"; const imageSourceModule = require("tns-core-modules/image-source"); const fileSystemModule = require("tns-core-modules/file-system"); export default { ... } ``` To be able to send the picture to the API, the way I did it was to save the image on the device and then make a bghttp call using the path of the saved image. The docs show you [here](https://docs.nativescript.org/ns-framework-modules/image-source) how to save an image to the device, and you can learn [here](https://github.com/NativeScript/nativescript-background-http/) how to use the bghttp module. Remember to set your URL and Key. This is the modified method: ``` takePicture() { camera.requestPermissions(); camera .takePicture({ width: 108, height: 162, keepAspectRatio: true }) .then(picture => { this.pictureFromCamera = picture; const source = new imageSourceModule.ImageSource(); source.fromAsset(picture).then(imageSource => { const folder = fileSystemModule.knownFolders.documents().path; const fileName = "picture.png"; const path = fileSystemModule.path.join(folder,fileName); const picsaved = imageSource.saveToFile(path, "png"); if (picsaved) { console.log("Saved"); var session = bghttp.session( "image-upload"); var request = { url: "YOUR-URL", method: "POST", headers: { "Content-Type": "application/octet-stream", "Prediction-Key": "YOUR-KEY" } }; try { var task = session.uploadFile(path, request); } catch (err) { console.log(err); } task.on("responded", data => { const result = JSON.parse(data.data).predictions[0].tagName; this.predictedName = result; }); } else { console.log("Failed"); } }); }) .catch(err => { console.log("Error: " + err.message); }) } ``` Take a couple of minutes to go through the function. It's nothing complicated. It just saves the image and then makes an http call with the saved image. At the end it reads the prediction from the response and sets it in our model.
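The response handling at the end can be sketched in isolation. This assumes the Custom Vision response shape used above (a JSON body with a `predictions` array sorted by probability, each entry carrying a `tagName`); the medicine names in the example are made up:

```javascript
// Minimal sketch of what the task.on("responded") handler does with data.data.
// Returns the tag of the most probable prediction, or null if there is none.
function topPrediction(responseBody) {
  const parsed = JSON.parse(responseBody);
  if (!parsed.predictions || parsed.predictions.length === 0) {
    return null; // nothing recognized
  }
  return parsed.predictions[0].tagName;
}

// Example with an illustrative response body:
const sampleResponse = JSON.stringify({
  predictions: [
    { tagName: "7 Azahares", probability: 0.97 },
    { tagName: "Paracetamol", probability: 0.02 }
  ]
});
console.log(topPrediction(sampleResponse)); // → "7 Azahares"
```

A real app would also want to check the probability before trusting the top tag, but for this proof of concept the first entry is enough.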
## Conclusion The app is now finished. You should be able to take a picture with your phone and call the Custom Vision API. I hope you liked the article and if you think there is anything I should add, remove or change please let me know in the comments. Thank you!
edlgg
80,222
Why did I write another React Native boilerplate
I was not disappointed with what I found, but they just left me wanting for more...
0
2019-02-03T17:46:28
https://dev.to/amitm30/why-did-i-write-another-react-native-boilerplate-47cb
reactnative, boilerplates, typescript, templates
--- title: Why did I write another React Native boilerplate published: true description: I was not disappointed with what I found, but they just left me wanting more... tags: reactnative, boilerplates, typescript, templates --- *Most of the boilerplate solutions that I found online were either too simple, just a few npm commands put together, or were too heavily loaded and included too many external dependencies / libraries.* I have written a [detailed article](https://medium.com/@amitm30/why-did-i-decide-to-write-another-react-native-boilerplate-4ead11b7fb93) on it. Please share your views / suggestions: https://medium.com/@amitm30/why-did-i-decide-to-write-another-react-native-boilerplate-4ead11b7fb93
amitm30
80,802
gRPC and Protocol Buffers as an alternative to JSON REST APIs
gRPC is an open-source remote procedure call framework and Protocol Buffers is a mechanism for serial...
0
2019-02-05T12:23:19
https://dev.to/franzejr/grpc-and-protocol-buffers-as-an-alternative-to-json-rest-apis-3cg3
api, distributed
--- title: gRPC and Protocol Buffers as an alternative to JSON REST APIs published: true description: tags: api, distributed --- gRPC is an open-source remote procedure call framework and Protocol Buffers is a mechanism for serializing structured data. Both were developed by Google and are used in their internal and public APIs. Other big players such as Cisco and Netflix already benefit from this technology for mission-critical applications. In this post, we will learn the core features of gRPC and Protobuf, and compare them to JSON REST APIs. Read more at: https://www.smoothterminal.com/articles/grpc-and-protocol-buffers-as-an-alternative-to-json-rest-apis
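As a quick taste of what the full article covers, a gRPC service is declared in a `.proto` file rather than documented as URL routes. The sketch below uses proto3 syntax; the `Greeter` service and message names are illustrative, not taken from the linked article:

```protobuf
syntax = "proto3";

// Strongly typed messages, serialized to a compact binary format
// instead of text-based JSON.
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

// From this definition, gRPC generates client and server stubs
// in each supported language.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}
```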
franzejr
81,128
A Humility Training Exercise for Technical Interviewers
Humility is an important quality in technical interviewers.
0
2019-02-05T21:40:51
https://triplebyte.com/blog/a-humility-training-exercise-for-technical-interviewers
career, interview, advice
--- title: A Humility Training Exercise for Technical Interviewers published: true description: Humility is an important quality in technical interviewers. tags: career, interviews, advice cover_image: https://cl.ly/a3dced0259e3/download/Image%202019-02-05%20at%204.20.34%20PM.png canonical_url: https://triplebyte.com/blog/a-humility-training-exercise-for-technical-interviewers --- **_tl;dr Humility is an important quality in technical interviewers. Our data shows that interviewers who are strongly confident in their own abilities give less consistent interview scores. Interviewers who are aware of their own weaknesses (and of how noisy interviews can be) in contrast, give more consistent scores. We've developed an exercise to help train interviewers in this area._** **Programming interviews are noisy.** Two interviewers judging the same candidate will often reach decidedly different conclusions about the candidate's skill, even in the same specific area. This noise is a significant obstacle to interview accuracy. Reducing this noise is one of the primary goals when training technical interviewers. # Training Interviewers We train a lot of interviewers at Triplebyte. We employ a team of 40 experienced engineers to conduct interviews with candidates as they go through our platform. When we train new members of this team, we focus on several things. We make sure that interviewers are strong and up-to-date in the areas they will be measuring (it's surprisingly hard, sometimes, to distinguish a candidate who gives an unusual answer because they are an expert in an area from someone who gives an unusual answer because they don't know what they are talking about). We make sure that interviewers have clear guidelines for what skills they are assessing (this is the best defense against pattern matching bias in interviewers). 
**However, I now think it's equally important to train interviewers in humility <sup id="ref1">[[1]](#fn1)</sup>.** **Lack of recognition of your own weaknesses is a major source of interview noise.** This is true because overconfident interviewers judge candidates too harshly. The field of software engineering is broad enough that no single engineer can master it all. However, we all convince ourselves that the areas that we have mastered are the most important. And we don't fully respect another engineer if they are weak in an area where we are strong (even if they are very strong in other areas). In interviews, this manifests as a bias against candidates whose technical strengths are dissimilar to those of their interviewers. We measure this at Triplebyte by having multiple interviewers observe and grade the same interview. The effect persists even when interviewers grade areas unrelated to their own strength, and even when they use structured grading rubrics. **Interviewers just give lower scores to candidates who are not like them.** This is noise. It makes interviews less accurate, and we need to reduce it. **_The solution, we've found, is to train interviewers in humility. Interviewers who are aware of their own weaknesses (and aware of how noisy interviews can be) are less influenced by areas other than the ones they are supposed to be evaluating, and give more consistent scores._** # An Exercise to Build Humility **So, how can you train interviewers to be humble?** How can you make yourself more humble? The answer, I think, is to experience what a candidate goes through. Interviewing for a job is humbling. You get grilled. You have to remember things you've not thought about in years. Smart people point out embarrassing flaws in your logic and code. You never know quite as much as you thought you did. And almost everyone fails a good percentage of their interviews. 
**We've developed an exercise that we use to let our interviewers experience being a candidate.** At first we tried simply asking interviewers to interview each other. This did not work, however, because they were not able to give honest feedback. If you interviewed your co-worker and ended up thinking that they were kind of bad, would you tell them this honestly? Most people in this situation wouldn't. We needed to get around this somehow. The solution came unexpectedly. The exercise actually started while I was trying to hire for our interview team. Part of the evaluation process that I used was asking candidates to interview me (it got really meta), and, in order to make these interviews more interesting, I gave a mix of (my attempt at) good and bad answers. I immediately noticed how this got them to give honest (and humbling) feedback--even on my “good” answers. **From this experience, we developed the following exercise to train all our interviewers in humility:** * Pair up with a co-worker, and have them ask you some of their favorite interview questions. * Tell them in advance that you are going to intentionally answer some of the questions poorly (role-playing answers that a weak candidate might give). * Then, as the interview progresses, do exactly this. About half the time give your best answer. The other half of the time give an intentionally poor answer. * After the interview is over, ask your co-worker to critique your answers. **What this does is free your co-worker to be 100% honest.** They don't know which parts of the interview were really you trying to perform well. Moreover, they are on the hook to notice the bad answers you gave. If you gave an intentionally poor answer and they don't “catch” it, they look a little bad. So, they will give an honest, detailed account of their perceptions. **Be careful with this exercise!** I've done this a bunch, and it's deeply humbling. 
It almost always results in someone you respect pointing out things you're bad at. And it has some potential to create conflict. I think it should probably only be done inside teams with a good degree of internal trust (the danger is convincing team members that other team members are not very good). But the result is powerful. It highlights clearly both the extent to which strong engineers are weak in certain areas, and the extent to which interviewers jump to conclusions about what a candidate means. I think everyone who conducts interviews should put themselves through this exercise. # Conclusion **It's important for technical interviewers to be humble.** This creates a better experience for the candidate, and it also makes interviews more accurate. The best interviewers are aware of their own limitations, and have a healthy appreciation of how capricious the process can be. To get better at these things, interviewers need to spend more time as candidates, being interviewed themselves. It's hard to create this experience among co-workers, but we've come up with a (dangerous!) exercise that does a pretty good job. I'd love it if people tried this exercise more broadly. I think it might be something that should become standard for interview teams at most companies. If you give it a try, email me at [ammon@triplebyte.com](mailto:ammon@triplebyte.com) and let me know how it went. * * * <sup id="fn1">[1]</sup> _I've interviewed over 1000 people since starting Triplebyte. Some of them probably don't feel that I was humble when I spoke with them. All I can say to this is... I'm sorry if I did a bad job interviewing you. Everything I write about here I apply to myself._[↩](#ref1 "Jump back to footnote 1 in the text.")
ammonb
81,354
Job Search So Far and Learning to love Programming Again for The Right Reasons
So it has been a year since I graduated from my 4 year university with a Bachelors in computer scienc...
0
2019-02-06T19:27:17
https://dev.to/bcampos103/job-search-so-far-and-learning-to-love-programming-again-for-the-right-reasons-10j3
jobsearch, motivation, computerscience, programming
--- title: Job Search So Far and Learning to love Programming Again for The Right Reasons published: true description: tags: #jobsearch #motivation #computerscience #programming --- So it has been a year since I graduated from my 4-year university with a Bachelor's in computer science, and even though I have to agree that the curriculum was a bit on the bland side, I am still grateful for what I could learn there. Although I haven't made much progress since. I have not dived into any project ideas and have been wandering aimlessly, looking into different technologies such as React, Go, etc. in the year following my graduation, but not really putting them into practice, mostly out of desperation to pique the interest of any hiring manager or whatnot. It was this desperation that made me start to hate programming; hating it just because it couldn't get me a job, and figuring I had no reason to come up with solutions to problems on my own, since I would simply wait for problems to solve to be handed to me. Of course, this is not a healthy mindset to have, and it impacted me greatly. The rest of 2018 was spent with me hating my position in life because my current situation wasn't the most ideal: still working at the grocery store where I had been employed since the latter half of my sophomore year of college, barely making ends meet and barely revisiting the programming languages that I had learned, feeling the joy I once felt coming up with solutions, but swiftly dropping that and leaving them alone for months. Depression, frustration, and anxiety became facts of life for me. I decided to go out more to meet new people to see if it would alleviate this state of mind, and I managed to meet a small group of people there whom I became friends with rather quickly.
After meeting new friends, going to new events and just hanging out with them, I realized that they all had hobbies that they were successful at and met new individuals with, all because they loved to do it. But most importantly, they kept doing it because they disciplined themselves to keep doing what they love. They described it as having to fight to keep themselves engaged in doing what they like. And of course, after some self-reflection on my position and why I am here, it made a lot of sense. I had the motivation, but motivation was only half of the solution; I needed to discipline myself. I realized that I needed to keep reminding myself of how I felt when I learned a new programming language and came up with solutions to problems, as much as I did in college, only this time I had to discipline myself to do it of my own volition instead of having to rely on someone else telling me what to do. Coming into 2019 I decided to do just that. Actually sit down, do what I do, make what I like, and worry about everything else later on. Be happy with my position in life, because that is the real first step to improving it. I might not be the most proficient writer, but I felt like I had to make this post to vent and to see if there were others in the same boat as I was. Hope you are doing well.
bcampos103
81,489
Creative Computing: Why kids should learn logical programming
In the last years, we have entered into an Artificial Intelligence and Automation Age. This...
0
2019-02-07T13:34:28
https://dev.to/lechenco/creative-computing-why-kids-should-learn-logical-programming-4a71
discuss, learn
--- title: Creative Computing: Why kids should learn logical programming published: true description: tags: #discuss #learn --- In the last few years, we have entered an Artificial Intelligence and Automation Age. This image of the future/present has been the target of discussions and fears, since many kinds of jobs will cease to exist because of the automation of a large number of services. You can check out which jobs are most likely to disappear on the [Will Robots Take My Job?](https://willrobotstakemyjob.com/) website. However, if jobs vanish, we just have to create more jobs, right? Computer systems will take care of the simplest and most repetitive areas, so we just have to move on to more complex and creative ones. Because, no matter how much artificial intelligence evolves, it will never take our [humanity from us](https://www.youtube.com/watch?v=ajGgd9Ld-Wc). But for that to happen, we must prepare our children for a future where their creative and social skills will be more in demand than their skills for repetitive, massive tasks (that's the computer's domain). One way of doing that is to teach our kids logical programming with tools like [Scratch](https://scratch.mit.edu/). That way, kids are able to solve a problem in many ways, using their logical skills in a playful and fun exercise. This strategy, together with others, can prepare them for a future where these skills will be necessary. If you have another point of view or solution, please leave a comment in the space below.
lechenco
81,572
Github credentials for beginners
A guide using the credential helper of the git command line.
0
2019-02-20T15:25:45
https://dev.to/paulc_creates/github-credentials-for-beginners-3k2j
git, github, commandline
--- title: Github credentials for beginners published: true description: A guide using the credential helper of the git command line. tags: git, github, commandline --- ![Control Panel](https://images.unsplash.com/photo-1532200547849-1e9352d72b98?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=1650&q=80) ###### Photo by Jp Valery on Unsplash Today I'm documenting how to enter your GitHub credentials only once for a certain period. This way, you can avoid the tedious operation of entering your username and password every time you push to your GitHub repo. This is especially helpful when you're working on a remote computer or in the cloud. I do not want to have my password stored on a computer that I don't own. I value security, and having the credentials cached only for a certain period gives me a little bit more peace of mind. We will use a tool available in the Git CLI, called credential helpers. Git credentials authenticate the connection over non-SSH protocols. A helper tells Git to remember your username and password. Git ships with some default helpers that can be used to achieve this, avoiding the tedious typing of your username and password when prompted every time you push to your Github repo. * Store – stores your credentials on disk, protected only by file permissions. * Cache – stores your credentials in memory for a certain period. * Osxkeychain – if available, will use the OSX Keychain app to fetch the credentials. I believe this is used mostly on Mac. I will only provide the instructions for cache, as this is what I prefer to use. I will try to break down each parameter passed. You can also refer to this [Github Help Doc](https://help.github.com/articles/caching-your-github-password-in-git/) on caching your Github password in Git. Let's get to this.
``` $ git config --global credential.helper 'cache --timeout=3600' # Set the cache to timeout after an hour ``` **$ git config** = the command used to set Git configuration values at either the global or local level. **--global** = means you want to apply this configuration for your OS user account rather than a single repository. Global configuration values are stored in a hidden file (`~/.gitconfig`). One advantage of this is you don't have to set it again when working with another project on your computer. **credential.helper** = tells Git to remember your username and password every time it talks to Github. **'cache --timeout=3600'** = will store your credentials in memory for an hour. Well, that is all for now. I will surely be coming back until I memorize this command.
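A few related commands are worth knowing. These are standard Git commands, shown here as an aside; the two-hour timeout is just an example value:

```shell
# Cache credentials in memory for two hours instead of one
git config --global credential.helper 'cache --timeout=7200'

# Print the currently configured helper to verify the setting
git config --global credential.helper

# Tell the cache daemon to forget all cached credentials immediately,
# without waiting for the timeout
git credential-cache exit

# Remove the helper configuration entirely
git config --global --unset credential.helper
```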
paulc_creates
81,622
The 5 easiest programming languages to learn
Original post: https://www.campusmvp.es/recursos/post/los-5-lenguajes-de-programacion-mas-faciles-de...
0
2019-02-19T18:14:41
https://www.campusmvp.es/recursos/post/los-5-lenguajes-de-programacion-mas-faciles-de-aprender.aspx
aprendizaje, spanish
--- title: The 5 easiest programming languages to learn published: true tags: Aprendizaje,Spanish canonical_url: https://www.campusmvp.es/recursos/post/los-5-lenguajes-de-programacion-mas-faciles-de-aprender.aspx cover_image: https://www.campusmvp.es/recursos/image.axd?picture=/2019/1T/Logos-Lenguajes-Prog/Portada.png --- >Original post: https://www.campusmvp.es/recursos/post/los-5-lenguajes-de-programacion-mas-faciles-de-aprender.aspx Learning to program can be an arduous task; however, it is not as difficult as it seems. Access to information is practically unlimited: nowadays there is a huge number of resources both _online_ and _offline_, as well as communities of developers and programming experts who share their knowledge. However, when learning to program, choosing the right language is as important as the learning process itself. In a [previous _post_](https://dev.to/campusmvp/cmo-aprender-a-programar-3-lenguajes-por-los-que-empezar-ln0-temp-slug-1806261 "how to learn to program - where to start") we talked about how to learn to program. In this article you will find **the 5 easiest programming languages to learn**. Before continuing, it is worth clarifying what we mean by the word **easy**, since its meaning varies from one language to another; that is, what makes a programming language "easy" to learn changes from one to the next. For example: some languages have an intuitive syntax; others may be theoretically more complex, but having a very active community can compensate for that difficulty. Also, this is a fairly subjective question, but we believe the selection is interesting, since it covers languages that have neither complicated syntax nor require a lot of prior knowledge to get started with.
## JavaScript ![](https://www.campusmvp.es/recursos/image.axd?picture=/2019/1T/Logos-Lenguajes-Prog/JavaScript.png) JavaScript is the [most used and most in-demand language today](https://www.campusmvp.es/recursos/post/5-razones-por-las-que-todo-programador-deberia-aprender-JavaScript.aspx), and it is embedded in countless applications. If you want to go into web development, learning JavaScript is essential, since it runs natively in any browser, so you don't need to compile it. You only need a notepad to get started. It is a weakly typed language, which makes it easier to learn, but also makes it easier to shoot yourself in the foot. Its syntax is similar to that of other languages such as C, C++, Java, or C#, so it also serves as a gateway to studying more complex programming languages later. > **Note**: do not confuse JavaScript with Java. Our expert JavaScript tutor clears this up in [this post](https://www.campusmvp.es/recursos/post/Java-y-JavaScript-son-lo-mismo.aspx "JavaScript and Java are not the same thing"), basic but necessary. ### Uses - Web development - Backend development - IoT applications - Others ### Pros - Simple - Many possible applications - Cross-platform - It is a standard - A gateway to other technologies ### Cons - Weakly typed - Logic bugs are harder to detect ## Java ![](https://www.campusmvp.es/recursos/image.axd?picture=/2019/1T/Logos-Lenguajes-Prog/Java-Logo.png) Java is used in web applications as well as desktop applications, on servers, and so on. Not for nothing was its slogan always "Write once, run anywhere", back when that was far from being as commonplace as it is now. This class-based, object-oriented programming language **always sits at the top of the popularity and job-demand rankings**.
Its immense popularity is reflected in the following: - Java has one of the largest and most active developer communities, so you will never feel alone. - Large companies in particular are always looking for people with Java skills. - There are more than [15 billion devices running Java](https://www.campusmvp.es/recursos/post/es-cierto-que-3-mil-millones-de-dispositivos-ejecutan-java.aspx) Learning Java is harder than learning JavaScript because it has many data types and thousands of classes in its packages. But, by fostering a solid foundation of analytical programming knowledge, Java remains a fantastic, if somewhat harder, programming language for beginners. ### Uses - Backend web development - Desktop development - Mobile development ### Pros - Popularity and demand - A stable language - A large, supportive community ### Cons - Much less forgiving of mistakes - You also have to learn the Java platform, with tens of thousands of classes, which can be challenging - Requires more analytical thinking. ## Python ![](https://www.campusmvp.es/recursos/image.axd?picture=/2019/1T/Logos-Lenguajes-Prog/Phyton-Logo.png) **Python is a great, beginner-friendly programming language**. Used in web and desktop applications, Python has surged in popularity in recent years thanks to being **the most widely used language in Machine Learning and Artificial Intelligence**. This dynamic language supports object-oriented, procedural, and functional programming. It is also an open-source language and, like Java, has a devoted community. Thanks to its flexibility and versatility, Python is a recommended language for beginners. ### Uses - Web applications - Desktop applications.
- Machine Learning and Artificial Intelligence ### Pros - Simple syntax - You progress quickly - A large community ### Cons - Not suited to certain kinds of development, for example, mobile applications ## C# ![](https://www.campusmvp.es/recursos/image.axd?picture=/2019/1T/Logos-Lenguajes-Prog/C-Sharp-Logo.png) **C# is an incredible option for beginners**. There is a very quick and easy way to try it: just [download Visual Studio Community](https://visualstudio.microsoft.com/es/vs/community/ "Visual Studio Community"). C# can be used for a wide variety of purposes, from web development to console applications, and thanks to the .NET platform you can build practically anything: desktop apps, servers, cloud, mobile... C#'s syntax is based on C++ (and Java), so at first glance it might seem a complex language for beginners. However, Visual Studio's autocomplete features, its automatic project scaffolding, and the general ease of use of its development environment make this language a good option for people starting out in programming. ### Uses - Backend web applications - Desktop applications - Mobile applications - Cloud applications ### Pros - Widely used - Visual Studio greatly smooths the learning curve and reduces mistakes - An easy-to-use IDE ### Cons - You also have to learn the .NET platform, with tens of thousands of classes, which can be challenging - Deployment in certain environments can be complex ## Ruby on Rails ![](https://www.campusmvp.es/recursos/image.axd?picture=/2019/1T/Logos-Lenguajes-Prog/Ruby-Logo.png) Ruby on Rails is easy to read, since it is designed to resemble English, which is a big advantage for anyone without programming experience. Ruby is a dynamic object-oriented language that **is widely used in web development**.
Learning **Ruby on Rails** (Ruby is the programming language and Rails is a web application _framework_ that works with that language) is very, very easy, since you don't need to learn hundreds or thousands of classes. It also makes it much easier to bind data and perform other normally complex operations. It is the language of choice for many startups, since you could say it **has no barriers to entry**. ### Uses - Web development ### Pros - An almost flat learning curve - Very fast web application development; you see results quickly ### Cons - It has fallen considerably out of use in recent years in the face of newer options - Its community is smaller than those of other languages >Original post: https://www.campusmvp.es/recursos/post/los-5-lenguajes-de-programacion-mas-faciles-de-aprender.aspx
campusmvp_es
81,741
Awesome Developer Streams
A cool list of awesome developers streaming.
0
2019-02-08T18:03:35
https://dev.to/lauragift21/awesome-developer-streamers-5h9n
githunt, developers, twitch
--- title: Awesome Developer Streams published: true description: A cool list of awesome developers streaming. tags: #githunt, #developers, #twitch --- Hey Developers, I recently found out about Twitch streaming, and what's awesome is that I get to watch a couple of my favorite developers live coding on Twitch. I also found this awesome list made by [@bitandbang](https://github.com/bnb) with a list of awesome streamers you can check out if this is your kind of thing. {% github bnb/awesome-developer-streams %} Check it out, and you can also send a PR for streamers that are not already on the list. 😎😎
lauragift21
81,760
Testing For Success vs. Failure
When you build something, and tweak it to satisfy all of the scenarios it can cover, how/when/how muc...
0
2019-02-08T19:42:48
https://dev.to/punch/testing-for-success-vs-failure-30cp
help
--- title: Testing For Success vs. Failure published: true tags: help --- When you build something, and tweak it to satisfy all of the scenarios it can cover, how/when/how much do you test it for failure? Let's say that I build a monitor to watch the ratio between two specific metrics, and I want it to alert me when that ratio drops below 0.8, rather than 1 (indicating that there is no issue), or 0.9 (indicating that we might have something righting itself, i.e. an autoscaling host being killed off as it's no longer needed). I've built this monitor and tweaked the thresholds based on historical examples of: 1. Times when we wanted to be alerted, based on what was going on, and what the ratio looked like at that time 2. Times when we expect the ratio to not be 1, but we don't need to alert, as we have scheduled a change during that time period I've researched this, tested it, and even ran some new example tests of #1 and #2. Based on everything I've tested thus far, the new monitor I've built satisfies everything: it would have alerted at all the times when we wanted it to, and would have ignored all of the times we wanted it to ignore the metric ratio. I present the results of my testing, my research, my reasonings, and the monitor, to my manager, who says: ### "You need to come up with an example of where this monitor fails." # _Is he right?_ Remember, I have: * tested different metrics, and combinations/ratios, to find the optimal way to monitor these scenarios * tweaked my thresholds to satisfy when I do and do NOT want the monitor to alert us Testing for unknown unknowns is always difficult. In this case, I'm being asked to make a monitor that is _completely perfect, and **will not need to be tweaked in the future** even if our infrastructure changes_. Can/should this be done? How/why/why not?
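For concreteness, the alerting logic described above could be sketched like this (the function name and the zero-denominator policy are my assumptions, not part of the original monitor):

```python
def should_alert(metric_a, metric_b, threshold=0.8):
    """Alert only when the ratio of the two metrics drops below the threshold.

    A ratio near 1.0 means no issue; around 0.9 the system may be righting
    itself (e.g. an autoscaling host being killed off), so neither pages anyone.
    """
    if metric_b == 0:
        # No denominator data: treat as alert-worthy (an assumed policy,
        # and exactly the kind of edge case the manager is asking about).
        return True
    return (metric_a / metric_b) < threshold
```

Whether a function like this can ever be proven to never need tweaking — the manager's challenge — is exactly the unknown-unknowns problem the post raises.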
punch
140,888
Rust Ownership and Borrowing
Introduction to Rust Ownership and Borrowing
0
2019-07-18T13:26:54
https://dev.to/saiumesh/rust-ownership-and-borrowing-3o5d
rust
--- title: Rust Ownership and Borrowing published: true description: Introduction to Rust Ownership and Borrowing tags: Rust --- Before we talk about Rust Ownership and Borrowing, I would like you to know that [techempower](https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=query) released its round 18 results last week and **Rust** clearly dominated in 4 of 5 categories, which shows how powerful Rust has become over the years. Rust is a very powerful programming language, but learning Rust is not easy: it has a steep learning curve. The first thing you will probably find difficult to learn in Rust is **Ownership and Borrowing**. Today we will try to learn the basics of it. **This article is aimed at learning the basics, so use this code in production with caution**. Rust has neither manual memory management nor a garbage collector. The next obvious question, then, would be: how does Rust manage memory 💭💭? Here comes Rust's most unique feature, Ownership and Borrowing. Let's see how it works with some examples. Before we see any black and blue text, we need to know about two types of variables: 1. **Statically allocated** types, e.g. int 2. **Heap allocated** types, e.g. array (Vector in Rust) Rust's Ownership and Borrowing rules don't come into play for statically allocated types, but they do for heap allocations. Let's check out some examples. ```rs fn do_something(input: i32) -> i32 { input * 2 } fn main() { let number: i32 = 1; let result = do_something(number); println!("result is {} for number {}", result, number) } ``` [run it on playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=a661ed1ed4e2edd0e382883f64659fae) If you run the above program you will get the following output, as expected. ```rs result is 2 for number 1 ``` Now let's replace ***i32 (statically allocated)*** with ***String (heap allocated)*** in the above example.
```rs fn do_something(input: String) -> String { // do something with string input } fn main() { let name: String = String::from("Hello!!"); let result = do_something(name); println!("result is {} for string {}", result, name) } ``` [run it on playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=d16d1be6b9b126a5067dd2234bc8fed6) Once you run the above program you will encounter your first Ownership and Borrowing problem, as follows. ```rs --> src\main.rs:11:52 | 9 | let name: String = String::from("Hello!!"); | ---- move occurs because `name` has type `std::string::String`, which does not implement the `Copy` trait 10 | let result = do_something(name); | ---- value moved here 11 | println!("result is {} for string {}", result, name) | ^^^^ value borrowed here after move ``` **Every heap-allocated variable has a scope, and generally that scope is the lifetime of its function (ownership). But if you pass a heap-allocated variable to another function, not just the variable but also its scope (ownership) and its lifetime are passed to that function. This means main() no longer has access to name of type String once it has been passed down to do_something(). Once do_something() finishes executing, the memory of the input variable of type String is dropped. This is how Rust manages memory without manual memory management.** Now, to solve the above problem, we will pass a reference to the string rather than the actual variable. ```rs fn do_something(input: &String) -> String { // do something with string input.to_string() } fn main() { let name: String = String::from("Hello!!"); let result = do_something(&name); println!("result is {} for string {}", result, name) } ``` [run it on playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f87567843586206219d2bfe6e7922774) If you run the above program you will get the following output. ```rs result is Hello!! for string Hello!! ``` Any variable in Rust is immutable unless you say otherwise.
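As an aside (my addition, not from the original post): borrowing is not the only fix for the move error above. The function can also hand ownership of the String back to the caller by returning it, at the cost of having to return everything you still want to use:

```rs
fn do_something(input: String) -> (String, String) {
    // build the result, then return `input` so the caller regains ownership
    let result = format!("{} world", input);
    (input, result)
}

fn main() {
    let name = String::from("Hello!!");
    // shadow `name` with the value handed back by do_something()
    let (name, result) = do_something(name);
    println!("result is {} for string {}", result, name)
}
```

This works, but needing to thread every value in and out of each call is exactly why borrowing with references is usually preferred.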
Now let's try to mutate input and see what happens. ```rs fn do_something(input: &String) { input.push_str("World!!") } fn main() { let name: String = String::from("Hello!!"); do_something(&name); println!("result is {} for name", name) } ``` [run on playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=996ce4ac1573d7b7a3202fe6f9faf82a) If you run the above program you will get the following output. ```rs error[E0596]: cannot borrow `*input` as mutable, as it is behind a `&` reference --> src\main.rs:3:5 | 2 | fn do_something(input: &String) { | ------- help: consider changing this to be a mutable reference: `&mut std::string::String` 3 | input.push_str("World!!") | ^^^^^ `input` is a `&` reference, so the data it refers to cannot be borrowed as mutable ``` Since a borrowed variable is not mutable by default, we cannot mutate it. To solve the problem we need to mark both the original and the passed variable as mutable with **mut**. ```rs fn do_something(input: &mut String) { input.push_str(" World!!") } fn main() { let mut name: String = String::from("Hello!!"); do_something(&mut name); println!("result is {} for name", name) } ``` [run it on playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=ff04818317a5b426e0b026a22f66962b) If you run the above program you will get the following output. ```rs result is Hello!! World!! for name ``` I hope you enjoyed this article. If you have any doubts or need help, you can contact me on Twitter. In the next article we will talk about how to create variables that can be shared between threads. Thanks for reading. Have a good day!
saiumesh
81,960
Around the Web – 20190208
And now for somethings to read (or watch) over the weekend, if you have some spare time that is....
359
2019-02-16T15:30:10
https://wipdeveloper.com/around-the-web-20190208/
blog
--- title: Around the Web – 20190208 published: true tags: Blog canonical_url: https://wipdeveloper.com/around-the-web-20190208/ cover_image: https://wipdeveloper.com/wp-content/uploads/2017/07/WIPDeveloper.com-logo-black-with-white-background.png series: Around the Web --- And now for some things to read (or watch) over the weekend, if you have some spare time that is. ## [205: A Safe Place](https://www.gooddaysirpodcast.com/podcast/2019/2/6/205-a-safe-place) **Good Day, Sir! Show – Salesforce Podcast** – John De Santiago – Feb 6 – In this episode, we discuss the popularity of PHP, Salesforce towner tours, Spotify, Microsoft 365, Satya Nadella’s achievements, Action buttons, Four Roses Bourbon, and IDE’s. ## [Working with Aura and Lightning Web Components: Interoperability and Migration – Salesforce Developers Blog](https://developer.salesforce.com/blogs/2019/02/working-with-aura-and-lightning-web-components-interoperability-and-migration.html) **Salesforce.com** – Feb 5, 10:30 AM – In this post, we’ll explore a few considerations and patterns for working with both Aura components and Lightning web components in your Salesforce applications. We’ll look at general usage patterns for combining Aura and Lightning web components… ## [New JavaScript Features That Will Change How You Write Regex](https://www.smashingmagazine.com/2019/02/regexp-features-regular-expressions/) **Smashing Magazine** – Feb 8, 1:00 PM – 11 min read – If you have ever done any sort of… ## [What Hooks Mean for Vue](https://css-tricks.com/what-hooks-mean-for-vue/) **CSS-Tricks** – Feb 4, 9:23 AM – Not to be confused with Lifecycle Hooks, Hooks were introduced in React in v16.7.0-alpha, and a proof of concept was released for Vue a few days after.
Even though it was proposed by React, it’s actually an… ## [Debugging LWC Tests in VS Code](https://www.mattgoldspink.co.uk/debugging-lwc-tests-vs-code/) **Matt Goldspink** – Feb 4, 5:07 AM – Sometimes you have a test that consistently fails and you just can’t figure it out. Most of us are used to being able to debug a component in the browser but how do we do that with Jest tests in LWC? Well fortunately it’s very simple! I’m going to… ## Till Next Week Want to share something? Let me know by leaving a comment below, emailing [brett@wipdeveloper.com](mailto:brett@wipdeveloper.com), or following and telling me on [Twitter/BrettMN](https://twitter.com/BrettMN). Don’t forget to sign up for **[The Weekly Stand-Up!](https://wipdeveloper.com/newsletter/)** to receive the free [WIPDeveloper.com](https://wipdeveloper.com/) weekly newsletter every Sunday! The post [Around the Web – 20190208](https://wipdeveloper.com/around-the-web-20190208/) appeared first on [WIPDeveloper.com](https://wipdeveloper.com).
brettmn
82,001
My Top 3 books to learn Java programming for beginners
The Java programming language is one of the most used programming languages in the world and there...
0
2019-02-10T09:14:58
https://dev.to/devfanooos/my-top-3-books-to-learn-java-programming-for-beginners-2hkp
java
--- title: My Top 3 books to learn Java programming for beginners published: true description: tags: java --- ![alt text](https://thepracticaldev.s3.amazonaws.com/i/a3ih28l3bq12i2g7p5f7.png "") The Java programming language is one of the most used programming languages in the world, and there are thousands of books out there covering various topics. Here are my top 3 books for learning Java programming as a beginner. Each of these books has its own style. ***1- How to Think Like a Computer Scientist*** This book is actually the introductory course to computer programming at MIT. The book exists in two versions: one in Python and the other in Java. I usually recommend this book for computer science students. ***2- Murach's Java Programming*** If you want to take your first steps in Java programming with more practice than theory, this is the book you need. All of Murach's publications follow the same pattern: the left page explains a specific point and the right page shows how to get that point done. So, if you open the book at any page you will find the theory on the left and the code on the right. ***3- Introduction to Java Programming by Daniel Liang*** I consider this book a piece of art. It contains a massive amount of information. Its language is very simple for a non-native English speaker like me. The content of the book is well organized. The book also covers some intermediate and advanced topics. The thing I like most about this book is its exercises. Each chapter has a set of exercises that start with simple tasks and increase in difficulty gradually.
devfanooos
82,093
Looking for open source contribution - HELP
Link - https://github.com/0xPrateek/Portfolio-Template A portfolio website template for Geeks,Program...
0
2019-02-10T19:52:31
https://dev.to/0xprateek/looking-for-open-source-contribution---help-21i7
beginners, javascript, opensource, github
--- title: Looking for open source contribution - HELP published: true description: tags: beginners,javascript,opensource,github --- Link - https://github.com/0xPrateek/Portfolio-Template A portfolio website template for Geeks, Programmers, Developers, and Hackers.
0xprateek
82,118
The Power of a Testing Framework
My new testing framework has been a success
0
2019-02-10T23:11:36
https://everythingfunctional.wordpress.com/2019/02/10/the-power-of-a-testing-framework/
tdd, testing, fortran, framework
--- title: The Power of a Testing Framework published: true description: My new testing framework has been a success tags: tdd, testing, fortran, framework cover_image: https://everythingfunctional.files.wordpress.com/2019/02/holdingpower-1200-2-1080x608.jpg canonical_url: https://everythingfunctional.wordpress.com/2019/02/10/the-power-of-a-testing-framework/ --- I am now ready to call Vegetables a huge success. In my [last post](https://everythingfunctional.wordpress.com/2018/11/19/eat-your-vegetables/) I discussed its design and construction. I am now using it in a project at work. The power and convenience it has brought us are huge. The design of Vegetables is such that its use highly encourages writing your tests in a BDD, or at least a specification, style. This means that the act of writing tests encourages you to think more explicitly about the requirements of the code you're writing. Now that your test descriptions are written as a requirements specification, the testing framework can report those requirements back to you. We've got our tests and our requirements developed as part of a single effort; two birds with one stone. Next, all the information collected by the testing framework as part of running the tests, combined with the ability to ask it to be verbose when reporting the results, means our test suite can effectively generate our V&V report for us. That makes three things now that we have gotten out of the single effort of developing our test suite. Finally, I added in the ability to ask the framework to run only a subset of the tests. Add this to the verbose outputs and I can get detailed feedback about the piece of code I'm currently working on, without having to wade through the outputs from the rest of the test suite. This shortens and simplifies the feedback cycle and makes TDD, or at least writing the tests very shortly after the code, much more useful.
In fact, I put the following script into our repository for everybody to use while developing or debugging. ``` #!/bin/bash DRIVER_PROGRAM="unit_test_build/vegetable_driver" ./shake.sh "${DRIVER_PROGRAM}" && $DRIVER_PROGRAM -q -v -f "${1}" ``` I think having this level of power and flexibility from my testing framework is going to be a must-have for any project of significant size in the future. It has proved to be too useful to do without. And if one like this doesn't exist in the language I want to use, I'll be confident about creating it, since I've now done it for Fortran.
everythingfunct
82,236
My EMACS Custom Indent Function Uses 'tab-width'
Sharing a new trick I learned in Elisp.
0
2019-02-11T15:10:25
https://dev.to/emgodev/my-emacs-custom-indent-function-uses-tab-width-34p8
elisp, showdev, productivity
--- title: My EMACS Custom Indent Function Uses 'tab-width' published: true description: Sharing a new trick I learned in Elisp. tags: elisp, showdev, productivity cover_image: https://camo.githubusercontent.com/38467eb65de742741e216b6df7f986f2d7a621c6/687474703a2f2f7777772e656d61637377696b692e6f72672f706963732f7374617469632f54616273537061636573426f74682e706e67 --- *I think the image came from here: [https://github.com/jcsalomon/smarttabs](https://github.com/jcsalomon/smarttabs)* I have been investing a guilty amount of time customizing my EMACS init config lately. It began with trying to understand more iterative processes in Elisp, so I read up on loops. Most of what I found were simple examples, but I am a bit proud of what I was able to work out just now. So I am trying to get the Aggressive Indent package to use 2 spaces for indentation - A.I. uses the local mode indent width - which is not that difficult, but as I dedicate a section of my init to setting these widths I thought I should at least update the custom indent function I had. It was a custom interactive function I mapped to my space key that *insert*(s) two spaces for quick indentation. I used this to quickly indent my code (enter+space or beginning-of-line+space). A horrible process! Anyway, now it uses a while loop and the global tab-width variable to *insert* spaces. ```elisp ; Previously ;(defun space-times-2 () (defun insert-space-indent () (interactive) (let ((scoped-indent-width tab-width)) ; let-bound so the counter stays local (while (> scoped-indent-width 0) (insert " ") (setq scoped-indent-width (- scoped-indent-width 1)) ) ) ) ``` I thought this was exciting, learning Elisp here and there; it's a very different syntax from the web languages. If this is interesting or helpful I'll share more of the Elisp/EMACS stuff I've been learning. \#candevtobemytwitter
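As a possible refinement (my suggestion, not from the original post), the loop and counter can be replaced entirely by Emacs's built-in `insert-char`, which takes a character and a repeat count:

```elisp
(defun insert-space-indent ()
  (interactive)
  ;; ?\s is the space character; insert it tab-width times
  (insert-char ?\s tab-width))
```

Same behavior as the while-loop version, but with no counter variable to manage.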
emgodev
82,886
Top Tips For Better Cross Browser Testing
Web developers usually are biased towards a browser. They can debate for hours why the latest version of their favorite browser is the best one for all web development projects. Many times, they work in their favorite browser assuming that the code they have developed will run on other browsers too.
0
2019-02-14T08:14:05
https://www.lambdatest.com/blog/top-tips-for-better-cross-browser-testing/
testing, tips, beginners, showdev
--- title: Top Tips For Better Cross Browser Testing published: true description: Web developers usually are biased towards a browser. They can debate for hours why the latest version of their favorite browser is the best one for all web development projects. Many times, they work in their favorite browser assuming that the code they have developed will run on other browsers too. tags: #testing #tips #beginners #showdev cover_image: https://www.lambdatest.com/blog/wp-content/uploads/2018/08/534-x-300-15-1.jpg canonical_url: https://www.lambdatest.com/blog/top-tips-for-better-cross-browser-testing/ --- Web developers are usually biased towards a browser. They can debate for hours why the latest version of their favorite browser is the best one for all web development projects. Many times, they work in their favorite browser assuming that the code they have developed will run on other browsers too. But what about the other browsers that they don't have on their system? Will the code they developed work on all browsers as well? This is where cross-browser compatibility testing comes into action. As we know, browsers, operating systems, and devices are evolving every single day, so it is good to make cross-browser testing part of your daily activity to ensure the best possible end-user experience. At the same time, staying up to date with all of them and making sure that your web applications work as intended, without any discrepancies or compromises in quality, is critical to success in this Internet world. You can read more about [what cross-browser testing is and why we need it here](https://goo.gl/sXEamT). I have done a nuanced analysis and come up with a few tips to keep in mind while performing cross-browser testing. ##Target Browser OS configurations## Deciding which browsers you want to test your web application on is the first and foremost thing to take care of before starting web app testing.
Each browser has many versions, and some browsers like Chrome and Firefox update frequently, at least once a month. Most tech companies support recent versions of browsers, but we cannot leave out the user base that is still using old versions of Internet Explorer. This narrows us down to a couple of versions of various browsers to test. An alternative way to discover the browser, browser version, and OS configurations with different screen resolutions is data sampling. When our website is publicly live, we use tools like Google Analytics or Splunk to track user data. We learn about users' browser, browser version, mobile device, and operating system usage, and list the most used configurations to focus our testing on. You can also read this article to get tips for [choosing the right Browser List for Cross Browser Testing](https://bit.ly/2WZ6y4B). ##Make List Of Browsers As Per User Taste## Monitoring your users' browser usage is the most efficient and pivotal element in achieving the best cross-browser testing experience for your company. You can extract a report of user browser and device consumption, and prioritize the entries according to user base strength. Create a list of browser-OS configurations with different parameters like priority, usage %, availability, etc. before hopping into cross-browser testing of your web app. [![](https://www.lambdatest.com/blog/wp-content/uploads/2018/11/Adword-Cyber2.jpg)](https://goo.gl/urph4c) ##Use Smart Tech Ways## Don't assume anything before starting browser compatibility testing. Different browsers behave in disparate ways for various browser elements. Let's take one example: a date picker will open and work perfectly on Chrome, but render differently and have a bug in the month navigator on Firefox. Have a look at this [checklist for cross browser testing](https://goo.gl/M9PfMW) before going live publicly.
##Emulators and Virtual machines## Testing websites across old and new browser-OS combinations is necessary for cross-browser compatibility. We can use emulators or virtual machines for testing; both have their own benefits. Many [cloud-based tools](https://goo.gl/BH3hed) are available that provide different sets of emulators for various configurations and can replicate the exact look and feel of the website across all browser versions. You can test your web apps on these emulators with very little effort and a negligible budget. Alternatively, virtual machines are more authentic, as they are set up to run specific browser versions. This gives us an idea of how the site will look to niche users. ##Mobile First World## We have seen exponential growth in mobile use over the last 20 years. It increased from 318 million users in 1998 to 7,740 million in 2018. Mobile users are extensively conquering the Internet world. Considering this, every mobile-first IT company is very careful about the mobile user experience across different mobile devices. For them, cross-device testing is always the highest priority. New mobile devices with different screen sizes and viewports launch daily; keeping up with them and setting up in-house infrastructure for testing is not efficient. Understanding these problems, many companies provide a cloud-based online platform where you can launch a mobile device of the desired configuration and test your website and web application with ease. You can also use Google Chrome to [check the responsiveness of your website](https://goo.gl/Kmj3kD) across different mobile and tablet devices with different screen sizes and viewports. ##Don't Underestimate Tablets## Tablet devices account for approximately 4% of the Internet market share worldwide (source: statcounter.com), which is a wide user base. 8% of people in the United States use tablet devices to meet their Internet needs.
Checking your website across different tablet devices with different screen sizes and resolutions is significant for a great user experience. ##Automate The Testing## All set with user browser-OS usage? The first question that comes to my mind is: how do I test the website on all these configurations? Browser-OS combinations number more than 2,000, and performing testing manually on each configuration is a very painful and repetitive task. Automated web app testing acts as an aid to this problem. It is very easy to perform and it saves a lot of time. With automation testing, you can write your testing script once and test your website across different browser and OS combinations. There are many online automation testing tools available which offer browsers, OSes, and mobile devices. ##Test Before Going Live## It is best practice to perform cross-browser testing before going live publicly. Always test your web application while it is hosted on your local server. This is very helpful for maintaining a good user experience on your website, and it saves you from unexpected blunders when you make your website live. ##Take Care Of Accessibility Too## Is your website accessible to everyone? This is a very interesting thing to discuss; many different types of people can be your users. It can be a man who is not able to hear, a boy not able to see, a person with color blindness, people using screen readers to access your text, or people with motor impairments who use non-mouse methods like keyboards and shortcuts to use the web. It is therefore necessary to make your website accessible to everyone. Making sure that your website is accessible to every user is '[Accessibility Testing](https://goo.gl/A8hQY7)'. ##Use Appropriate Tool For Testing## To perform testing, one must use the best tool for the job. However, finding that tool is a tough decision. There are many testing platforms available in the market, and selecting one for your business is a crucial decision.
The right tool also depends on your requirements. You can read more about the [top cross browser compatibility testing tools](https://dzone.com/articles/top-5-cross-browser-testing-tools) here.

Cross-browser and cross-platform compatibility testing of websites is becoming a principal factor in great user experience and satisfaction. In this era of cutting-edge technologies, user experience is what helps internet businesses fly.

Hope these tips are helpful for your cross-browser testing. Do you have any tips that we missed? We would love to add them to our list! Let us know in the comments below. Till then, Happy Testing!!

[![LambdaTest](https://www.lambdatest.com/blog/wp-content/uploads/2018/11/Adword-Cyber2.jpg)](https://goo.gl/urph4c)

Original Source: [lambdatest.com](https://goo.gl/XoPrhM)

**Related Articles**

1. [Remote Debugging Webpages In iOS Safari](https://goo.gl/PQyRHK)
2. [All About Triaging Bugs](https://goo.gl/PGtf5W)
3. [How Do Software Engineers Choose Which Front End Framework To Use](https://goo.gl/d9S1P9)
saifsadiq1995
83,014
How to Export Data to XLSX Files
A while ago I wrote an article about exporting data to different spreadsheet formats. As recently...
0
2019-02-15T01:13:01
http://djangotricks.blogspot.com/2019/02/how-to-export-data-to-xlsx-files.html
export, python, django, xlsx
---
title: How to Export Data to XLSX Files
published: true
tags: Export,Python,Django,XLSX
canonical_url: http://djangotricks.blogspot.com/2019/02/how-to-export-data-to-xlsx-files.html
---

![](https://1.bp.blogspot.com/-isH4SKIR1No/XGYHR4fL36I/AAAAAAAAB8c/21tlFQt5RJYeNuKWZKReVRyRY0CULZcUwCLcBGAs/s1600/how-to-export-data-to-xlsx-files.png)

A while ago I wrote [an article about exporting data to different spreadsheet formats](https://djangotricks.blogspot.com/2013/12/how-to-export-data-as-excel.html). As I was recently reimplementing the export to Excel for the [1st things 1st](https://www.1st-things-1st.com) project, I noticed that the API has changed a little, so it's time to blog about it again.

For Excel export I am using the XLSX file format, which is a zipped XML-based format for spreadsheets with formatting support. XLSX files can be opened with Microsoft Excel, Apache OpenOffice, Apple Numbers, LibreOffice, Google Drive, and a handful of other applications. For building the XLSX file I am using the __openpyxl__ library.

## Installing openpyxl

You can install openpyxl into your virtual environment the usual way with pip:

```bash
(venv) pip install openpyxl==2.6.0
```

## Simplest Export View

To create a function exporting data from a QuerySet to an XLSX file, you need to create a view that returns a response with a special content type and the file content as an attachment. Plug that view into URL rules and then link it from an export button in a template.
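As a sketch of that URL wiring (the route, pattern name, and app layout here are my assumptions, not taken from the article), the URL configuration could look like this:

```python
# urls.py -- hypothetical wiring for the export view
from django.urls import path

from movies.views import export_movies_to_xlsx

urlpatterns = [
    # The export button in a template would link to this URL,
    # e.g. <a href="{% url 'export_movies_to_xlsx' %}">Export XLSX</a>
    path('movies/export/xlsx/', export_movies_to_xlsx,
         name='export_movies_to_xlsx'),
]
```

The `movies/export/xlsx/` path and the `name` are placeholders; any route works as long as the export button points at it.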
Probably the simplest view that generates an XLSX file out of a Django QuerySet would be this:

```python
# movies/views.py
from datetime import datetime
from datetime import timedelta

from openpyxl import Workbook

from django.http import HttpResponse

from .models import MovieCategory, Movie


def export_movies_to_xlsx(request):
    """
    Downloads all movies as Excel file with a single worksheet
    """
    movie_queryset = Movie.objects.all()

    response = HttpResponse(
        content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
    )
    response['Content-Disposition'] = 'attachment; filename={date}-movies.xlsx'.format(
        date=datetime.now().strftime('%Y-%m-%d'),
    )
    workbook = Workbook()

    # Get active worksheet/tab
    worksheet = workbook.active
    worksheet.title = 'Movies'

    # Define the titles for columns
    columns = [
        'ID',
        'Title',
        'Description',
        'Length',
        'Rating',
        'Price',
    ]
    row_num = 1

    # Assign the titles for each cell of the header
    for col_num, column_title in enumerate(columns, 1):
        cell = worksheet.cell(row=row_num, column=col_num)
        cell.value = column_title

    # Iterate through all movies
    for movie in movie_queryset:
        row_num += 1

        # Define the data for each cell in the row
        row = [
            movie.pk,
            movie.title,
            movie.description,
            movie.length_in_minutes,
            movie.rating,
            movie.price,
        ]

        # Assign the data for each cell of the row
        for col_num, cell_value in enumerate(row, 1):
            cell = worksheet.cell(row=row_num, column=col_num)
            cell.value = cell_value

    workbook.save(response)

    return response
```

If you try this, you will notice that there is no special formatting in it: all columns are of the same width, the value types are barely recognized, and the header is displayed the same as the content. This is enough for further data export to CSV or manipulation with __pandas__. But if you want to present the data to the user in a friendly way, you need to add some magic.

## Creating More Worksheets

By default, each Excel file has one worksheet represented as a tab.
You can access it with:

```python
worksheet = workbook.active
worksheet.title = 'The New Tab Title'
```

If you want to create tabs dynamically with data from the database or Python structures, you can first delete the current tab and add the others with:

```python
workbook.remove(workbook.active)

for index, category in enumerate(category_queryset):
    worksheet = workbook.create_sheet(
        title=category.title,
        index=index,
    )
```

Although not all spreadsheet applications support this, you can set the background color of the worksheet tab with:

```python
worksheet.sheet_properties.tabColor = 'f7f7f9'
```

## Working with Cells

Each cell can be accessed by its 1-based indexes for the rows and for the columns:

```python
top_left_cell = worksheet.cell(row=1, column=1)
top_left_cell.value = "This is good!"
```

Styles and formatting are applied to individual cells instead of rows or columns. There are several styling categories with multiple configurations for each of them. You can find some available options in the [documentation](https://openpyxl.readthedocs.io/en/latest/styles.html), and even more by exploring the [source code](https://bitbucket.org/openpyxl/openpyxl/).

```python
from openpyxl.styles import Font, Alignment, Border, Side, PatternFill

top_left_cell.font = Font(name='Calibri', bold=True)
top_left_cell.alignment = Alignment(horizontal='center')
top_left_cell.border = Border(
    bottom=Side(border_style='medium', color='FF000000'),
)
top_left_cell.fill = PatternFill(
    start_color='f7f7f9',
    end_color='f7f7f9',
    fill_type='solid',
)
```

If you are planning to have multiple styled elements, instantiate the font, alignment, border, and fill options upfront and then assign the instances to the cell attributes. Otherwise, you can run into memory issues when you have a lot of data entries.

## Setting Column Widths

If you want some of your columns to be wider or narrower, you can do this by modifying the column dimensions.
They are accessed by column letter, which can be retrieved using a utility function:

```python
from openpyxl.utils import get_column_letter

column_letter = get_column_letter(col_num)
column_dimensions = worksheet.column_dimensions[column_letter]
column_dimensions.width = 40
```

The units here are relative points depending on the width of the letters in the specified font. I would suggest playing around with the width value until you find what works for you.

When defining the column width is not enough, you might want to wrap text onto multiple lines so that everything can be read without problems. This can be done with the alignment setting for the cell, as follows:

```python
from openpyxl.styles import Alignment

wrapped_alignment = Alignment(vertical='top', wrap_text=True)
cell.alignment = wrapped_alignment
```

## Data Formatting

Excel automatically detects text or number types and aligns text to the left and numbers to the right. If necessary, that can be overwritten. There are some gotchas on how to format cells when you need a percentage, prices, or time durations.

### Percentage

For a percentage, you have to pass the number as a float from 0.0 to 1.0, and the style should be 'Percent', as follows:

```python
cell.value = 0.75
cell.style = 'Percent'
```

### Currency

For currency, you need values of the `Decimal` type, the style should be 'Currency', and you will need a special number format for currencies other than US dollars, for example:

```python
from decimal import Decimal

cell.value = Decimal('14.99')
cell.style = 'Currency'
cell.number_format = '#,##0.00 €'
```

### Durations

For a time duration, you have to pass a timedelta as the value and define a special number format:

```python
from datetime import timedelta

cell.value = timedelta(minutes=90)
cell.number_format = '[h]:mm;@'
```

This number format ensures that your duration can be greater than '23:59', for example, '140:00'.
## Freezing Rows and Columns

In Excel, you can freeze rows and columns so that they stay fixed when you scroll the content vertically or horizontally. That's similar to `position: fixed` in CSS.

To freeze rows and columns, locate the top-left cell that is below the row you want to freeze and to the right of the column you want to freeze. For example, if you want to freeze one row and one column, that cell would be 'B2'. Then run this:

```python
worksheet.freeze_panes = worksheet['B2']
```

## Fully Customized Export View

With the knowledge from this article, we can now build a view that creates a separate sheet for each movie category. Each sheet lists the movies of that category with titles, descriptions, length in hours and minutes, rating as a percentage, and price in Euros. The tabs, as well as the headers, can have different background colors for each movie category. Cells are well formatted, and titles and descriptions use multiple lines to fully fit into the cells.
```python
# movies/views.py
from datetime import datetime
from datetime import timedelta

from openpyxl import Workbook
from openpyxl.styles import Font, Alignment, Border, Side, PatternFill
from openpyxl.utils import get_column_letter

from django.http import HttpResponse

from .models import MovieCategory, Movie


def export_movies_to_xlsx(request):
    """
    Downloads all movies as Excel file with a worksheet
    for each movie category
    """
    category_queryset = MovieCategory.objects.all()

    response = HttpResponse(
        content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
    )
    response['Content-Disposition'] = 'attachment; filename={date}-movies.xlsx'.format(
        date=datetime.now().strftime('%Y-%m-%d'),
    )
    workbook = Workbook()

    # Delete the default worksheet
    workbook.remove(workbook.active)

    # Define some styles and formatting that will be later used for cells
    header_font = Font(name='Calibri', bold=True)
    centered_alignment = Alignment(horizontal='center')
    border_bottom = Border(
        bottom=Side(border_style='medium', color='FF000000'),
    )
    wrapped_alignment = Alignment(
        vertical='top',
        wrap_text=True,
    )

    # Define the column titles and widths
    columns = [
        ('ID', 8),
        ('Title', 40),
        ('Description', 80),
        ('Length', 15),
        ('Rating', 15),
        ('Price', 15),
    ]

    # Iterate through movie categories
    for category_index, category in enumerate(category_queryset):
        # Create a worksheet/tab with the title of the category
        worksheet = workbook.create_sheet(
            title=category.title,
            index=category_index,
        )

        # Define the background color of the header cells
        fill = PatternFill(
            start_color=category.html_color,
            end_color=category.html_color,
            fill_type='solid',
        )

        row_num = 1

        # Assign values, styles, and formatting for each cell in the header
        for col_num, (column_title, column_width) in enumerate(columns, 1):
            cell = worksheet.cell(row=row_num, column=col_num)
            cell.value = column_title
            cell.font = header_font
            cell.border = border_bottom
            cell.alignment = centered_alignment
            cell.fill = fill
            # set column width
            column_letter = get_column_letter(col_num)
            column_dimensions = worksheet.column_dimensions[column_letter]
            column_dimensions.width = column_width

        # Iterate through all movies of a category
        for movie in category.movie_set.all():
            row_num += 1

            # Define data and formats for each cell in the row
            row = [
                (movie.pk, 'Normal'),
                (movie.title, 'Normal'),
                (movie.description, 'Normal'),
                (timedelta(minutes=movie.length_in_minutes), 'Normal'),
                (movie.rating / 100, 'Percent'),
                (movie.price, 'Currency'),
            ]

            # Assign values, styles, and formatting for each cell in the row
            for col_num, (cell_value, cell_format) in enumerate(row, 1):
                cell = worksheet.cell(row=row_num, column=col_num)
                cell.value = cell_value
                cell.style = cell_format
                if cell_format == 'Currency':
                    cell.number_format = '#,##0.00 €'
                if col_num == 4:
                    cell.number_format = '[h]:mm;@'
                cell.alignment = wrapped_alignment

        # freeze the first row
        worksheet.freeze_panes = worksheet['A2']

        # set tab color
        worksheet.sheet_properties.tabColor = category.html_color

    workbook.save(response)

    return response
```

## The Takeaways

- Spreadsheet data can be used for further mathematical processing with __pandas__.
- The XLSX file format allows quite a bunch of formatting options that can make your spreadsheet data more presentable and user-friendly.
- To see the Excel export in action, go to [1st things 1st](https://www.1st-things-1st.com), log in as a demo user, and navigate to project results, where you can export them as XLSX.

Feedback is always welcome.

---

Cover photo by [Tim Evans](https://unsplash.com/photos/Uf-c4u1usFQ).
djangotricks
83,147
Learn Javascript: Introduction
Javascript is one of the most used languages and is used in almost every area of programming.
415
2019-02-15T17:51:40
https://www.kaherecode.com/tutorial/apprendre-javascript-introduction
javascript, gettingstarted
---
title: "Learn Javascript: Introduction"
published: true
description: Javascript is one of the most used languages and is used in almost every area of programming.
tags: javascript, gettingstarted
series: Learn to code in Javascript
canonical_url: https://www.kaherecode.com/tutorial/apprendre-javascript-introduction
---

So are you new to programming, or are you just trying to learn Javascript to add it to your other skills? Welcome! This series of tutorials is for you, and you have just one thing to do: practice what we will see together. I will never stop saying it: it is useless to read tutorials without practicing; the only way to really learn to program is to practice. In this tutorial, we will discover what Javascript is and its different versions, and see the basics of coding in Javascript. Let's start now.

<blockquote>This tutorial is the first in a series of tutorials I am writing about programming in Javascript. Be sure to follow so you don't miss any of the next tutorials.</blockquote>

Javascript is one of the most popular programming languages; it is now used in almost all areas related to programming: web, mobile, desktop software, embedded systems, machine learning, video games, … Javascript is now used to create full stack web applications (front end and back end). The rise of Node.js in recent years has opened up the use of Javascript on the back end, a domain that belonged to languages such as Java, Python, PHP, Ruby, … But what is Javascript?

## Javascript, what is it?

Created over 20 years ago, Javascript was the first and only scripting language supported by web browsers. It was mainly used to make animations on DHTML pages. Nowadays, Javascript has evolved and, as we saw in the introduction, runs on the front end (in the browser) but also on the back end (on the server). So what started as a simple scripting language running in a browser has become a global language used almost everywhere.
Javascript will run on any hardware that has a so-called Javascript engine; there are several, including V8 in Google Chrome and Opera, SpiderMonkey in Firefox, SquirrelFish in Safari, … These engines read the Javascript and execute it.

To define Javascript in a few points, we can say that Javascript is:

<ul>
<li><b>a high-level language</b>: it does not provide low-level access to memory or the CPU, as it was originally created for browsers, which do not need it.</li>
<li><b>a dynamic language</b>: a dynamic language performs at runtime many tasks that a static language carries out at compilation. This has advantages and disadvantages, and gives us powerful features such as dynamic typing, late binding, reflection, functional programming, changing objects at runtime, and so on.</li>
<li><b>a dynamically typed language</b>: in Javascript, a variable does not have a predefined type, so we can change the type of a variable during program execution.</li>
<li><b>a weakly typed language</b>: as opposed to strong typing, weakly typed languages do not enforce the type of an object, which allows more flexibility but denies us type safety and type checking (something that TypeScript and Flow aim to improve).</li>
<li><b>an interpreted language</b>: it is commonly called an interpreted language, which means that it does not require a compilation step before the program can be executed, unlike C or Java, for example. In practice, browsers compile Javascript before executing it, for performance reasons, but this is transparent to you: no additional step is necessary.</li>
<li><b>a multi-paradigm language</b>: the language does not impose a particular programming paradigm, unlike Java, for example, which imposes object-oriented programming, or C, which imposes imperative programming. You can write Javascript using an object-oriented paradigm, using prototypes and the new class syntax (from ES6).
You can write Javascript in a functional programming style, with its first-class functions, or even in an imperative style (like C).</li>
</ul>

Let's clear something up: Javascript has nothing to do with Java. Java is a programming language from Sun Microsystems, and Javascript is a language developed by [Brendan Eich](https://en.wikipedia.org/wiki/Brendan_Eich). As a bit of history, the first version of Javascript was called LiveScript, but Java already existed at the time and was already very popular. The maintainers of JavaScript thought that positioning their language as the little brother of Java could help it gain traction, so they called it JavaScript. But today all this has changed: Javascript has its own specification, called ECMAScript, which we will see shortly.

## Versions of Javascript

Now let's talk about ECMAScript, that weird name. ECMAScript (also called ES) is the standard on which Javascript is based.

<blockquote>Ecma International is a Swiss standards association responsible for defining international standards.</blockquote>

The first version of Javascript (LiveScript) in 1997 was called ES1, followed by ES2 and ES3 in 1998 and 1999. Then came ES4, which was a real fiasco and had to be abandoned (thanks, Wikipedia). In December 2009 came ES5, then ES5.1 in June 2011.

In June 2015, Javascript underwent a major change: ES2015 came out, and the change is already visible in the name. The official name is now ES2015, and the edition is ES6; today you will see the name ES6 more often than ES2015, but both refer to the same version. This version of Javascript brings major changes to programming in Javascript, such as classes, generators, … Since then, a new version of Javascript has been published each year, in June.
<ul>
<li>ES2016 (ES7)</li>
<li>ES2017 (ES8)</li>
<li>ES2018 (ES9)</li>
</ul>

For the edition, you just take the last digit of the official name (ES2017 → 7) and add 1 (7 + 1 → ES8). The version of Javascript that will be released this year, in June 2019, will be called ES2019, and the edition will be ES10 (ES9 + 1).

## Utilities of Javascript

Since the beginning of this tutorial I have kept saying it: Javascript is today used in almost all the areas of computer programming that we know: web development, mobile development, video games, machine learning, … Let's talk here about the two most popular areas, namely web and mobile development.

On the web, Javascript today lets us build full stack applications: our application is coded entirely in Javascript on the front end and the back end, which is already extraordinary in itself. Traditionally, we use a back end language such as Java, PHP, or Python, and on the front end we use Javascript, which makes two languages for one and the same application.

Still on the web, Javascript allows us:

<ul>
<li>to do things in the user's browser without making a request to the server (which would require reloading the page), which is good, for example, for validating a form</li>
<li>to add HTML dynamically, edit the page content, and change the style of the page following the actions of the users</li>
<li>to make animations on the page</li>
</ul>

Nowadays, it is hard to find a web page that does not use Javascript.

On mobile, Javascript today lets us build mobile applications for Android but also iOS: with a single code base, we have our applications, with no need to write Java for Android and Swift for iOS. Javascript is therefore very widely used; today Facebook's mobile applications (Messenger, Instagram, …) all run on Javascript.

Let's look at some points of Javascript syntax.
## The semicolon

In Javascript, the semicolon is not mandatory at all; personally, I prefer to omit it, and you will see that in the examples we will go through together. You just have to be very careful in this case: avoid, for example, writing a single instruction on several lines:

```javascript
return
1 + 4
```

Also avoid starting a line with `[` or `(`, and you will be safe in most cases. Use a linter (ESLint) to report errors, and nothing serious will happen to you.

## Comments

In Javascript, you can use two types of comments: comments on several lines:

```javascript
/*
This is a comment
on several lines
*/
```

and comments on one line:

```javascript
// This is a comment on one line
```

## Case sensitivity

Javascript is case-sensitive, which means that `variable` is different from `Variable`, which is also different from `VARIABLE`.

What is important to remember is that Javascript is a very popular language today, and if you have the time to learn it, do not hesitate. That's it for this first part; see you in the second part of this series on Javascript, where we will look at variables and data types. See you soon.
alioukahere
83,373
EnqueueZero Techshack 2019-07
EnqueueZero Techshack 2019-07 Original post: https://enqueuezero.com/techshack.weekly/2019...
246
2019-02-17T06:49:06
https://dev.to/soasme/enqueuezero-techshack-2019-07-1bn6
enqueuezero, architecture
---
title: EnqueueZero Techshack 2019-07
sidebar: auto
published: true
description:
series: EnqueueZero Techshack
tags: enqueuezero, architecture
---

# EnqueueZero Techshack 2019-07

Original post: <https://enqueuezero.com/techshack.weekly/2019-07.html>.

## Kubernetes as an API standard

[cloudatomiclab.com](https://www.cloudatomiclab.com/rustyk8s/)

How about implementing the Kubernetes API in Rust, since Kubernetes is an excellent API for running code reliably? It just reconciles between the state of the world and the desired state for an extensible set of things, things that include the concept of a pod by default. That is pretty much it: a simple idea. Instead of letting the implementation be the specification, it might be a good idea to have a spec and let various implementations compete for the best one. Currently, there is not yet a public repo.

## Writing Docs at Amazon

[usejournal.com](https://blog.usejournal.com/writing-docs-at-amazon-e025808616bd)

Guidelines for writing an Amazon document:

* Amazon documents are generally paragraphs/prose.
* Check grammar and spelling.
* Understand what you are trying to accomplish with the document (as with anything you write). If you disagree with something you have already committed to, you need to have your data and logic clear.
* Minimize surface area / leave out extraneous stuff.
* Have the right people in the room.
* Check your ego at the door. It's okay to arrive at a different solution in the end, but it needs to be the RIGHT answer.
* Be sure to "show your work." Show all solutions and be explicit about why one of them is the best.
* Represent and advocate for the customer. Write (seriously and convincingly) about why what you are proposing is good for the customer.
* Don't surprise or shock your peers (or your boss).
* Only schedule a meeting if the document is ready. Otherwise, reschedule it.
* Think big enough, meaning not just the problem and the solution but also the rest of the logic.
* Read the room.
Come back later with stronger arguments when you cannot convince people of your point.
* Don't be vague, and don't be overly dramatic.

## Understanding Database Sharding

[digitalocean.com](https://www.digitalocean.com/community/tutorials/understanding-database-sharding)

Sharding is a database architecture pattern related to horizontal partitioning: the practice of separating one table's rows into multiple different tables, known as partitions. Each partition has the same schema and columns but entirely different rows; the data held in each is unique and independent of the data held in the other partitions. Common sharding strategies include Key Based Sharding, Range Based Sharding, and Directory Based Sharding. The main appeal of sharding a database is that it can help to facilitate horizontal scaling, also known as scaling out. The drawback is its complexity.

## Debugging MariaDB Galera Cluster SST Problems – A Tale of a Funny Experience

[percona.org](https://www.percona.com/blog/2019/02/12/debugging-mariadb-galera-cluster-sst-problems-a-tale-of-a-funny-experience/)

* Problem: The MariaDB Cluster decided to restart itself and hung, while some nodes rejected the request to join the cluster after copying a few gigs of data.
* Cause: Systemd was killing the mysqld process but not stopping the service. This results in an infinite SST loop that only stops when the service is killed or stopped via a systemd command.
* Solution: Set `TimeoutStartSec=900` in `/etc/systemd/system/mariadb.service.d/timeoutstartsec.conf` and reload the daemon.
* Thoughts: A 90-second timeout is too short for a Galera cluster; it is very likely that almost any cluster will reach that timeout during SST. Even for a regular MySQL server that suffers a crash with a high proportion of dirty pages or many operations to roll back, 90 seconds does not seem a feasible time for crash recovery.
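Circling back to the sharding summary above: the key-based strategy can be sketched in a few lines (the shard count and hash choice here are illustrative, not from the linked article):

```python
import hashlib

SHARD_COUNT = 4  # illustrative; changing this later forces expensive resharding


def shard_for(key: str) -> int:
    """Key-based sharding: hash the shard key, take it modulo the shard count.

    A stable hash is used (not Python's built-in hash(), which is randomized
    per process), so the same key always routes to the same partition.
    """
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return int(digest, 16) % SHARD_COUNT


# The same key always lands on the same shard
print(shard_for('customer-42') == shard_for('customer-42'))
```

Range-based sharding would replace the hash with a comparison against boundary values, and directory-based sharding with a lookup table.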
## A fast kubectl autocompletion with fzf

[github.com](https://github.com/bonnefoa/kubectl-fzf)

kubectl-fzf provides a fast and powerful fzf autocompletion for kubectl.

## SQL: One of the Most Valuable Skills

[craigkerstiens.com](http://www.craigkerstiens.com/2019/02/12/sql-most-valuable-skill/)

SQL is an under-learned skill; the majority of application developers skip over it. It's a tool you can use everywhere. And more importantly, it's permanent, unlike other tools whose APIs change frequently.

## Cloud Programming Simplified: A Berkeley View on Serverless Computing

[rise.cs.berkeley.edu](https://rise.cs.berkeley.edu/blog/a-berkeley-view-on-serverless-computing/)

The three fundamental differences between serverless and conventional cloud computing are:

1. Decoupling of computation and storage; they scale separately and are priced independently.
2. The abstraction of executing a piece of code instead of allocating resources on which to execute that code.
3. Paying for the code execution instead of paying for the resources you have allocated for executing the code.

Relevant reading: [Above the Clouds: A Berkeley View of Cloud Computing](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf).

## Why GraphQL is the Future of APIs

[hashnode.com](https://hashnode.com/post/why-graphql-is-the-future-of-apis-cjs1r2hhe000rn9s23v9bydoq)

REST has many endpoints, suffers from over-fetching and under-fetching, and ships another version every time we need to include or remove something. GraphQL needs only one endpoint and provides only as much data as needed. It makes your API more self-documenting, so there is less need to write a lot of documentation about it.

## Principled GraphQL

[principledgraphql.com](https://principledgraphql.com/)

* Your company should have one unified graph, instead of multiple graphs created by each team.
* Though there is only one graph, the implementation of that graph should be federated across multiple teams.
* There should be a single source of truth for registering and tracking the graph.
* The schema should act as an abstraction layer that provides flexibility to consumers while hiding service implementation details.
* The schema should be built incrementally based on actual requirements and evolve smoothly over time.
* Performance management should be a continuous, data-driven process, adapting smoothly to changing query loads and service implementations.
* Developers should be equipped with rich awareness of the graph throughout the entire development process.
* Grant access to the graph on a per-client basis, and manage what and how clients can access it.
* Capture structured logs of all graph operations and leverage them as the primary tool for understanding graph usage.
* Adopt a layered architecture with data graph functionality broken into a separate tier rather than baked into every service.

## Python disk-backed cache

[github.com](https://github.com/grantjenks/python-diskcache)

The cloud-based computing of 2018 puts a premium on memory. Gigabytes of empty space are left on disks as processes vie for memory. Among these processes is Memcached (and sometimes Redis), which is used as a cache. Wouldn't it be nice to leverage empty disk space for caching? And can you really allow it to take sixty milliseconds to store a key in a cache with a thousand items?

DiskCache efficiently makes gigabytes of storage space available for caching. By leveraging rock-solid database libraries and memory-mapped files, cache performance can match and exceed industry-standard solutions. There's no need for a C compiler or for running another process.

```python
import diskcache as dc

cache = dc.Cache('tmp')
cache[b'key'] = b'value'
print(cache[b'key'])
```

## If It Ain't TypeScript It Ain't Sexy

[developer.okta.com](https://developer.okta.com/blog/2019/02/11/if-it-aint-typescript)

More and more open source projects are or will be written in TypeScript, such as Yarn, Vue.js, deno, etc.
Some of the benefits are higher velocity, reduced defects, faster onboarding, and easier refactoring. The risk of using TypeScript is that it could go the way of CoffeeScript: one day we may all go back to writing JavaScript again.

## CVE-2019-5736: runc container breakout (all versions)

[seclists.org](https://seclists.org/oss-sec/2019/q1/119)

A vulnerability was recently reported in runc. It allows a malicious container to (with minimal user interaction) overwrite the host runc binary and thus gain root-level code execution on the host. The level of user interaction is being able to run any command (it doesn't matter if the command is not attacker-controlled) as root within a container in either of these contexts:

* Creating a new container using an attacker-controlled image.
* Attaching (docker exec) into an existing container which the attacker had previous write access to.

The fix is to check whether we are in a cloned binary before running `nsenter/nsexec`: [github.com](https://github.com/opencontainers/runc/commit/0a8e4117e7f715d5fbeef398405813ce8e88558b).

## Building a Kubernetes Edge (Ingress) Control Plane for Envoy v2

[kubernetes.io](https://kubernetes.io/blog/2019/02/12/building-a-kubernetes-edge-control-plane-for-envoy-v2/)

* Ambassador itself is deployed within a container as a Kubernetes service, and uses annotations added to Kubernetes Services as its core configuration model.
* Kubernetes and Envoy are very powerful frameworks, but they are also extremely fast-moving targets.
* The best-supported libraries in the Kubernetes / Envoy ecosystem are written in Go.
* Redesigning a test harness is sometimes necessary to move your software forward.
* The real cost in redesigning a test harness is often in porting your old tests to the new harness implementation.

## A Structured RFC Process

[philcalcado.com](http://philcalcado.com/2018/11/19/a_structured_rfc_process.html)

How can you foster a culture that is more accepting and kind towards change?
The answer is Structured Request For Comment (RFC) Process. * The author writes a document describing the proposal, following a template that aims at making sure that some fundamental questions are answered before inviting people to give feedback. * They will then ask other engineers for written feedback, usually by sending an email to a well-known mailing list. * People reviewing the document provide the author with their opinion, anecdotes from previous experience, and facts related to the proposal. * The author acknowledges every piece of feedback given, and commit to revisiting their final decision at some point in the future, sharing the lessons they have learned. ## The State of Kubernetes Configuration Management: An Unsolved Problem [blog.argoproj.io](https://blog.argoproj.io/the-state-of-kubernetes-configuration-management-d8b06c1205) Configuration management is a hard, unsolved problem. Good Kubernetes configuration tools have the following properties: * Declarative. The config is unambiguous, deterministic and not system dependent. * Readable. The config is written in a way that is easy to understand. * Flexible. The tool helps facilitates, and does not get in the way of accomplishing what you are trying to do. * Maintainable. The tool should promote reuse and composability. Below are some possible solutions. * Helm. It's self-described package manager for Kubernetes and doesn’t claim to be a configuration management tool, though many people use it in this way. The good part is there are already many high-quality charts well-maintained; the bad part is templating and complicated cd pipelines. * Kustomize. It has been merged into kubectl. * Jsonnet. It is as super-powered JSON combined with a sane way to do templating and not specific to Kubernetes. The good part is it's powerful and guarantees to generate syntax-correct YAML. * Ksonnet, jsonnet for Kubernetes. It seems it's too hard. * Replicated Ship. * Helm 3 + LUA script. 
It's not mature and is one of the least developed options.

## My Philosophy on Alerting [docs.google.com](https://docs.google.com/document/d/199PqyG3UsyXlwieHaqbGiWVa8eMWi8zzAn0YfcApr8Q/edit)

* Pages should be urgent, important, actionable, and real.
* They should represent either ongoing or imminent problems with your service.
* Err on the side of removing noisy alerts – over-monitoring is a harder problem to solve than under-monitoring.
* You should almost always be able to classify the problem into one of: availability & basic functionality; latency; correctness (completeness, freshness and durability of data); and feature-specific problems.
* Symptoms are a better way to capture more problems more comprehensively and robustly with less effort.
* Include cause-based information in symptom-based pages or on dashboards, but avoid alerting directly on causes.
* The further up your serving stack you go, the more distinct problems you catch in a single rule. But don't go so far you can't sufficiently distinguish what's going on.
* If you want a quiet oncall rotation, it's imperative to have a system for dealing with things that need timely response, but are not imminently critical.

## Scraping and discovery [docs.influxdata.com](https://docs.influxdata.com/kapacitor/v1.5/working/scraping-and-discovery/)

This is how Kapacitor scraping and discovery works.

![kapacitor](https://docs.influxdata.com/img/kapacitor/pull-metrics.png)

## Don't Measure Unit Test Code Coverage [jamesshore.com](https://www.jamesshore.com/Blog/Dont-Measure-Unit-Test-Code-Coverage.html)

If people don't know how to do TDD properly, code coverage metrics won't help. Instead, you should:

* Perform root cause analysis (RCA), then improve your design and process to prevent that sort of defect from happening again.
* Teach testing skills, speed up the test loop, refactor more, use evolutionary design, and try pairing or mobbing.
* Whenever a bug is fixed, add a test first. Whenever a class is updated, retrofit tests to it first.
* Involve customer representatives early in the process.
* Use a mix of real-world monitoring, fail-fast code, and specialized testbeds.

## Logs vs Structured Events [charity.wtf](https://charity.wtf/2019/02/05/logs-vs-structured-events/)

* Emit a rich record from the perspective of the request as it executes the code. Include all the context you can get your paws on.
* Emit a single event per request per service that it hits. Write it out just before the request errors or exits the service.
* Bypass local disk entirely, write to a remote service.
* Sample if needed for cost or resource constraints. Practice dynamic sampling.
* Treat this like operational data, not transactional data. Be profligate and disposable.
* Feed this data into a columnar store, Honeycomb, or similar.
* Now use it every day. Not just as a last resort. Get knee deep in production every single day. Explore. Ask and answer rich questions about your systems, system quality, system behavior, outliers, error conditions, etc.

## Can Kubernetes Work at CERN? [youtube.com](https://www.youtube.com/watch?v=2PRGUOxL36M) | [speakerdeck.com](https://speakerdeck.com/rochaporto/kubernetes-at-cern-use-cases-integration-and-challenges)

A lot of modern technologies are used: OpenStack, Kubernetes (Federation), Kubeflow, Jupyter, etc.

## When AWS Autoscale Doesn’t [segment.com](https://segment.com/blog/when-aws-autoscale-doesn-t/)

The best way to find the right autoscaling strategy is to test it in your specific environment and against your specific load patterns. Sometimes, it's not scaling the way you think.

## trimstray/nginx-quick-reference [github.com](https://github.com/trimstray/nginx-quick-reference)

This note describes how to improve Nginx performance, security, and other important things.
soasme
83,749
How to create overlapping images with CSS Grid
Create an overlapping image effect with CSS Grid. No absolute positioning or negative margins needed!
0
2019-02-18T16:00:54
https://dev.to/hybrid_alex/how-to-create-overlapping-images-with-css-grid-4ahh
cssgrid, css, frontenddevelopment
---
title: How to create overlapping images with CSS Grid
published: true
description: Create an overlapping image effect with CSS Grid. No absolute positioning or negative margins needed!
tags: CSS Grid, CSS, Front-end Development
---

{% youtube sZJrcOfBaNY %}

If you enjoyed this screencast, consider subscribing to my [YouTube channel](https://www.youtube.com/channel/UC2jJoQlzvLPvnYfowAEVaOg) for more screencasts about HTML, CSS, and JavaScript.
hybrid_alex
271,309
Linux terminals, tty, pty and shell - part 2
Demystifying Linux terminals
4,980
2020-03-02T17:21:18
https://dev.to/napicella/linux-terminals-tty-pty-and-shell-part-2-2cb2
linux, beginners, go, learning
---
title: Linux terminals, tty, pty and shell - part 2
published: true
description: Demystifying Linux terminals
tags: linux, beginners, go, learning
series: Linux terminals demystified
---

This is the second post of the series on Linux terminals, tty, pty and shell. In the first post we talked about the difference between tty, pty and shell, and what happens when we press a key in a terminal (like Xterm). If you haven't had the chance to read it yet, this is the link to the first part of the [article](https://dev.to/napicella/linux-terminals-tty-pty-and-shell-192e). Without reading the first part, some of the things discussed here might be harder to understand.

In this article we will:

- define what a line discipline is and see how programs can control it
- build a simple remote terminal application in golang

Let's get to it.

### Line discipline

In the previous article we introduced the line discipline as an essential part of the terminal device.

#### But what is it?

> From Wiki: In the Linux terminal subsystem, the line discipline is a kernel module which serves as a protocol handler between the low-level device driver and the generic program interface routines (such as read(2), write(2) and ioctl(2)) offered to the programs.

This definition is a bit dry; fortunately it contains a few keywords we can use to dive deeper.

In a Unix-like system __everything is a file__ - we have all heard this before. A program managing a pty will essentially perform __read__ and __write__ operations on a pair of files, the pty master and the pty slave. A program writing data to disk, sending a document to the printer or getting data from a USB stick will use the same __read__ and __write__ operations, although the work required to perform each task depends on the type and characteristics of the device itself. Our program is completely unaware of those details - the kernel provides a programming interface and takes care of all these differences for us. When a program calls the __read__ or __write__ operations, behind the scenes the kernel will use the right implementation for us. In the case of the pty, the kernel will use the tty driver to __handle the communication__ between the terminal and the program. The line discipline is a logical component of the tty driver.

#### What does it do?

The following is a (non-comprehensive) list of the line discipline functionalities.

- when we type, characters are echoed back to the pty master ([terminals are dumb](https://dev.to/napicella/linux-terminals-tty-pty-and-shell-192e))
- it buffers the characters in memory. It sends them to the pty slave when we press enter
- when we type `CTRL + W`, it deletes the last word we typed
- when we type `CTRL + C`, it sends the `kill -2 (SIGINT)` command to the program attached to the pty slave
- when we press `CTRL + Z`, it sends the `kill -STOP` command
- when pressing `CTRL + S`, it sends XOFF to the tty driver to put the process that is sending data into a sleep state
- it replaces all the New Line (Enter key) characters with a Carriage Return and New Line sequence
- when we press backspace, it deletes the character from the buffer. It then sends to the pty master the instructions to delete the last character

> __Historical note__: XON/XOFF is a flow control feature that traces back to the time of hardware teletypes connected to the computer via a UART line. When one end of the data link could not receive any more data (because the buffer was full) it would send an "XOFF" signal to the sending end of the data link to pause until the "XON" signal was received. Today, with computers featuring gigabytes of RAM, "XOFF" is used just as a means to suspend processes.

The Linux system is made of tons of abstractions which make our life easier when we need to program. As with all abstractions, they also make it hard to understand what's going on. Fortunately, we have an ace in the hole: the power of trying out stuff.
This is what we are going to do next.

### Managing the line discipline with stty

__stty__ is a utility to query and change the line discipline rules for the device connected to its standard input. Run __stty -a__ in a terminal:

```
$ stty -a
speed 38400 baud; rows 40; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^H; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
-iuclc -ixany -imaxbel -iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke
```

The output of the command returns the terminal characteristics and the line discipline rules. The first line contains the baud rate and the number of rows and columns of the terminal.

> __Historical note__: When the terminal and the computer were connected via a line, the baud rate provided the symbol rate of the channel. The baud rate is meaningless for a pty. You can read more about it in its [wiki page](https://en.wikipedia.org/wiki/Baud)

The next lines contain key bindings: for example, `intr = ^C` maps `CTRL + C` to `kill -2 (SIGINT)`.

Scrolling to the end of the output we find the line discipline rules that do not require a key binding. Do you see __echo__? __echo__ is the rule which instructs the line discipline to echo characters back. You can imagine what happens if we disable it. Open a terminal and run:

```
$ stty -echo
```

Then type something... nothing will appear on the screen. The line discipline does not echo the characters back to the pty master, thus the terminal does not show what we type anymore! Everything else works as usual. For example, type `ls` followed by enter. You will see the output of `ls`, although you haven't seen the characters `ls` when you typed them. We can restore echo by typing:

```
stty echo
```

We can disable all the rules of the line discipline by typing __stty raw__. Such a terminal is called a __raw terminal__. A __cooked terminal__ is the opposite of a raw terminal - it's a terminal connected to a line discipline with all the rules enabled.

Why would someone want a raw terminal? No echo, line editing, suspend or killing, etc. It looks like a nightmare! Well, it depends on which program is receiving the terminal's input. For example, programs like VIM set the terminal to raw because they need to process the characters themselves. Any external intervention that transforms or eats up characters would be a problem for an editor. As we will see, our remote terminal needs a raw terminal as well.

### Build a remote terminal in golang

A remote terminal program allows you to access a terminal on a remote host. Connecting through SSH to a remote machine does just that. What we want to do is similar to the result of running `ssh`, minus the encryption bits.

I think we know enough to start hacking on some code. We need a client-server application. The client runs on our machine and the server sits on some remote host. The client and the server will communicate via tcp. I have simplified the code to highlight the interesting bits for this article. You can find the code for the example and how to build it on [git](https://gist.github.com/napicella/777e83c0ef5b77bf72c0a5d5da9a4b4e).

Let's start from the server.
### Remote terminal server

The server performs the following operations:

- open a tcp connection and listen for incoming requests
- create a pty when it receives a request
- run the bash process
- assign the standard input, output and error of bash to the pty slave
- send data received from the connection down to the pty master

Our server does exactly what the [terminal emulator does](https://dev.to/napicella/linux-terminals-tty-pty-and-shell-192e), but in this case instead of drawing stuff to the screen, it performs the following:

- read from the master and send the content down to the tcp connection
- read from the tcp connection and write the content to the master

Here is the code for the server:

```golang
func server() error {
	// Create command
	c := exec.Command("bash")

	// Start the command with a pty.
	// It also assigns the standard input, output and error of bash to the pty slave
	ptmx, e := pty.Start(c)
	if e != nil {
		return e
	}
	// Make sure to close the pty at the end.
	defer func() { _ = ptmx.Close() }() // Best effort.

	return listen(ptmx)
}

func listen(ptmx *os.File) error {
	fmt.Println("Launching server...")

	// listen on all interfaces
	ln, e := net.Listen("tcp", ":8081")
	if e != nil {
		return e
	}

	// accept connection on port
	conn, e := ln.Accept()
	if e != nil {
		return e
	}

	go func() { _, _ = io.Copy(ptmx, conn) }()
	_, e = io.Copy(conn, ptmx)

	return e
}
```

### Remote terminal client

It would appear that our client just needs to open a tcp connection with the server, send the standard input to the tcp connection and write the data from the connection to the standard output. And indeed, there isn't much more to it. There is only one caveat: the client should send all the characters to the server. We do not want the line discipline on the client to interfere with the characters we type.
__Setting the terminal to raw mode does just that.__

The client performs the following operations:

- set the terminal to raw mode
- open a tcp connection with the remote host
- send the standard input to the tcp connection
- send the data from the tcp connection to the standard output

Finally, let's see the client code:

```golang
func client() error {
	// MakeRaw puts the terminal connected to the given file
	// descriptor into raw mode and returns the previous state
	// of the terminal so that it can be restored.
	oldState, e := terminal.MakeRaw(int(os.Stdin.Fd()))
	if e != nil {
		return e
	}
	defer func() { _ = terminal.Restore(int(os.Stdin.Fd()), oldState) }()

	// Connect to this socket.
	// If the client and the server run on different machines,
	// replace the loopback address with the address of the
	// remote host
	conn, e := net.Dial("tcp", "127.0.0.1:8081")
	if e != nil {
		return e
	}

	go func() { _, _ = io.Copy(os.Stdout, conn) }()
	_, e = io.Copy(conn, os.Stdin)

	fmt.Println("Bye!")
	return e
}
```

### What happens when we run the program?

Now that we have the client and the server, we can see the whole workflow from client to server.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ca19ny7m3an73am93u55.jpg)

In the following we assume the golang program has been compiled into a binary called `remote`. We also assume the program has already been started on the server machine.

```
go build -o remote main.go
```

### Initialization

__The client__

1. the user (the stick-man in the picture[__*__]) opens a terminal emulator, like XTERM. The terminal emulator will:
    1. draw the UI to the video and request a pty from the OS
    2. launch bash as a subprocess
    3. set the std input, output and error of bash to be the pty slave
    4. listen for keyboard events
2. the user types `./remote -client`
    1. the terminal emulator receives the keyboard events
    2. it sends the characters to the pty master
    3. the line discipline gets the characters and buffers them. It copies them to the slave only when `Enter` is pressed. It also writes its input back to the master (echoing back).
    4. when the user presses enter, the tty driver takes care of copying the buffered data to the pty slave
3. the user presses `Enter`:
    1. bash (which was waiting for input on standard input) finally reads the characters
    2. bash interprets the characters and figures out it needs to run a program called `remote`
    3. bash forks itself and runs the `remote` program in the fork. The forked process will have the same stdin, stdout and stderr used by bash, which is the pty slave.

The remote client starts:

1. it sets the terminal in raw mode, disabling the line discipline
2. it opens a tcp connection with the server

__The server__

1. accept the tcp connection
2. request a pty from the OS
3. launch bash and set the std input, output and error of bash to be the pty slave
    1. bash starts
    2. bash writes the prompt `~ >` to its standard output (the pty slave)
    3. the tty driver copies the characters from the pty slave to the pty master
    4. the remote server copies the data from the pty master to the tcp connection

__The client__

1. the client receives data from the tcp connection and sends it to the standard output
2. the tty driver copies the characters from the pty slave to the pty master
3. the terminal emulator gets the characters from the pty master and draws them on the screen

All of this just to display on the client the prompt `~ >` coming from the bash process which runs on the remote server! Now, what happens when the user types a command?

### Typing a command

__The client__

1. the user types `ls -la` followed by `Enter`
2. the terminal emulator sends the characters to the pty master
3. the tty driver copies them as they come to the pty slave (remember the remote client has disabled the line discipline)
4. the remote client reads the data from the pty slave and sends it through the tcp connection
5. the remote client waits to read the characters from the tcp connection

__The server__

1. the remote server writes the bytes received from the tcp connection to the pty master
2. the tty driver buffers them until the character `Enter` has been received. It also writes its input back to the master (echoing back).
3. the remote server reads the characters from the master and sends them back to the tcp connection (these are the characters typed by the client!)
4. the tty driver writes the data to the pty slave
5. bash interprets the characters and figures out it needs to run `ls -la`
6. bash forks the process. The forked process will have the same stdin, stdout and stderr used by bash, which is the pty slave.
7. the output of the command is copied to the pty slave
8. the tty driver copies the output to the pty master
9. the remote server copies the data from the pty master to the tcp connection

An interesting thing to notice: on the client machine, all the characters we see on the screen come from the remote server, including what we type! It's the line discipline on the remote server which echoes the characters back, and from there they find their way back to the client!

> Look back at our little golang program and compare it with the number of steps in the workflow. With a little over 50 lines of code we were able to implement the whole workflow. Our program is small thanks to the kernel, which performs the heavy lifting.
>
> __That's the power of abstraction__

### Conclusions

We have reached the end of the series. The research work to write it was a lot of fun and I hope it was also an interesting read.

Some of the content I write is too short for a post, but still interesting enough to share as a tweet. Follow me on [Twitter](https://twitter.com/napicellatwit) to get them in your Twitter feed!

Happy coding :)

-Nicola

-------------------------------------------------

__*__ I know, it looks more like the game of hangman than a system diagram
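P.S. - if you want to play with these ideas without a pty, the cooked-mode behaviour described in this series (buffering, echoing, backspace handling) can be simulated in a few lines of pure Go. This is only a toy model of the logic, not how the kernel implements it:

```golang
package main

import "fmt"

// cookedBuffer loosely mimics the line discipline in canonical mode:
// it buffers input, records what would be echoed back to the pty master,
// handles backspace, and releases a full line only when Enter arrives.
type cookedBuffer struct {
	buf  []byte
	echo []byte // bytes that would be written back to the pty master
}

// feed processes one input byte; it returns a completed line (and true)
// when Enter is received, otherwise nil and false.
func (c *cookedBuffer) feed(b byte) ([]byte, bool) {
	switch b {
	case '\r', '\n':
		// NL -> CR NL translation, like the line discipline does on output
		c.echo = append(c.echo, '\r', '\n')
		line := c.buf
		c.buf = nil
		return line, true
	case 0x7f, '\b':
		// backspace: drop the last buffered byte and erase it on screen
		if n := len(c.buf); n > 0 {
			c.buf = c.buf[:n-1]
			c.echo = append(c.echo, '\b', ' ', '\b')
		}
	default:
		c.buf = append(c.buf, b)
		c.echo = append(c.echo, b)
	}
	return nil, false
}

func main() {
	var c cookedBuffer
	var line []byte
	// type "lz", press backspace, type "s", press Enter
	for _, b := range []byte("lz\x7fs\n") {
		if out, done := c.feed(b); done {
			line = out
		}
	}
	fmt.Printf("line delivered to the program: %q\n", line) // "ls"
}
```

Running it shows exactly the behaviour we saw with `stty`: the program attached to the slave only ever sees the cleaned-up line, while the echo bytes are what the terminal emulator would draw.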
napicella
84,385
Sharing code examples with Carbon
A free tool to use for creating clean looking images for sharing code examples.
0
2019-02-20T05:42:47
https://dev.to/daveskull81/sharing-code-examples-with-carbon-4fp0
meta, todayilearned, beginners, discuss
--- title: Sharing code examples with Carbon published: true description: A free tool to use for creating clean looking images for sharing code examples. cover_image: https://thepracticaldev.s3.amazonaws.com/i/vdvdzi1zxcq4vk6ev3ol.png tags: meta, todayilearned, beginners, discuss --- I often find myself thinking about the best way to share something like a code example. [Gists](https://gist.github.com/) seem to work well, especially for something more complex than a function that makes up more than a few lines. But, when you do have something that is only a few lines of code that can seem like overkill. My next thought goes to taking screenshots of my text editor. This has drawbacks as well since it can take multiple attempts to get something right that will embed well wherever I am putting the example. Today I came across this tweet from @emmawedekind. {% twitter 1097855481052303360 %} I was reminded that I have seen code examples that look like this before on Twitter. I really like how it looks. The code is clear, has highlighting, and the whole result just looks really polished and professional. This is the kind of resource that communicates clearly and helps to elevate the content by looking nice. I asked Emma how they created the screenshot and they pointed out the website [Carbon](https://carbon.now.sh/) to me. It is a really neat site built by [Dawn Labs](https://dawnlabs.io/) that is free to use. You can choose from various themes to adjust the colors. There is a setting for the language you are using so that it gets the highlighting right. You can export the image out to PNG or SVG for use online or just tweet the picture directly from there if you want. The results look really nice. ![Example Javascript code](https://thepracticaldev.s3.amazonaws.com/i/6f8hvvv4qp182y3jk3ta.png) I really like how it takes the guess work out of creating code example images and leaves you able to focus on the code that you are trying to share. 
I definitely suggest checking it out and seeing if it can help you. Have you used Carbon before? I'd love to see examples of how folks have used this in their own work. Let me know in the comments! **UPDATE (Mar 2nd, 2019)** Multiple folks in the comments have pointed out the accessibility issues with using only images to share code as it hinders someone using a screen reader to be able to consume the content fully. Screen readers won't read out the content of the image and instead use the Alt Text set for the image. It's also been noted that when the code is in an image it can't be selected for copying and pasting for review in a text editor. These are both very valid points. I still think Carbon is an awesome app and is very useful, but this goes to show that it isn't a complete solution and should be combined with other methods of sharing code to ensure everyone can get what they need. I'd still say to use it for sharing code snippets on Twitter as there really isn't a better way to do it at this time within a tweet. You'd have to link out to a gist or another place otherwise. As for using code images in content like articles I'd consider using the images as only headers or presentational pieces and then embedding a gist or something else that allows for a screen reader to function and for the code to be more easily shared as some have suggested. Or using the image to show the code and linking to a more consumable version within your text explaining the code. If you are going to use the image for the main way to communicate the code make sure to set good Alt Text for it to help anyone using a screen reader. A lot of this can be hard to get right sometimes. I found some good resources on Twitter from [Marcy Sutton](https://twitter.com/marcysutton) who replied back on [this tweet](https://twitter.com/jesslynnrose/status/1099122636720754688). 
* [An alt decision tree](https://www.w3.org/WAI/tutorials/images/decision-tree/) - helps in determining how much info to give in the ```alt``` attribute. * [How A Screen Reader User Accesses The Web](https://www.smashingmagazine.com/2019/02/accessibility-webinar/) - a webinar from Smashing Magazine showing the experience of a blind individual using a screen reader online.
daveskull81
84,408
Dohnut 🍩 DNS to DoH proxy
TL;DR Dohnut easily upgrades all your network clients by proxying plaintext DNS...
0
2019-02-20T08:01:18
https://dev.to/commonshost/dohnut--dns-to-doh-proxy-42bg
showdev, opensource, dns
*TL;DR [Dohnut](https://help.commons.host/dohnut/) easily upgrades all your network clients by proxying plaintext DNS to encrypted DoH.* The [Commons Host](https://commons.host) CDN project [recently launched](https://dev.to/commonshost/how-we-built-a-doh-cdn-with-20-global-edge-servers-in-10-days-1man) a public DNS-over-HTTPS (DoH) service. DoH now operates across all 30+ edge servers of the Commons Host network, offering low latency in many locations worldwide. Uniquely the Commons Host network is grown by contributors who [own and host low cost micro-servers](https://dev.to/commonshost/little-lamb-mk-i-5gf3) using consumer-grade Internet connections at their homes or offices. The DoH Internet standard, [RFC8484](https://tools.ietf.org/html/rfc8484), promises improved privacy and security for DNS. DoH encrypts all queries, protecting users against snooping or DNS response tampering by ISPs and rogue Wi-Fi routers. Upgrading all of your network clients from plaintext DNS to encrypted DoH is not trivial. There is currently no operating system or router/hardware support. The only browser supporting DoH today is Firefox. This is why a DNS to DoH proxy is needed. ## Introducing: Dohnut 🍩 Dohnut acts as a local DNS server, either for one machine or for an entire local network. It proxies all DNS queries to remote DoH services inside encrypted, long-lived HTTP/2 connections. ![Dohnut overview diagram](https://thepracticaldev.s3.amazonaws.com/i/00ba69tj4wfbu1uww7od.png) {% github commonshost/dohnut %} ## Easy to Deploy [Deployment guides](https://help.commons.host/dohnut/) are currently available for Raspbian, Docker, Linux/systemd, and macOS/launchd. A desktop client and a web dashboard are in the works! ## Lightweight Dohnut is built with Node.js to be cross-platform and fast. Running on just a $35 Raspberry Pi computer, Dohnut can easily handle a typical home or SME network with dozens of DNS clients. 
{% twitter 1092304430739771392 %} Dohnut is also a great companion to the popular DNS ad-blocker Pi-hole. Dohnut acts as the Custom DNS Upstream server to Pi-hole. Pi-hole, as the DNS server to DNS clients on the network, does the ad-blocking, monitoring, and provides a local, low latency DNS cache. {% twitter 1098130028787724289 %} ## Auto-Optimising DNS Latency Multiple DoH services can be used by Dohnut at once. Dohnut load balances between DoH services using two configurable strategies: - **Best performance**: Always send DNS queries to the fastest DoH resolver. Continuously monitors the round-trip-time latency to each DoH resolver using HTTP/2 PING frames. Set and forget; this mode automatically discovers when one of the DoH resolvers improves their latency to your network (e.g. deploying a new server near you). - **Best privacy**: Uniformly distributes DNS queries across all enabled DoH resolvers. This shards DNS queries so that a single DoH resolver only sees a slice of the total traffic. *Tip: Use [Bulldohzer](https://dev.to/commonshost/bulldohzer--dns--doh-performance-testing-50fm) to measure lookup latency from your location to multiple DNS and DoH resolvers.* ## Experimental: Active Tracking Countermeasures Privacy policies of public DNS resolvers vary. But there is always the unavoidable fact that resolvers must see your DNS queries. This is an inherent privacy risk when using a [DNS-over-Cloud provider](https://blog.powerdns.com/2019/02/07/the-big-dns-privacy-debate-at-fosdem/). To deter tracking by DoC providers, Dohnut can spoof DNS queries. It does this at random times and using a realistic sampling of popular real-world domains. This makes it very hard for any DoC provider to tell, if they wanted to, which queries are really yours and which are just spoofed noise. This does introduce additional traffic and load on public DNS services. This is intended as a privacy experiment. 
Another concern with DoH is the increased surface for tracking at the HTTP layer. By muxing queries from multiple clients into a single DoH connection, Dohnut acts as a passive privacy mechanism. Dohnut can also randomise the HTTP `User-Agent` header based on real-world browser usage data.

## Feedback

Feedback on these ideas and their implementation is greatly appreciated. ❤️ Blog comments, GitHub Issues, Twitter, etc.

*Cover photo by [Ferry Sitompul](https://www.flickr.com/photos/65991505@N08/8222939536/)*
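For the curious: the payload a DoH client like Dohnut POSTs with `Content-Type: application/dns-message` is just the classic DNS wire format (RFC 1035). Here's a rough, illustrative Go sketch of encoding a query - not Dohnut's actual code:

```golang
package main

import (
	"encoding/binary"
	"fmt"
	"strings"
)

// buildQuery encodes a minimal wire-format DNS query for an A record.
// A DoH client would POST these bytes over a long-lived HTTP/2 connection.
func buildQuery(id uint16, name string) []byte {
	msg := make([]byte, 12) // fixed 12-byte DNS header
	binary.BigEndian.PutUint16(msg[0:2], id)
	binary.BigEndian.PutUint16(msg[2:4], 0x0100) // flags: RD (recursion desired)
	binary.BigEndian.PutUint16(msg[4:6], 1)      // QDCOUNT: one question
	// encode the name as length-prefixed labels
	for _, label := range strings.Split(name, ".") {
		msg = append(msg, byte(len(label)))
		msg = append(msg, label...)
	}
	msg = append(msg, 0)          // root label terminates the name
	msg = append(msg, 0, 1, 0, 1) // QTYPE=A, QCLASS=IN
	return msg
}

func main() {
	q := buildQuery(0x1234, "example.com")
	fmt.Printf("% x\n", q)
}
```

A DNS-to-DoH proxy wraps exactly this payload, received in plaintext over UDP or TCP port 53, into an HTTPS request, which is why the upgrade is transparent to every client on the network.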
sebdeckers
84,416
Top Resume Tips to Land the Job of Your Dreams
Well-researched tips to help you write a catchy resume and make a great impression when you apply for a new job.
0
2019-02-20T09:14:18
https://dev.to/drsavvina/top-resume-tips-to-land-the-job-you-want-212k
career, resume, tips, beginners
---
title: Top Resume Tips to Land the Job of Your Dreams
published: true
description: Well-researched tips to help you write a catchy resume and make a great impression when you apply for a new job.
tags: career, resume, tips, beginners
cover_image: https://1kabswnt2ua3ivl0cuqv2f17-wpengine.netdna-ssl.com/wp-content/uploads/2014/09/Dream-job-sign.jpg
---

<b>If you were a Recruiter, would you invite yourself to an interview?</b>

It's not necessary to write an essay or a long list of everything you have ever done since you were a toddler. A recruiter will look at your resume only for a few minutes, so make sure you present the skills, experience, and achievements most relevant to the job. This is your chance to show off your personality, so don't hesitate to give the resume a personal touch. Check out the following simple guidelines and make your CV stand out from the others.

<b>Make it Simple, but Effective</b>

A resume should be a short, ideally one-page, document designed to present in the best way why you are a good fit for a job position. Be clear yet detail-oriented. Start with your best qualities, such as technical and soft skills, work experience, and education. Highlight any certificates and achievements, and avoid writing just job titles. Customize your CV for each job you apply for.

<b>Basics of CV Formatting</b>

Since you have only an A4 document to work with, use text and space wisely. Write your contact information at the top of your resume – name, email address, and phone number. Use a 12 pt or 14 pt font for your name and 10 pt or 11 pt for the body. To gain some more space, adjust the margins. It's preferable to save it as a PDF file.

<b>Spell Check, Double Check</b>

When it comes to a resume, details really matter. If you want to make a professional first impression on your future employer, be sure that you have corrected any grammar errors, spelling mistakes or unclear sentences.
<b>Show your Profiles</b> Make your resume modern and more informative by adding links to your online profiles, such as LinkedIn, GitHub, StackOverflow or any other relevant account. Place this info at the top of your CV along with your personal details. Recruiters and employers always like to see more info about you, additionally to a common resume. Ensure that all links work properly and direct to the correct URL addresses. <b>Take it Personally</b> Customize. Your resume should be up-to-date, reflecting your current professional achievements and skills. Take some time to edit your CV every time you want to apply to a specific offer. <b>Career Path Change</b> What if you have taken a different direction and your past working experience seems irrelevant? Do you remember the summary? Focus on it. Place it under your personal details and then add your current professional skills. If you have worked in customer support, take the chance to mention your soft skills, e.g. you can easily work with others. Highlight your major advantages. Once you start creating your resume, keep in mind a simple checklist to help you rock your CV! <b>Info Checklist – Order & Format</b> <ul> <li>Personal and contact details</li> <li>Summary – optional, yet nice to have. You can include your goals if you have a specific job in mind.</li> <li>Technical skills – be specific and include any programming languages, documentation or technical writing skills, and similar.</li> <li>Soft skills – show off your best personality. Are you a team-player, communicative, and self-disciplined? Write that down.</li> <li>Work experience</li> <li>Certifications</li> <li>Education</li> <li>Any projects you were involved with</li> <li>Personal interests and hobbies – optional, yet nice to have</li> <li>References – optional or if requested by the employer</li> </ul>
drsavvina
84,665
An Introduction to Queries in PostgreSQL
An introduction to queries in PostgreSQL
0
2019-02-21T23:47:47
https://www.digitalocean.com/community/tutorials/introduction-to-queries-postgresql
tutorial, postgres, database
---
title: An Introduction to Queries in PostgreSQL
published: true
description: An introduction to queries in PostgreSQL
tags: tutorial, postgres, database
canonical_url: https://www.digitalocean.com/community/tutorials/introduction-to-queries-postgresql
cover_image: https://cl.ly/f3bb3f2599bf/Image%202019-02-20%20at%2012.48.58%20PM.png
---

### Introduction

Databases are a key component of many websites and applications, and are at the core of how data is stored and exchanged across the internet. One of the most important aspects of database management is the practice of retrieving data from a database, whether it's on an ad hoc basis or part of a process that's been coded into an application. There are several ways to retrieve information from a database, but one of the most commonly-used methods is performed through submitting *queries* through the command line.

In relational database management systems, a *query* is any command used to retrieve data from a table. In Structured Query Language (SQL), queries are almost always made using the `SELECT` statement.

In this guide, we will discuss the basic syntax of SQL queries as well as some of the more commonly-employed functions and operators. We will also practice making SQL queries using some sample data in a PostgreSQL database.

[PostgreSQL](https://www.postgresql.org/), often shortened to "Postgres," is a relational database management system with an object-oriented approach, meaning that information can be represented as objects or classes in PostgreSQL schemas. PostgreSQL aligns closely with standard SQL, although it also includes some features not found in other relational database systems.

Prerequisites
-------------

In general, the commands and concepts presented in this guide can be used on any Linux-based operating system running any SQL database software. However, it was written specifically with an Ubuntu 18.04 server running PostgreSQL in mind.
To set this up, you will need the following:

* An Ubuntu 18.04 machine with a non-root user with sudo privileges. This can be set up using our [Initial Server Setup guide for Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-18-04?utm_source=devto&utm_medium=display&utm_campaign=DO_Dev_Awareness_Cold_Devto_2019).
* PostgreSQL installed on the machine. For help with setting this up, follow the "Installing PostgreSQL" section of our guide on [How To Install and Use PostgreSQL on Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04#installing-postgresql?utm_source=devto&utm_medium=display&utm_campaign=DO_Dev_Awareness_Cold_Devto_2019).

With this setup in place, we can begin the tutorial.

Creating a Sample Database
--------------------------

Before we can begin making queries in SQL, we will first create a database and a couple tables, then populate these tables with some sample data. This will allow you to gain some hands-on experience when you begin making queries later on.

For the sample database we'll use throughout this guide, imagine the following scenario:

*You and several of your friends all celebrate your birthdays with one another. On each occasion, the members of the group head to the local bowling alley, participate in a friendly tournament, and then everyone heads to your place where you prepare the birthday-person's favorite meal.*

*Now that this tradition has been going on for a while, you've decided to begin tracking the records from these tournaments. Also, to make planning dinners easier, you decide to create a record of your friends' birthdays and their favorite entrees, sides, and desserts.
Rather than keep this information in a physical ledger, you decide to exercise your database skills by recording it in a PostgreSQL database.*

To begin, open up a PostgreSQL prompt as your **postgres** superuser:

```
sudo -u postgres psql
```

**Note:** If you followed all the steps of the prerequisite tutorial on [Installing PostgreSQL on Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04?utm_source=devto&utm_medium=display&utm_campaign=DO_Dev_Awareness_Cold_Devto_2019), you may have configured a new role for your PostgreSQL installation. In this case, you can connect to the Postgres prompt with the following command, substituting `sammy` with your own username:

```
sudo -u sammy psql
```

Next, create the database by running:

```
CREATE DATABASE birthdays;
```

Then select this database by typing:

```
\c birthdays
```

Next, create two tables within this database. We'll use the first table to track your friends' records at the bowling alley. The following command will create a table called `tourneys` with columns for the `name` of each of your friends, the number of tournaments they've won (`wins`), their all-time `best` score, and what size bowling shoe they wear (`size`):

```
CREATE TABLE tourneys (
    name varchar(30),
    wins real,
    best real,
    size real
);
```

Once you run the `CREATE TABLE` command and populate it with column headings, you’ll receive the following output:

```
Output
CREATE TABLE
```

Populate the `tourneys` table with some sample data:

```
INSERT INTO tourneys (name, wins, best, size)
VALUES ('Dolly', '7', '245', '8.5'),
       ('Etta', '4', '283', '9'),
       ('Irma', '9', '266', '7'),
       ('Barbara', '2', '197', '7.5'),
       ('Gladys', '13', '273', '8');
```

You’ll receive the following output:

```
Output
INSERT 0 5
```

Following this, create another table within the same database which we'll use to store information about your friends' favorite birthday meals.
The following command creates a table named `dinners` with columns for the `name` of each of your friends, their `birthdate`, their favorite `entree`, their preferred `side` dish, and their favorite `dessert`:

```
CREATE TABLE dinners (
    name varchar(30),
    birthdate date,
    entree varchar(30),
    side varchar(30),
    dessert varchar(30)
);
```

Similarly for this table, you’ll receive feedback verifying that the table was created:

```
Output
CREATE TABLE
```

Populate this table with some sample data as well:

```
INSERT INTO dinners (name, birthdate, entree, side, dessert)
VALUES ('Dolly', '1946-01-19', 'steak', 'salad', 'cake'),
       ('Etta', '1938-01-25', 'chicken', 'fries', 'ice cream'),
       ('Irma', '1941-02-18', 'tofu', 'fries', 'cake'),
       ('Barbara', '1948-12-25', 'tofu', 'salad', 'ice cream'),
       ('Gladys', '1944-05-28', 'steak', 'fries', 'ice cream');
```

```
Output
INSERT 0 5
```

Once that command completes successfully, you're done setting up your database. Next, we'll go over the basic command structure of `SELECT` queries.

Understanding SELECT Statements
-------------------------------

As mentioned in the introduction, SQL queries almost always begin with the `SELECT` statement. `SELECT` is used in queries to specify which columns from a table should be returned in the result set. Queries also almost always include `FROM`, which is used to specify which table the statement will query.
Generally, SQL queries follow this syntax:

```
SELECT column_to_select FROM table_to_select WHERE certain_conditions_apply;
```

By way of example, the following statement will return the entire `name` column from the `dinners` table:

```
SELECT name FROM dinners;
```

```
Output
  name
---------
 Dolly
 Etta
 Irma
 Barbara
 Gladys
(5 rows)
```

You can select multiple columns from the same table by separating their names with a comma, like this:

```
SELECT name, birthdate FROM dinners;
```

```
Output
  name   | birthdate
---------+------------
 Dolly   | 1946-01-19
 Etta    | 1938-01-25
 Irma    | 1941-02-18
 Barbara | 1948-12-25
 Gladys  | 1944-05-28
(5 rows)
```

Instead of naming a specific column or set of columns, you can follow the `SELECT` operator with an asterisk (`*`) which serves as a placeholder representing all the columns in a table. The following command returns every column from the `tourneys` table:

```
SELECT * FROM tourneys;
```

```
Output
  name   | wins | best | size
---------+------+------+------
 Dolly   |    7 |  245 |  8.5
 Etta    |    4 |  283 |    9
 Irma    |    9 |  266 |    7
 Barbara |    2 |  197 |  7.5
 Gladys  |   13 |  273 |    8
(5 rows)
```

`WHERE` is used in queries to filter records that meet a specified condition, and any rows that do not meet that condition are eliminated from the result. A `WHERE` clause typically follows this syntax:

```
. . . WHERE column_name comparison_operator value
```

The comparison operator in a `WHERE` clause defines how the specified column should be compared against the value.
Here are some common SQL comparison operators:

| Operator | What it does |
|-------------|-----------------------------------------------------------------------|
| `=` | tests for equality |
| `!=` | tests for inequality |
| `<` | tests for less-than |
| `>` | tests for greater-than |
| `<=` | tests for less-than or equal-to |
| `>=` | tests for greater-than or equal-to |
| `BETWEEN` | tests whether a value lies within a given range |
| `IN` | tests whether a row's value is contained in a set of specified values |
| `EXISTS` | tests whether rows exist, given the specified conditions |
| `LIKE` | tests whether a value matches a specified string |
| `IS NULL` | tests for `NULL` values |
| `IS NOT NULL` | tests for all values other than `NULL` |

For example, if you wanted to find Irma's shoe size, you could use the following query:

```
SELECT size FROM tourneys WHERE name = 'Irma';
```

```
Output
 size
------
    7
(1 row)
```

SQL allows the use of wildcard characters, and these are especially handy when used in `WHERE` clauses. Percentage signs (`%`) represent zero or more unknown characters, and underscores (`_`) represent a single unknown character. These are useful if you're trying to find a specific entry in a table, but aren't sure of what that entry is exactly. To illustrate, let's say that you've forgotten the favorite entree of a few of your friends, but you're certain this particular entree starts with a "t." You could find its name by running the following query:

```
SELECT entree FROM dinners WHERE entree LIKE 't%';
```

```
Output
 entree
--------
 tofu
 tofu
(2 rows)
```

Based on the output above, we see that the entree we have forgotten is `tofu`.

There may be times when you're working with databases that have columns or tables with relatively long or difficult-to-read names. In these cases, you can make these names more readable by creating an alias with the `AS` keyword.
Aliases created with `AS` are temporary, and only exist for the duration of the query for which they're created:

```
SELECT name AS n, birthdate AS b, dessert AS d FROM dinners;
```

```
Output
    n    |     b      |     d
---------+------------+-----------
 Dolly   | 1946-01-19 | cake
 Etta    | 1938-01-25 | ice cream
 Irma    | 1941-02-18 | cake
 Barbara | 1948-12-25 | ice cream
 Gladys  | 1944-05-28 | ice cream
(5 rows)
```

Here, we have told SQL to display the `name` column as `n`, the `birthdate` column as `b`, and the `dessert` column as `d`.

The examples we've gone through up to this point include some of the more frequently-used keywords and clauses in SQL queries. These are useful for basic queries, but they aren't helpful if you're trying to perform a calculation or derive a *scalar value* (a single value, as opposed to a set of multiple different values) based on your data. This is where aggregate functions come into play.

Aggregate Functions
-------------------

Oftentimes, when working with data, you don't necessarily want to see the data itself. Rather, you want information *about* the data. The SQL syntax includes a number of functions that allow you to interpret or run calculations on your data just by issuing a `SELECT` query. These are known as *aggregate functions*.

The `COUNT` function counts and returns the number of rows that match certain criteria. For example, if you'd like to know how many of your friends prefer tofu for their birthday entree, you could issue this query:

```
SELECT COUNT(entree) FROM dinners WHERE entree = 'tofu';
```

```
Output
 count
-------
     2
(1 row)
```

The `AVG` function returns the average (mean) value of a column. Using our example table, you could find the average best score amongst your friends with this query:

```
SELECT AVG(best) FROM tourneys;
```

```
Output
  avg
-------
 252.8
(1 row)
```

`SUM` is used to find the total sum of a given column.
For instance, if you'd like to see how many games you and your friends have bowled over the years, you could run this query:

```
SELECT SUM(wins) FROM tourneys;
```

```
Output
 sum
-----
  35
(1 row)
```

Note that the `AVG` and `SUM` functions will only work correctly when used with numeric data. If you try to use them on non-numerical data, it will result in either an error or just `0`, depending on which RDBMS you're using:

```
SELECT SUM(entree) FROM dinners;
```

```
Output
ERROR:  function sum(character varying) does not exist
LINE 1: select sum(entree) from dinners;
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
```

`MIN` is used to find the smallest value within a specified column. You could use this query to see what the worst overall bowling record is so far (in terms of number of wins):

```
SELECT MIN(wins) FROM tourneys;
```

```
Output
 min
-----
   2
(1 row)
```

Similarly, `MAX` is used to find the largest numeric value in a given column. The following query will show the best overall bowling record:

```
SELECT MAX(wins) FROM tourneys;
```

```
Output
 max
-----
  13
(1 row)
```

Unlike `SUM` and `AVG`, the `MIN` and `MAX` functions can be used for both numeric and alphabetic data types. When run on a column containing string values, the `MIN` function will show the first value alphabetically:

```
SELECT MIN(name) FROM dinners;
```

```
Output
   min
---------
 Barbara
(1 row)
```

Likewise, when run on a column containing string values, the `MAX` function will show the last value alphabetically:

```
SELECT MAX(name) FROM dinners;
```

```
Output
 max
------
 Irma
(1 row)
```

Aggregate functions have many uses beyond what was described in this section. They're particularly useful when used with the `GROUP BY` clause, which is covered in the next section along with several other query clauses that affect how result sets are sorted.
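One last note before moving on: you can also compute several aggregates in a single statement. As a quick sketch against the sample `tourneys` data from earlier, the following query returns the worst record, the best record, and the average number of wins all at once:

```
SELECT MIN(wins), MAX(wins), AVG(wins) FROM tourneys;
```

```
Output
 min | max | avg
-----+-----+-----
   2 |  13 |   7
(1 row)
```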
Manipulating Query Outputs
--------------------------

In addition to the `FROM` and `WHERE` clauses, there are several other clauses which are used to manipulate the results of a `SELECT` query. In this section, we will explain and provide examples for some of the more commonly-used query clauses.

One of the most frequently-used query clauses, aside from `FROM` and `WHERE`, is the `GROUP BY` clause. It's typically used when you're performing an aggregate function on one column, but in relation to matching values in another.

For example, let's say you wanted to know how many of your friends prefer each of the three entrees you make. You could find this info with the following query:

```
SELECT COUNT(name), entree FROM dinners GROUP BY entree;
```

```
Output
 count | entree
-------+---------
     1 | chicken
     2 | steak
     2 | tofu
(3 rows)
```

The `ORDER BY` clause is used to sort query results. By default, numeric values are sorted in ascending order, and text values are sorted in alphabetical order. To illustrate, the following query lists the `name` and `birthdate` columns, but sorts the results by birthdate:

```
SELECT name, birthdate FROM dinners ORDER BY birthdate;
```

```
Output
  name   | birthdate
---------+------------
 Etta    | 1938-01-25
 Irma    | 1941-02-18
 Gladys  | 1944-05-28
 Dolly   | 1946-01-19
 Barbara | 1948-12-25
(5 rows)
```

Notice that the default behavior of `ORDER BY` is to sort the result set in ascending order. To reverse this and have the result set sorted in descending order, close the query with `DESC`:

```
SELECT name, birthdate FROM dinners ORDER BY birthdate DESC;
```

```
Output
  name   | birthdate
---------+------------
 Barbara | 1948-12-25
 Dolly   | 1946-01-19
 Gladys  | 1944-05-28
 Irma    | 1941-02-18
 Etta    | 1938-01-25
(5 rows)
```

As mentioned previously, the `WHERE` clause is used to filter results based on specific conditions.
However, if you use the `WHERE` clause with an aggregate function, it will return an error, as is the case with the following attempt to find which sides are the favorite of at least three of your friends:

```
SELECT COUNT(name), side FROM dinners WHERE COUNT(name) >= 3;
```

```
Output
ERROR:  aggregate functions are not allowed in WHERE
LINE 1: SELECT COUNT(name), side FROM dinners WHERE COUNT(name) >= 3...
```

The `HAVING` clause was added to SQL to provide functionality similar to that of the `WHERE` clause while also being compatible with aggregate functions. It's helpful to think of the difference between these two clauses as being that `WHERE` applies to individual records, while `HAVING` applies to groups of records. To this end, any time you issue a `HAVING` clause, the `GROUP BY` clause must also be present.

The following example is another attempt to find which side dishes are the favorite of at least three of your friends, although this one will return a result without error:

```
SELECT COUNT(name), side FROM dinners GROUP BY side HAVING COUNT(name) >= 3;
```

```
Output
 count | side
-------+-------
     3 | fries
(1 row)
```

Aggregate functions are useful for summarizing the results of a particular column in a given table. However, there are many cases where it's necessary to query the contents of more than one table. We'll go over a few ways you can do this in the next section.

Querying Multiple Tables
------------------------

More often than not, a database contains multiple tables, each holding different sets of data. SQL provides a few different ways to run a single query on multiple tables.

The `JOIN` clause can be used to combine rows from two or more tables in a query result. It does this by finding a related column between the tables and sorts the results appropriately in the output.
`SELECT` statements that include a `JOIN` clause generally follow this syntax:

```
SELECT table1.column1, table2.column2
FROM table1
JOIN table2
ON table1.related_column=table2.related_column;
```

Note that because `JOIN` clauses compare the contents of more than one table, the previous example specifies which table to select each column from by preceding the name of the column with the name of the table and a period. You can specify which table a column should be selected from like this for any query, although it's not necessary when selecting from a single table, as we've done in the previous sections.

Let's walk through an example using our sample data. Imagine that you wanted to buy each of your friends a pair of bowling shoes as a birthday gift. Because the information about your friends' birthdates and shoe sizes is held in separate tables, you could query both tables separately then compare the results from each. With a `JOIN` clause, though, you can find all the information you want with a single query:

```
SELECT tourneys.name, tourneys.size, dinners.birthdate
FROM tourneys
JOIN dinners
ON tourneys.name=dinners.name;
```

```
Output
  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
(5 rows)
```

The `JOIN` clause used in this example, without any other arguments, is an *inner* `JOIN` clause. This means that it selects all the records that have matching values in both tables and prints them to the results set, while any records that aren't matched are excluded.
To illustrate this idea, let's add a new row to each table that doesn't have a corresponding entry in the other:

```
INSERT INTO tourneys (name, wins, best, size)
VALUES ('Bettye', '0', '193', '9');
```

```
INSERT INTO dinners (name, birthdate, entree, side, dessert)
VALUES ('Lesley', '1946-05-02', 'steak', 'salad', 'ice cream');
```

Then, re-run the previous `SELECT` statement with the `JOIN` clause:

```
SELECT tourneys.name, tourneys.size, dinners.birthdate
FROM tourneys
JOIN dinners
ON tourneys.name=dinners.name;
```

```
Output
  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
(5 rows)
```

Notice that, because the `tourneys` table has no entry for Lesley and the `dinners` table has no entry for Bettye, those records are absent from this output.

It is possible, though, to return all the records from one of the tables using an *outer* `JOIN` clause. Outer `JOIN` clauses are written as either `LEFT JOIN`, `RIGHT JOIN`, or `FULL JOIN`.

A `LEFT JOIN` clause returns all the records from the “left” table and only the matching records from the right table. In the context of outer joins, the left table is the one referenced by the `FROM` clause, and the right table is any other table referenced after the `JOIN` statement.

Run the previous query again, but this time use a `LEFT JOIN` clause:

```
SELECT tourneys.name, tourneys.size, dinners.birthdate
FROM tourneys
LEFT JOIN dinners
ON tourneys.name=dinners.name;
```

This command will return every record from the left table (in this case, `tourneys`) even if it doesn't have a corresponding record in the right table.
Any time there isn't a matching record from the right table, it's returned as a blank value or `NULL`, depending on your RDBMS:

```
Output
  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
 Bettye  |    9 |
(6 rows)
```

Now run the query again, this time with a `RIGHT JOIN` clause:

```
SELECT tourneys.name, tourneys.size, dinners.birthdate
FROM tourneys
RIGHT JOIN dinners
ON tourneys.name=dinners.name;
```

This will return all the records from the right table (`dinners`). Because Lesley's birthdate is recorded in the right table, but there is no corresponding row for her in the left table, the `name` and `size` columns will return as blank values in that row:

```
Output
  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
         |      | 1946-05-02
(6 rows)
```

Note that left and right joins can be written as `LEFT OUTER JOIN` or `RIGHT OUTER JOIN`, although the `OUTER` part of the clause is implied. Likewise, specifying `INNER JOIN` will produce the same result as just writing `JOIN`.

There is a fourth join clause called `FULL JOIN` available for some RDBMS distributions, including PostgreSQL. A `FULL JOIN` will return all the records from each table, including any null values:

```
SELECT tourneys.name, tourneys.size, dinners.birthdate
FROM tourneys
FULL JOIN dinners
ON tourneys.name=dinners.name;
```

```
Output
  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
 Bettye  |    9 |
         |      | 1946-05-02
(7 rows)
```

**Note:** As of this writing, the `FULL JOIN` clause is not supported by either MySQL or MariaDB.

As an alternative to using `FULL JOIN` to query all the records from multiple tables, you can use the `UNION` clause.
The `UNION` operator works slightly differently than a `JOIN` clause: instead of printing results from multiple tables as unique columns using a single `SELECT` statement, `UNION` combines the results of two `SELECT` statements into a single column.

To illustrate, run the following query:

```
SELECT name FROM tourneys
UNION
SELECT name FROM dinners;
```

This query will remove any duplicate entries, which is the default behavior of the `UNION` operator:

```
Output
  name
---------
 Irma
 Etta
 Bettye
 Gladys
 Barbara
 Lesley
 Dolly
(7 rows)
```

To return all entries (including duplicates) use the `UNION ALL` operator:

```
SELECT name FROM tourneys
UNION ALL
SELECT name FROM dinners;
```

```
Output
  name
---------
 Dolly
 Etta
 Irma
 Barbara
 Gladys
 Bettye
 Dolly
 Etta
 Irma
 Barbara
 Gladys
 Lesley
(12 rows)
```

The names and number of the columns in the results table reflect the name and number of columns queried by the first `SELECT` statement. Note that when using `UNION` to query multiple columns from more than one table, each `SELECT` statement must query the same number of columns, the respective columns must have similar data types, and the columns in each `SELECT` statement must be in the same order. The following example shows what might result if you use a `UNION` clause on two `SELECT` statements that query a different number of columns:

```
SELECT name FROM dinners
UNION
SELECT name, wins FROM tourneys;
```

```
Output
ERROR:  each UNION query must have the same number of columns
LINE 1: SELECT name FROM dinners UNION SELECT name, wins FROM tourne...
```

Another way to query multiple tables is through the use of *subqueries*. Subqueries (also known as *inner* or *nested queries*) are queries enclosed within another query. These are useful in cases where you're trying to filter the results of a query against the result of a separate aggregate function.

To illustrate this idea, say you want to know which of your friends have won more matches than Barbara.
Rather than querying how many matches Barbara has won then running another query to see who has won more games than that, you can calculate both with a single query:

```
SELECT name, wins FROM tourneys
WHERE wins > (
    SELECT wins FROM tourneys
    WHERE name = 'Barbara'
);
```

```
Output
  name  | wins
--------+------
 Dolly  |    7
 Etta   |    4
 Irma   |    9
 Gladys |   13
(4 rows)
```

The subquery in this statement was run only once; it only needed to find the value from the `wins` column in the same row as `Barbara` in the `name` column, and the data returned by the subquery and outer query are independent of one another. There are cases, though, where the outer query must first read every row in a table and compare those values against the data returned by the subquery in order to return the desired data. In this case, the subquery is referred to as a *correlated subquery*.

The following statement is an example of a correlated subquery. This query seeks to find which of your friends have won more games than is the average for those with the same shoe size:

```
SELECT name, size FROM tourneys AS t
WHERE wins > (
    SELECT AVG(wins) FROM tourneys
    WHERE size = t.size
);
```

In order for the query to complete, it must first collect the `name` and `size` columns from the outer query. Then, it compares each row from that result set against the results of the inner query, which determines the average number of wins for individuals with identical shoe sizes. Because you only have two friends that have the same shoe size, there can only be one row in the result set:

```
Output
 name | size
------+------
 Etta |    9
(1 row)
```

As mentioned earlier, subqueries can be used to query results from multiple tables. To illustrate this with one final example, say you wanted to throw a surprise dinner for the group's all-time best bowler.
You could find which of your friends has the best bowling record and return their favorite meal with the following query:

```
SELECT name, entree, side, dessert
FROM dinners
WHERE name = (SELECT name FROM tourneys
              WHERE wins = (SELECT MAX(wins) FROM tourneys));
```

```
Output
  name  | entree | side  |  dessert
--------+--------+-------+-----------
 Gladys | steak  | fries | ice cream
(1 row)
```

Notice that this statement not only includes a subquery, but also contains a subquery within that subquery.

Conclusion
----------

Issuing queries is one of the most commonly-performed tasks within the realm of database management. There are a number of database administration tools, such as [phpMyAdmin](https://www.phpmyadmin.net/) or [pgAdmin](https://www.pgadmin.org/), that allow you to perform queries and visualize the results, but issuing `SELECT` statements from the command line is still a widely-practiced workflow that can also provide you with greater control.

If you're new to working with SQL, we encourage you to use our [SQL Cheat Sheet](https://www.digitalocean.com/community/tutorials/how-to-manage-sql-database-cheat-sheet?utm_source=devto&utm_medium=display&utm_campaign=DO_Dev_Awareness_Cold_Devto_2019) as a reference and to review the [official PostgreSQL documentation](https://www.postgresql.org/docs/10/static/index.html).
Additionally, if you'd like to learn more about SQL and relational databases, the following tutorials may be of interest to you:

* [Understanding SQL And NoSQL Databases And Different Database Models](https://www.digitalocean.com/community/tutorials/understanding-sql-and-nosql-databases-and-different-database-models?utm_source=devto&utm_medium=display&utm_campaign=DO_Dev_Awareness_Cold_Devto_2019)
* [How To Set Up Logical Replication with PostgreSQL 10 on Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/how-to-set-up-logical-replication-with-postgresql-10-on-ubuntu-18-04?utm_source=devto&utm_medium=display&utm_campaign=DO_Dev_Awareness_Cold_Devto_2019)
* [How To Secure PostgreSQL Against Automated Attacks](https://www.digitalocean.com/community/tutorials/how-to-secure-postgresql-against-automated-attacks?utm_source=devto&utm_medium=display&utm_campaign=DO_Dev_Awareness_Cold_Devto_2019)

---

[![CC 4.0 License](https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png)](http://creativecommons.org/licenses/by-nc-sa/4.0/)
_This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/)_
mdrakedo
84,735
Look for advice - need React component library for pet project
Hello! I have a pet project and I plan to rewrite it in React. Couple words about project: It is wo...
0
2019-02-20T23:33:05
https://dev.to/ntsdk/look-for-advice---need-react-component-library-for-pet-project-1dkk
react, javascript
--- title: Look for advice - need React component library for pet project published: true description: tags: react, javascript --- Hello! I have a pet project and I plan to rewrite it in React. A couple of words about the project: - It runs in the browser. - It involves a lot of text editing, visual editing, plots, and schemas, so it is desktop-first. Right now I use a customized Bootstrap 3 for the UI. I reduced margins and paddings to get a denser environment. What is valuable to me: - Users are expected to work in my program for hours, so it is very important to be easy on their eyes. The UI library should offer color themes for that. - Themes with small component sizes. - The possibility of class-based customization, because I use several third-party visual tools and everything should integrate together. You can help me by answering these questions: - Which component libraries are suitable? - Which component libraries are not suitable? - Which colors or color themes are good for long usage?
ntsdk
84,768
Sometimes, in the heat of the moment, it's forgivable to cause a runtime exception.
Perhaps we could benefit from putting more thought into how our code can fail, and who is going to see it when it does.
0
2019-02-21T01:32:39
https://dev.to/nimmo/sometimes-in-the-heat-of-the-moment-its-forgivable-to-cause-a-runtime-exception-2ko2
javascript
--- title: Sometimes, in the heat of the moment, it's forgivable to cause a runtime exception. published: true description: Perhaps we could benefit from putting more thought into how our code can fail, and who is going to see it when it does. tags: javascript --- Runtime errors _suck_. But when you're working in JS they're difficult to avoid. Fortunately though, our whole deal is problem solving; so avoid them we do. For client-side JS this seems totally necessary: We shouldn't subject users to runtime exceptions; we should be giving them appropriate feedback in the event of an error. But do we _always_ want to avoid runtime exceptions at all costs? I'm not convinced. In a perfect world, we'd have an equivalent to Elm's compiler in every language. But in the real world, we can save ourselves some time when things actually do go wrong. Take this as an example: ```javascript import someModule from 'someModule'; const { someObject: { someFunction, } = {}, } = someModule; ``` Let's assume that our code is being transpiled with Babel before it's deployed. In this instance, if `someObject` didn't exist in `someModule`, then this would transpile fine. But at runtime, `someFunction` would be `undefined`. ``` Uncaught TypeError: someFunction is not a function. ``` It seems like this code is destined to fail for one of our users. Consider if we'd done it this way instead, without the default value in our destructuring: ```javascript import someModule from 'someModule'; const { someObject: { someFunction, }, } = someModule; ``` Now, if `someObject` doesn't exist in `someModule` we'll get a runtime error as soon as the module is evaluated (for example, the moment our build or test run loads it) instead of after it's been deployed. ``` Uncaught TypeError: Cannot destructure property `someFunction` of 'undefined' or 'null'. ``` Not only is this feedback much faster, but it's also going to fail on _our_ machine. 
This particular example can only happen in one place in any given file, which improves our ability to locate the problem quickly. With any sort of automated build pipeline in place (assuming it actually runs the code, e.g. a test suite), this error now _can't even possibly make it to production_ any more. Not bad considering that all we did was remove three characters. This example isn't indicative of every possible problem we can encounter in JS, of course. But this was a real example that I saw recently. It was the direct result of an over-zealous approach to preventing runtime exceptions: something that the original code _didn't even do_. **TL;DR**: We ought to spend a lot more time thinking about how and where our code can fail, and we should be very careful to consider the unintended consequences we introduce by trying to protect ourselves from errors.
nimmo
84,826
Anatomy of functors and category theory
I'll try to give you an intuition about what a functor is and what they look like in Scala. Then, those curious about theory can keep on reading, because we'll take a quick glance at category theory and at what functors are in category-theory terms. Then we'll try to bridge the gap between category theory and pure FP in Scala and finally take a look back at our functors!
432
2019-02-21T08:51:28
http://geekocephale.com/blog/2019/02/16/functors-cat-theory
scala, functional
--- title: "Anatomy of functors and category theory" published: true description: "I'll try to give you an intuition about what a functor is and what they look like in Scala. Then, those curious about theory can keep on reading, because we'll take a quick glance at category theory and at what functors are in category-theory terms. Then we'll try to bridge the gap between category theory and pure FP in Scala and finally take a look back at our functors!" tags: [scala, fp, functional] canonical_url: "http://geekocephale.com/blog/2019/02/16/functors-cat-theory" series: "Functional programming atlas" --- [Check my articles on my blog [here](http://geekocephale.com/blog/)] I will try to group here, into a series of articles forming an anatomy atlas, the basic notions of functional programming that I find myself explaining often lately. The idea here is to have a place to point people needing explanations and to increase my own understanding of these subjects by trying to explain them the best I can. I'll try to focus more on making the reader feel an intuition, a feeling about the concepts, rather than on the perfect, strict correctness of my explanations. - Part 1: [Anatomy of functional programming](https://dev.to/mmenestret/anatomy-of-functional-programming-1bpg) - Part 2: [Anatomy of an algebra](https://dev.to/mmenestret/anatomy-of-an-algebra-3cd9) - Part 3: [Anatomy of a type class](https://dev.to/mmenestret/anatomy-of-a-type-class-440j) - Part 4: [Anatomy of semi groups and monoids](https://dev.to/mmenestret/anatomy-of-semigroups-and-monoids-22i8) - Part 5: [Anatomy of functors and category theory](https://dev.to/mmenestret/anatomy-of-functors-and-category-theory-2gf0) - Part 6: Anatomy of the tagless final encoding - Yet to come! # What is a _functor_? 
There's a nice answer [by Bartosz Milewski on _Quora_](https://www.quora.com/Functional-Programming-What-is-a-functor) from which I'll keep some parts: > I like to think of a _functor_ as a generalization of a container. A regular container contains zero or more values of some type. A _functor_ may or may not contain a value or values of some type (...) . > > So what can you do with such a container? You might think that, at the minimum, you should be able to retrieve values. But each container has its own interface for accessing values. If you try to specify that interface, you're Balkanizing containers. You're splitting them into stacks, queues, smart pointers, futures, etc. So value retrieval is too specific. > > It turns out that the most general way of interacting with a container is by modifying its contents using a function. ## Let's try to rephrase that - _Functors_ represent containers - For now, we won't care about their particularities; all we need to know is that, at some point, they will maybe hold a value or values "inside" (but keep in mind that every container has particularities, I'll refer to that at the end) - Defining a generic interface about how to access values inside a container does not make any sense, since some containers' values would be accessed by index (_arrays_ for example), others only by taking the first element (_stacks_ for example), others by taking the value only if it exists (_optionals_), etc. - __However__, we can define an interface describing how the value(s) inside containers are modified by a function despite being in a container __So, to summarize, a _functor_ is a kind of container that can be mapped over by a function.__ But _functors_ have to respect some rules, called the _functor_ laws... 
- __Identity__: A _functor_ mapped over by the identity function (the function returning its parameter unchanged) is the same as the original _functor_ (the container and its content remain unchanged) - __Composition__: A _functor_ mapped over the composition of two functions is the same as the _functor_ mapped over the first function and then mapped over the second one > A quick note about the _functor_ / _container_ parallel: the analogy is convenient to get the intuition, but not all _functors_ will fit into that model; keep it in a corner of your mind so that you're not taken off guard. # What does it look like in practice Throughout the next sections, the examples and code snippets I'll provide will be in _Scala_. ## Let's explore some examples We're going to play with concrete containers of `Int` values to try to grasp the concept. ```scala val halve: Int => Float = x => x.toFloat / 2 ``` Here we defined the function from `Int` to `Float` that we are going to use to map over our containers - Our first guinea pig is `Option[Int]`, which is a container of (0 or 1) `Int`. ```scala val intOpt: Option[Int] = Some(99) val mappedResult1: Option[Float] = intOpt.map(halve) ``` We can see that an `Option[Int]` turns into an `Option[Float]`, the inner value of the container being modified from `Int` to `Float` when mapped over with a function from `Int` to `Float`... - Our second guinea pig is `List[Int]`, which is a container of (0 or more) `Int`. ```scala val intList: List[Int] = List(1, 2, 3) val mappedResult2: List[Float] = intList.map(halve) ``` We can see that a `List[Int]` turns into a `List[Float]`, the inner values of the container being modified from `Int` to `Float` when mapped over with a function from `Int` to `Float`... - Our third is a hand-made `UselessContainer[Int]`, which is a container of exactly 1 `Int`. 
```scala final case class UselessContainer[A](innerValue: A) val intContainer: UselessContainer[Int] = UselessContainer(99) val mappedResult3: UselessContainer[Float] = intContainer.map(halve) ``` We can see that an `UselessContainer[Int]` turns into an `UselessContainer[Float]`, the inner value of the container being modified from `Int` to `Float` when mapped over with a function from `Int` to `Float`... (I've deliberately hidden an implementation detail here for clarity, I'll cover it later) So we can observe the pattern we described earlier: __A _functor_, let's call it `F[A]`, is a structure containing a value of type `A` and which can be mapped over by a function of type `A => B`, getting back a _functor_ `F[B]` containing a value of type `B`.__ ## How do we abstract and encode that ability? _Functors_ are usually represented by a _type class_. As a reminder, a _type class_ is a group of types that all provide the same abilities (interface), which makes them part of the same class (group, "club") of types providing those abilities (see my article about _type classes_ [here](http://geekocephale.com/blog/2018/10/05/typeclasses)). This is the _functor type class_ implementation: ```scala trait Functor[F[_]]{ def map[A, B](fa: F[A], func: A => B): F[B] } ``` 1. The types our _functor_ _type class_ abstracts over are _type constructors_ (`F[_]`, our container types) 2. The _type class_ exposes a `map` function taking a container `F[A]` of values of type `A` and a function of type `A => B`, and returning an `F[B]`, a container of values of type `B`: the pattern we just described. > **A note about _type constructors_**: A _type constructor_ is a _type_ to which you have to supply another _type_ to get back a new _type_. You can think of them just like functions that take values to produce values, except that they take types and produce types. And that makes sense, since we have to supply to our container type the type of values it will "hold"! 
> > The most commonly used concrete _type constructors_ are `List[_]`, `Option[_]`, `Either[_,_]`, `Map[_, _]` and so on. To illustrate what it means in your _Scala_ code, let's make our `UselessContainer` a _functor_: ```scala implicit val ucFunctor = new Functor[UselessContainer] { override def map[A, B](fa: UselessContainer[A], func: A => B): UselessContainer[B] = UselessContainer(func(fa.innerValue)) } ``` Be careful: if you attempt to create your own _functor_, writing the instance is not enough. __You have to prove that your _functor_ instance respects the _functor_ laws we stated earlier__ (usually via property-based tests), hence that: - For all values `uc` of type `UselessContainer`: ```scala ucFunctor.map(uc, identity) == uc ``` - For all values `uc` of type `UselessContainer` and for any two functions `f` of type `A => B` and `g` of type `B => C`: ```scala ucFunctor.map(uc, g compose f) == ucFunctor.map(ucFunctor.map(uc, f), g) ``` However, you can safely use the _functor_ instances brought to you by _Cats_ or _Scalaz_ because their implementations' __lawfulness__ is tested for you. (You can find the _Cats_ _functor_ laws [here](https://github.com/typelevel/cats/blob/master/laws/src/main/scala/cats/laws/FunctorLaws.scala) and their tests [here](https://github.com/typelevel/cats/blob/master/laws/src/main/scala/cats/laws/discipline/FunctorTests.scala). They are tested with [discipline](https://typelevel.org/cats/typeclasses/lawtesting.html).) Now that you know what a _functor_ is and how it's implemented in Scala, let's talk a bit about category theory! # An insight about the theory behind _functors_ So far in this article, we have only talked about the most widely known kind of _functors_, the _covariant endofunctors_. Don't mind the complicated name, they are all you need to know to begin having fun in functional programming. However, if you'd like to grasp a little bit of the theory behind _functors_, keep on reading. 
___Functors_ are structure-preserving mappings between categories.__ ## Tiny crash course into category theory Category theory is the mathematical field that studies how things relate to each other in general and how their relations compose. A category is composed of: - __Objects__ (view them as something purely abstract, absolutely anything, points for example) - __Arrows__ or __morphisms__ (which are the ways to go from one object to another) - And two fundamental properties: - __Composition__: A way to compose these arrows associatively. It means that if there exists an arrow from an object `a` to an object `b` and an arrow from the object `b` to an object `c`, then there exists an arrow that goes from `a` to `c`, and the order of composition does not matter (given 3 composable morphisms `f`, `g`, `h`, then (`h` . `g`) . `f` == `h` . (`g` . `f`)) - __Identity__: There is an identity arrow for every object in the category, which is the arrow that goes from that object to itself ![category](https://upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Category_SVG.svg/1024px-Category_SVG.svg.png) - `A`, `B`, `C` are this category's __objects__ - `f` and `g` are its __arrows__ or __morphisms__ - `g . f` is the composition of `f` and `g`, since `f` goes from `A` to `B` and `g` goes from `B` to `C` (and it __MUST__ exist to satisfy the composition law, since `f` and `g` exist) - `1A`, `1B` and `1C` are the identity arrows of `A`, `B` and `C` ## Back to _Scala_ In the context of purely functional programming in _Scala_, we can consider that we work in a particular category that we are going to call `S` (I won't go into the theoretical compromises implied by that parallel, but there are some!): - `S` __objects__ are _Scala_'s __types__ - `S` __morphisms__ are _Scala_'s __functions__ - __Composition__ between morphisms is then __function composition__ - __Identity__ morphisms for `S` objects are __the identity function__ Indeed, if we consider the object `a` (the type `A`) and the object `b` (the type `B`), _Scala_ functions `A => B` are morphisms between `a` and `b`. Given our morphism from `a` to `b`, if there exists an object `c` (the type `C`) and a morphism between `b` and `c` (a function `B => C`): - Then there must exist a morphism from `a` to `c` which is the composition of the two. And it does! It is (pseudo code): - For `g: B => C` and `f: A => B`, `g compose f` - And that composition is associative: - `(h compose g) compose f` is the same as `h compose (g compose f)` Moreover, for every object (every type) there exists an identity morphism, the identity function, which is the type-parametric function: - `def id[A](a: A) = a` We can now grasp how category theory and purely functional programming relate! ## And then back to our _functors_ Now that you know what a category is, and that you know about the category `S` we work in when doing functional programming in _Scala_, re-think about it. 
A _functor_ `F` being a structure-preserving mapping between two categories means that it maps objects from category `A` to objects from category `F(A)` (the category which `A` is mapped to by the _functor_ `F`) and morphisms from `A` to morphisms of `F(A)` while preserving their relations. Since we always work with types and with functions between types in Scala, a _functor_ in that context is a mapping from and to the __same category__, between `S` and `S`, and that particular kind of _functor_ is called an __endofunctor__. Let's explore how `Option` behaves (but we could have replaced `Option` with any _functor_ `F`): __Objects__ | Objects in `S` (types) | Objects in `F(S)` | | ------------- | ----------------- | | `A` | `Option[A]` | | `Int` | `Option[Int]` | | `String` | `Option[String]` | So the `Option` type constructor maps objects (types) in `S` to other objects (types) in `S`. __Morphisms__ Let's use our previously defined: - `def map[A, B](fa: F[A], func: A => B): F[B]`. If we partially apply `map` with a function `f` of type `A => B` like so (pseudo-code): `map(_, f)`, then we are left with a new function of type `F[A] => F[B]`. Using `map` that way, let's see how morphisms behave: | Morphisms in `S` (functions between types) | Morphisms in `F(S)` | | ------------- | ----------------- | | `A => A` (identity) | `Option[A] => Option[A]` | | `A => B` | `Option[A] => Option[B]` | | `Int => Float` | `Option[Int] => Option[Float]` | | `String => String` | `Option[String] => Option[String]`| So `Option`'s `map` maps morphisms (functions from type to type) in `S` to other morphisms (functions from type to type) in `S`. We won't go into details, but we could have shown how the `Option` _functor_ respects the morphism composition and identity laws. ## What does it buy us? 
- _Functors_ are mappings between two categories - A _functor_, due to its theoretical nature, preserves the _morphisms_ and their relations between the two categories it maps - When programming in pure FP, we are in `S`, the category of _Scala_ types and functions under function composition. The _functors_ we use are then _endofunctors_ (from `S` to `S`) because they map _Scala_ types and functions between them to other _Scala_ types and functions between them In programming terms, _(endo)functors_ in _Scala_ allow us to move from origin types (`A`, `B`, ...) to new target types (`F[A]`, `F[B]`, ...) while safely allowing us to re-use the origin functions and their compositions on the target types. To continue with our `Option` example, the `Option` type constructor "maps" our types `A` and `B` into the types `Option[A]` and `Option[B]` while allowing us to re-use functions of type `A => B` thanks to `Option`'s `map`, turning them into `Option[A] => Option[B]` and preserving their compositions. But that's not all! Let's leave the abstract world we all love so much and head back to the concrete world. Concrete _functor_ instances enhance our origin types with new capacities. __Indeed, _functor_ instances are concrete data structures__ with particularities (the ones we said we did not care about at the beginning of this article): the ability to represent an empty value for `Option`, the ability to suspend an effectful computation for `IO`, the ability to hold multiple values for `List`, and so on! OK, so, to sum up, why are _functors_ awesome? The two main reasons I can think of are: 1. Abstraction, abstraction, abstraction... Code using _functors_ allows you to only care about the fact that what you manipulate is mappable. 
- It increases code reuse, since a piece of code using _functors_ can be called with any concrete _functor_ instance - And it greatly reduces the risk of errors, since you _have to_ deal with fewer particularities of the concrete, final data structure your functions will be called with 2. They add functionalities to existing types, while still allowing you to use functions on them (you would not want to re-write them for every _functor_ instance), and that's a big deal: - `Option` allows you to turn `null` into a concrete value, making code a lot healthier (and functions purer) - `Either` allows you to turn computation errors into concrete values, making dealing with computation errors a lot healthier (and functions purer) - `IO` allows you to turn computations into values, allowing better compositionality and referential transparency - And so on... I hope I made it a bit clearer what _functors_ are in the context of category theory and how that translates to pure FP in _Scala_! # More material If you want to keep diving deeper, some interesting stuff can be found on my [FP resources list](https://github.com/mmenestret/fp-ressources), and in particular: - [Scala with Cats - Functor chapter](https://underscore.io/books/scala-with-cats/) - [Functors' section of the awesome Functors, Applicatives, And Monads In Pictures article](http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html#functors) - [Yann Esposito's great "Category theory and programming"](http://yogsototh.github.io/Category-Theory-Presentation/categories.html) - Let me know if you need more # Conclusion To sum up, we saw: - That a _functor_ is a kind of container that can be mapped over by a function, and the laws it has to respect - Some examples, and identified a common pattern - How we abstract over and encode that pattern in _Scala_ as a _type class_ of _type constructors_ - We had a modest overview of category theory, what _functors_ are in category theory, and how both relate to 
pure FP in _Scala_ - We concluded with why _functors_ are great and for what practical reasons I'll try to keep this blog post updated. If there are any additions, imprecisions or mistakes that I should correct, or if you need more explanations, feel free to contact me on Twitter or by mail! --- Edit: Thanks [Jules Ivanic](https://twitter.com/guizmaii) for the review :).
mmenestret
84,930
Adding Pages to a Gatsby Project
How to add new pages to a Gatsby project and navigate between them with Gatsby's Link component.
6,774
2019-02-21T16:27:43
https://michaeluloth.com/gatsby-adding-pages/
gatsby, react, beginners
--- title: Adding Pages to a Gatsby Project description: How to add new pages to a Gatsby project and navigate between them with Gatsby's Link component. canonical_url: https://michaeluloth.com/gatsby-adding-pages/ tags: gatsby, react, beginners cover_image: https://dev-to-uploads.s3.amazonaws.com/i/7bmn1ovozc4p9gy3t0ax.png series: Up & Running with Gatsby published: true --- This is the fifth video in our [beginner series](https://www.youtube.com/watch?v=jAa1wh5ATm0&list=PLHBEcHVSROXQQhXpNhmiVKKcw72Cc0V-U) exploring GatsbyJS and how to use it to easily build performant apps and websites. In this video, we learn how to add new pages to a Gatsby project and how to navigate between them using Gatsby's Link component. Check out the video below or [view the entire playlist](https://www.youtube.com/watch?v=jAa1wh5ATm0&list=PLHBEcHVSROXQQhXpNhmiVKKcw72Cc0V-U) on YouTube. Enjoy! 🎉📺 {% youtube ktHshp6SKXc %}
ooloth
85,286
Decorator design pattern [Structural]
The Decorator Design Pattern allows adding new behavior to existing types. We can...
0
2019-02-26T03:25:58
https://itscoderslife.wordpress.com/2019/02/23/decorator-design-pattern-structural/
blog, concepts, designpattern, extension
--- title: Decorator design pattern [Structural] published: true tags: blog,concepts,Design Pattern,extension canonical_url: https://itscoderslife.wordpress.com/2019/02/23/decorator-design-pattern-structural/ --- The Decorator Design Pattern allows adding new behavior to existing types. We can extend the functionality of a type without modifying its implementation, which is a flexible alternative to subclassing. The Decorator can be implemented by wrapping the object to be enhanced. Extensions provide the ability to add methods to types without having to suppress or change their source code. > The Decorator Design Pattern adds new responsibilities to objects without modifying the code of the type used to create them Example: A basic Color class lets you create a color from RGB and HSB values. What if I have to create a color using hex values? Typically we would subclass. Instead of subclassing, we can extend the same Color class and add a function that creates a color from hex values. Another example: when ordering food online, a typical burger or sub sandwich costs x, but the user also has the option to add extra cheese or steak. Sometimes the vendor provides an option to add a drink to the cart. Each of these extras incurs an additional charge. The final cost is calculated by calling the decorators to add their cost on top of the base item. Common misuses: Accidentally replacing the existing functionality of a type. Another misuse is changing the original purpose of a type by decorating it with unrelated behavior.
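The food-ordering example can be sketched in code. Below is a minimal, illustrative Python version (the class names and prices are invented for the example, not taken from any real ordering system): each decorator wraps an order component and adds its own charge without modifying the wrapped type.

```python
# Base component: a plain burger with a fixed (made-up) price.
class Burger:
    def cost(self):
        return 5.0

    def description(self):
        return "burger"


# Decorators: each wraps any component exposing cost()/description()
# and adds its own behavior on top, leaving the wrapped type untouched.
class ExtraCheese:
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def cost(self):
        return self.wrapped.cost() + 1.0

    def description(self):
        return self.wrapped.description() + " + extra cheese"


class Drink:
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def cost(self):
        return self.wrapped.cost() + 2.5

    def description(self):
        return self.wrapped.description() + " + drink"


# Decorators compose freely: the final cost is calculated by each
# decorator adding its charge on top of the base item.
order = Drink(ExtraCheese(Burger()))
print(order.description())  # burger + extra cheese + drink
print(order.cost())         # 8.5
```

Because the decorators only rely on the `cost()`/`description()` interface, new extras can be added without touching `Burger` or any existing decorator, which is exactly the flexibility the pattern trades subclassing for.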
itscoderslife
85,294
Automation with Shell Scripts
I learned Shell Scripting, and in this tutorial I share the things I learned with a simple project-installation script.
0
2019-02-23T05:00:45
https://sergiodxa.com/articles/automatizacion-shell-scripts
spanish, shell, automate, todayilearned
--- title: Automation with Shell Scripts published: true description: I learned Shell Scripting, and in this tutorial I share the things I learned with a simple project-installation script. tags: spanish, shell, automate, til canonical_url: https://sergiodxa.com/articles/automatizacion-shell-scripts --- _Originally published at https://sdx.im/articles/automatizacion-shell-scripts_ A few days ago I needed to create a script to automate the installation process of a project, and I ended up learning Shell Scripting. Here are the things I learned while creating it. > _Note_: This is based on a script I had to write for a project I'm working on. ## Creating the file The first thing is to create the file; let's call it `setup.sh`. We can put it in any folder, for example in `~/` by running `touch ~/setup.sh`. ## Conditions Next we'll start adding the code. We'll give our script two features, `install` and `run`, and for that we need a simple condition. ```shell if [ "$1" = "install" ] then # Install the app elif [ "$1" = "run" ] then # Run the app else # Throw an error fi ``` Notice we're using something called `$1`; that's a variable, in this case referencing the first argument we pass to our script when running it. For example, if we run it like this: ```bash bash ~/setup.sh install ``` then `install` is the value of `$1`. The next thing we see is how conditions are written; the syntax is: ```shell if [ condition ] then # something elif [ another condition ] then # something else else # otherwise fi ``` A condition starts with `if`. After the `if` comes the first condition between square brackets, and on the next line a `then`; after it comes the code to run if the condition passes. To write an `else if` you use the `elif` keyword followed by the condition between square brackets and a `then`, just like a regular `if`. 
For the last case we write `else` without needing a `then`. Finally we write `fi`, which closes the block of conditions. ## Functions Let's create some functions to organize our code. We'll create a function that lets us show the user text in different colors depending on what happened; we'll have `throw`, `warn` and `print`, which will show text in red, yellow and green, respectively. ```shell NC='\033[0m' throw() { COLOR='\033[0;31m' >&2 echo -e ${COLOR}$1${NC} } print() { COLOR='\033[0;32m' echo -e ${COLOR}$1${NC} } warn() { COLOR='\033[0;33m' echo -e ${COLOR}$1${NC} } ``` The way to define a function is simply to write the function's name followed by `()`. Then we create a code block using curly braces, and inside goes the function's code. In our case the functions are more or less similar: in each one we create a variable called `COLOR` with its value; `'\033[0;31m'` means red, `'\033[0;32m'` means green and `'\033[0;33m'` means yellow. The variable `NC`, which we created before the functions, stands for _No Color_, and we'll use it to stop coloring the text. Next we do an `echo` to show text on screen; we pass it the `-e` flag to support `\`, which is necessary to use colors, and we pass it `${COLOR}$1${NC}` as the value to display. What does this mean? First we put the content of our color variable, then we place `$1` which, if you remember, is the first argument we pass to our script; it turns out that inside a function it is the first argument passed to the function, in our case the text, and finally we put `${NC}` so the text loses its color again and the rest of the text doesn't stay red, green or yellow. There is one case different from the rest, the `throw` function: before the `echo` we add `>&2`. That means the `echo` output should go to `stderr` instead of `stdout` (the default). 
This is so that if some program uses our script, it can correctly identify the `throw` messages as errors. Now we can start using these functions; for example, let's make our `else` show an error message in red. ```shell NC='\033[0m' throw() { COLOR='\033[0;31m' >&2 echo -e ${COLOR}$1${NC} } print() { COLOR='\033[0;32m' echo -e ${COLOR}$1${NC} } warn() { COLOR='\033[0;33m' echo -e ${COLOR}$1${NC} } if [ "$1" = "install" ] then # Install the app elif [ "$1" = "run" ] then # Run the app else throw "Invalid command, available values are 'install' or 'run'" fi ``` With this we're making progress. Now let's do the installation. ## Install function Let's create a function called `install` that will install our application. The installation process should cover everything; the idea is that we can run this script on a brand new computer and it leaves the environment ready. For this we'll need to perform several steps: - Install Homebrew - Install Git - Create an SSH key - Add it to GitHub - Install Yarn and Node.js - Clone our repository from GitHub - Install dependencies In code this would look something like this ```shell confirm_ssh() { read -p "Have you added the SSH Key? [y/N] " CONFIRM_SSH if [[ $CONFIRM_SSH != "y" && $CONFIRM_SSH != "Y" ]] then warn "Please add it to continue." 
confirm_ssh fi } install() { /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" print "Homebrew installed" brew install git print "Git installed" warn "Please, enter your GitHub email address and name" read -p "Email address: " EMAIL read -p "Name: " NAME git config --global user.name "$NAME" git config --global user.email "$EMAIL" echo "Generating SSH Key" ssh-keygen -t rsa -b 4096 -C "$EMAIL" eval "$(ssh-agent -s)" ssh-add -K ~/.ssh/id_rsa pbcopy < ~/.ssh/id_rsa.pub warn "We have copied your new SSH Key to your clipboard, please add it to your GitHub account going to https://github.com/settings/keys" sleep 2.5 open "https://github.com/settings/keys" confirm_ssh print "Git & GitHub configured" brew install yarn print "Yarn & Node.js installed" git clone git@github.com:sergiodxa/personal-site.git ~/website print "Repository cloned" cd ~/website && yarn ; cd - print "Dependencies installed" print "Project successfully installed, you could run it with 'bash ~/setup.sh run'" } ``` Let's see what these `install` and `confirm_ssh` functions do. Starting with `install`: the first thing we do is install Homebrew, and when it finishes we show a message saying it was installed. Then we use Homebrew to install Git, and again we let the user know it was installed. We warn the user that we're going to need their name and email address and use `read` to ask them to enter those values; the `-p` flag lets us pass a text to show to the left of where the user will type their name and email, and lastly we pass `read` the name of the variable where we want to store whatever the user types. Once we have `$NAME` and `$EMAIL`, we configure Git so it uses these values and generate a new SSH key, whose value we copy to the clipboard using `pbcopy < ~/.ssh/id_rsa.pub`. 
We warn the user that we've copied their SSH key to the clipboard and that they need to add it to their GitHub account, wait two and a half seconds, and then open the URL where SSH keys are added in the browser; the wait gives them time to read. After this we call the `confirm_ssh` function.

This very simple function uses `read`, as we've already seen, to ask the user whether they've added the SSH key. If the user doesn't type `y` or `Y`, the warning "Please add it to continue." is shown and `confirm_ssh` is called again recursively; this way, as long as the user doesn't type `y` or `Y`, the script won't get past `confirm_ssh`.

After all of that, we install Yarn using Homebrew, which also installs Node.js, giving us a two-for-one. We clone our project's repository into a known folder; we could also use `read` to ask the user which folder to clone the repository into. After this we run `cd ~/website && yarn ; cd -`, which moves us into the folder where we cloned our project, runs `yarn`, and, once the dependencies finish installing, takes us back to the folder we were originally in.

Finally, a message is shown explaining how to run the project.

## The run function

Now that everything is installed, let's create the function that starts our project.

```shell
run() {
  print "Running project"
  cd ~/website && yarn dev ; cd -
}
```

That's all: we print a message saying we're going to run the project, move into the folder where we cloned the repository, and run `yarn dev` to start it. I use this `dev` script on my personal site to start it in development mode. When it finishes, we go back to the initial folder.

## Putting it all together

Now let's put everything together, in order, and call our new functions in the corresponding conditions.
```shell
NC='\033[0m'

throw() {
  COLOR='\033[0;31m'
  >&2 echo -e ${COLOR}$1${NC}
}

print() {
  COLOR='\033[0;32m'
  echo -e ${COLOR}$1${NC}
}

warn() {
  COLOR='\033[0;33m'
  echo -e ${COLOR}$1${NC}
}

run() {
  print "Running project"
  cd ~/website && yarn dev ; cd -
}

confirm_ssh() {
  read -p "Have you added the SSH Key? [y/N] " CONFIRM_SSH
  if [[ "$CONFIRM_SSH" != "y" && "$CONFIRM_SSH" != "Y" ]]
  then
    warn "Please add it to continue."
    confirm_ssh
  fi
}

install() {
  /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  print "Homebrew installed"

  brew install git
  print "Git installed"

  warn "Please, enter your GitHub email address and name"
  read -p "Email address: " EMAIL
  read -p "Name: " NAME

  git config --global user.name "$NAME"
  git config --global user.email "$EMAIL"

  echo "Generating SSH Key"
  ssh-keygen -t rsa -b 4096 -C "$EMAIL"
  eval "$(ssh-agent -s)"
  ssh-add -K ~/.ssh/id_rsa
  pbcopy < ~/.ssh/id_rsa.pub

  warn "We have copied your new SSH Key to your clipboard, please add it to your GitHub account going to https://github.com/settings/keys"
  sleep 2.5
  open "https://github.com/settings/keys"

  confirm_ssh
  print "Git & GitHub configured"

  brew install yarn
  print "Yarn & Node.js installed"

  git clone git@github.com:sergiodxa/personal-site.git ~/website
  print "Repository cloned"

  cd ~/website && yarn ; cd -
  print "Dependencies installed"

  print "Project successfully installed, you could run it with 'bash ~/setup.sh run'"
}

if [ "$1" = "install" ]
then
  install
elif [ "$1" = "run" ]
then
  run
else
  throw "Invalid command, available values are 'install' or 'run'"
fi
```

That's it. If you copy the contents of this file into `~/setup.sh` and run it with `bash ~/setup.sh install` or `bash ~/setup.sh run`, everything will be ready. Additionally, we could call the `run` function at the end of the installation to have everything up and running right away.
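The tweak suggested in the last paragraph, calling `run` at the end of the installation, only touches the dispatch block at the bottom of the script. Here is a sketch with the real `install` and `run` bodies stubbed out as `echo`s so the flow is visible; the stubs and the `dispatch` wrapper are for illustration only:

```shell
install() { echo "installing"; }  # stub standing in for the real install
run() { echo "running"; }         # stub standing in for the real run

dispatch() {
  if [ "$1" = "install" ]
  then
    install
    run  # start the project right after installing it
  elif [ "$1" = "run" ]
  then
    run
  else
    echo "Invalid command, available values are 'install' or 'run'" >&2
    return 1
  fi
}
```

With this change, `bash ~/setup.sh install` leaves the site running without a second command.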
## Final words

At first, shell scripting felt a bit strange since I had never needed it before, but it's quite simple and fun, and I'm already thinking about what else could be automated with shell scripts to make everyday tasks easier.
sergiodxa
129,183
Next.js vs. Create React App: Whose apps are more performant?
Introduction What are the performance differences between Next.js and ...
0
2019-07-08T20:22:38
https://blog.logrocket.com/next-js-vs-create-react-app/
createreactapp, nextjs, react
--- title: Next.js vs. Create React App: Whose apps are more performant? published: true tags: create-react-app,nextjs,react canonical_url: https://blog.logrocket.com/next-js-vs-create-react-app/ --- ![](https://thepracticaldev.s3.amazonaws.com/i/ek6mz3tdc0m7prtctmt2.jpg) ## Introduction What are the performance differences between Next.js and Create React App? Let’s unpack that question with some data, but first, we need to understand what exactly we are comparing here. ### What is Next.js? Next.js is a React framework built by [Zeit](https://zeit.co), and according to [nextjs.org](https://nextjs.org/): > _With Next.js, server rendering React applications has never been easier, no matter where your data is coming from._ Next.js also supports static exporting, but for the purposes of this post, we are focused on that “server rendering” capability mentioned above. ### [![LogRocket Free Trial Banner](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/f760c-1gpjapknnuyhu8esa3z0jga.png?resize=1200%2C280&ssl=1)](https://logrocket.com/signup/) ### What is Create React App? According to its [Getting Started](https://facebook.github.io/create-react-app/docs/getting-started) page: > _Create React App is an officially supported way to create single-page React applications._ Again, for the purposes of this post, we are paying attention to the term “single-page.” ### SSR vs. CSR Next.js is one way that you can leverage React to support server-side rendering (SSR). Likewise, Create React App is one way that you can leverage React to support client-side rendering (CSR). There are other frameworks out there when it comes to either choice, but what we are really comparing in this post is how each rendering strategy impacts web application performance. We just happen to be using two of the more popular frameworks out there to make that comparison. 
## The experiment Let’s start our experiment with a question: **Does SSR improve application performance?** ### Hypothesis Walmart Labs published a great post titled, “[The Benefits of Server Side Rendering Over Client Side Rendering](https://medium.com/walmartlabs/the-benefits-of-server-side-rendering-over-client-side-rendering-5d07ff2cefe8).” They also provide some excellent diagrams that demonstrate the fundamental difference between SSR vs. CSR performance. ![Server-Side Rendering Infographic](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/ssr-explanation.png?resize=800%2C570&ssl=1) ![Client-Side Rendering Infographic](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/csr-explanation.png?resize=800%2C564&ssl=1) These diagrams postulate that SSR can deliver HTML to the browser faster than CSR can, so let’s make that our hypothesis: a web application built with SSR is more performant than one built with CSR. ### Test parameters The best way to test our hypothesis is by building two applications with identical functionality and UI. We want it to mimic a real-world application as much as possible, so we will set a few parameters. The application must: - Fetch data from an API - Render a non-trivial amount of content - Carry some JavaScript weight ### Mobile matters Software developers are typically spoiled with high-powered computers paired with blazingly fast office networks; we do not always experience our applications the same way our users do. With that in mind, when optimizing for performance, it is important to consider both network and CPU limitations. Mobile devices generally have less processing power, so heavy JavaScript file parsing and expensive rendering can degrade performance. Fortunately, Chrome provides a dev tool called Lighthouse, which makes it easy for us to step into the shoes of our users and understand their experience. You can find this under the **Audits** tab in Chrome DevTools. 
![Audits Tab Contents In Chrome DevTools](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/audits-tab-chrome-devtools.png?resize=1322%2C1306&ssl=1) We will use the exact settings displayed above: - Mobile device - Applied Fast 3G, 4x CPU Slowdown - Clear storage ### Geography matters If you live in Northern California and you are on servers living in AWS us-west-1 (N. California) all day, you are not experiencing your application the same way as your users in other parts of the United States, nor other parts of the world. So, for the purposes of this test, the demo apps and the API were deployed to Sydney, Australia (specifically, [Zeit’s syd1 region](https://zeit.co/docs/v2/platform/regions-and-providers/)). The client’s browser will be accessing the applications from Boulder, CO, USA. The distance between Boulder and Sydney is 8,318 mi (13,386 km). ![Distance Between Sydney And Boulder](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/boulder-sydney-difference.png?resize=730%2C725&ssl=1) Look at what that means for data fetching between these two applications. ![CSR Data Fetching](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/csr-data-fetching.png?resize=741%2C661&ssl=1)<figcaption>CSR</figcaption> ![SSR Data Fetching](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/ssr-data-fretching.png?resize=741%2C661&ssl=1)<figcaption>SSR</figcaption> ### Two applications, one API The code for the two apps is available in my [GitHub](https://github.com/goldenshun/nextjs-cra). 
Here are the applications: - [Create React App](https://nextjs-cra.goldenshun.now.sh/) - [Next.js](https://nextjs-cra.goldenshun.now.sh/nextjs) All of the code is in a monorepo: - `/cra` contains the Create React App version of the application - `/nextjs` contains the Next.js version - `/api` contains a mock API that both applications use The UI appears identical: ![CSR User Interface](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/cra-ui.png?resize=730%2C581&ssl=1)<figcaption>CSR</figcaption> ![SSR User Interface](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/nextjs-ui.png?resize=730%2C586&ssl=1)<figcaption>SSR</figcaption> And the JSX is nearly identical: ```jsx // Create React App <ThemeProvider> <div> <Text as="h1">Create React App</Text> <PrimaryNav align="left" maxItemWidth="20rem"> <NavItem href="/" selected>Create React App</NavItem> <NavItem href="/nextjs">Next.js</NavItem> </PrimaryNav> <Table data={users} rowKey="id" title="Users" hideTitle /> </div> </ThemeProvider> ``` ```jsx // Next.js <ThemeProvider> <div> <Text as="h1">Next.js</Text> <PrimaryNav align="left" maxItemWidth="20rem"> <NavItem href="/">Create React App</NavItem> <NavItem href="/nextjs" selected>Next.js</NavItem> </PrimaryNav> <Table data={users} rowKey="id" title="Users" hideTitle /> </div> </ThemeProvider> ``` We will get to what the `ThemeProvider` and other components are in a moment. 
The code differs in how the data is fetched from the API, however:

```jsx
// Create React App
// This all executes in the browser
const [users, setUsers] = useState([]);

useEffect(() => {
  const fetchData = async () => {
    const resp = await axios.get('/api/data');
    const users = resp.data.map(user => {
      return {
        id: user.id,
        FirstName: user.FirstName,
        DateOfBirth: moment(user.DateOfBirth).format('MMMM Do YYYY'),
      }
    });
    setUsers(users);
  };
  fetchData();
}, []);
```

```jsx
// Next.js
// This all executes on the server on first load
Index.getInitialProps = async({ req }) => {
  const resp = await axios.get(`http://${req.headers.host}/api/data`);
  const users = resp.data.map(user => {
    return {
      id: user.id,
      FirstName: user.FirstName,
      DateOfBirth: moment(user.DateOfBirth).format('MMMM Do YYYY'),
    }
  });
  return { users };
}
```

`getInitialProps` is a special function that Next.js uses to populate the initial data for a page. You can learn more about fetching data with Next.js in their [docs](https://nextjs.org/docs#fetching-data-and-component-lifecycle).

### So what’s with all these components, and why are you using Moment.js?

Going back to our original test parameters, we are trying to test with an application that at least somewhat resembles one we would ship to production. The `ThemeProvider`, `PrimaryNav`, etc. all come from a UI component library called [Mineral UI](https://mineral-ui.com/). We are also pulling in [Moment.js](https://momentjs.com/) because it is a larger dependency that adds some JavaScript weight and also some additional processing that needs to occur when rendering the component tree.

The actual libraries that we’re using are not important; the point is to get a little closer to the weight of a normal application without taking the time to build all of that in its entirety.

## Results

Here are the Lighthouse results for a full page load on each application.
![Create React App Lighthouse Results](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/cra-lighthouse-results.png?resize=753%2C1058&ssl=1)<figcaption>Create React App (CSR) results</figcaption> ![Next.js Lighthouse Results](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/nextjs-lighthouse-results.png?resize=751%2C1012&ssl=1)<figcaption>Next.js (SSR) results</figcaption> To understand the details of these metrics, read the [Lighthouse Scoring Guide](https://developers.google.com/web/tools/lighthouse/v3/scoring). One of the more notable differences for our purposes is the First Meaningful Paint. - **CRA:** 6.5s - **Next.js:** 0.8s According to Google’s [First Meaningful Paint](https://developers.google.com/web/tools/lighthouse/audits/first-meaningful-paint) docs: > _This audit identifies the time at which the user feels that the primary content of the page is visible._ Lighthouse also helps us visualize these differences: ![Create React App First Meaningful Paint](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/cra-first-meaningful-paint.png?resize=891%2C141&ssl=1)<figcaption>Create React App (CSR)</figcaption> ![Next.js First Meaningful Paint](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2019/06/nextjs-first-meaningful-paint.png?resize=905%2C143&ssl=1)<figcaption>Next.js (SSR)</figcaption> Do the visuals above look familiar? They should because they mimic the diagrams included in the **Hypothesis** section, where we postulated that SSR can deliver HTML to the browser faster than CSR. Based on these results, it can! To view the Lighthouse results yourself: 1. Download the files for [CRA](https://www.dropbox.com/s/g6ezatrz1iushgo/lighthouse-cra?dl=0) and [Next.js](https://www.dropbox.com/s/i71x8in664eb95t/lighthouse-nextjs?dl=0) 2. Open [https://googlechrome.github.io/lighthouse/viewer/](https://googlechrome.github.io/lighthouse/viewer/) in Chrome 3. 
Drag the downloaded files into the Lighthouse Viewer in Chrome ## Conclusion We opened our experiment with a question: **Does SSR improve application performance?** We built two nearly identical applications, one that uses client-side rendering with Create React App and one that uses server-side rendering with Next.js. The Lighthouse results from our simulations showed better metrics in the Next.js application in all significant categories, especially First Meaningful Paint (87.69 percent decrease), First Contentful Paint (87.69 percent decrease) and Time to Interactive (27.69 percent decrease). * * * ## Plug: [LogRocket](https://logrocket.com/signup/), a DVR for web apps [![LogRocket Dashboard Free Trial Banner](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/1d0cd-1s_rmyo6nbrasp-xtvbaxfg.png?resize=1200%2C677&ssl=1)](https://logrocket.com/signup/) [LogRocket](https://logrocket.com/signup/) is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong. It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store. In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single-page apps. [Try it for free](https://logrocket.com/signup/). * * * The post [Next.js vs. Create React App: Whose apps are more performant?](https://blog.logrocket.com/next-js-vs-create-react-app/) appeared first on [LogRocket Blog](https://blog.logrocket.com).
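As a sanity check on the figures in the conclusion, the percent decrease can be recomputed from the First Meaningful Paint values reported earlier (6.5s for CRA, 0.8s for Next.js); `percentDecrease` is a helper written here for illustration:

```typescript
// Percent decrease between a baseline and an improved measurement.
function percentDecrease(baseline: number, improved: number): number {
  return ((baseline - improved) / baseline) * 100
}

// First Meaningful Paint: CRA at 6.5s vs. Next.js at 0.8s
console.log(percentDecrease(6.5, 0.8).toFixed(2)) // "87.69"
```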
bnevilleoneill
85,504
Peacock - Choose What to Color
Visually identify different VS Code instances with colors that you select
0
2019-02-24T04:45:13
https://dev.to/john_papa/peacock---choose-what-to-color-33mg
vscode, javascript, fun
---
title: Peacock - Choose What to Color
published: true
description: Visually identify different VS Code instances with colors that you select
tags: vscode, javascript, fun
cover_image: https://thepracticaldev.s3.amazonaws.com/i/b6f462wg5xd7ca5mmvat.jpg
---

Last week, I released [Peacock](https://marketplace.visualstudio.com/items?itemName=johnpapa.vscode-peacock&wt.mc_id=devto-blog-jopapa), which solves a problem I had: quickly and visually differentiating between VS Code instances. Yeah, I usually have many of them open for multiple unrelated projects. Not to mention that I also use VS Code to write articles (like this one), take notes, and edit just about everything.

{% link john_papa/peacock---late-night-coding-ftw-3pk0 %}

When I announced it, it seemed to go over well (thanks for the support). I received several good suggestions/requests via Twitter and GitHub. So I decided to add a few of them.

## What's New

Here are the new features (or you can check out the [CHANGELOG.md](https://github.com/johnpapa/vscode-peacock/blob/master/CHANGELOG.md)). The biggest new features are ...

1. You can reset (aka clear) all colors that Peacock sets in the workspace.
2. You can tell Peacock which parts of VS Code will be affected when you select a color. You can do this by setting the property `peacock.affectedSettings` to one or more of the valid values below.

```javascript
// Valid settings you can choose to be affected
"peacock.affectedSettings": [
  "activityBar",
  "statusBar",
  "titleBar"
]
```

So you can choose to affect just one of those, two of them, or all three of them. You do you!
{% twitter 1099531106083328001 %}

## Credits Redux

I also want to once again thank [@josephrexme](https://twitter.com/josephrexme) for the name and icon for Peacock, and the VS Code team for their incredibly [helpful guide for creating extensions](https://code.visualstudio.com/api/get-started/your-first-extension?wt.mc_id=devto-blog-jopapa).

## Get Peacock

If you have Peacock and want the update to v0.0.7, VS Code will prompt you soon. If you are interested in trying out Peacock, you can [find it here in the marketplace](https://marketplace.visualstudio.com/items?itemName=johnpapa.vscode-peacock&wt.mc_id=devto-blog-jopapa). It is currently in preview, which means there may be dragons ahead.

- Get the extension [here](https://marketplace.visualstudio.com/items?itemName=johnpapa.vscode-peacock&wt.mc_id=devto-blog-jopapa)
- Contribute to the GitHub repository [here](https://github.com/johnpapa/vscode-peacock?wt.mc_id=devto-blog-jopapa)

[![peacock icon](https://thepracticaldev.s3.amazonaws.com/i/b4fq4z43mjx8q6mpip59.png)](https://marketplace.visualstudio.com/items?itemName=johnpapa.vscode-peacock&wt.mc_id=devto-blog-jopapa)

Worst case, this extension is just something I'll use, and that's OK. But if you like it too, please give it a try and submit feedback in GitHub. You can [open issues](https://github.com/johnpapa/vscode-peacock/issues?wt.mc_id=devto-blog-jopapa) or grab an [open issue](https://github.com/johnpapa/vscode-peacock/issues?wt.mc_id=devto-blog-jopapa) and help contribute.

Thanks!
john_papa
85,800
Migrating to TypeScript, Part 2: Trust the compiler!
Header image by Irina Iriser on Unsplash. In part 1, we explored how to initialise a project with...
443
2019-02-25T09:35:46
https://resir014.xyz/posts/2019/02/25/migrating-to-typescript-part-2/
typescript, javascript, beginners, programming
---
title: Migrating to TypeScript, Part 2: Trust the compiler!
published: true
tags: typescript, javascript, beginners, programming
cover_image: https://thepracticaldev.s3.amazonaws.com/i/2zb1km4n88lvvsvmj8bz.jpg
canonical_url: https://resir014.xyz/posts/2019/02/25/migrating-to-typescript-part-2/
series: Migrating to TypeScript
---

_Header image by [Irina Iriser](https://unsplash.com/photos/nYIQYg8cQVc) on [Unsplash](https://unsplash.com/)._

In part 1, we explored how to initialise a project with the TypeScript compiler and the new TypeScript Babel preset. In this part, we’ll go through a quick primer of TypeScript’s features and what they’re for. We’ll also learn how to migrate your existing JavaScript project gradually to TypeScript, using an actual code snippet from an existing project. This will get you to learn how to trust the compiler along the way.

* * *

## Thinking in TypeScript

The idea of static typing and type safety in TypeScript might feel overwhelming coming from a dynamic typing background, but it doesn’t have to be that way. The main thing people often tell you about TypeScript is that it’s “just JavaScript with types”. Since JavaScript is dynamically typed, features like type coercion are often abused to exploit the dynamic nature of the language, so the idea of type safety may never have crossed the average JS developer’s mind. The trick is to rewire our thinking as we go along. And to do that we need to have a mindset.

The primary mindset, as defined in [Basarat’s book](https://basarat.gitbooks.io/typescript/docs/javascript/recap.html), is **Your JavaScript is already TypeScript**.

### But why is TypeScript important?

A more appropriate question to ask would be **“why is static typing in JavaScript important?”**

Sooner or later, you’re going to start writing medium to large-scale apps with JavaScript.
When your codebase gets larger, detecting bugs will become a more tedious task, especially when it’s one of those pesky `Cannot read property 'x' of undefined` errors. JavaScript is a dynamically-typed language by nature, and it has a lot of quirks, like `null` and `undefined` types, type coercion, and the like. Sooner or later, these tiny quirks will work against you down the road.

Static typing ensures the correctness of your code and helps detect bugs early. Static type checkers like TypeScript and [Flow](https://flow.org/) help reduce the amount of bugs in your code by detecting type errors during compile time. In general, using static typing in your JavaScript code [can help prevent about 15%](https://blog.acolyer.org/2017/09/19/to-type-or-not-to-type-quantifying-detectable-bugs-in-javascript/) of the bugs that end up in committed code.

TypeScript also provides various productivity enhancements like the ones listed below. You can see these features on editors with first-class TypeScript support like [Visual Studio Code](https://code.visualstudio.com/).

- Advanced statement completion through IntelliSense
- Smarter code refactoring
- Ability to infer types from usage
- Ability to type-check JavaScript files (and infer types from JSDoc annotations)

* * *

## Strict mode

TypeScript’s “strict mode” is where the meat of the whole TypeScript ecosystem lies. The `--strict` compiler flag, introduced in [TypeScript 2.3](https://blog.mariusschulz.com/2017/06/09/typescript-2-3-the-strict-compiler-option), activates TypeScript’s strict mode. This will set all strict type-checking options to true by default, which includes:

- `--noImplicitAny` - Raise error on expressions and declarations with an implied ‘any’ type.
- `--noImplicitThis` - Raise error on ‘this’ expressions with an implied ‘any’ type.
- `--alwaysStrict` - Parse in strict mode and emit “use strict” for each source file.
- `--strictBindCallApply` - Enable strict ‘bind’, ‘call’, and ‘apply’ methods on functions. - `--strictNullChecks` - Enable [strict null checks](https://basarat.gitbooks.io/typescript/docs/options/strictNullChecks.html). - `--strictFunctionTypes` - Enable strict checking of function types. - `--strictPropertyInitialization` - Enable strict checking of property initialization in classes. When `strict` is set to `true` in your `tsconfig.json`, all of the options above are set to `true`. If some of these options give you problems, you can override strict mode by overriding the options above one by one. For example: ```json { "compilerOptions": { "strict": true, "strictFunctionTypes": false, "strictPropertyInitialization": false } } ``` This will enable all strict type-checking options _except_ `--strictFunctionTypes` and `--strictPropertyInitialization`. Fiddle around with these options when they give you trouble. Once you get more comfortable with them, slowly re-enable them one by one. ## Linting Linting and static analysis tools are one of the many essential tools for any language. There are currently two popular linting solutions for TypeScript projects. - **[TSLint](https://palantir.github.io/tslint/)** used to be the de-facto tool for linting TypeScript code. It has served the TS community well throughout the years, but it has fallen out of favour as of late. Development seems to have stagnated lately, with the authors even [announcing its deprecation](https://medium.com/palantir/tslint-in-2019-1a144c2317a9) recently in favour of ESLint. Even Microsoft themselves have [noticed some architectural and performance issues](https://github.com/Microsoft/TypeScript/issues/29288) in TSLint as of late, and recommended against it. Which brings me to the next option. - **[ESLint](https://eslint.org/)** - yeah, I know. But hear me out for a second. 
Despite being a tool solely for linting JavaScript for quite some time, ESLint has been adding more and more features to better support TS. It has announced plans to [better support TS](https://eslint.org/blog/2019/01/future-typescript-eslint) through the new [typescript-eslint](https://github.com/typescript-eslint/typescript-eslint) project. It contains a TypeScript parser for ESLint, and even a plugin which [ports many TSLint rules into ESLint](https://github.com/typescript-eslint/typescript-eslint/tree/master/packages/eslint-plugin#supported-rules). Therefore, ESLint might be the better choice going forward.

To learn more about using ESLint for TypeScript, read through the docs of the [typescript-eslint](https://github.com/typescript-eslint/typescript-eslint) project.

## A quick primer to TypeScript types

The following section contains some quick references on how the TypeScript type system works. For a more detailed guide, read this [2ality blog post](http://2ality.com/2018/04/type-notation-typescript.html) on TypeScript’s type system.

### Applying types

Once you’ve renamed your `.js` files to `.ts` (or `.tsx`), you can enter type annotations. Type annotations are written using the `: TypeName` syntax.

```tsx
let assignedNumber: number | undefined = undefined
assignedNumber = 0

function greetPerson(name: string) {
  return `Hello, ${name}!`
}
```

You can also define return types for a function.

```tsx
function isFinishedGreeting(name: string): boolean {
  return getPerson(name).isGreeted()
}
```

### Primitive & unit types

TypeScript has a few supported primitive types. These are the most basic data types available within the JavaScript language, and to an extent TypeScript as well.

```tsx
// Boolean
let isDone: boolean = false

// Number
let decimal: number = 6
let hex: number = 0xf00d
let binary: number = 0b1010
let octal: number = 0o744

// string
let standardString: string = 'Hello, world!'
let templateString: string = `Your number is ${decimal}`
```

These primitive types can also be turned into **unit types**, where values can be their own types.

```tsx
// This variable can only have one possible value: 42.
let fortyTwo: 42 = 42

// A unit type can also be combined with other types.
// The `|` turns this into a union type. We'll go through it in the next section.
let maybeFalsey: 0 | false | null | undefined
```

### Intersection & union types

You can combine two or more types together using intersection and union types.

Union types can be used for types/variables that have one of several types. This tells TypeScript that **“variable/type X can be of either type A or type B.”**

```tsx
function formatCommandline(command: string[] | string) {
  var line = ''
  if (typeof command === 'string') {
    line = command.trim()
  } else {
    line = command.join(' ').trim()
  }
  return line
}
```

Intersection types can be used to combine multiple types into one. This tells TypeScript that **“variable/type X contains type A and B.”**

```tsx
type A = { a: string }
type B = { b: string }
type Combined = A & B // { a: string, b: string }

// Example usage of intersection types.
// Here we take two objects and combine them into one, using intersection types
// to combine the types of both objects into one.
function extend<T, U>(first: T, second: U): T & U {
  // use TypeScript type casting to create an object with the combined type.
  let result = {} as T & U
  // combine the object.
  for (let id in first) {
    result[id] = first[id]
  }
  for (let id in second) {
    if (!result.hasOwnProperty(id)) {
      result[id] = second[id]
    }
  }
  return result
}

const x = extend({ a: 'hello' }, { b: 42 })

// `x` now has both `a` and `b` property
console.log(x.a)
console.log(x.b)
```

### `type`s and `interface`s

For defining types of objects with a complex structure, you can use either the `type` or the `interface` syntax.
Both work essentially the same, with `interface` being well-suited for object-oriented patterns with classes.

```tsx
// Types
type ComponentProps = {
  title?: string
}

function ReactComponent(props: ComponentProps) {
  return <div>{props.title}</div>
}

// Interfaces
interface TaskImpl {
  start(): void
  end(): void
}

class CreepTask implements TaskImpl {
  state: number = 0

  start() {
    this.state = 1
  }

  end() {
    this.state = 0
  }
}
```

### Generics

Generics provide meaningful type constraints between members. In the example below, we define an Action type where the `type` property can be anything that we pass into the generic.

```tsx
interface Action<T = any> {
  type: T
}
```

The type that we defined inside the generic will be passed down to the `type` property. In the example below, `type` will have a unit type of `'FETCH_USERS'`.

```tsx
// You can also use `Action<string>` for any string value.
interface FetchUsersAction extends Action<'FETCH_USERS'> {
  payload: UserInfo[]
}

type AddUserAction = Action<'ADD_USER'>

const action: AddUserAction = {
  type: 'ADD_USER'
}
```

### Declaration files

You can let TypeScript know that you’re trying to describe some code that exists somewhere in your library (a module, global variables/interfaces, or runtime environments like Node). To do this, we use the `declare` keyword. Declaration files always have a `.d.ts` file extension.

```tsx
// For example, to annotate Node's `require()` call
declare const require: (module: string) => any

// Now you can use `require()` everywhere in your code!
require('whatwg-fetch')
```

You can include this anywhere in your code, but normally they’re included in a declaration file. Declaration files are used to declare the types of your own code, or code from other libraries. Normally, projects will include their declarations in something like a `declarations.d.ts` file, which will not be emitted in your compiled code.
You can also constrain declarations to a certain module with the `declare module` syntax. For example, here’s a module that has a default export called `doSomething()`.

```tsx
declare module 'module-name' {
  // You can also export types inside modules so library consumers can use them.
  export type ExportedType = { a: string; b: string }

  const doSomething: (param: ExportedType) => any
  export default doSomething
}
```

* * *

## Let’s migrate!

Alright, enough with the lectures, let’s get down and dirty! We’re going to take a look at a real-life project, take a few modules, and convert them to TypeScript.

To do this, I’ve enlisted the help of my Thai friend named [Thai](https://dt.in.th/) (yeah, I know). He has a massive, web-based rhythm game project named [Bemuse](https://bemuse.ninja), and he’s been planning to migrate it to TypeScript. So let’s look at some parts of the code and try migrating them to TS where we can.

### From `.js` to `.ts`

Consider the following module:

![1-non-js-file](https://thepracticaldev.s3.amazonaws.com/i/bi681uy6marceq0zrmrr.png)

Here we have your typical JavaScript module: a simple module with one function type-annotated with JSDoc, and two other non-annotated functions. And we’re going to turn this bad boy into TypeScript.

To make a file in your project a TypeScript file, we just need to rename it from `.js` to `.ts`. Easy, right?

![2-rename-to-ts](https://thepracticaldev.s3.amazonaws.com/i/mug06higec3p1pa63qot.png)

Oh no! We’re starting to see some red! What did we do wrong?

This is fine, actually! We’ve just enabled TypeScript’s type-checking by doing this, so what’s left for us is to add types as we see fit.

The first thing to do is to add parameter types to these functions. As a quick way to get started, TypeScript allows us to infer types from usage and include them in our code. If you use Visual Studio Code, click on the lightbulb that appears when your cursor is in the function name, and click on “Infer parameter types from usage”.
![infer-types-from-usage](https://media.giphy.com/media/jy8Ii9UdsRGZxQETgq/giphy.gif)

If your functions/variables are documented using [JSDoc](http://usejsdoc.org/), this gets much easier, as TS can also infer parameter types from JSDoc annotations.

![infer-types-from-jsdoc](https://media.giphy.com/media/fMzOUcjYX6k8GdC8a3/giphy.gif)

Note that TypeScript generated a partial object schema for the function at the bottom of this file based on usage. We can use it as a starting point to improve its definition using `interface`s and `type`s. For example, let’s take a look at this line.

```tsx
/**
 * Returns the accuracy number for a play record.
 */
export function formattedAccuracyForRecord(record: { count: any; total: any }) {
  return formatAccuracy(calculateAccuracy(record.count, record.total))
}
```

We already know that we have the properties `count` and `total` in this parameter. To make this code cleaner, we can put this declaration into a separate `type`/`interface`. You can include it within the same file, or separately in a file reserved for common types/interfaces, e.g. `types.ts`.

```tsx
export type RecordItem = {
  count: any
  total: any
  [key: string]: any
}

import { RecordItem } from 'path/to/types'

/**
 * Returns the accuracy number for a play record.
 */
export function formattedAccuracyForRecord(record: RecordItem) {
  return formatAccuracy(calculateAccuracy(record.count, record.total))
}
```

### Dealing with external modules

With that out of the way, now we’re going to look at how to migrate files with external modules. For a quick example, we have the following module:

![4-raw-ts-with-modules](https://thepracticaldev.s3.amazonaws.com/i/yb3fpxlnj4yv9ph7ta3v.png)

We’ve just renamed this raw JS file into `.ts` and we’re seeing a few errors. Let’s take a look at them.

On the first line, we can see that TypeScript doesn’t understand how to deal with the `lodash` module we imported.
If we hover over the red squiggly line, we can see the following:

```
Could not find a declaration file for module 'lodash-es'. '/Users/resir014/etc/repos/bemusic/bemuse/node_modules/lodash/lodash.js' implicitly has an 'any' type.
  Try `npm install @types/lodash` if it exists or add a new declaration (.d.ts) file containing `declare module 'lodash';`
```

As the error message says, all we need to do to fix this error is to install the type declaration for `lodash`.

```
$ npm install --save-dev @types/lodash
```

This declaration file comes from [DefinitelyTyped](https://github.com/DefinitelyTyped/DefinitelyTyped), an extensive, community-maintained library of declaration files for the Node runtime, as well as many popular libraries. All of them are autogenerated and published in the `@types/` scope on npm.

Some libraries include their own declaration files. If a project is compiled from TypeScript, the declarations can be generated automatically. You can also create declaration files manually for your own library, even when your project is not built using TypeScript.

When shipping declaration files inside a module, be sure to point to them with a `types` (or `typings`) key in the `package.json`. This will make sure the TypeScript compiler knows where to look for the declaration file for said module.

```json
{
  "main": "./lib/index.js",
  "types": "./types/index.d.ts"
}
```

OK, so now that we have the type declarations installed, what does our TS file look like?

![5-installed-declarations](https://thepracticaldev.s3.amazonaws.com/i/ui0btlep4ymrk8tls75k.png)

Whoa, what’s this? I thought only one of those errors would be gone? What’s happening here?

Another power of TypeScript is that it’s able to infer types based on how data flows throughout your module. This is called _control-flow based type analysis_. This means that TypeScript will know that the `chart` inside the `.orderBy()` call comes from what was passed in from the previous calls.
So the only type error that we have to fix now is the function parameter.

But what about libraries without type declarations? In the first part of this series, I came across this comment.

{% devcomment 834k %}

Some packages include their own typings within the project, so oftentimes they will get picked up by the TypeScript compiler. But in case we have neither built-in typings nor an `@types` package for the library, we can create a shim for these libraries using ambient declarations (`*.d.ts` files).

First, create a folder in your source directory to hold ambient declarations. Call it `types/` or something similar so we can easily find them. Next, create a file to hold our own custom declarations for said library. Usually we use the library name, e.g. `evergreen-ui.d.ts`.

Now inside the `.d.ts` file we just created, put the following:

```tsx
declare module 'evergreen-ui'
```

This will shim the `evergreen-ui` module so we can import it safely without the “Cannot find module” errors. Note that this doesn’t give you autocompletion support, so you will have to declare the API for said library manually. This is optional, of course, but very useful if you want better autocompletion.

For example, if we were to use Evergreen UI’s Button component:

```tsx
// Import React's base types for us to use.
import * as React from 'react'

declare module 'evergreen-ui' {
  export interface ButtonProps
    extends DimensionProps,
      SpacingProps,
      PositionProps,
      LayoutProps {
    // The extended props above are examples of extending common props;
    // their definitions are omitted in this example for brevity.
    intent: 'none' | 'success' | 'warning' | 'danger'
    appearance: 'default' | 'minimal' | 'primary'
    isLoading?: boolean

    // Again, skipping the rest of the props for brevity, but you get the idea.
  }

  export class Button extends React.PureComponent<ButtonProps> {}
}
```

* * *

And that’s it for part 2!
The full guide concludes here, but if there are any more questions after this post was published, I’ll try to answer some of them in part 3. As a reminder, the `#typescript` channel on the [Reactiflux](https://www.reactiflux.com/) Discord server has a bunch of lovely people who know TypeScript inside and out. Feel free to hop in and ask any question about TypeScript!
resir014
85,990
Random gif generator, Pricing cards, Cloudinary uploader & more | Module Monday 29
Module Monday 29
0
2019-02-25T17:10:56
https://guide.anymod.com/module-monday/29.html
showdev, opensource, webdev, javascript
--- title: Random gif generator, Pricing cards, Cloudinary uploader & more | Module Monday 29 published: true description: Module Monday 29 tags: showdev, opensource, webdev, javascript cover_image: https://res.cloudinary.com/component/image/upload/b_rgb:f6f6f6,c_pad,h_370,w_880/v1550949608/gif-generator_j8sqy1.gif canonical_url: https://guide.anymod.com/module-monday/29.html --- ## 5 open-source website modules you can use anywhere Everything below is open source and free to use in any project you choose. [Anymod](https://anymod.com) lets you quickly add features to any website or web app. Click a mod to see it in action along with its source code. ## Random gif generator Uses the Giphy API to show a new gif every 10 seconds <a class="btn btn-sm" href="https://anymod.com/mod/raakda?v=20&h1=31&h2=32">View mod</a> <a href="https://anymod.com/mod/raakda?v=20&h1=31&h2=32"> <img src="https://res.cloudinary.com/component/image/upload/v1550949608/gif-generator_j8sqy1.gif"/> </a> ## Cloudinary upload widget v2 Upload to your Cloudinary account from a variety of sources. <a class="btn btn-sm" href="https://anymod.com/mod/nkklnn?h1=33&h2=34&v=20">View mod</a> <a href="https://anymod.com/mod/nkklnn?h1=33&h2=34&v=20"> <img src="https://res.cloudinary.com/component/image/upload/v1550945879/cloudinary_eicv0f.gif"/> </a> ## Team portraits Show off your team members to the world. <a class="btn btn-sm" href="https://anymod.com/mod/allonr">View mod</a> <a href="https://anymod.com/mod/allonr"> <img src="https://res.cloudinary.com/component/image/upload/v1550951780/team_sh64vf.gif"/> </a> ## Pricing table Ready-to-use pricing cards to showcase your product. <a class="btn btn-sm" href="https://anymod.com/mod/mlldod?v=20">View mod</a> <a href="https://anymod.com/mod/mlldod?v=20"> <img src="https://res.cloudinary.com/component/image/upload/v1550976091/screenshots/pricing.png"/> </a> ## Fade in quotation A gentle fade in for any text you choose. 
<a class="btn btn-sm" href="https://anymod.com/mod/baaokd">View mod</a> <a href="https://anymod.com/mod/baaokd"> <img src="https://res.cloudinary.com/component/image/upload/v1550947127/quotation_wkc0nx.gif"/> </a> <hr> I post new modules [here](https://dev.to/tyrw) every (Module) Monday -- I hope you find them useful! Happy coding ✌️
tyrw
85,998
Read YAMLy config with a few lines of code
Originally posted on detunized.net I was working on a C# library and in a simp...
0
2019-02-25T17:51:54
https://detunized.net/posts/2019-02-25-read-yamly-config-with-a-few-lines-of-code/
csharp, javascript, ruby
---
title: "Read YAMLy config with a few lines of code"
published: true
canonical_url: https://detunized.net/posts/2019-02-25-read-yamly-config-with-a-few-lines-of-code/
tags:
- c-sharp
- javascript
- ruby
- go
---

*Originally posted on [detunized.net](https://detunized.net/posts/2019-02-25-read-yamly-config-with-a-few-lines-of-code/)*

I was working on a C# library and in a simple example application I needed to load a config file. It didn't have to be fancy or very efficient. Something like INI, JSON, TOML or YAML would do. What I didn't want is any dependencies, so as not to bother the user with installing extra libraries. Unfortunately, .NET doesn't provide any of those in its standard library. There's XML, but I cannot stomach that.

So I thought I could probably write a simple text config file parser in a few minutes. Why not give it a try? All I need is string keys and values. Comments would be good to have. Something like this:

```yaml
# Login username
username: dude@lebowski.com

# User password
password: no one will guess

# URL
url: https://lebowski.com:443/index.html
```

This is a subset of YAML actually. Very clean and readable. How difficult would it be to write a parser for that? Normally, every time I say something like this to myself, I mentally prepare for a huge underestimation. What looks like a ten minute task could turn out to be a week long project. Strangely, not this time. Thanks to the pretty great runtime library and awesome LINQ support, in 3 minutes I had a fully working solution:

```c#
Dictionary<string, string> ReadConfig(string filename)
{
    return File
        .ReadAllLines(filename)
        .Select(line => line.Trim())
        .Where(line => line.Length > 0 && !line.StartsWith("#"))
        .Select(line => line.Split(new[] {':'}, 2))
        .Where(parts => parts.Length == 2)
        .ToDictionary(parts => parts[0].TrimEnd(), parts => parts[1].TrimStart());
}
```

This function is not crazy efficient, but who cares.
It's pretty robust: it wouldn't fail with an error as long as it's possible to read the file. It doesn't have any error reporting in case there's a syntax error, though. It would simply ignore it. In my case that's good enough. Let's see how this works.

First, I read the file. This call returns an array of strings, one per line:

```c#
File.ReadAllLines(filename)
```

Next, I trim all the whitespace on both ends. `Select` in LINQ is the same as `map` almost everywhere else; it transforms the sequence by applying a function to each element:

```c#
.Select(line => line.Trim())
```

Next, I filter out all lines that are blank or start with `#`. `Where` filters the sequence by keeping the elements that satisfy the given predicate:

```c#
.Where(line => line.Length > 0 && !line.StartsWith("#"))
```

Next, I split each line on the first colon. If the rest of the line has more colons they will not be split and become part of the value. That's intentional:

```c#
.Select(line => line.Split(new[] {':'}, 2))
```

Next, I filter out all the lines that didn't get split into exactly two parts. This is the place where syntax errors get ignored and thrown out:

```c#
.Where(parts => parts.Length == 2)
```

And in the last step I convert the array of two-element arrays to a dictionary. What in C# is called a dictionary in other languages might be called an *object*, *map* or *hash map*. It's a key-value store or an associative array. In this step I also trim any trailing whitespace on the key and leading whitespace on the value (the other ends are trimmed already):

```c#
.ToDictionary(parts => parts[0].TrimEnd(), parts => parts[1].TrimStart());
```

Done. In a few lines and one statement I've read and parsed a config file.

JavaScript has pretty similar functional programming capabilities, so it's possible to mirror this solution in JS. [Like always](https://www.destroyallsoftware.com/talks/wat), there are some gotchas. In this case the JS `String.split` function is acting weird.
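To see the surprise in isolation, here's what the limit argument does to one of the config lines from earlier; the tail of the URL is simply discarded:

```javascript
const line = 'url: https://lebowski.com:443/index.html'

// JavaScript's limit truncates: everything after the second piece is gone.
const truncated = line.split(':', 2)
console.log(truncated) // [ 'url', ' https' ]

// One workaround: split fully, keep the head, and re-join the tail.
const [key, ...rest] = line.split(':')
const value = rest.join(':').trimStart()
console.log(key, '=>', value) // url => https://lebowski.com:443/index.html
```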
The limit parameter works differently compared to all the other languages I tried. Instead of returning the rest of the line in the last element, `split` in JavaScript truncates the input. [WAT](https://www.destroyallsoftware.com/talks/wat)?! To fix that I have to `join` the split tail back together in the line before the final `reduce` that converts the array to an object.

```js
function readConfig(filename) {
    return require("fs")
        .readFileSync(filename, "utf-8")
        .split("\n")
        .map(x => x.trim())
        .filter(x => x.length > 0 && !x.startsWith("#"))
        .map(x => x.split(":"))
        .filter(x => x.length > 1)
        .map(x => [x[0], x.slice(1).join(":")])
        .reduce((a, x) => (a[x[0].trimEnd()] = x[1].trimStart(), a), {})
}
```

JavaScript has native support for JSON, so it's probably stupid to roll your own config format when JSON could be read in one short statement. Comments are not supported, though.

I think the Ruby version is the cleanest, though it's practically the same:

```ruby
def read_config filename
  File
    .readlines(filename)
    .map(&:strip)
    .reject { |x| x.empty? || x.start_with?("#") }
    .map { |x| x.split ":", 2 }
    .select { |x| x.size == 2 }
    .map { |k, v| [k.rstrip, v.lstrip] }
    .to_h
end
```

Ruby supports both YAML and JSON out of the box. It would be easier to just do

```ruby
YAML.load_file "config.yaml"
```

but then I'd have to quote some of the values, as YAML is not that flexible with whitespace and special characters.

How would I do it in Go? I wouldn't! I don't want to drown in `if`s, `for`s, `err`s and `nil`s. Just say no to writing code and `go get` some packages.
detunized
86,008
Understanding git for real by exploring the .git directory
“Whoah, I’ve just read this quick tuto about git and oh my god it is cool. I feel now super...
466
2019-02-25T19:09:04
https://www.daolf.com/posts/git-series-part-1/
git, junior, showdev, beginners
---
title: Understanding git for real by exploring the .git directory
published: true
description:
cover_image: https://thepracticaldev.s3.amazonaws.com/i/va29dwhc3dkk6qdq8xi1.png
tags: [git, junior, showdev, beginner]
series: "Mastering Git"
canonical_url: "https://www.daolf.com/posts/git-series-part-1/"
---

> # “Whoah, I’ve just read this quick tuto about git and oh my god it is cool. I feel now super comfortable using it, and I’m not afraid at all to break something.” — said no one ever.

Using git as a beginner is like visiting a new country for someone who can’t read/speak the local language. As soon as you know where you are and where to go, everything is fine, but the moment you get lost, the big troubles begin (#badMetaphor). There are a lot of posts out there about learning the basic commands of git; this is not one of them. What I’m going to try here is a different approach.

![[xkcd](https://xkcd.com/1597/)](https://cdn-images-1.medium.com/max/2000/1*0o9GZUzXiNnI4poEvxvy8g.png#center)
<figcaption> <a href="https://xkcd.com/1597/"> XKCD</a> </figcaption>

New users are usually afraid of git, and really, it is hard not to be. It is a powerful tool for sure, but it is not really user-friendly. Lots of new concepts, commands doing completely different things depending on whether a file is passed as a parameter or not, cryptic feedback… I think that one way to overcome those first difficulties is to do a little more than just git commit/push. I think that if we take the time to understand what git is really made of, it can save you from a lot of trouble.

### Get into the .git

So, let’s begin. When you create a git repo using git init, git creates this wonderful directory: the .git. This folder contains all the information needed for git to work. To be clear, if you want to remove git from your project but keep your project files, just delete the .git folder. But come on, why would you do that?
```
├── HEAD
├── branches
├── config
├── description
├── hooks
│   ├── pre-commit.sample
│   ├── pre-push.sample
│   └── ...
├── info
│   └── exclude
├── objects
│   ├── info
│   └── pack
└── refs
    ├── heads
    └── tags
```

Here is what your .git will look like before your first commit:

* HEAD: We’ll come to this later.

* config: This file contains the settings for your repository. The URL of the remote, your email, your username, etc. are written here. Every time you use `git config ...` in the console, it ends up here.

* description: Used by gitweb (kind of an ancestor of github) to display the description of the repo.

* hooks: Here is an interesting feature. Git comes with a set of scripts that you can automatically run at every meaningful git phase. Those scripts, called hooks, can be run before/after a commit/rebase/pull… The name of the script dictates when it is executed. An example of a useful pre-push hook would be one that checks that all the styling rules are respected, to keep consistency in the remote (the distant repository).

* info — exclude: You can put the files you don’t want git to deal with in your .gitignore file. Well, the exclude file is the same, except that it won’t be shared. For example, if you don’t want to track your custom IDE-related config files, even though most of the time .gitignore is enough (please tell me in the comments if you really use this one).

### What’s inside a commit?

Every time you create a file and track it, git compresses it and stores it into its own data structure. The compressed object will have a unique name, a hash, and will be stored under the object directory.

Before exploring the object directory, we have to ask ourselves what a commit is. A commit is kind of a snapshot of your working directory, but it is a little bit more than that. In fact, when you commit, git does only two things in order to create the snapshot of your working directory:

1.
If the file didn’t change, git just adds the name of the compressed file (the hash) into the snapshot. 2. If the file has changed, git compresses it, stores the compressed file in the object folder. Finally, it adds the name (the hash) of this compressed file into the snapshot. This is a simplification, this whole process is a little bit complicated and will be part of a future post. And once that snapshot is created, it will also be compressed and be named with a hash, and where all those compressed objects end up? In the object folder. ├── 4c │ └── f44f1e3fe4fb7f8aa42138c324f63f5ac85828 // hash ├── 86 │ └── 550c31847e518e1927f95991c949fc14efc711 // hash ├── e6 │ └── 9de29bb2d1d6434b8b29ae775ad8c2e48c5391 // hash ├── info // let's ignore that └── pack // let's ignore that too This is what the object directory looked like after I created one empty file file_1.txt and committed it. Please note that if the hash of your file is “4cf44f1e…”, git will store this file under a “4c” subdirectory and then name the file “f44f1…”. This little trick reduces by 255 the size of the /objects directory. You see 3 hash right. So one would be for my file_1.txt, the other would be for the snapshot created when I committed. What is the third one? Well because a commit is an object in itself, it is also compressed and stored in the object folder. What you need to remember is that a commit is made of 4 things : 1. The name (a hash) of the working directory’s snapshot 1. A comment 1. Committer information 1. 
Hash of the parent commit

And that’s it. Look for yourself what happens if we decompress the commit file:

```
// by looking at the history you can easily find your commit hash
// you also don't have to paste the whole hash, only enough
// characters to make the hash unique
git cat-file -p 4cf44f1e3fe4fb7f8aa42138c324f63f5ac85828
```

This is what I get:

```
tree 86550c31847e518e1927f95991c949fc14efc711
author Pierre De Wulf <test@gmail.com> 1455775173 -0500
committer Pierre De Wulf <test@gmail.com> 1455775173 -0500

commit A
```

You see, we got, as expected, the snapshot hash, the author, and my commit message. Two things are important here:

1. As expected, the snapshot hash “86550…” is also an object and can be found in the object folder.
2. Because it was my first commit, there is no parent.

What’s in my snapshot for real?

```
git cat-file -p 86550c31847e518e1927f95991c949fc14efc711

100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391    file_1.txt
```

And here we find the last object that was previously in our object store, the only object that was in our snapshot. It’s a blob, but that’s another story.

### branch, tags, HEAD all the same

So now you understand that everything in git can be reached with the correct hash. Let’s take a look at the HEAD now. So what’s in that HEAD?

```
cat HEAD

ref: refs/heads/master
```

Okay, this is not a hash. It makes sense, because the HEAD can be considered as a pointer to the tip of the branch you’re working on. And now if we look at what is in refs/heads/master, here is what we’ll see:

```
cat refs/heads/master

4cf44f1e3fe4fb7f8aa42138c324f63f5ac85828
```

Does that look familiar? Yes, this is the exact same hash as our first commit. This shows you that branches and tags are nothing more than pointers to a commit. Meaning that you can delete all the branches you want, all the tags you want: the commits they were pointing to are still going to be there. They will just be much more difficult to access.
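If you want to poke around yourself, the whole exploration fits in a scratch repository. (This assumes `git` is installed; your commit hash will differ from mine, since it covers the author and timestamp too.)

```shell
#!/bin/sh
set -e

# Work in a throwaway repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "test@gmail.com"
git config user.name "Pierre De Wulf"

# One empty tracked file, one commit.
touch file_1.txt
git add file_1.txt
git commit -q -m "commit A"

# The blob, the tree (snapshot) and the commit all live in .git/objects.
find .git/objects -type f

# HEAD points at a branch, and the branch file holds the commit hash.
cat .git/HEAD
commit=$(git rev-parse HEAD)
git cat-file -t "$commit"   # prints: commit
git cat-file -p "$commit"   # tree hash, author, committer, "commit A"
```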
If you want to know more about all of this, go check [the git book](https://git-scm.com/book/en/v2/Git-Internals-Git-Objects).

### One last thing

So by now, you should understand that all git does when you commit is “zip” your current working directory and store it into the objects folder with a bunch of other information. But if you’re familiar enough with the tool, you’ll know that you have complete control over which files should be included in the commit and which should not. I mean, a commit isn’t really a snapshot of your working directory; it is a snapshot of the files you want to commit. And where does git store the files you want to commit before making the actual commit? Well, it stores them in the index file. We’re not going to dig deeper into it now; meanwhile, if you’re really curious, you can always take a look at [this](https://github.com/git/git/blob/master/Documentation/technical/index-format.txt).

## Thank you for reading

I hope you learned something valuable from this post and that it will make your use of git easier. You can read part 2 [here](https://www.daolf.com/posts/git-series-part-2).

If you like JS, I've just published something you might like:

{% link https://dev.to/daolf/things-you-should-know-about-js-events-4k2l %}

Please tell me in the comments about your last git confusion, don't be shy 🙂 and, if you liked this post, do not forget to [subscribe](https://www.daolf.com/stay_updated/) to my newsletter, there is more to come (And you'll also get the first chapters of my next ebook for free 😎).
daolf
86,039
How to manage Local vs Dev vs Prod settings.py in latest Django RestAPI?
as we have some common settings in both local and prod; how to manage the settings that are different from local vs dev vs prod; such as database connections, installed apps, etc...
0
2019-02-25T20:31:34
https://dev.to/thammuio/how-to-manage-local-vs-dev-vs-production-settings-in-latest-django-restapi-28n6
help, django, python, discuss
---
title: How to manage Local vs Dev vs Prod settings.py in latest Django RestAPI?
published: true
description: as we have some common settings in both local and prod; how to manage the settings that are different from local vs dev vs prod; such as database connections, installed apps, etc...
tags: help, django, python, discuss
---

**Django Best Practices**

We have some settings that are common to Local, Dev, and Prod, and others that differ per environment, such as database connections, installed apps, etc. I would like to start my Django app with a custom settings_<env>.py file for each environment, so that we get better visibility into the settings and avoid git conflicts.
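For context, here's the shape I'm going for, sketched with plain dicts so it's self-contained. In a real project these would be modules (`settings/base.py`, `settings/local.py`, `settings/prod.py`, each env module doing `from .base import *`) selected through the `DJANGO_SETTINGS_MODULE` environment variable; the names below are only illustrative:

```python
import os

# settings/base.py equivalent: everything shared across environments.
BASE = {
    "INSTALLED_APPS": ["django.contrib.admin", "rest_framework"],
    "DEBUG": False,
}

# Each environment copies base and overrides only what differs,
# mirroring `from .base import *` followed by reassignments.
LOCAL = {**BASE, "DEBUG": True,
         "DATABASES": {"default": {"ENGINE": "django.db.backends.sqlite3"}}}
PROD = {**BASE,
        "DATABASES": {"default": {"ENGINE": "django.db.backends.postgresql"}}}

ENVIRONMENTS = {"local": LOCAL, "prod": PROD}

def active_settings():
    """Pick the settings for the current environment,
    the way DJANGO_SETTINGS_MODULE picks a module."""
    return ENVIRONMENTS[os.environ.get("APP_ENV", "local")]

print(active_settings()["DEBUG"])
```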
thammuio
86,592
How to starting software development
Deneme
0
2019-02-26T13:41:34
https://dev.to/enesfurkangenc/how-to-starting-software-development-2e3
--- title: How to starting software development published: true description: Deneme tags: ---
enesfurkangenc
86,902
Optional chaining: What is it, and how can you add it to your JavaScript application right now?
Optional chaining gives you a concise way to handle issues that crop up with values in JS that may be `undefined`.
0
2019-02-27T16:40:51
https://dev.to/nimmo/optional-chaining-what-is-it-and-how-can-you-add-it-to-your-javascript-application-right-now-37ie
javascript, optionalchaining, babel
--- title: Optional chaining: What is it, and how can you add it to your JavaScript application right now? published: true description: Optional chaining gives you a concise way to handle issues that crop up with values in JS that may be `undefined`. tags: javascript, optionalChaining, babel --- _This post assumes that you're already transpiling your JS applications with Babel (version 7+). If you're not, then this probably isn't the feature to convince you to add that into your build process, but it's still a proposed language feature that is worth being aware of._ You've seen these errors before, hiding in your logs, in your automated test readouts, in your console, in your devtools: `cannot read property "map" of undefined`. You spend time tracking down the error, and find the function in question: ```javascript const someFunction = someArray => someArray.map(someOtherFunction); ``` You spend even more time looking at the code that called this function. Sometimes that array really might be undefined. In this scenario you decide that it is `someFunction`'s responsibility to handle that. You update your code, and leave a helpful comment so that no-one else wastes time wondering why you're accounting for this: ```javascript const someFunction = (someArray) => { // Sometimes this is undefined: See [link to bug report] if (someArray === undefined) { return []; } return someArray.map(someOtherFunction); } ``` This works. But, you kind of liked the implicit return from the original example. A single expression in a function makes you feel more comfortable. No way anything else can sneak in there and cause problems. I like your thinking. You try again, with a single expression this time, using a default value: ```javascript const someFunction = (someArray = []) => // Sometimes this is undefined: See [link to bug report] someArray.map(someOtherFunction); ``` This works. But, now your helpful comment is a bit weird. 
Will someone think that the _output_ of this function is undefined, and accidentally account for that possibility elsewhere, even though this will always return an array? You imagine the confusion you've potentially caused, and the accumulated (hypothetical) cost to your company as a result.

You could make your comment clearer, but you want to solve this problem using JavaScript, not boring words. You could resign yourself to a ternary, but that would mean having to type `someArray` an extra time. Let's look at a new alternative:

## Enter `optional chaining`

With optional chaining, you have a new operator: `?.`

You can use `?.` on anything that you think might be undefined, which can save you from the most common and the most frustrating issues you see regularly in JS. For example:

```javascript
const withoutOptionalChaining =
  something &&
  something.someOtherThing &&
  something.someOtherThing.yetAnotherThing

const withOptionalChaining = something
  ?.someOtherThing
  ?.yetAnotherThing
```

It's crucial to understand how the two differ: `&&` stops at the first falsy value and returns it, while `?.` stops only on `null` or `undefined` and produces `undefined`. So if `someOtherThing` is `undefined`, both examples evaluate to `undefined`; but if it is a different falsy value such as `false` or `0`, the `withoutOptionalChaining` example evaluates to that falsy value, where the `withOptionalChaining` example still evaluates to `undefined`.

As you're aware if you've written JS for anything more than a day, `undefined is not a function`. But, what if that didn't matter?

```javascript
const someValue = someObject.someFunction?.() // returns `undefined` rather than a runtime error if `someFunction` is undefined!
```

## I'm in. But, how?

Fortunately, there's a Babel plugin for this: [@babel/plugin-proposal-optional-chaining](https://www.npmjs.com/package/@babel/plugin-proposal-optional-chaining)

Install that plugin with `npm`, and [add it to your babel config](https://babeljs.io/docs/en/plugins) via your chosen configuration option.

Depending on the rest of your Babel config, you may also find that you end up with an error about `regenerator runtime` not being defined.
If so, you may need to add the [@babel/plugin-transform-runtime](https://babeljs.io/docs/en/babel-plugin-transform-runtime) as well, and configure it like so:

```javascript
['@babel/plugin-transform-runtime',
  {
    regenerator: true,
  },
]
```

If you're using ESLint, you'll find that it isn't too happy about this new operator. You'll also need to set [babel-eslint](https://github.com/babel/babel-eslint) as the parser in your ESLint config.

And that's it. Now you ought to be able to use optional chaining as much as you want to in your application. Let's look again at that original code:

```javascript
const someFunction = someArray =>
  someArray
    // Sometimes this is undefined: See [link to bug report]
    ?.map(someOtherFunction) || [];
```

There we have it, another option for solving our problem. Do you always want to do this? Absolutely not: there are times when you [probably do want a runtime error](https://dev.to/nimmo/sometimes-in-the-heat-of-the-moment-its-forgivable-to-cause-a-runtime-exception-2ko2) after all. But for the rest of the time, optional chaining is a great addition to your toolkit.

## Disclaimer

Optional chaining is currently at Stage 1 in the proposal process, so whether or not you are willing to incorporate it right now is up to you.
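As a quick sanity check once your setup works (or on any runtime that supports the operator), you can confirm the difference between the `&&` chain and `?.` for yourself:

```javascript
const something = { someOtherThing: false }

// `&&` stops at the first falsy value and returns it...
const viaAnd =
  something && something.someOtherThing && something.someOtherThing.yetAnotherThing
console.log(viaAnd) // false

// ...while `?.` only bails out on null/undefined, and then yields undefined.
const viaChain = something?.someOtherThing?.yetAnotherThing
console.log(viaChain) // undefined
```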
nimmo
107,711
Time Tracking: Kimai2 0.9 on OpenBSD
Kimai is a time-tracking app. It's open source, published under the MIT license, and based on the latest PHP Symfony. I'll show how to install Kimai2 in this post.
880
2019-05-12T13:33:44
https://dev.to/nabbisen/time-tracking-kimai2-1-0-on-openbsd-9k
kimai, timetracking, opensource, installation
---
title: "Time Tracking: Kimai2 0.9 on OpenBSD"
published: true
description: Kimai is a time-tracking app. It's open source, published under the MIT license, and based on the latest PHP Symfony. I'll show how to install Kimai2 in this post.
tags: kimai, timetracking, opensource, installation
cover_image: https://thepracticaldev.s3.amazonaws.com/i/rmxvn7gjx8f3ajpaelby.png
series: Time Tracking - Kimai on OpenBSD
canonical_url: https://dev.to/nabbisen/time-tracking-kimai2-1-0-on-openbsd-9k
---

## Introduction

[Kimai](https://www.kimai.org/) is an open source time-tracking app. There are two major versions: [Kimai1](https://github.com/kimai/kimai), which is [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html) licensed, and [Kimai2](https://github.com/kevinpapst/kimai2), which is [MIT](https://opensource.org/licenses/MIT) licensed.

In this post, I'll show how to install Kimai2, the latest version. It is based on [Symfony](https://symfony.com/), the robust [PHP](https://php.net) framework. Besides, Kimai2 is great at [keeping up with Symfony's updates](https://github.com/kevinpapst/kimai2/pull/710) :)

#### Environment

- OS: [OpenBSD](https://www.openbsd.org/) 6.5
- Database: [MariaDB](https://mariadb.org/) 10.0
- Web Server: [httpd](https://man.openbsd.org/httpd.8)
- App Server: [PHP](https://php.net/) 7.2
- CGI: [PHP-FPM](https://php-fpm.org/)
- Package Manager: [Composer](https://getcomposer.org/)
- Time-Tracking App: Kimai2 0.9

## Tutorial

The official installation manual is [here](https://www.kimai.org/documentation/installation.html).
#### Requirements

First of all, you have to have:

- [httpd as web server](https://dev.to/nabbisen/setting-up-openbsds-httpd-web-server-4p9f)
- [php-fpm](https://dev.to/nabbisen/php-fpm-on-openbsd-2iof)
- PHP [Composer](https://dev.to/nabbisen/almost-php-72-composer-with-openbsd-64-100o)
- [MariaDB as database server](https://dev.to/nabbisen/installing-mariadb-server-on-openbsd-5lm)

Additionally, [HTTPS is recommended](https://dev.to/nabbisen/lets-encrypt-certbot-for-openbsds-httpd-3ofd). All the links in this section are to tutorials.

#### DB server

Create the database and user:

```sql
CREATE DATABASE <database> CHARACTER SET = 'utf8mb4';
GRANT ALL PRIVILEGES ON <database>.* TO <dbuser> IDENTIFIED BY '<dbpass>';
```

#### App server

Get the package:

```console
$ git clone -b 0.9 --depth 1 https://github.com/kevinpapst/kimai2.git
$ cd kimai2/
```

Then modify the permissions:

```console
# chown -R :www .
# chmod -R g+r .
# chmod -R g+rw var/
```

Also configure the system:

```console
$ cp -p .env.dist .env
$ nvim .env
```

Here is the minimum to change in `.env`:

```diff
- APP_SECRET=change_this_to_something_unique
+ APP_SECRET=<some_salt_key_long_enough>
...
- DATABASE_URL=sqlite:///%kernel.project_dir%/var/data/kimai.sqlite
+ DATABASE_URL=mysql://<dbuser>:<dbpass>@<dbhost>:<dbport>/<database>
```

Let's install via Composer. Using `php-7.2 /usr/local/libexec/composer.phar` is the trick for OpenBSD installation.
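Before running the install, it may help to see those two `.env` lines filled in. The values below are purely hypothetical (database `kimai2`, user `kimai`, password `secret`, MariaDB on localhost's default port); substitute your own:

```ini
APP_SECRET=a-long-random-string-of-your-own
DATABASE_URL=mysql://kimai:secret@127.0.0.1:3306/kimai2
```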
```console
$ php-7.2 /usr/local/libexec/composer.phar install --no-dev --optimize-autoloader
```

The result is:

```console
Loading composer repositories with package information
Installing dependencies from lock file
Package operations: 138 installs, 0 updates, 0 removals
  - Installing ocramius/package-versions (1.4.0): Downloading (100%)
  - Installing kimai/kimai2-composer (0.1): Downloading (100%)
  - Installing symfony/flex (v1.2.3): Downloading (100%)

Prefetching 101 packages 🎶 💨
  - Downloading (100%)

  - Installing beberlei/doctrineextensions (v1.2.0): Loading from cache
...
  - Installing white-october/pagerfanta-bundle (v1.2.4): Loading from cache
Generating optimized autoload files
ocramius/package-versions: Generating version class...
ocramius/package-versions: ...done generating version class
Executing script cache:clear [OK]
Executing script assets:install [OK]
```

Then create the schema in the database created above:

```console
$ php-7.2 bin/console doctrine:schema:create
```

The result is:

```console
 !
 ! [CAUTION] This operation should not be executed in a production environment!
 !

 Creating database schema...

 [OK] Database schema created successfully!
```

Here you will catch the warning above. [The official document says](https://www.kimai.org/documentation/installation.html#recommended-setup):

> You can safely ignore the message: This operation should not be executed in a production environment!

Well, let's go ahead:

```console
$ php-7.2 bin/console cache:warmup --env=prod
```

The result is:

```console
 // Warming up the cache for the prod environment with debug false

 [OK] Cache for the "prod" environment (debug=false) was successfully warmed.
```

The system is now almost ready. Create a user as the system administrator:

```console
$ php-7.2 bin/console kimai:create-user <username> <email-address> ROLE_SUPER_ADMIN
```

The result is:

```console
Please enter the password:

 [OK] Success! Created user: <username>
```

The last step is preparing the web server.
Add the definition below to [`/etc/httpd.conf`](https://man.openbsd.org/httpd.conf.5):

```apache
server "<fqdn>" {
    listen on $ext_addr port 80
    block return 301 "https://$SERVER_NAME$REQUEST_URI"
}
server "<fqdn>" {
    listen on $ext_addr tls port 443
    tls {
        certificate "/etc/letsencrypt/live/<fqdn>/fullchain.pem"
        key "/etc/letsencrypt/live/<fqdn>/privkey.pem"
    }

    # create unique log files (optional):
    log {
        access "<fqdn>-access.log"
        error "<fqdn>-error.log"
    }

    # the document root is the directory named `public`:
    root "/<...>/kimai2/public"
    directory index index.php

    location "/*.php" {
        fastcgi socket "/run/php-fpm.sock"
    }
    location "/*.php[/?]*" {
        fastcgi socket "/run/php-fpm.sock"
    }
    # if directories are accessed, call `index.php` with the path as a url parameter:
    location match "^/(.*)/[^\.]+/?$" {
        fastcgi socket "/run/php-fpm.sock"
        request rewrite "/index.php/%1"
    }
}
```

Restart it:

```console
# rcctl restart httpd
```

## Conclusion

Now all the servers - database, app and web - are ready and listening. Let's access \<fqdn\> with any web browser:

![login form](https://thepracticaldev.s3.amazonaws.com/i/orsmvc14j52izbc40ag3.png)

![succeeded](https://thepracticaldev.s3.amazonaws.com/i/0lg0pq8klf2o3by4u65t.png)

Voilà 😆 I like this simple UI 🥳

Thank you for reading. Happy computing.
nabbisen
110,257
Meetups, the right way
Meetups, the right way Under this provocative, clickbait title, I hope this feedback can...
0
2019-05-19T15:04:55
https://dev.to/paulleclercq/meetups-the-right-way-38bi
meetup
# Meetups, the right way

Under this provocative, clickbait title, I hope this feedback can help meetup organizers better target their audience and make meetups more valuable for everyone.

This feedback comes from meetups with different people, in different cities and countries: Montpellier, Paris, Marseille, New York, Montréal and San Francisco.

## Rule #1: No Q&A (Question & Answer)

I'm a simple person: I hate wars, I hate global warming deniers, I hate poverty, and… I hate Q&A at the end of conferences or meetups. I wonder why this is an absolute norm.

During a meetup, you have the great power of **rare collective time**, so please use it wisely! Instead of a Q&A session (often as long as the speaker's talk), add a tiny talk with no slides from someone in the audience.

Example: "I am working on this interesting project at my company because we did this thing this particular way."

### Everyone should be able to express his/her opinion

It's hard to express ourselves in front of an audience, and let's be honest, the tech community has more introverts than other communities. Having an open mic Q&A is not fair to everyone.

### Alternatives

Propose an interactive quiz with [Kahoot](https://kahoot.com/), or an interactive survey on a slide at the beginning/end to get to know your audience. Have an [MC](https://en.wikipedia.org/wiki/Master_of_ceremonies) to animate the audience and group/filter questions, or a person who has already prepared questions for a round table discussion. They can also highlight someone in the audience they know by doing a 3 minute interview on what that person is working on.

{% twitter 1089145441818677248 %}

## Rule #2: No Q&A

**Just to be sure** 😉

Plus, by following these rules, you will be able to filter out the assholes who waste collective time by asking a specific question about version 6.2.X of an obscure piece of software.
They just want to look smart in front of everybody by saying something that nobody can understand, and it will make you feel shitty about not knowing it. We've all experienced something like this during meetups; it's time to say no more.

## Comfort and location

To attract more people, the venue must be able to comfortably host them. That may seem obvious, but no. I'm sure we've all been to venues without enough (personal) space. A university lecture hall is ideal.

## Food

> We want beers and pizzas 🍕

No. Developers are normal people; they can eat different types of food. Be sure to propose something that does not contain meat, so everyone feels welcome.

## Set and say your rules

Have an agenda for the evening.

Say it's totally fine to leave the room at any time; it will not be taken as an insult to the speaker. Attendees can also wait for a break to leave the room if they want to show more respect to the speaker.

It's totally OK not to applaud at the end of a talk. If you liked the talk, say it directly to the speaker, or contact her/him on twitter/email later.

By setting some rules, some people blocked by the stress of answering live questions might dare to present a subject, which would be great for the meetup's diversity.

## Start early, finish early

Companies should support employees leaving early when a free meetup happens; it's in their own interest.

I do not want to, and cannot, be attentive for several hours after 6pm, especially for technical subjects, and especially if I'm hungry. Make it short, and be sure to… (read below 😛)

## Have enough time for networking

What I mean by networking is not sharing business cards or recruiting people by talking about how disruptive your company is. It's making sure to say hi, being friendly, and smiling at other people. They share the same passion as you; you share at least one interest together, which is rare in this world, so meet new people!
A great icebreaker is to ask all attendees to talk for 4 minutes with the person next to them before the conference begins.

Nothing is more painful than seeing an HR person at a 2 hour technical meetup; they clearly want to be somewhere else. Be empathetic, say hi to them, and if you are looking for some fresh air you can help each other by exchanging information 😄

## Record talks

It's the best advertising you can have to attract more people next time. I recommend [this article by Thomas Gasc @meltybro (french)](https://methylbro.fr/aventure/captation-video-des-meetups-au-live-streaming/)

___

Organizing meetups is no joke; thanks to all the people who do it. I would not be the dev I am today without them: [Ubuntu](https://en.wikipedia.org/wiki/Ubuntu_philosophy).

**What is your top advice for meetup organizers?**

___

Special thanks to:

* https://twitter.com/NDuforet
* https://twitter.com/JDarsel

{% user aurel_tyson %}
paulleclercq
115,022
Project Euler #4 - Largest Palindrome Product
Solving problem #4 from Project Euler, largest palindrome product
1,012
2019-05-28T13:46:49
https://dev.to/peter/project-euler-4-largest-palindrome-product-3165
projecteuler, challenge
---
title: "Project Euler #4 - Largest Palindrome Product"
published: true
description: "Solving problem #4 from Project Euler, largest palindrome product"
tags: projecteuler, challenge
series: Project Euler
---

I'm bumping a "series" that I started last year, where we collectively work on problems from [Project Euler](https://projecteuler.net). This is [Problem 4](https://projecteuler.net/problem=4), finding the largest palindrome product.

> A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
>
> Find the largest palindrome made from the product of two 3-digit numbers.
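If you'd like a starting point before peeking at the solutions in the comments, here is one possible brute-force sketch in JavaScript, just one approach among many and not necessarily the fastest:

```javascript
// One brute-force approach: check every product of two n-digit numbers,
// working downwards, and keep the largest palindrome seen.
function isPalindrome(n) {
  const s = String(n);
  return s === [...s].reverse().join('');
}

function largestPalindromeProduct(digits = 3) {
  const max = 10 ** digits - 1;     // 999 for 3 digits
  const min = 10 ** (digits - 1);   // 100 for 3 digits
  let best = 0;
  for (let a = max; a >= min; a--) {
    // start b at a to avoid checking each pair twice
    for (let b = a; b >= min; b--) {
      const product = a * b;
      // for a fixed a, products only shrink as b decreases,
      // so we can stop once they can't beat the current best
      if (product <= best) break;
      if (isPalindrome(product)) best = product;
    }
  }
  return best;
}

console.log(largestPalindromeProduct(2)); // 9009, matching the 91 × 99 example
console.log(largestPalindromeProduct(3));
```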
peter
118,253
Blog post: Release 1.46 of Workflow (Perl)
a release announcement
0
2019-06-05T10:18:56
https://dev.to/jonasbn/blog-post-release-1-46-of-workflow-perl-5600
perl, workflow, release
---
title: "Blog post: Release 1.46 of Workflow (Perl)"
published: true
description: a release announcement
tags: perl, workflow, release
---

I have just released [Workflow](https://jonasbn.github.io/perl-workflow/) 1.46, a library for building simple state machines in Perl. The release contains a simple patch from an external contributor, Oliver Welter.

Not much has happened with Workflow for a long time; the [latest release was back in 2017](https://github.com/jonasbn/perl-workflow/releases/tag/1.45). So it was a pleasant surprise to receive [a PR](https://github.com/jonasbn/perl-workflow/pull/16).

I had to address some issues with the Travis configuration to get a successful build. The issue was due to my [Dist::Zilla](http://dzil.org/) (`dzil`) configuration requiring a newer [Perl](https://www.perl.org/) than listed in the Travis configuration.

After some [yak shaving](https://en.wiktionary.org/wiki/yak_shaving), the second build demonstrated [an older known bug](https://github.com/jonasbn/perl-workflow/issues/10), which pops up once in a while as a friendly reminder that I have to find the time to address this particular issue.

Anyway, I was able to get a release shipped quite quickly, not because the bug was critical, but simply to avoid having PRs hanging around for too long and out of respect for the contributor - thanks again, Oliver.

This brings me to emphasize some of the interesting aspects of _my_ software development life cycle ([SDLC](https://en.wikipedia.org/wiki/Software_development_process)), which were demonstrated with this release:

1. The ability to evaluate issue reports and change requests easily
1. The ability to build _swiftly_ and immediately
1.
The ability to release _swiftly_ and immediately

The first part was bound to the branching strategy and to having an established [toolchain](https://en.wikipedia.org/wiki/Toolchain), meaning: reviewing and consolidating changes (merging) and testing, all of this using Perl, Dist::Zilla, Git/GitHub and the marvelous Perl test libraries.

The second part was the ability to perform continuous integration (CI) of an incoming change from a branch (pull request) to a master branch that is _stable_ and always in a known state, here using the same tools as listed above plus Travis.

The third part, packaging and releasing, could be accomplished with ease using Dist::Zilla and [PAUSE](https://pause.perl.org/pause/query?ACTION=pause_04about)/CPAN.

If you want to read more about [my feedback loops](https://dev.to/jonasbn/blog-post-feedback-loops-1gm5) involved in the above process, I have written about it previously.
jonasbn
128,753
Right things to do after changing to better domain’s host
Many things to do after pointing the domain to another host for whatever rea...
0
2019-06-26T13:11:59
https://kevinhq.com/things-to-do-after-changing-the-domains-host/
webhosting, domain
---
title: Right things to do after changing to better domain’s host
published: true
tags: webhosting, domain
canonical_url: https://kevinhq.com/things-to-do-after-changing-the-domains-host/
---

Many things need to be done after pointing your domain to another host, for whatever reason. Missing one of them may lead to disaster on your side. There are many things we usually miss doing after changing the domain’s host. By changing the domain’s host, I mean when you point the domain you own… [Read More »Right things to do after changing to better domain’s host](https://kevinhq.com/things-to-do-after-changing-the-domains-host/)
kevinhq
128,932
Using Browser Custom Events
A lot of times when writing things you may want want to react to certain events on your page. We do...
0
2019-06-26T18:35:08
https://dev.to/dropconfig/using-browser-custom-events-1do9
javascript, webdev, beginners
A lot of times when writing things you may want to react to certain events on your page. We do this all the time with built-in ones like `click` or `keydown`.

But we can also make our own custom events and have the browser handle all the work for us! Since it's part of the DOM API, we get free event code without installing another lib or rolling our own buggy version.

[`CustomEvent`](https://developer.mozilla.org/en-US/docs/Web/API/CustomEvent/CustomEvent) is what we will be using. We'll wrap it a little to make it a bit neater to use as well.

## Making a custom event

It's pretty simple:

```javascript
const event = new CustomEvent("myevent", {detail: {some: "data"}});
document.dispatchEvent(event);
```

Notice we had to put our own custom data under the `detail` key of the init object. This is just a quirk of how they work.

## Listening for a custom event

```javascript
function eventHandler(event){
    const data = event.detail;
    console.log(data);
}
document.addEventListener("myevent", eventHandler)
```

## Stopping listening

```javascript
document.removeEventListener("myevent", eventHandler);
```

Pretty easy stuff. What's great is, we can also dispatch the event on an element so it doesn't bubble up the DOM, keeping our code even more modularized. Just replace `document` with another element you have.

## A little wrapper

Because it's a little bit cumbersome to have to write all that every time you want to use an event, let's wrap it just a little.
```javascript
function publish(eventName, data){
    const event = new CustomEvent(eventName, {detail: data});
    document.dispatchEvent(event);
}

const events = [];

function subscribe(eventName, cb){
    const wrapper = function(event){
        cb(event.detail);
    }
    document.addEventListener(eventName, wrapper);
    events.push({
        cb,
        wrapper,
        eventName
    })
}

function unsubscribe(eventName, cb){
    // iterate backwards so splicing doesn't skip entries
    for(let i = events.length - 1; i >= 0; i--){
        const event = events[i];
        if(event.eventName === eventName && cb === event.cb){
            document.removeEventListener(eventName, event.wrapper);
            events.splice(i, 1); // drop the stale entry too
        }
    }
}

export {subscribe, unsubscribe, publish};
```

### Usage

```javascript
function eventHandler(data){
    console.log(data);
}

subscribe("myEvent", eventHandler)

publish("myEvent", {some: "data"});

unsubscribe("myEvent", eventHandler);
```

Et voilà!

If you like my stuff please check out my site https://dropconfig.com
powerc9000
129,146
while loops that have an index
Perl5 has a syntax that allows you to use a while loop without having to explicitly increment an...
0
2019-06-27T10:28:45
https://dev.to/smonff/while-loops-that-have-an-index-56h2
perl, testing, loop, each
Perl5 has a syntax that allows you to use a while loop without having to explicitly increment an index with `i++`. It is made possible by the [`each` function](https://perldoc.pl/functions/each).

Let's demonstrate this in a simple test that checks that an array and an array ref contain the same things:

```perl
# t/01_foo_order.t
use v5.18;
use Test::More tests => 3;

my $events_arr_ref  = get_events();
my @expected_events = ('foo', 'bar', 'baz');

while ( my ( $i, $event ) = each( @$events_arr_ref )) {
    is @$events_arr_ref[$i], $expected_events[$i],
        "Array element [ $i ] value is $expected_events[$i]";
}

done_testing();

sub get_events {
    return [ 'foo', 'bar', 'baz' ];
}
```

Let's execute our test:

```bash
prove -v t/01_foo_order.t

1..3
ok 1 - Array element [ 0 ] value is foo
ok 2 - Array element [ 1 ] value is bar
ok 3 - Array element [ 2 ] value is baz
ok
All tests successful.
Files=1, Tests=3,  0 wallclock secs ( 0.03 usr  0.00 sys +  0.07 cusr  0.00 csys =  0.10 CPU)
Result: PASS
```

`while ( my ( $i, $event ) = each( @$events_arr_ref )) {}` makes it possible to iterate over the `$events_arr_ref` array reference and, for each element found, to initialize `$i` and `$event` with the right values. This is much the same as a `for` loop, except that you don't have to increment the index, and it should only be used when you intend to iterate over the whole array. I use it quite often; it can be handy if you want to avoid `$_`.

Just yet another [TIMTOWTDI](https://en.wikipedia.org/wiki/There%27s_more_than_one_way_to_do_it)...

**Sources**:

- [*automatically get loop index in foreach loop in perl*](https://stackoverflow.com/questions/974656/automatically-get-loop-index-in-foreach-loop-in-perl) on StackOverflow
- The [Perl5 documentation for the *`each` function*](https://perldoc.pl/functions/each)
smonff
129,182
Fork Me! FCC: Test Suite Template
A post by Sherry
0
2019-06-27T13:08:42
https://dev.to/sherrykay/fork-me-fcc-test-suite-template-5mj
codepen
{% codepen https://codepen.io/sherryk/pen/agVWzr %}
sherrykay
129,227
Poll: why does clicking a DEV comment link display the comment in isolation?
Do you like DEV's ux for linking to comments?
0
2019-06-27T14:34:36
https://dev.to/johncarroll/poll-why-does-clicking-a-dev-comment-link-display-the-comment-in-isolation-2k5n
discuss, meta
---
title: "Poll: why does clicking a DEV comment link display the comment in isolation?"
published: true
description: Do you like DEV's UX for linking to comments?
tags: discuss, meta
---

For those who aren't familiar: when you click on a comment link on DEV.to, you are taken to a page which displays only that comment in isolation ([example here](https://dev.to/ben/comment/329n)). If you want more context, you can (must) click the `VIEW POST` or `VIEW FULL DISCUSSION` links. Medium works similarly.

## Question: Do you like this UX?

To me, a comment is inherently contextual.

- Is there ever a time when viewing a comment without context is desirable? When / why?
- Is there a technical reason for the current UX?
- GitHub comment links are powered by anchor tags, so the browser simply displays the full page and scrolls to the relevant section (very intuitive).
johncarroll