329,379
Converting Excel to PDF in Java Application
PDF is a portable document format that cannot be easily edited or modified. So sometimes when we send...
0
2020-05-07T06:09:03
https://dev.to/codesharing/converting-excel-to-pdf-in-java-application-474i
excel, java, pdf
PDF is a portable document format that cannot be easily edited or modified. So when we send Excel files to others and don't want the important formulas to be viewed or modified, we can convert the Excel file to PDF. This article demonstrates two methods of converting Excel to PDF using Free Spire.XLS for Java: (1) converting the whole Excel workbook to PDF, and (2) converting a single Excel worksheet to PDF.

**Installation**

**Method 1:** Download [Free Spire.XLS for Java](https://www.e-iceblue.com/Download/xls-for-java-free.html) and unzip it. Then add the Spire.Xls.jar file to your project as a dependency.

![](https://user-images.githubusercontent.com/63445660/81258850-2ed68d00-9069-11ea-927c-11db90113036.png)

**Method 2:** If you use Maven, you can add the jar dependency by adding the following configuration to your pom.xml:

```xml
<repositories>
    <repository>
        <id>com.e-iceblue</id>
        <name>e-iceblue</name>
        <url>http://repo.e-iceblue.com/nexus/content/groups/public/</url>
    </repository>
</repositories>
<dependencies>
    <dependency>
        <groupId>e-iceblue</groupId>
        <artifactId>spire.xls.free</artifactId>
        <version>2.2.0</version>
    </dependency>
</dependencies>
```

**The Excel test document, which includes two worksheets:**

![](https://user-images.githubusercontent.com/63445660/81258957-6e04de00-9069-11ea-9162-f2274df37941.png)

**Example 1: Spire.XLS for Java offers the workbook.saveToFile() method to save the whole Excel workbook to PDF in Java.**

```java
import com.spire.xls.*;

public class ExcelToPDF {
    public static void main(String[] args) {
        // Load the input Excel file
        Workbook workbook = new Workbook();
        workbook.loadFromFile("Input.xlsx");

        // Fit each worksheet to one page
        workbook.getConverterSetting().setSheetFitToPage(true);

        // Save as a PDF document
        workbook.saveToFile("ExcelToPDF.pdf", FileFormat.PDF);
    }
}
```
![](https://user-images.githubusercontent.com/63445660/81259013-91c82400-9069-11ea-98cc-8966e486c57f.png)

**Example 2: Spire.XLS for Java offers the worksheet.saveToPdf() method to save a single Excel worksheet to PDF in Java.**

```java
import com.spire.xls.*;

public class ExcelToPDF {
    public static void main(String[] args) {
        // Load the input Excel file
        Workbook workbook = new Workbook();
        workbook.loadFromFile("Input.xlsx");

        // Get the second worksheet
        Worksheet worksheet = workbook.getWorksheets().get(1);

        // Save as a PDF document
        worksheet.saveToPdf("ToPDF2.pdf");
    }
}
```

![](https://user-images.githubusercontent.com/63445660/81259058-aad0d500-9069-11ea-86c7-317374648787.png)
codesharing
329,392
Reimplementing JavaScript Array methods
Mastering JavaScript Array methods by reimplementing them
0
2020-05-07T06:54:05
https://dev.to/webit/reimplementing-javascript-array-methods-46bl
javascript, arrays, programming, webdev
---
title: Reimplementing JavaScript Array methods
published: true
description: Mastering JavaScript Array methods by reimplementing them
tags: javascript, arrays, programming, webdev
---

*This is a cross-post from Medium, where I published it first.*

![Array.filter() implementation](https://dev-to-uploads.s3.amazonaws.com/i/x3i280me2xhqijjrsc1v.png)

Some time ago I found a list of [JavaScript tasks](https://github.com/Przemocny/zbior-zadan-html-css-js-react/blob/master/JS/). They cover all developer career levels (Newbie/Junior/Mid) and are a fun way to practice programming. NOTE: those tasks are written in Polish, but I'm going to translate the task requirements to English :)

I decided to give it a try and reimplement some of the commonly used JavaScript [Array methods](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array).

# The task

> Task #6 — Array methods
> Since JS is a functional language, it's worth mastering its basic methods:
>
> .map
> .filter
> .reduce
> .reduceRight
> .every
> .some
> .entries
>
> Create functions that work the same way as the original Array methods. The functions have to use for or while loops.

We also got function signatures:

```javascript
function mapFn(array, callback){}
function filterFn(array, callback){}
function reduceFn(array, callback, initial){}
function reduceRightFn(array, callback, initial){}
function everyFn(array, callback){}
function someFn(array, callback){}
function entriesFn(array){}
```

Easy, right? Let's check…

## Array.map()

> The ***map()*** method creates a new array populated with the results of calling a provided function on every element in the calling array. — [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map)

That was easy to build. All we need is to execute the callback function on each array element and push the returned value into a new array. When finished iterating over the elements — return the new array.
Pretty easy…

```javascript
function mapFn(array, callback) {
  const out = [];
  for (let i of array) {
    out.push(callback(i));
  }
  return out;
}
```

## Array.filter()

> The ***filter()*** method **creates a new array** with all elements that pass the test implemented by the provided function. — [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter)

Again, nothing fancy here. We need to create a new array and push elements into it only if the callback's test passes:

```javascript
function filterFn(array, callback) {
  const out = [];
  for (let i of array) {
    callback(i) && out.push(i);
  }
  return out;
}
```

## Array.reduce()

> The ***reduce()*** method executes a ***reducer*** function (that you provide) on each element of the array, resulting in a single output value. — [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce)

Reduce required a bit more work. The callback accepts up to 4 parameters, and the function itself takes an optional initial value. If the initial value is omitted, we need to use the 1st array element as the accumulator's starting value.

The callback function accepts 4 parameters:

1. accumulator (accumulates the callback's return values)
2. currentValue (current array element value)
3. index (current array index)
4. array (the complete input array)

```javascript
function reduceFn(array, callback, initial) {
  let out = initial;
  let start = 0;
  // in case the initial value is missing, take the 1st element of the array
  if (out === undefined) {
    out = array[0];
    start = 1;
  }
  for (let i = start; i < array.length; i++) {
    out = callback(out, array[i], i, array);
  }
  return out;
}
```

## Array.reduceRight()

> The ***reduceRight()*** method applies a function against an accumulator and each value of the array (from right-to-left) to reduce it to a single value.
— [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/ReduceRight)

It is similar to the previous function, but it starts executing the callback from the right (from the end), iterating from the highest array index to the lowest. As with `Array.reduce()`, the initial value can be omitted — in that case we need to use an array element instead, and for `reduceRight()` it's the last item of the array!

```javascript
function reduceRightFn(array, callback, initial) {
  let index = array.length;
  let out = initial;
  // in case the initial value is missing, take the last element of the array
  if (out === undefined) {
    out = array[--index];
  }
  while (--index > -1) {
    out = callback(out, array[index], index, array);
  }
  return out;
}
```

## Array.every()

> The ***every()*** method tests whether all elements in the array pass the test implemented by the provided function. It returns a Boolean value. — [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/every)

According to the description, we need to build a function that checks whether every element in the array passes the callback test. That means if at least one check doesn't pass — we return false. That's all!

```javascript
function everyFn(array, callback) {
  for (let i of array) {
    if (!callback(i)) {
      return false;
    }
  }
  return true;
}
```

This simple solution also covers a special case:

> **Caution:** Calling this method on an empty array will return true for any condition!

## Array.some()

> The ***some()*** method tests whether at least one element in the array passes the test implemented by the provided function. It returns a Boolean value.
— [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some)

As you can see, `Array.some()` is similar to `Array.every()` — the difference is subtle: we return `true` as soon as at least one element passes the callback test.

```javascript
function someFn(array, callback) {
  for (let i of array) {
    if (callback(i)) {
      return true;
    }
  }
  return false;
}
```

Again, the special case is covered:

> **Caution:** Calling this method on an empty array returns false for any condition!

## Array.entries()

> The ***entries()*** method returns a new ***Array Iterator*** object that contains the key/value pairs for each index in the array. — [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/entries)

This one was the most challenging for me, most probably because I rarely create custom iterators or work with generators… Even so, I think I've nailed it? ;)

```javascript
function entriesFn(array) {
  const out = {};
  out[Symbol.iterator] = function* () {
    for (let i in array) {
      yield [+i, array[i]];
    }
  };
  return out;
}
```

What do you think? Do you like practice tasks like these?
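A quick way to sanity-check reimplementations like these is to run them side by side with the built-ins on the same input. The snippet below is a small, self-contained sketch (not part of the original task): it repeats a `mapFn` in the spirit of the article, plus a `reduceFn` variant that seeds the accumulator from the first element when no initial value is given, and compares the results with `Array.map()` and `Array.reduce()`.

```javascript
// Reimplementations repeated here so the snippet is self-contained.
function mapFn(array, callback) {
  const out = [];
  for (let i of array) {
    out.push(callback(i));
  }
  return out;
}

function reduceFn(array, callback, initial) {
  let out = initial;
  let start = 0;
  // no initial value: seed the accumulator with the first element
  if (out === undefined) {
    out = array[0];
    start = 1;
  }
  for (let i = start; i < array.length; i++) {
    out = callback(out, array[i], i, array);
  }
  return out;
}

const nums = [1, 2, 3, 4];

// Compare against the built-ins on the same input.
console.log(mapFn(nums, x => x * 2));             // [ 2, 4, 6, 8 ]
console.log(nums.map(x => x * 2));                // [ 2, 4, 6, 8 ]
console.log(reduceFn(nums, (acc, x) => acc + x)); // 10
console.log(nums.reduce((acc, x) => acc + x));    // 10
```

Running the custom and built-in versions over a handful of inputs (empty arrays included) is a cheap way to catch the edge cases the MDN quotes warn about.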
webit
329,460
git deep dive part 3: HEAD, index and working tree
git deep dive
0
2020-05-07T15:42:53
https://dev.to/kenakamu/git-deep-dive-part-3-head-index-and-working-tree-3oim
git
---
title: git deep dive part 3: HEAD, index and working tree
published: true
description: git deep dive
tags: git
---

In the [previous article](https://dev.to/kenakamu/git-deep-dive-part-2-additional-commit-to-master-and-new-branch-4n8f), I added branches and commits. Though I now have a good idea of what's happening behind the scenes, it's important to understand the concept of the three "trees" and how ``reset`` and ``checkout`` work.

# Three trees

HEAD, index and working tree are the three trees in git. The official documentation has a very clear explanation of the three-trees concept as part of the [Reset Demystified](https://git-scm.com/book/en/v2/Git-Tools-Reset-Demystified) article.

### HEAD

As we already examined, I only need a commit to get the entire snapshot. HEAD points to the current commit, which is usually the latest commit of the current branch, so we can consider HEAD the last committed snapshot.

```shell
gitDeepDive> git ls-tree -r HEAD
100644 blob 44f41854d770f2a38d368936b14975d280cbd950    docs/article.txt
100644 blob 2b366bf2f2784dbf26fcd56e1cedb3afc1345753    docs/news.txt
100644 blob 20fe8be9820a49252e2a4dd37a60e678cd5cda14    hello.txt
```

### index

The index contains the "staged items". It maintains references to the latest blobs. Initially, it has the same ids as the last commit. If I modify any file and run ``git add``, the index holds the latest id, which differs from the last commit.

```shell
gitDeepDive> git ls-files --stage
100644 44f41854d770f2a38d368936b14975d280cbd950 0       docs/article.txt
100644 2b366bf2f2784dbf26fcd56e1cedb3afc1345753 0       docs/news.txt
100644 20fe8be9820a49252e2a4dd37a60e678cd5cda14 0       hello.txt
```

### Working Tree (a.k.a. Working Directory)

The working tree is the root folder (the gitDeepDive folder in this case). I usually modify files in this directory and git takes care of the rest.

# What does staging a file mean?

When I run ``git add``, I stage files. git creates blobs and ids behind the scenes. Let's see the behavior again.
### Modify files and stage them

Run the following commands to create a new file and stage it.

```shell
git checkout dev
echo "cool blog" > blog.txt
echo "The third line" >> hello.txt
git add .\blog.txt .\hello.txt
```

Then, check the index. blog.txt is added, and the id of hello.txt has changed because I modified it.

```shell
gitDeepDive> git ls-files --stage
100644 a3b6a8ee47c62758ed838e056da40f4c83fdc55a 0       blog.txt
100644 44f41854d770f2a38d368936b14975d280cbd950 0       docs/article.txt
100644 2b366bf2f2784dbf26fcd56e1cedb3afc1345753 0       docs/news.txt
100644 e4838ed8db44f8614513a8c1417e408b9d1b367c 0       hello.txt
```

The following objects were created or modified in the .git directory:

- the index file was updated
- a3 and e4 folders were added in objects
- b6a8ee47c62758ed838e056da40f4c83fdc55a was added in the a3 folder
- 838ed8db44f8614513a8c1417e408b9d1b367c was added in the e4 folder

Now blog.txt and the updated hello.txt exist in both the .git directory and the working tree.

### Un-stage files

When I want to "unstage" a file, which means I want to remove the change from the index/staging area but keep working on the file in the working tree, I can use ``git restore``.

```shell
git restore --staged hello.txt
```

Then the index restores the original id of hello.txt.

```shell
gitDeepDive> git ls-files --stage
100644 a3b6a8ee47c62758ed838e056da40f4c83fdc55a 0       blog.txt
100644 44f41854d770f2a38d368936b14975d280cbd950 0       docs/article.txt
100644 2b366bf2f2784dbf26fcd56e1cedb3afc1345753 0       docs/news.txt
100644 20fe8be9820a49252e2a4dd37a60e678cd5cda14 0       hello.txt
```

Even though the index file is restored, the blob object still exists.

```shell
gitDeepDive> git cat-file blob e4838ed8db44f8614513a8c1417e408b9d1b367c
hello git
The second line
The third line
```

If I modify hello.txt and stage it again, a new object is created because the content is different. git uses SHA-1 to hash the content into an id, so even a 1-bit change will generate a new hash and a new object.
```shell
echo "The fourth line" >> hello.txt
git add .\hello.txt
```

Check the index to confirm that hello.txt has a different id.

```shell
gitDeepDive> git ls-files --stage
100644 a3b6a8ee47c62758ed838e056da40f4c83fdc55a 0       blog.txt
100644 44f41854d770f2a38d368936b14975d280cbd950 0       docs/article.txt
100644 2b366bf2f2784dbf26fcd56e1cedb3afc1345753 0       docs/news.txt
100644 7a218e826670e77d05c1c244b514a7f449056752 0       hello.txt
```

# Remove files from working tree

To remove files from the working tree, I can use the git [rm](https://git-scm.com/docs/git-rm) command. If the file is staged (or cached, in another terminology), I can choose to delete it from both the index and the working tree, or just from the working tree. I can also use git [clean](https://git-scm.com/docs/git-clean) to delete files from the working tree. Or I can simply delete files from the directory without using git.

```shell
git rm -f blog.txt
```

As I used the ``-f`` parameter, it removes the file from both the working tree and the index.

```shell
gitDeepDive> git ls-files --stage
100644 44f41854d770f2a38d368936b14975d280cbd950 0       docs/article.txt
100644 2b366bf2f2784dbf26fcd56e1cedb3afc1345753 0       docs/news.txt
100644 7a218e826670e77d05c1c244b514a7f449056752 0       hello.txt
gitDeepDive> ls -Name
docs
hello.txt
```

However, the created blob remains, as expected.

```shell
gitDeepDive> git cat-file blob a3b6a8ee47c62758ed838e056da40f4c83fdc55a
cool blog
```

# Optimize space

I now have many files in the objects folder. Some of them are orphaned: no commit references them. git takes care of these files automatically, but you can clean them up manually with the git [gc](https://git-scm.com/docs/git-gc) and [prune](https://git-scm.com/docs/git-prune) commands.

```shell
git gc
git prune
```

As a result, git removed all the unreferenced objects and packed the remaining ones. The .git folder looks like below.
```shell
.git
│  COMMIT_EDITMSG
│  config
│  description
│  HEAD
│  index
│  ORIG_HEAD
│  packed-refs
├─info
│      exclude
│      refs
├─logs
│  │  HEAD
│  └─refs
│      └─heads
│              dev
│              master
│              test
├─objects
│  ├─info
│  │      commit-graph
│  │      packs
│  └─pack
│          pack-32b528fb3da0ed8dd7c96bf4608b5874805561e1.idx
│          pack-32b528fb3da0ed8dd7c96bf4608b5874805561e1.pack
└─refs
    ├─heads
    └─tags
```

I can inspect the packed file with git [verify-pack](https://git-scm.com/docs/git-verify-pack).

```shell
gitDeepDive> git verify-pack -v .\.git\objects\pack\pack-32b528fb3da0ed8dd7c96bf4608b5874805561e1.idx
367c2d000be0ffbb640252384c820ce472fe32a4 commit 246 161 12
2adbcacc0047a991956dedb4b16691ba244674b3 commit 259 171 173
16f1fa822d53d12329e9a68c7463c5697bddc7d1 commit 203 132 344
44f41854d770f2a38d368936b14975d280cbd950 blob   14 23 476
2b366bf2f2784dbf26fcd56e1cedb3afc1345753 blob   29 33 499
7a218e826670e77d05c1c244b514a7f449056752 blob   57 50 532
2baf027b74c551817c2a5ef6a3472ccc8e99738c tree   75 80 582
30962e4266975d43d1698bec735caa2e17ba3223 tree   68 78 662
20fe8be9820a49252e2a4dd37a60e678cd5cda14 blob   26 36 740
129b57b6945a4e9e56abaf5b229701565e2c6cdd tree   68 78 776
79a776223b60cb98e81a58d0ec92f00242ca7dcb tree   75 79 854
2b54426c8ded2b5334352e13b3ae62231ab67fee blob   11 20 933
a2cf761ea993127a4aae5762806441cc18d730f5 tree   37 47 953
```

The branch information is packed into the packed-refs file.

# Summary

I explained the relationships between HEAD, the index and the working tree, and I hope this demystifies some of the behavior. I'll explain reset in the next article to see how to play with these three trees further.

[Go to next article](https://dev.to/kenakamu/git-deep-dive-part-4-use-reset-to-control-branch-and-three-trees-23a)
kenakamu
329,510
Event-driven integration #4 - Outbox publisher (feat. IHostedService & Channels) [ASPF02O|E043]
In this episode, we’ll implement the outbox publisher, or better yet, two versions of it, one better suited for lower latency and another for reliability. As we continue our event-driven path, this will be a good opportunity to introduce a couple of interesting .NET Core features: IHostedService (and BackgroundService) and System.Threading.Channels.
71
2020-05-07T10:26:44
https://blog.codingmilitia.com/2020/05/07/aspnet-043-from-zero-to-overkill-event-driven-integration-04-outbox-publisher-feat-ihostedservice-channels/
dotnet, aspnetcore, efcore
---
title: Event-driven integration #4 - Outbox publisher (feat. IHostedService & Channels) [ASPF02O|E043]
published: true
date: 2020-05-07 09:30:00 UTC
tags: dotnet,aspnetcore,efcore
canonical_url: https://blog.codingmilitia.com/2020/05/07/aspnet-043-from-zero-to-overkill-event-driven-integration-04-outbox-publisher-feat-ihostedservice-channels/
series: "ASP.NET Core: From 0 to overkill"
description: "In this episode, we'll implement the outbox publisher, or better yet, two versions of it, one better suited for lower latency and another for reliability. As we continue our event-driven path, this will be a good opportunity to introduce a couple of interesting .NET Core features: IHostedService (and BackgroundService) and System.Threading.Channels."
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/xuvx7p7cqex50kkpjrk4.jpg
---

In this episode, we'll implement the outbox publisher, or better yet, two versions of it: one better suited for lower latency and another for reliability. As we continue our event-driven path, this is a good opportunity to introduce a couple of interesting .NET Core features: `IHostedService` (and `BackgroundService`) and `System.Threading.Channels`.

**Note:** depending on your preference, you can check out the following video, otherwise, skip to the written version below.

{% youtube xnn6AnYyC5g %}

The playlist for the whole series is [here](https://www.youtube.com/playlist?list=PLN0oN9Azm_MMAjk3nhRnmHdr1l0160Dhs).

<br />

## Intro

In the [previous episode](https://blog.codingmilitia.com/2020/04/28/aspnet-042-from-zero-to-overkill-event-driven-integration-03-storing-events-in-the-outbox-table/), we implemented the outbox, as well as storing messages in it transactionally. In this episode, we'll implement the outbox publisher (two versions, in fact), which is responsible for reading messages from the table, pushing them to the event bus and deleting them after they're published successfully.
Something the outbox publisher takes into consideration is that multiple instances might be running concurrently. Due to this, the publisher is coded to try to avoid publishing the same message multiple times, "try" being the keyword here, as it's not a guarantee we can achieve with this kind of solution.

As briefly pointed out, we'll in fact implement two versions of the outbox publisher: the first geared towards reducing event publishing latency, the second aimed at reliability, ensuring all the events are published even in the face of transient failures. As you might suspect from this quick intro, we could live with just the second one, simplifying our work, but the first allows us to play with a .NET Core feature we haven't used so far, [`System.Threading.Channels`](https://devblogs.microsoft.com/dotnet/an-introduction-to-system-threading-channels/).

Another .NET Core feature we'll use in this episode is running [tasks in the background](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1) by implementing `IHostedService`, which can be done directly or by inheriting from the `abstract` `BackgroundService` class.

Before getting on with business, to situate ourselves in the event-driven integration path, we can take a look at the diagram introduced in [episode 40](https://blog.codingmilitia.com/2020/04/13/aspnet-040-from-zero-to-overkill-event-driven-integration-transactional-outbox-pattern/):

[![situating ourselves](https://dev-to-uploads.s3.amazonaws.com/i/y7i47xbmgvu1akp2l0be.png)](https://dev-to-uploads.s3.amazonaws.com/i/y7i47xbmgvu1akp2l0be.png)

## Outbox publisher

Let's start with our main outbox publisher, which is triggered every time a new message is stored. This is the implementation that gives us lower event publishing latency, as it doesn't rely on polling but on listening for new work. As introduced, this implementation is not completely reliable by itself.
The reason for this is that between the time the publisher is triggered and the time it publishes the events, something might go wrong, like the server going down, and the event that caused the publisher execution would remain in the outbox, pending publishing. Due to this, we need additional strategies to ensure all events are published regardless of transient failures. As a result, this outbox publisher implementation becomes more of an optimization, to try to publish the events as soon as possible, as well as an opportunity to play with Channels 🙂.

### Notify when a new message is stored

Picking up where we left off in the previous episode, in terms of implementation, we had a comment in the `AuthDbContext`'s `SaveChangesAsync` method to "publish the events persisted in the outbox". What we'll do is not actually publish the events, as the comment mentioned, but instead notify some interested component that messages were persisted so it can proceed to publish them. As publishing the event is not required to fulfill the user's request, this reduces the time spent waiting for the request to complete.

We could create an interface to represent this notification behavior, but lately I've been more partial to creating delegates instead of single-method interfaces (depending on the scenario, of course). With this in mind, we can create an `OnNewOutboxMessages` delegate.

`Data\OnNewOutboxMessages.cs`

```csharp
public delegate void OnNewOutboxMessages(IEnumerable<long> messageIds);
```

We could also achieve the same with a `Func`, but not only does giving it a name make it easier to understand, it also helps when configuring things in the dependency injection container, as we could have different `Func`s with the same signature but different purposes.

Now we can inject the delegate into the `AuthDbContext` and use it when new messages are persisted in the outbox table.

`Data\AuthDbContext.cs`

```csharp
public class AuthDbContext : IdentityDbContext<PlayBallUser>
{
    // ...
    public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = new CancellationToken())
    {
        var eventsDetected = GetEvents();

        AddEventsIfAny(eventsDetected);

        var result = await base.SaveChangesAsync(cancellationToken);

        NotifyEventsIfAny(eventsDetected);

        return result;
    }

    // ...

    private void NotifyEventsIfAny(IReadOnlyCollection<OutboxMessage> eventsDetected)
    {
        if (eventsDetected.Count > 0)
        {
            _onNewOutboxMessages(eventsDetected.Select(e => e.Id));
        }
    }
}
```

### Outbox listener

The `AuthDbContext` is ready to notify when a new message is added to the outbox; now we need to create the glue between said notification and some component that runs in the background and actually publishes things to the event bus. This is where we'll make use of `System.Threading.Channels`.

`Channels` help us implement in-memory producer/consumer scenarios, optimized for async code. This fits our problem very nicely, as we want to notify (produce) when a new message is available in persistence, while having another component listening to (consume) that notification to act on it.

To encapsulate this, we can create a class `OutboxListener` (not very happy with the name, but it'll do for now 😛).

#### Creating a channel

Firstly, let's look at the constructor. In there, we're creating the channel we'll use to publish the id of the message stored in the outbox, hence the `Channel<long>` type, meaning we'll have a channel that can contain `long`s, the type of our message ids.

`Infrastructure\Events\OutboxListener.cs`

```csharp
public class OutboxListener
{
    private readonly ILogger<OutboxListener> _logger;
    private readonly Channel<long> _messageIdChannel;

    public OutboxListener(ILogger<OutboxListener> logger)
    {
        _logger = logger;

        // If the consumer is slow, this should be a bounded channel to avoid memory growing indefinitely.
        // To make an informed decision we should instrument the code and gather metrics.
        _messageIdChannel = Channel.CreateUnbounded<long>(
            new UnboundedChannelOptions
            {
                SingleReader = true,
                SingleWriter = false
            });
    }

    // ...
}
```

We can have bounded and unbounded channels: the first is limited in size, and we must choose a strategy to handle a full channel (e.g. wait for space or drop new items), while the latter doesn't have a size restriction. An unbounded channel can be a bit dangerous, because if the consumer is slow to process items, memory will grow indefinitely. We'll go with unbounded for now, but keep in mind bounded is likely a better idea.

When creating a channel, we can provide some options; in the unbounded channel case, through the `UnboundedChannelOptions` class. Here we're indicating that we'll have a single reader/consumer and multiple writers/producers. With these options, the channel instance we get can be optimized for our use case. If we were using a bounded channel, it would be through these options (using the `BoundedChannelOptions` class) that we would set the capacity and the behavior of the channel when full.

#### Writing to a channel

With a channel instance in hand, we can start writing to it. This is done in the `OnNewMessages` method. Notice the method signature matches the delegate we created for `AuthDbContext` to use. This is no coincidence, as it will be configured in the dependency injection container to be provided to the `AuthDbContext`.

`Infrastructure\Events\OutboxListener.cs`

```csharp
public class OutboxListener
{
    // ...

    public void OnNewMessages(IEnumerable<long> messageIds)
    {
        foreach (var messageId in messageIds)
        {
            // we don't care too much if it succeeds, because we'll have a fallback to handle "forgotten" messages
            if (!_messageIdChannel.Writer.TryWrite(messageId) && _logger.IsEnabled(LogLevel.Debug))
            {
                _logger.LogDebug("Could not add outbox message {messageId} to the channel.", messageId);
            }
        }
    }

    // ...
}
```

A channel exposes two properties, `Writer` and `Reader` (of types `ChannelWriter<T>` and `ChannelReader<T>` respectively), which provide the methods to write to and read from it. In either case we have multiple options, not a single method for writing and reading, so we can adapt to our needs. Concerning the `ChannelWriter`, the methods available are:

- `TryWrite`, as the name implies, tries to write to the channel, returning a boolean to indicate whether it wrote or not. Reasons for not writing may be that the channel is full or completed (no longer accepting new writes).
- `WaitToWriteAsync` doesn't actually write to the channel, instead returning a `ValueTask<bool>` that can be awaited to know when space is available to write. If the boolean returned is false, it means it isn't possible to write anymore.
- `WriteAsync` is a mix between `TryWrite` and `WaitToWriteAsync`. If there is space to write, it writes, otherwise it waits for space to be available.
- `TryComplete` is used when we don't want to write to the channel anymore, be it because we have nothing more to write or because an exception happened and we want to stop everything.

Looking at the `OutboxListener` code, we're simply using `TryWrite`. There are a couple of factors behind this decision. The most immediate explanation is that, being an unbounded channel, `TryWrite` will always succeed because there are no space issues (the only way for it to return false is if the channel is completed). Additionally, even if we were using a bounded channel, we could still ignore failed writes because, as introduced in the beginning of the post, we will have a fallback publishing any pending messages. If we didn't have this fallback, then we'd need to approach things differently; in that case we'd be making a tradeoff between the time a user needs to wait for a request to complete and the latency of event publishing.
#### Reading from a channel

Like the `ChannelWriter`, the `ChannelReader` also exposes a number of methods with similar behavior, just applied to reading:

- `TryRead` reads an item from the channel if there is one available, returning true in such a case, otherwise it returns false.
- `WaitToReadAsync` doesn't actually read, instead returning a `ValueTask<bool>` that can be awaited to know when an item is available to read. If the boolean returned is false, it means it isn't possible to read anymore (the channel is completed).
- `ReadAsync` is a mix between `TryRead` and `WaitToReadAsync`. If there is an item to read, it reads it, otherwise it waits for an item to be available.

If you look at our `OutboxListener` code, you'll notice we're not using any of these.

`Infrastructure\Events\OutboxListener.cs`

```csharp
public class OutboxListener
{
    // ...

    public IAsyncEnumerable<long> GetAllMessageIdsAsync(CancellationToken ct)
        => _messageIdChannel.Reader.ReadAllAsync(ct);
}
```

Besides the methods previously mentioned, `ChannelReader` also exposes `ReadAllAsync`, used above, which returns an `IAsyncEnumerable`. If you've never seen an `IAsyncEnumerable`, which wouldn't be surprising as it's a recent feature (introduced with .NET Core 3.0), it is, like the name implies, an `IEnumerable` tailored for async scenarios. With it we can use a feature introduced in C# 8, `await foreach`, which allows us to handle async streams in a way similar to traditional iteration over collections. There's a section in ["What's new in C# 8.0"](https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8) about [asynchronous streams](https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8#asynchronous-streams).

### Running in the background

With the `OutboxListener` ready, we can now use it to be notified when new messages are stored in the outbox. To do this, we'll create a background task that starts the process of listening for these notifications.
In .NET Core, we can create these kinds of background tasks by implementing an `IHostedService`, either directly or by inheriting from the `BackgroundService` class. The responsibility of this component, a class named `OutboxPublisherBackgroundService`, will be to listen for notifications and forward them to an `OutboxPublisher` class that implements the remaining logic.

`Infrastructure\Events\OutboxPublisherBackgroundService.cs`

```csharp
public class OutboxPublisherBackgroundService : BackgroundService
{
    private readonly OutboxPublisher _publisher;
    private readonly OutboxListener _listener;
    private readonly ILogger<OutboxPublisherBackgroundService> _logger;

    public OutboxPublisherBackgroundService(
        OutboxPublisher publisher,
        OutboxListener listener,
        ILogger<OutboxPublisherBackgroundService> logger)
    {
        _publisher = publisher;
        _listener = listener;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // TODO: one message at a time might hinder throughput, consider batching
        await foreach (var messageId in _listener.GetAllMessageIdsAsync(stoppingToken))
        {
            try
            {
                await _publisher.PublishAsync(messageId, stoppingToken);
            }
            catch (Exception ex)
            {
                // We don't want the background service to stop while the application continues,
                // so catching and logging.
                // Should certainly have some extra checks for the reasons, to act on it.
                _logger.LogWarning(ex, "Unexpected error while publishing pending outbox messages.");
            }
        }
    }
}
```

As we can see, we're inheriting from `BackgroundService`, which means there is a single method we need to implement, `ExecuteAsync`. This method returns a `Task` whose completion means the service has finished its job. In our case, we want it to run during the whole lifetime of the application, but in other cases we might just want to run some things asynchronously when starting the application.
As for the implementation of `ExecuteAsync`, we're doing the `await foreach` mentioned earlier, handling each message id as it arrives. As noted in the comment, executing the event publishing logic one by one will likely hurt event publishing throughput, so we should consider batching, but we'll keep it simple for now.

For each iteration, we make use of the `OutboxPublisher` class (which we'll see in the next section) to handle the event publishing logic. Besides that, we catch and log exceptions, because we don't want the service to stop while the application keeps running. Depending on the type of error though, we could probably improve this.

### Publish an event

Publishing an event happens in the previously mentioned `OutboxPublisher` class. The `OutboxPublisher` logic consists of:

- Reading the message for the given id from the outbox.
- Publishing the event to the event bus.
- Deleting the message pertaining to the published event from the outbox.

The code to implement this logic is a bit more complex than we would expect from this description, as we want to take some precautions due to the fact that multiple publishers might be running concurrently: not in this service, where we have a single one, but because we might have multiple instances of the auth service running (e.g. multiple servers or multiple containers).
`Infrastructure\Events\OutboxPublisher.cs`
```csharp
public class OutboxPublisher
{
    private readonly IServiceScopeFactory _serviceScopeFactory;
    private readonly ILogger<OutboxPublisher> _logger;

    public OutboxPublisher(IServiceScopeFactory serviceScopeFactory, ILogger<OutboxPublisher> logger)
    {
        _serviceScopeFactory = serviceScopeFactory;
        _logger = logger;
    }

    public async Task PublishAsync(long messageId, CancellationToken ct)
    {
        using var scope = _serviceScopeFactory.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<AuthDbContext>();
        await using var transaction = await db.Database.BeginTransactionAsync(ct);

        try
        {
            var message = await db.Set<OutboxMessage>().FindAsync(new object[] {messageId}, ct);

            if (await TryDeleteMessageAsync(db, message, ct))
            {
                // TODO: actually push the events to the event bus
                _logger.LogInformation(
                    "Event with id {eventId} (outbox message id {messageId}) published -> {event}",
                    message.Event.Id,
                    message.Id,
                    Newtonsoft.Json.JsonConvert.SerializeObject(message.Event));

                await transaction.CommitAsync();
            }
            else
            {
                await transaction.RollbackAsync(ct);
            }
        }
        catch (Exception)
        {
            await transaction.RollbackAsync();
            throw;
        }
    }

    private async Task<bool> TryDeleteMessageAsync(AuthDbContext db, OutboxMessage message, CancellationToken ct)
    {
        try
        {
            db.Set<OutboxMessage>().Remove(message);
            await db.SaveChangesAsync(ct);
            return true;
        }
        catch (DbUpdateConcurrencyException)
        {
            _logger.LogDebug($"Delete message {message.Id} failed, as it was done concurrently.");
            return false;
        }
    }
}
```

Let's go through `PublishAsync`. The first thing that comes up is actually not logic related, but needed, which is creating a dependency injection scope and getting a `DbContext` instance from there. We need to do this because we passed the `OutboxPublisher` to the `OutboxPublisherBackgroundService` through the constructor, and `OutboxPublisherBackgroundService` will live for as long as the application lives. As a `DbContext` shouldn't live for that long (e.g. the change tracker keeps things in memory), we need to control its lifetime manually.

As for the actual publishing logic, the first thing we do is start a transaction. As you might be suspecting, this is due to the precautions I mentioned regarding concurrent publishing. Immediately after querying the database to get the message with the provided id, we call a `TryDeleteMessageAsync` method, which not only tells the `DbContext` the message should be removed, but actually calls `SaveChangesAsync` to make it so in the database, not just in-memory. Remember though, that we're in a transaction, so even if the deletion is done in the database, it's not committed yet. We do this because if there's a concurrent publisher executing, which for some reason tries to delete the same message, it will be locked until the current transaction is committed or rolled back. This way we minimize the likelihood of publishing the same event multiple times.

`TryDeleteMessageAsync` returns a boolean, where true means the message was successfully deleted and we can proceed with publishing the event, while false is returned when deletion wasn't successful, as we can see in the code, due to a `DbUpdateConcurrencyException`. `DbUpdateConcurrencyException` is the exception that's thrown when a change fails in the database due to another happening concurrently; in this case, another component beat the currently executing code to deleting the outbox message.

When deletion of the message is successful, we can publish the event and commit the changes to the database. In the code above there's a log representing the actual publishing to the event bus, as we'll implement that in the coming episodes using [Apache Kafka](https://kafka.apache.org/). If the message wasn't successfully deleted (or if an unexpected exception occurs), we roll back the transaction.

With this, we wrap up the latency-oriented outbox publisher implementation, and we can proceed to the reliability-oriented version.
## Fallback outbox publisher

Before getting into the implementation details, let's review why we need a fallback for the outbox publisher we just implemented.

The most important reason is to handle cases where a transient failure makes us lose the message ids that were written to the in-memory channel used in the outbox publisher flow. An example of such a failure is the server (or container) going down.

Additionally, having this fallback allows us, as we saw, to have a more naive implementation. Examples of this are:

- If we used a bounded channel and items were dropped, we didn't worry, because the fallback would pick them up.
- If the event bus is temporarily down, causing an error to occur when publishing an event, we didn't worry about retries and related patterns, as the fallback would pick things up.

This is not to say that the current implementation couldn't use some extra improvements, it likely could, but having this fallback lets us get away with some less thought-out approaches.

### Read and publish events

The `OutboxFallbackPublisher` class, which implements the logic to publish any events that got left behind, has many similarities to the `OutboxPublisher` seen previously, the major difference being that it looks for any messages left in the outbox table, instead of just a given message id.

Let's start with the core logic.

`Infrastructure\Events\OutboxFallbackPublisher.cs`
```csharp
public class OutboxFallbackPublisher
{
    // ...

    public async Task PublishPendingAsync(CancellationToken ct)
    {
        // Invokes PublishBatchAsync while batches are being published, to exhaust all pending messages.
        while (!ct.IsCancellationRequested && await PublishBatchAsync(ct)) ;
    }

    // returns true if there is a new batch to publish, false otherwise
    private async Task<bool> PublishBatchAsync(CancellationToken ct)
    {
        using var scope = _serviceScopeFactory.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<AuthDbContext>();
        await using var transaction = await db.Database.BeginTransactionAsync(ct);

        try
        {
            var messages = await GetMessageBatchAsync(db, ct);

            if (messages.Count > 0 && await TryDeleteMessagesAsync(db, messages, ct))
            {
                // TODO: actually push the events to the event bus
                _logger.LogInformation(
                    "Events with ids {eventIds} (outbox message ids [{messageIds}]) published -> {events}",
                    string.Join(", ", messages.Select(message => message.Event.Id)),
                    string.Join(", ", messages.Select(message => message.Id)),
                    Newtonsoft.Json.JsonConvert.SerializeObject(messages.Select(message => message.Event)));

                await transaction.CommitAsync();

                return await IsNewBatchAvailableAsync(db, ct);
            }

            await transaction.RollbackAsync(ct);

            // if we got here, there either aren't messages available or they are being published concurrently
            // in either case, we can break the loop
            return false;
        }
        catch (Exception)
        {
            await transaction.RollbackAsync();
            throw;
        }
    }

    // ...
}
```

As we want to publish all pending events, not just some, `PublishPendingAsync`, which is the only public method of the class, keeps looping while there are pending messages in the outbox, moving the batch publishing logic to `PublishBatchAsync`.

Looking at `PublishBatchAsync`, it's very similar to what we saw in the original `OutboxPublisher`. The main differences we can spot are a call to `GetMessageBatchAsync`, which will provide a number of messages, not a single specific one, as well as returning a boolean indicating if there are more messages available to publish.

Let's now drill down into the methods used to support this logic.
`Infrastructure\Events\OutboxFallbackPublisher.cs`
```csharp
public class OutboxFallbackPublisher
{
    private const int MaxBatchSize = 100;
    private static readonly TimeSpan MinimumMessageAgeToBatch = TimeSpan.FromSeconds(30);

    // ...

    private static Task<List<OutboxMessage>> GetMessageBatchAsync(AuthDbContext db, CancellationToken ct)
        => MessageBatchQuery(db)
            .Take(MaxBatchSize)
            .ToListAsync(ct);

    private static Task<bool> IsNewBatchAvailableAsync(AuthDbContext db, CancellationToken ct)
        => MessageBatchQuery(db).AnyAsync(ct);

    private static IQueryable<OutboxMessage> MessageBatchQuery(AuthDbContext db)
        => db.Set<OutboxMessage>()
            .Where(m => m.CreatedAt < GetMinimumMessageAgeToBatch());

    private async Task<bool> TryDeleteMessagesAsync(
        AuthDbContext db,
        IReadOnlyCollection<OutboxMessage> messages,
        CancellationToken ct)
    {
        try
        {
            db.Set<OutboxMessage>().RemoveRange(messages);
            await db.SaveChangesAsync(ct);
            return true;
        }
        catch (DbUpdateConcurrencyException)
        {
            _logger.LogDebug(
                $"Delete messages [{string.Join(", ", messages.Select(m => m.Id))}] failed, as it was done concurrently.");
            return false;
        }
    }

    private static DateTime GetMinimumMessageAgeToBatch()
    {
        return DateTime.UtcNow - MinimumMessageAgeToBatch;
    }
}
```

Both `GetMessageBatchAsync` and `IsNewBatchAvailableAsync` use `MessageBatchQuery` as the base query to obtain pending messages. The rationale I used was: if a message has been there for more than 30 seconds, it probably means it got left behind, so we should publish it. Using this base query, `GetMessageBatchAsync` fetches a batch of messages, while `IsNewBatchAvailableAsync` simply checks if there are any messages pending that match the defined criteria.

`TryDeleteMessagesAsync` is the same as we saw in the `OutboxPublisher`, differing just in that it deletes multiple rows, not just one.
`GetMinimumMessageAgeToBatch` is a helper method to calculate the minimum age a message should be to qualify as pending (side note, using `DateTime.UtcNow` directly is not great for unit testing).

### Scheduling execution

To wrap things up about the `OutboxFallbackPublisher`, we need to schedule its execution. To do this, we can again resort to a `BackgroundService`.

`Infrastructure\Events\OutboxPublisherFallbackBackgroundService.cs`
```csharp
public class OutboxPublisherFallbackBackgroundService : BackgroundService
{
    private readonly OutboxFallbackPublisher _fallbackPublisher;
    private readonly ILogger<OutboxPublisherFallbackBackgroundService> _logger;

    public OutboxPublisherFallbackBackgroundService(
        OutboxFallbackPublisher fallbackPublisher,
        ILogger<OutboxPublisherFallbackBackgroundService> logger)
    {
        _fallbackPublisher = fallbackPublisher;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                await _fallbackPublisher.PublishPendingAsync(stoppingToken);
            }
            catch (Exception ex)
            {
                // We don't want the background service to stop while the application continues,
                // so catching and logging.
                // Should certainly have some extra checks for the reasons, to act on it.
                _logger.LogWarning(ex, "Unexpected error while publishing pending outbox messages.");
            }

            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}
```

Similarly to the `OutboxPublisherBackgroundService`, we just want to get the publisher to do its work. In this case, as we don't subscribe to anything, we take a polling approach. We call the publisher to process any pending messages, and when it's done, we "sleep" for 30 seconds, instead of hammering the database continuously.

## Wiring everything together

To get everything working together, what's left is setting things up in the dependency injection container. This is done in an `EventExtensions` class created to keep the `Startup` class clean.
`IoC\EventExtensions.cs`
```csharp
public static class EventExtensions
{
    public static IServiceCollection AddEvents(this IServiceCollection services)
    {
        services.Scan(
            scan => scan
                .FromAssemblyOf<UserRegisteredEventMapper>()
                .AddClasses(classes => classes.AssignableTo(typeof(IEventMapper)))
                .AsImplementedInterfaces()
                .WithSingletonLifetime()
        );

        services.AddSingleton<OutboxListener>();
        services.AddSingleton<OnNewOutboxMessages>(s => s.GetRequiredService<OutboxListener>().OnNewMessages);
        services.AddSingleton<OutboxPublisher>();
        services.AddSingleton<OutboxFallbackPublisher>();
        services.AddHostedService<OutboxPublisherBackgroundService>();
        services.AddHostedService<OutboxPublisherFallbackBackgroundService>();

        return services;
    }
}
```

The scan for event mappers was already there from previous episodes, so the new stuff is what comes after.

`OutboxListener`, `OutboxPublisher` and `OutboxFallbackPublisher` are registered as usual. They're all singletons; `OutboxListener` really needs to be, because we need to keep using the same channel to notify of new messages. `OutboxPublisher` and `OutboxFallbackPublisher` don't need to be singletons by themselves, but as they'll be used by the background services that have the same lifetime as the application, as we already discussed, it makes sense to make them singletons as well.

The registration of `OnNewOutboxMessages` might be slightly different from what's common, because we want to associate a specific instance method with the delegate. That's why we're making use of the overload that accepts a `Func`, where we get an `IServiceProvider` to obtain the `OutboxListener`, from which we bind the `OnNewMessages` method to the delegate used by the `AuthDbContext`.

Finally, `OutboxPublisherBackgroundService` and `OutboxPublisherFallbackBackgroundService` are registered using `AddHostedService`, which internally registers the background service as a singleton.

## Outro

That does it for this episode.
We implemented the outbox publisher, two versions of it to be more precise, while playing with some interesting features of .NET Core - channels and background services.

Summarizing, the main topics we looked at were:

- Using channels to implement in-memory producer/consumer scenarios, optimized for async code.
- Implementing background tasks using `IHostedService`/`BackgroundService`.
- Reading and publishing messages from the outbox, taking concurrent execution into consideration.

As a quick reminder, the achieved solution might be a bit overkill, as we could get away with just the polling solution, but we wouldn't have the opportunity to play with all the things we did 🙂.

In the next episodes, we'll introduce Apache Kafka and implement event publishing/subscription on top of it.

Links in the post:

- [An Introduction to System.Threading.Channels](https://devblogs.microsoft.com/dotnet/an-introduction-to-system-threading-channels/)
- [Background tasks with hosted services in ASP.NET Core](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1)
- [BackgroundService](https://github.com/dotnet/runtime/blob/master/src/libraries/Microsoft.Extensions.Hosting.Abstractions/src/BackgroundService.cs)
- ["What's new in C# 8.0"](https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8)
- [Event-driven integration #1 - Intro to the transactional outbox pattern [ASPF02O|E040]](https://blog.codingmilitia.com/2020/04/13/aspnet-040-from-zero-to-overkill-event-driven-integration-transactional-outbox-pattern/)
- [Event-driven integration #3 - Storing events in the outbox table [ASPF02O|E042]](https://blog.codingmilitia.com/2020/04/28/aspnet-042-from-zero-to-overkill-event-driven-integration-03-storing-events-in-the-outbox-table/)

The source code for this post is in the [Auth](https://github.com/AspNetCoreFromZeroToOverkill/Auth/tree/episode043) repository, tagged as `episode043`.

Sharing and feedback always appreciated!
Thanks for stopping by, cyaz!
joaofbantunes
329,548
Quick Demo! Two months of building.
Now in Supabase: Set up Postgres in less than 2 minutes Auto-generated APIs! (they are a bit flaky,...
0
2020-05-07T11:57:31
https://dev.to/supabase/quick-demo-one-month-of-building-21no
showdev, postgres, supabase
Now in [Supabase](https://app.supabase.io):

- Set up Postgres in less than 2 minutes
- Auto-generated APIs! (they are a bit flaky, go easy)
- Query your database directly from the dashboard
- Analyze your queries and make them faster :rocket:

## Important Notes

1/ Supabase is NOT production ready. Have a play around and let us know what you think. We don't track ANYTHING (although we do store your github name and email). We'd like to keep it that way, so we rely on your direct feedback to steer our product roadmap.

2/ Supabase is free to use while we're in alpha thanks to the generosity of Digital Ocean's startup program. But the funds aren't unlimited - please shut down the database when you're done :pray:.

3/ The database takes about 2 minutes to build. It's worth it, we promise. If you've ever wanted to use Postgres, we're the easiest in the market.

## Follow for updates

At Supabase, we're building some amazing tools that make Postgres as easy to use as Firebase. Some of the things we're working on:

#### Simple interface

Why are database interfaces so hard to use? The Supabase team has built products for 70-year-olds, so we're confident we can make something easier for developers:

![Interface](https://dev-to-uploads.s3.amazonaws.com/i/b4o39am95zcl5vl54j75.png)

#### Connectors

Send realtime database changes to other systems, like queues or webhooks (Slack notifications!):

![Connectors](https://dev-to-uploads.s3.amazonaws.com/i/aom5r917s792cc081bbz.png)

#### And more

- ⚡ Realtime listeners! Subscribe to your database just like you would with Firebase.
- 🤖 Instant RESTful APIs that update when you update your schema. Supabase introspects your schema and updates your API and documentation.
- 📓 Auto-documentation for your APIs and Postgres schema. What's better than documentation? Documentation that you don't have to manually keep up to date.

We'll announce all our future features with more freebies here on DEV first. Follow us so that you don't miss out.
![Follow](https://dev-to-uploads.s3.amazonaws.com/i/jf7adc5e3kbdu4luxvt4.gif)

And make sure to star us on github! [https://github.com/supabase/supabase](https://github.com/supabase/supabase)

Note: this was originally "One month of building", but I'm going to round up to 2 months. We started Supabase in January, and the work on the platform started mid-March.
supabase_io
329,557
How I learned a lot from deploying an app that does nothing
I did it! Frontend, backend, database, deployment. It was all me! ...aaaand it's empty. Today I w...
0
2020-05-08T07:33:43
https://dev.to/reiallenramos/how-i-learned-a-lot-from-deploying-an-app-that-does-nothing-3ipc
webdev, showdev, codenewbie
I did it! Frontend, backend, database, deployment. It was all me!

![app_screenshot](https://dev-to-uploads.s3.amazonaws.com/i/vnec0rgig27wzdjb1jz6.png)

...aaaand it's empty.

Today I want to share my journey on a personal project with a huge disclaimer that it's neither exciting nor groundbreaking. My intention was not to please anybody with this project, but what I truly wanted when the idea came to me was just to create a password-free authentication stack then deploy. It's that simple. Or so I thought.

## Quick backstory...

Hello! I've been on DEV for just less than a month but I already consider it my new Twitter. Not that my DEV feed is filled with sarcastic comments or witty retorts (which my Twitter unfortunately became), but what I enjoy is the streamlined feed of user-created tutorials intertwined with their own personal stories. Dreading to open Twitter just to quench my thirst for 'social life', DEV quickly transformed from a mere alternative to my go-to medium for online interaction.

As a young developer, it's exciting to dabble in the newest and hippest technologies. Jumping on the ReactJS bandwagon when I was just starting out a little more than 2 years ago, the concept of reactive components and seeing your page automagically update without refreshing was the apple of my eye. About a year later, my radar picked up VueJS. It steadily gained its reputation for having a flatter learning curve than React. Decided to try it out and much to my surprise, Vue components proved to be easily digestible. Without hesitation, I turned my back on React and picked up this young, shining, Vuetiful framework.

![distracted_boyfriend](https://dev-to-uploads.s3.amazonaws.com/i/k8rbrshq3yfqykl0d5v0.jpg)

## Anyway, the app

About a week ago I was lying in bed getting ready to sleep when the idea came to me: create a template to bootstrap my process and do the heavy lifting every time I want to create a new web app. In other words, a boilerplate.
If I do it right, I will effectively skip several forks-in-the-road such as choosing a frontend framework and a corresponding CSS library, what language to code the server in, which database to use, and what authentication method to implement. If I do it wrong, well, at least I'll learn.

## Action-first mindset

![procrastination](https://dev-to-uploads.s3.amazonaws.com/i/xbtij7oc4g5w0p4pcje9.gif)

When learning something for the first time, my manager used to say "first, make it work, then understand it later." This mindset is perfect for lazy people like me who digress from crafting attack strategies and making burndown charts. Just do it! What could go wrong! Break things but only just before reaching the point of no return. Break things then fix them.

So I took it upon myself to learn the basic devops workflow. No more procrastinating by reading one tutorial after another. One of my favorite productivity YouTubers [said](https://youtu.be/lSXbNQV2UTQ) "...with an action-first mindset, you're going to get burned. But the lessons you get by getting burned are the ones you remember."

After several days of getting entrapped by the vicious cycle of installing packages, learning how to use them, breaking a previously installed dependency, and copy-pasting stackoverflow answers, I was able to duct-tape the following technologies together:

||Technologies I learned|notes|
|:---|:---|:---|
|Frontend|NuxtJS, Vuetify, Vuex|The trickiest part was binding the port in the host machine's IP.<br>Also, Vuetify is magical :fireworks:|
|Backend|ExpressJS, NodeJS|Created public and protected RESTful APIs|
|Authentication|JWT, OTP|JWT was pretty straightforward: generate a token from a secret + the given email, then saving it in the browser storage and/or cookies was automatically handled by Nuxt.<br>Had to use a simple random string generator for the OTP|
|Database|MongoDB|First time using MongoDB, let alone a NoSQL db|
|Caching service|Redis|Storing OTPs for a certain duration|
|Operations|Docker|Learned how to build my app into an image and perform proper tagging. Push it to Docker Hub for later use in a cloud VM|
|Deployment|AWS|Provisioning an AWS Lightsail instance and creating a startup script. Also, deploying in GKE incurred about $5 for just one night. True story!|

![system_diagram](https://dev-to-uploads.s3.amazonaws.com/i/4a645ocu0dbp8g0fr5ol.png)

I'll spare you the implementation choices because they're all personal and highly opinionated. What I'm excited to share is that I was able to put everything together! Definitely still a newbie in this industry, but there's not a single strand of doubt in my mind that I leveled up!

Sure, if we take a deep dive into the code, it's still rough on the edges. But hey, how many people can say they deployed their code on the public internet? :wink:

If you're still reading and want to check it out, visit the [app here](http://3.1.243.203/) and the [code here](https://github.com/reiallenramos/nuxtjs-otp-boilerplate). Please be gentle, the VM I provisioned only has 512MB RAM :laughing:.

What do you think? Did I achieve my goal of creating a password-free authentication? I'd be delighted to hear your thoughts. Thanks for reading!
reiallenramos
329,749
Tutorial how does Git Rebase work and compare with Git Merge and Git Interactive Rebase
This article was originally published at: https://www.blog.duomly.com/git-rebase-tutorial-and-compari...
0
2020-05-11T07:54:41
https://www.blog.duomly.com/git-rebase-tutorial-and-comparison-with-git-merge/
github, beginners, programming, git
This article was originally published at: https://www.blog.duomly.com/git-rebase-tutorial-and-comparison-with-git-merge/

---

###Intro###
There are many ways of working with git; if they're clean and don't do damage, probably most of them are good.

But just like spaces vs. tabs, there is a war in the IT world between fans of git rebase and fans of git merge.

There are tons of arguments about:
- Which way is better?
- Which one is cleaner?
- Which is more comfortable?
- Which one gives a cleaner git graph?
- Why is it important, and which one is more dangerous?

In this article, I will explain a few differences between git merge, git rebase, and git interactive rebase. I'll cover their pros and cons (there are no better or worse options here; as with most things in IT, there are just preferences), and I'll try to answer a few of the most frequently asked questions about them as well.

Let's go!

If you prefer video, here is the youtube version.
{% youtube 296lTWWwIxE %}

###Git rebase:###
![Git rebase example](https://dev-to-uploads.s3.amazonaws.com/i/7c8n0latu23mvjpvtkuy.png)

###How does git rebase work###
When we develop features, we usually create the feature branch from our main branch. We use git rebase when we want to take all the commits from our feature branch and replay them on top of the main branch.

This type of git rebase doesn't give us much ability to manipulate each commit; it just takes all of them and moves them onto the destination.

To get much better control over every commit, it's worth taking a look at git interactive rebase, which gives us many more customization possibilities.

###How do you do a rebase###
Starting a git rebase is very simple, and we just need a couple of commands in our terminal.
We can start a rebase by typing:
```
git checkout feature-branch
git rebase your-main-branch(for example master or develop)
```

###How do I fix rebase conflicts###
I would like to show you two ways of handling conflicts during a rebase.

**Solve the conflict:**
You can solve conflicts manually, add the files by:
```
git add
```
and continue with:
```
git rebase --continue
```

**Skip the commit:**
You can skip the commit by typing:
```
git rebase --skip
```

###Is git rebase safe###
If we know what we're doing, yes, git rebase is safe. Still, we need to be careful, because even if we don't do serious damage, a botched rebase can send us on a time-consuming journey of fixing the issue.

###Why git rebase is dangerous###
Git rebase can be a bit dangerous when we do it without care, especially in a project under time pressure. It requires a force push, and it rewrites some history.

Of course, it's possible to undo a botched rebase. Still, it takes a bit more time than, for example, just reverting commits.

###Git rebase pros###
- Clean git graph
- Easier access to a single commit
- Cleaner main branch

###Git rebase cons###
- When rebasing a published branch you need to use force push
- Can be dangerous because it rewrites the history
- Not very easy for beginners
- With a normal rebase we don't have many possibilities to manipulate commits

###Git interactive rebase:###
![Git rebase example](https://dev-to-uploads.s3.amazonaws.com/i/7c8n0latu23mvjpvtkuy.png)

###How does git interactive rebase work###
Git interactive rebase works very similarly to the normal rebase. It gives us a visual editor that helps us manage each commit easily, so we aren't blind to the commits we move. It's especially helpful with big branches, big repositories, and a not very clean git history.
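As a side note, the interactive editor can even be scripted, which makes it easy to experiment safely in a throw-away repository. Here's a self-contained sketch (hypothetical file and commit messages) that squashes two fix-up commits into the first one:

```shell
# Create a throw-away repository to play with interactive rebase.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo a > file.txt && git add file.txt && git commit -qm "feat: add file"
echo b >> file.txt && git commit -qam "fixup typo"
echo c >> file.txt && git commit -qam "fixup typo again"

# Script the rebase editor: keep the first commit, squash the rest into it.
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/squash/'" GIT_EDITOR=true \
  git rebase -i --root

git log --oneline
```

After the rebase, `git log --oneline` shows a single commit containing all three changes, which is exactly the "clean graph" effect described above.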
###Git interactive rebase example###
To do a git interactive rebase, we need to follow similar steps as with the normal one; type in your terminal:
```
git checkout feature-branch
git rebase -i your-main-branch(for example master or develop)
```

Next, you will see a list of commits, and for each of them you will be able to select one of these commands:
- pick, you will keep the commit and move it to the main branch
- reword, you will change the message of the commit
- edit, you will be able to edit the commit
- squash, commits with that command will be squashed into one
- fixup, similar to squash, but the commit's log message will be discarded
- drop, the commit will be removed

###Git interactive rebase pros###
- Same as the normal rebase
- A nice editor that makes it easier to manipulate every commit
- We can quickly clean up the mess in our repo's history

###Git interactive rebase cons###
- Similar to the normal rebase

###Git merge###
![Git merge example](https://dev-to-uploads.s3.amazonaws.com/i/oph1s0qar8g4uvybw7fa.png)

###How does git merge work###
Git merge is a method that takes all the content of our current branch and puts it into our target branch.

For example, we can merge our feature branch into our master branch.

In this case, git will create a new merge commit and will take all of the content (history, code, commits) from the feature branch and put all of that into our master branch.

###What is the difference between merge and rebase###
The main difference between merge and rebase is the fact that rebase creates a very clean and friendly git graph, while merge can generate something like a spaghetti graph.

###How to do git merge###
We can do it in two ways.

The first method is a combination of fetch and merge (git pull), mostly used to merge the main branch into your branch to solve conflicts locally before creating merge requests in apps like GitHub or Bitbucket.
**The pull one:**
```
git checkout feature-branch
git pull origin your-main-branch(for example master or develop)
```
**The merge:**
```
git checkout feature-branch
git merge your-main-branch(for example master or develop)
```

###Should I rebase before merge###
You can rebase to squash your commits and clean up your git flow a bit. Then you can do the merge and have a clean graph.

###Git merge pros###
- A high-speed method of joining branches
- Easy for everybody
- Very easy to revert when something goes wrong
###Git merge cons###
- Logs aren't clean
- The git graph and history aren't very clean
- Debugging with git tools like bisect can be more difficult

###Conclusion###
Congratulations, now you are the git rebase master!

I will not tell you which one is better, because every project is different. Some are big and run for years, some are under time pressure, some companies care a lot about quality, and some don't at all.
Anyway, you should now be able to recognize which one is better for your type of project, and what the pros, cons, and dangers are.

I hope I've explained the main points of these three methods of updating branches, and that you won't have problems using them in the future.

If you still have questions, or you'd like me to write an article about a topic that interests you, feel free to leave a comment!

<a href="https://www.duomly.com">![Programming courses online](https://dev-to-uploads.s3.amazonaws.com/i/psj47nfvtda80mfvnkwb.jpg)</a>

Thanks for reading,
Radek from Duomly
duomly
329,768
the #1 Ruby benchmarking tool you didn't know you need
Attention web developers working with lots of data - below are first class problems to consider: sl...
0
2020-05-09T16:50:41
https://dev.to/andy4thehuynh/the-1-ruby-benchmarking-tool-you-didn-t-know-you-need-153l
postgres, rails, ruby, heroku
Attention web developers working with lots of data - below are first-class problems to consider: - slow database queries - page timeouts When customer support barks at you to resolve a 500 error, what do you do? Initial instincts suggest the two problems are related, especially in applications with lots of user data. Here's step #1, your only dance move for Ruby benchmarking: `Benchmark#realtime` Here is an example from my day job at [Kajabi](https://kajabi.com): *CX gets assigned a customer support ticket. They log into Kajabi's super admin dashboard to search for the customer (a User). The page returns a 500 error and doesn't render.* ![page timeouts](https://media.giphy.com/media/wZmCr7odNxKP6/giphy.gif) You want to know how long it's taking to retrieve the user. That's the first step to ensure you're going down the **right rabbithole.** ```Ruby $ Benchmark.realtime { User.search("Sharon Jackson") } => 22.409309996990487 ``` The result is the elapsed time. Searching for a user cost 22 seconds! Yikes. Heroku has a threshold of 30 seconds before it fails to load a page. Postgres caches our first request so that subsequent requests read from a cached result, which ensures quicker response times. I went and recorded how long it took before the page timed out. Approximately 22 seconds. Down the rabbithole you go from there.
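For reference, `Benchmark.realtime` can be tried outside Rails too. A minimal sketch, where `slow_search` is a hypothetical stand-in for the slow query (not Kajabi's code):

```ruby
require 'benchmark'

# Hypothetical stand-in for a slow query such as User.search.
def slow_search
  sleep 0.05
  'result'
end

# Benchmark.realtime returns the wall-clock seconds the block took.
elapsed = Benchmark.realtime { slow_search }
puts format('search took %.2f seconds', elapsed)
```

Because it returns a plain float, you can log the number or compare it against a budget directly.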
andy4thehuynh
329,811
Looking at Pipedream's Event Sources
Before I begin, know that everything I'm discussing here is currently in beta form. It may, and will,...
0
2020-05-11T22:02:06
https://www.raymondcamden.com/2020/05/07/looking-at-pipedreams-event-sources
webdev, serverless, javascript
--- title: Looking at Pipedream's Event Sources published: true date: 2020-05-07 00:00:00 UTC tags: webdev,serverless,javascript canonical_url: https://www.raymondcamden.com/2020/05/07/looking-at-pipedreams-event-sources cover_image: https://static.raymondcamden.com/images/banners/water_source.jpg --- Before I begin, know that everything I'm discussing here is currently in beta form. It may, and will, change in the future so please keep that in mind if you are reading this in some post-Corona paradise where we can actually _do_ things out in public. The feature I'm talking about today is a really fascinating addition to [Pipedream](https://pipedream.com/) - Event Sources. Let me start off by explaining why this feature came about. Imagine you're building a workflow based on a RSS feed. RSS feeds contain a list of articles for a publication of some sort. Each item will contain a title, link, some content, and more properties. Let's say you want to send an email when a new item is added to the feed. Right now you would build this like so: - Setup a CRON trigger. Your schedule would depend on the type of feed. For my blog a once-a-day schedule would be fine. For something like CNN, maybe once every five minutes. - Parse the RSS feed. There's a RSS action that does this for you: ![RSS parser](https://static.raymondcamden.com/images/2020/05/es1.png) By the way, it may not be obvious, but that action actually supports _multiple_ feeds which is pretty bad ass. - Then take the items and email them. This is simple enough, but you've got a few problems. How do you know what's new? Luckily you don't have to worry about that, the RSS action Pipedream supplies uses the $checkpoint feature I [blogged](https://www.raymondcamden.com/2020/04/04/using-state-in-pipedream-workflows) about last month to remember this for you. Cool. So that's that. But this also assumes you're ok working with multiple items at once. 
You want one email with all the new items. The same applies to a Twitter search workflow. You want a packet of results. But what about a scenario where you want to process each item individually? Well ok, you work in a loop. For every item, do whatever. Again, for simple workflows that would be enough. But for anything complex, you may have trouble. Pipedream workflows don't support a "loop this step N times" type logic. I know they are considering conditional steps, but I'm not sure about looping. One solution would be to build a second workflow that takes a singular item in as input. You then have a two-workflow solution. The first one is responsible for gathering the data and creating a list (with optional filtering involved) and then it calls out to the second workflow which handles unique items. I used an approach like this here: [Building a Reddit Workflow with Pipedream](https://www.raymondcamden.com/2020/04/20/building-a-reddit-workflow-with-pipedream) So as I said, you have solutions, and that's good, but Event Sources really make this so much simpler. At a basic level, an event source is custom code you write to handle defining a custom workflow trigger event. By default, your workflows can be triggered by time (CRON), URL, email, or the REST API. Event Sources let you define _anything_ as a source for firing workflows. Imagine you wanted a workflow based on the full moon? Event sources would allow that. (Werewolves will love you.) A bit more realistically, what about a workflow that triggers on the first Monday of the month? That's not possible with CRON, but event sources would allow that as well. Event sources consist of a schedule and your code. The schedule determines how often it runs. For something like the full moon or "first Monday" example, once a day would make sense. The code is whatever your logic is. The "magic" part that makes it an event source then is that it simply emits data for every instance of an event. 
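To make the "first Monday of the month" idea concrete, here is my own small sketch (not Pipedream's actual source) of the check such a daily-scheduled source could run:

```javascript
// A given weekday's first occurrence in a month always falls on day 1-7,
// so a daily-scheduled source only needs these two checks before emitting.
function isFirstWeekdayOfMonth(date, targetDayOfWeek) {
  return date.getDay() === targetDayOfWeek && date.getDate() <= 7;
}

// Monday is day 1 in JavaScript's Date API.
console.log(isFirstWeekdayOfMonth(new Date(2020, 4, 4), 1));  // May 4, 2020: first Monday -> true
console.log(isFirstWeekdayOfMonth(new Date(2020, 4, 11), 1)); // second Monday -> false
```

When the check passes, the source would `$emit` its payload; on every other day it would simply emit nothing.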
You can find out more at the [docs](https://docs.pipedream.com/event-sources/), but let's look at an example. Imagine our RSS scenario. Given that we can parse RSS and know what's new, our RSS event source would then emit data for every item: ```js items.forEach(item=>{ this.$emit(item, { id: this.itemKey(item), summary: item.title, ts: item.pubdate && +new Date(item.pubdate), }) }) ``` Here's another snippet for an event source that fires on the first X of the month: ```js const currentDay = new Date().getDay(); // In UTC if (currentDay === parseInt(this.targetDayOfWeek)) { this.$emit({ dayOfWeek: this.targetDayOfWeek, },{ summary: "First target day of the month" }); } ``` So how do you use it? When you create a new workflow you can now select from Event Sources as a source: ![List of sources](https://static.raymondcamden.com/images/2020/05/es2.png) In the screenshot above you'll see a number of items below SDK. Those are all _previous_ event sources I've used. When you add a new event source, you configure it and name it, and it makes sense that you may want to use them again. If you click on Event Source, you then get a list of available sources. (Note that you can add a 100% customized one using the CLI. Also note that you can edit the code of an event source.) ![List of event sources](https://static.raymondcamden.com/images/2020/05/es3.png) Once you select it, you can then set up the parameters. Each event source will be different. ![Configured source](https://static.raymondcamden.com/images/2020/05/es4a.png) In this case I used Pipedream's blog's RSS feed. At the bottom (not shown on the screen shot above) is a Create Source button. After doing so, your event source is configured and ready to be used in your workflow: ![New configured ES](https://static.raymondcamden.com/images/2020/05/es9.png) Well almost. By default event sources are turned off. See the little toggle on the right. 
I believe they do this for cases where you may want to set up your workflow first before it starts firing off events. Just don't forget. Event sources have their own administration panel at Pipedream. You can view them at [https://pipedream.com/sources/](https://pipedream.com/sources/). ![ES Editor](https://static.raymondcamden.com/images/2020/05/es5.png) For each event source you see a history of past events, logs, and configuration. You can also modify the code which is pretty cool. When I was playing around with this feature earlier this week, I needed to slightly modify the RSS event source and it took all of two minutes. This is an incredibly powerful addition to Pipedream. All of a sudden you have workflows based on any custom logic. Currently they've got event sources for Airtable, FaunaDB, Google Calendar, and more. If you go to the Event Sources "admin" page, [https://pipedream.com/sources](https://pipedream.com/sources) and click +, you can browse them. Also, Pipedream built a page specifically for [RSS-based](https://rss.pipedream.com/) workflows that will give you some great examples. I've got a demo I've already built on this I'll be blogging about later this week. As always, I'm curious to know if any of my readers are playing with this, so let me know in a comment below if you've checked this out yet. _Header photo by [Arseny Toguley](https://unsplash.com/@tetrakiss?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on Unsplash_
raymondcamden
329,817
Current development trends in software engineering
Starting from some basic insights, it is important to know in which age group our respondents belong: 35% of developers worldwide are between 25 and 34 years old. The second largest demographic – almost 28%- is the young developers, aged 18 to 24 years old.
0
2020-05-07T17:06:44
https://dev.to/stateofdevnation/current-development-trends-in-software-engineering-3j24
developertrends, programminglanguages, frontendframeworks, developerpopulation
--- title: Current development trends in software engineering published: True description: Starting from some basic insights, it is important to know in which age group our respondents belong: 35% of developers worldwide are between 25 and 34 years old. The second largest demographic – almost 28% – is the young developers, aged 18 to 24 years old. tags: developer trends, programming languages, front-end frameworks, developer population Cover image: https://dev-to-uploads.s3.amazonaws.com/i/k1uaqywmi5x17yacyifo.jpg --- Every year we conduct two global, independent developer surveys engaging more than 30,000 developers. We track development trends across platforms, revenues, apps, tools, languages, etc. The 18th Developer Economics survey ran from November 2019 to February 2020 with more than 17,000 developers and tech-makers participating, allowing us to analyze and understand development trends in major areas such as mobile, cloud, desktop, IoT, web, augmented and virtual reality, machine learning and games. **It’s no secret that we are data-enthusiasts. Data is in our DNA.** After each survey wave, we transform these data into graphs and insights and offer part of them as resources to our developer community. Our methodology is founded on 9 essential and non-negotiable qualities: magnitude, impartiality, inclusivity, consistency, substance, engagement, diligence, confidence and breadth. See more on how our [methodology](https://www.slashdata.co/methodology/) allows us to understand and profile developers. Our goal is not only to help the world understand developers but also to add value to all the developers out there, by offering them the necessary insights to benchmark themselves and make smarter business decisions based on current development trends. So let’s have a look at what our developers are saying, shall we? 
Starting from some basic insights, it is important to know in which age group our respondents belong: 35% of developers worldwide are between 25 and 34 years old. The second largest demographic – almost 28% – **is the young developers, aged 18 to 24 years old.** ##What age group are you in?## ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/tism3ql5vs0dx1ub8y3y.png) **Just over half of our respondents reported having less than 5 years of coding experience.** As our research covers both professionals and amateurs such as hobbyists and students, the experience mix makes perfect sense and is representative of the coding skills of the global developer population. We find that the young and relatively inexperienced are the first to jump into emerging sectors drawn by the hype, and they play a key role in their evolution. ##How many years have you been working on software projects?## ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/hc7ionft3n11kj5qtsod.png) Focusing on programming language preferences of mobile and backend developers, we find that Java is the third option for backend developers, while being the most popular choice of mobile developers. The first choice of backend developers is instead JavaScript, with over half using it for cloud development. ##Which programming languages do you use to write code that runs on the device in your mobile apps?## ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/j7bj1h94sg5758xoskwd.png) ##Which programming languages do you use to write code that runs on the server?## ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/s81sfuobaxk2dk8vw7d3.png) When it comes to **front-end frameworks or libraries for web applications** most programmers use jQuery (49.7%) and Bootstrap (48%). Other frameworks our respondents stated they’re using are React (42.9%), Vue (28%) and Angular (2+) (25.2%). What about trends in augmented and virtual reality (AR/VR)? Almost half of the developers working on AR/VR use C#. 
Moreover, as is typical of a still-emerging sector, **almost 60% of respondents said they are hobbyists in this field.** Last but not least **game development**. Developers mostly prefer to create adventure and action game apps with 44% of respondents choosing each of these. 36% create Arcade games while almost 23% choose Role Playing or Strategy games. ##Which categories do your games fit in?## ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/sxl6rkejdc45gizx8e9g.png) For more insights from our latest survey, you can check out the Developer Economics [graphs dashboard](https://www.developereconomics.com/resources/graphs/). It’s also a great opportunity to benchmark yourself against the global average. Enjoy! Looking for a more thorough report analysing the developer population and trends? Download our next State of the Developers Nation report 18th Edition. You will find it [here](https://www.developereconomics.com/resources/reports/state-of-the-developer-nation-q4-2019/).
developernationsurvey
329,920
My first post as a Storyblok ambassador
Storyblok @storyblok...
0
2020-05-07T20:33:52
https://www.storyblok.com/tp/tailwindcss-express-js-amp-sites
amp, tailwindcss, node, storyblok
{% tweet 1258410263021342720 %} 🎉 I'm officially an ambassador at Storyblok 🎉 I wanted to share it with you. I'm very excited: it's the technology that has allowed me to create [my blog](https://www.dawntraoz.com/blog/), and being part of it is a dream come true 😍 In this article, I show you how to build a valid AMP layout for your Express.js app using Storyblok stories. Also, I show you how to add TailwindCSS in AMP and keep it valid. Check it out at [How to use TailwindCSS, Express.js and Storyblok for AMP powered websites](https://www.storyblok.com/tp/tailwindcss-express-js-amp-sites). I hope you like it 🥰 Any feedback you have will be welcome, and don't worry, I'm still working on the dashboard with TailwindCSS, news soon! 🦾 Thank you for reading me 💜
dawntraoz
329,991
How to handle Database in Spring Boot
Core data for Spring Boot with Database. Please you may use more source in there link. This provide...
0
2020-05-08T01:23:29
https://dev.to/urunov/how-to-handle-database-in-spring-boot-560
security, database, serverless, spring
Core data for Spring Boot with a database. You may find more source code at this [link](https://github.com/Urunov/SpringBoot-Database). This article covers database implementation in Spring Boot. Specifically, we'll briefly introduce the concepts of Spring, Spring Boot, JDBC, JPA, and H2, and the results of our example project (linked on GitHub). # Spring The Spring Framework is an application framework and inversion-of-control container for the Java platform. Spring supports HA (high availability). # Spring Boot Most Spring Boot applications need minimal Spring configuration. Features: create stand-alone Spring applications. # JPA JPA (the Java Persistence API) deals with enhanced support for JPA-based data access layers. Indeed, JPA is a set of rules and interfaces. * JPA follows Object-Relational Mapping (ORM). It is a set of interfaces. It also provides a runtime EntityManager API for processing queries and transactions on the objects against the database. It uses a platform-independent object-oriented query language, JPQL (Java Persistence Query Language). Why should we use JPA? JPA is simpler, cleaner, and less labor-intensive than JDBC, SQL, and hand-written mapping. JPA is suitable for non-performance-oriented complex applications. An API (application programming interface) is a document that contains a description of all the features of a product or software. It represents classes and interfaces that software programs can follow to communicate with each other. An API can be created for applications, libraries, operating systems, etc. JPA Implementations JPA is an open-source API. Various enterprise vendors such as Eclipse, RedHat, Oracle, etc. provide products that implement JPA. There are some popular JPA implementation frameworks such as Hibernate, EclipseLink, DataNucleus, etc. Such a framework is also known as an Object-Relational Mapping (ORM) tool. 
# JDBC Java Database Connectivity (JDBC) is an application programming interface (API) for the Java programming language, which defines how a client may access a database. Architecture of the project implementation: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/glx02kwoyoeq3k9ck73r.JPG) # H2 * H2 is a relational database management system written in Java. It can be embedded in Java applications or run in client-server mode. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/noa3b0p9djq4o7zmy9a6.JPG) # Spring Boot + JPA + Hibernate + MySQL|Oracle Real project configuration: [Source](https://github.com/Urunov/SpringBoot-Database). Create a Maven Project You may use your preferred tool (Eclipse, IntelliJ IDEA, etc.); in this example we used IntelliJ. Create a new Maven project and name it SpringJDBC. At the end of this tutorial, we’ll get the following project structure: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/267vzg8rks9t9t10i1wg.JPG) ## POM.XML ```xml <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.5.10.RELEASE</version> </parent> ``` * Configuration DB ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/fm5xz2ec634dc116mufh.jpg) * Repositories We define our repositories' interfaces under spring.jdbc.dao. Each repository extends Spring's CrudRepository, which provides a default implementation for the basic find, save, and delete methods, so we don't need to define implementation classes for them.
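To illustrate the idea behind `CrudRepository`, here is a hand-rolled, in-memory sketch (for illustration only; the interface and class names here are my own, not Spring's actual API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hand-rolled sketch of the CrudRepository idea: generic over entity
// type T and id type ID, exposing the basic save/find/delete operations.
interface CrudRepo<T, ID> {
    T save(ID id, T entity);
    Optional<T> findById(ID id);
    void deleteById(ID id);
}

// A trivial in-memory implementation; Spring generates something
// analogous (backed by the database) for CrudRepository interfaces.
class InMemoryRepo<T, ID> implements CrudRepo<T, ID> {
    private final Map<ID, T> store = new HashMap<>();

    public T save(ID id, T entity) { store.put(id, entity); return entity; }
    public Optional<T> findById(ID id) { return Optional.ofNullable(store.get(id)); }
    public void deleteById(ID id) { store.remove(id); }
}

class RepoDemo {
    public static void main(String[] args) {
        CrudRepo<String, Integer> users = new InMemoryRepo<>();
        users.save(1, "Alice");
        System.out.println(users.findById(1).orElse("missing"));
    }
}
```

With real Spring Data JPA, you only declare the interface and Spring supplies the implementation at runtime; this sketch just shows the contract your code programs against.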
urunov
330,011
Add Serverless Functions to Any Static Site
Quickly add a serverless backend to any static site, including React, Vue, or other SPA static sites.
0
2020-05-08T02:34:03
https://swank.dev/blog/add-serverless-functions/
netlify, serverless, node
--- title: Add Serverless Functions to Any Static Site published: true description: Quickly add a serverless backend to any static site, including React, Vue, or other SPA static sites. tags: netlify, serverless, node --- Adding just a bit of backend functionality to your Netlify-hosted static site is a perfect use-case for serverless functions. Let's get up and running! ## Why? Whether you want to keep a third-party or proprietary API key or secret from being shipped to the browser, or you just need a little server-side functionality, a serverless function can bridge the gap. ## Prepare Your Project **First, we need to make sure our project is hosted on Netlify.** Let's connect our project to Netlify and get set up using [Netlify Dev](https://www.netlify.com/products/dev/), which will allow us to test our functions locally: 1. Create a Netlify account if you don't have one already. 2. Ensure you have the Netlify CLI installed locally. You can do this by running `npm i -g netlify-cli`. If you run into a permissions issue, check out the [NPM docs](https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally) on the issue. 3. Authenticate with Netlify by running `netlify login`. 4. Initialize your Netlify project by running `netlify init`. This will create a site on Netlify and associate your project with that new site. ### Configure a Functions Directory **Now that we're set up with a Netlify project, we need to tell Netlify where to find our functions.** 1. Create a new directory at the root of your project. I typically name this directory something like `/api`. 2. 
Create a config file to tell Netlify where to look for your functions: ```toml # netlify.toml [dev] functions = "api" ``` ### Create a Function **Now that Netlify knows where to look for our functions, we can write our first one!** Create a new file in the `/api` directory: ```js // testy.js exports.handler = async (event, context) => { return { statusCode: 200, body: JSON.stringify({ message: 'yup, it works' }) } } ``` ### Test Locally Using Netlify Dev **With our function created, let's make sure it works!** 1. Start your dev server by running `netlify dev`. You may need to [choose or configure](https://github.com/netlify/cli/blob/master/docs/netlify-dev.md#netlifytoml-dev-block) a start command. 2. Visit [http://localhost:8888/.netlify/functions/testy](http://localhost:8888/.netlify/functions/testy) ### Deploy If your local function is working correctly, go ahead and deploy it to Netlify with `netlify deploy`! --- Thanks for reading! Need some help? Feel free to [reach out](https://twitter.com/briansw).
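Since a Netlify function is just an exported async handler, you can also sanity-check it in plain Node before running `netlify dev`. A minimal sketch (Netlify's real runtime passes richer `event`/`context` objects than the empty ones used here):

```javascript
// Same handler shape as testy.js above, invoked directly in Node.
const handler = async (event, context) => ({
  statusCode: 200,
  body: JSON.stringify({ message: 'yup, it works' })
});

// Call it like the platform would and inspect the response.
handler({}, {}).then(res => {
  console.log(res.statusCode, JSON.parse(res.body).message);
});
```

This is handy for quick unit tests of your function logic without spinning up the dev server at all.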
briansw
330,030
HTTP 🤩🤩 !!!! Flutter
Flutter is meant to create UI/Client oriented stack, but without a backend we cannot deliver a full f...
0
2020-05-08T03:50:47
https://dev.to/prakashselvaraj/http-flutter-1dp2
flutter, flutterdev
Flutter is meant for creating the UI/client side of the stack, but without a backend we cannot deliver full-fledged business functionality to the user. In this post we are going to see how we can consume HTTP services. Let's see it in action 😇 Our first step would be to add the package [HTTP](https://pub.dev/packages/http) to the *pubspec.yaml* file. We are going to see the four major methods of HTTP (GET, PUT, POST, DELETE) using the open API [JsonPlaceholder](https://jsonplaceholder.typicode.com). The ❤️🥰 http ❤️🥰 package does all the work for us; we just need to use it as a REST client, like Postman. ###Get🧐 ``` dart import 'dart:convert'; import 'package:http/http.dart'; import 'models/post.dart'; Future<List<Post>> getPosts() async { Client client = Client(); try { var response = await client.get('https://jsonplaceholder.typicode.com/posts'); List posts = jsonDecode(response.body); return posts.map((post) => Post.fromJson(post)).toList(); } finally { client.close(); } } ``` ###POST😮 ``` dart Future<Post> newPost(Post editedPost) async { Client client = Client(); try { String url = 'https://jsonplaceholder.typicode.com/posts/'; var body = jsonEncode(editedPost.toJson()); var response = await client.post( url, body: body, headers: <String, String>{ 'Content-Type': 'application/json; charset=UTF-8', }, ); var post = jsonDecode(response.body); return Post.fromJson(post); } finally { client.close(); } } ``` ###PUT🙄 ``` dart Future<Post> editPost(Post editedPost) async { Client client = Client(); try { String url = 'https://jsonplaceholder.typicode.com/posts/${editedPost.id}'; var body = jsonEncode(editedPost.toJson()); var response = await client.put( url, body: body, headers: <String, String>{ 'Content-Type': 'application/json; charset=UTF-8', }, ); var post = jsonDecode(response.body); return Post.fromJson(post); } finally { client.close(); } } ``` ###DELETE🥺 ``` dart Future deletePost(int id) async { Client client = Client(); try { String url = 
'https://jsonplaceholder.typicode.com/posts/$id'; await client.delete(url); print('post deleted successfully'); } finally { client.close(); } } ``` For the full sample, see the [GitHub repo](https://github.com/Prakash-Selvaraj-Ash/prakash.selvaraj-outlook.com/tree/master/http_tuto). Happy Fluttering 😇😇
prakashselvaraj
330,051
Want to develop real time app
Hey guys, this is kind of a newbie question. So right now, I'm not too much experienced with MERN (Mo...
0
2020-05-08T04:35:00
https://dev.to/andrykwiatow/want-to-develop-real-time-app-5078
help, react, node, mongodb
Hey guys, this is kind of a newbie question. So right now, I'm not very experienced with MERN (Mongo, Express, React, NodeJS) development... I mean I've made some projects but that's it. I want to develop a real-time app and I suggested MERN as the solution for this app; even though I don't know too much about it, I thought it would be much easier to get started with. My questions are: Do you guys think it will be harder for me to develop this app with very little knowledge? And what topics should I learn more about before diving into it? Thanks a lot
andrykwiatow
330,184
JavaScript .flatMap()
In my previous post, I wrote about Celebrating JavaScript .flat() and how to flatten arrays, giving a...
0
2020-05-16T04:58:35
https://dev.to/katkelly/javascript-flatmap-2gi7
beginners, codenewbie
In my previous post, I wrote about [Celebrating JavaScript .flat()](***LINK HERE***) and how to flatten arrays, giving a lot of love to `flat()`. I naturally wanted to follow it up with a post about `flatMap()` and look at how it works and what it does. ###flatMap() The `flatMap()` method is a super merger of `flat()` and `map()`. *Although based on the order of operations maybe it should be called `mapFlat()` 🤔.* **`flatMap()` goes through each element using a mapping function first before the returned result is flattened into a new array of depth one.** It's just like using `map()` followed by `flat()` with a depth of 1, but is slightly more efficient (an excellent 2-for-1 method). **As `flatMap()` only flattens 1 level, if you need to flatten beyond 1 level, you can separately call `map()` then `flat()` on your array.** ####Syntax ```javascript arr.flatMap(function callback(currentValue[, index[, array]]) { // do work and return element for newArray }) ``` The callback function for `flatMap()` takes three arguments: * currentValue: the current element being processed * index (optional): the index of the currentValue * array (optional): the array `flatMap()` was called upon ```javascript const arr = ['take me out', '', 'to the ball', 'game']; arr.flatMap(a => a.split(' ')); // ["take", "me", "out", "", "to", "the", "ball", "game"] ``` Using `flatMap()` is useful when you want to add and remove items during a `map()`, as it can map many input items to many output items by handling each input item separately, whereas `map()` is always one-to-one. This means the resulting array can grow during the mapping, and it will be flattened afterward. [MDN web docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/flatMap) has a great example below that highlights how `flatMap()` works to add and remove elements. 
```javascript // Let's say we want to remove all the negative numbers and // split the odd numbers into an even number and a 1 let a = [5, 4, -3, 20, 17, -33, -4, 18] // |\ \ x | | \ x x | // *[4,1, 4, [], 20, 16, 1, [], [], 18] // *this line helps to visualize what the array will look // like during mapping, with negative numbers represented as empty [] a.flatMap( (n) => (n < 0) ? [] : (n % 2 == 0) ? [n] : [n-1, 1] ) // [4, 1, 4, 20, 16, 1, 18] ``` `flatMap()` is yet another useful addition to the JavaScript Array toolbox and I’ll be using it when I need it going forward. Happy coding! *Resources* [Array.prototype.flatMap()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/flatMap)
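As a quick check of the map-then-flat equivalence mentioned earlier, the two forms produce identical arrays:

```javascript
const words = ['take me out', '', 'to the ball', 'game'];

// flatMap in one pass...
const viaFlatMap = words.flatMap(w => w.split(' '));

// ...is equivalent to map followed by flat with a depth of 1.
const viaMapFlat = words.map(w => w.split(' ')).flat();

console.log(JSON.stringify(viaFlatMap) === JSON.stringify(viaMapFlat)); // true
```

The only difference is that `flatMap()` avoids building the intermediate array of arrays.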
katkelly
330,843
Code Tryouts - Natural Selection
Hey, In the last week I wrote simulation in Unity3d, In this simulation we have animals that can walk...
0
2020-05-09T14:28:16
https://dev.to/eranelbaz/code-tryouts-natural-selection-1ap1
simulation, life, gene, animals
Hey, This last week I wrote a simulation in Unity3D. In this simulation we have animals that can walk around the map searching for food, water, and another animal to mate with. Each animal currently has 2 genes - Speed and Search Radius. When a new animal is born it can inherit each gene from one of the parents, and in 80% of the cases the gene will also change a bit. I let the simulation run twice, each time for 5 minutes, and the results are ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/vjumgeciaw6bgynhz3oa.png) ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/0xhv08xslhsjrnj68c6v.png) ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/2zsohcz3ns407fp0ozyo.png) ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/1i9895ba8isdpr4pc358.png) ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/fxfhrdc4wntztk2z2r4v.png) ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/c437h8epibgfhwa2gzhy.png) What shocked me was that in the second run the search radius didn't rise the whole time as in the first run. I think the reason is that the population exploded, so there are more animals that are probably closer to water and food. I think I'll revisit this project in the future, add predators, and see what happens. For more information you can watch the development process on YouTube: {% youtube 7q5YEbVfQKA %} Click [here](https://github.com/eranelbaz/naturalselection) to visit this project's GitHub page Click [here](https://drive.google.com/open?id=12Aur4OPsm_lEYYqKjvRAcQMCfYSitHnz) to download the run data Have a great day :yum: > Like this post? > Support me via [Patreon](https://www.patreon.com/eranelbaz) > Subscribe to my [YouTube Channel](https://www.youtube.com/channel/UCVUNeBGM5wZJKcOx0QwAaTA?sub_confirmation=1)
eranelbaz
330,233
Vue.js + GitHub + Tailwind Css
I implemented this small application that generates a card with Tailwind CSS based on data from the G...
0
2020-05-08T10:41:58
https://dev.to/daviducolo/vue-js-github-tailwind-css-1iho
showdev, vue, github, todayilearned
I implemented this small application that generates a card with Tailwind CSS based on data from the GitHub REST API V3 using the Vue.js framework. {% github davidesantangelo/github-vue-card %} deployed on netlify at https://github-vue-card.netlify.app
daviducolo
330,293
When do you write CSS and when CSS/CSS3 or CSS3?
A post by pandaquests
0
2020-05-08T12:03:22
https://dev.to/pandaquests/when-do-you-write-css-and-when-css-css3-or-css3-3lj9
css, career, webdev, web
pandaquests
330,308
Let's code: graphs in java
I have been obsessed with graphs for a few years now. This obsession has resulted in three projects:...
0
2020-06-18T01:47:45
https://dev.to/moaxcp/lets-code-graphs-in-java-1kem
java, tdd, tutorial, graphs
I have been obsessed with graphs for a few years now. This obsession has resulted in three projects: [graph-dsl](https://github.com/moaxcp/graph-dsl), [graphs](https://github.com/moaxcp/graphs), and [graph-wm](https://github.com/moaxcp/graph-wm). Graphs are excellent data structures that have helped me in a few situations at work. In this tutorial we will develop a mutable DirectedGraph class in Java using Test Driven Development (TDD). This tutorial starts each step with a goal. The goal is to either write a failing test, refactor existing code, or write production code. With each step you will get closer to representing a directed graph with a full suite of tests proving it works. These steps will help you practice TDD and provide insight into how TDD can help you write better code. Before we start coding here is an introduction to graphs and the definitions used to design the code. # Graph <a name="graph"></a> `Graph` is not always well defined in programming languages. There are different implementations in java used for different situations. It is difficult for me to start coding a graph without some plan. First we need to put some concepts in place. A graph encapsulates vertices and edges. A vertex is a value stored in the graph while an edge is a connection between those values. This implementation will contain a set of vertices and a set of edges. Using a `Set` implies uniqueness. Only unique vertices and edges are allowed in the graph. These objects must correctly implement `equals` and `hashCode`. Identity for edges can be tricky. It depends on if the graph is directed or undirected. In an undirected graph only one edge can be between any two vertices. In a directed graph one or two edges can be between any two vertices. If there are two edges between two vertices those edges must be in opposite directions. These properties will affect the implementation of equals and hashCode for edges. 
At this point we have enough information to start coding but I want to explain TDD. # TDD Here are the three laws of TDD (from Clean Coder by Robert C. Martin): >1. You are not allowed to write any production code until you have first written a failing unit test. >2. You are not allowed to write more of a unit test than is sufficient to fail -- and not compiling is failing. >3. You are not allowed to write more production code than is sufficient to pass the currently failing unit test. These laws imply a workflow which consists of: 1. write a test that fails 2. write small production changes until all tests pass 3. refactor production code 4. refactor test code **refactor** 🔄 change code in a way that does not fail tests and does not change the API or functionality. To help facilitate TDD test status will be shown as a ❌ when failing and a ✔️ when passing. The TDD workflow quickly switches between writing tests and writing production code. In these steps there is very little refactoring but a future tutorial may introduce refactoring. Let's get started. # Step 1 <a name="step-1"></a> Like many tutorials this one is broken down into steps. Steps can be linked to using the form "[step-1](#step-1)" for example. ``` [step-1](#step-1) ``` The goal in using steps is to help discussion in the comments. The `DirectedGraph` class is production code. The class should not be written until there is a failing test according to [law #1](#TDD). Here is that test. ```java public class DirectedGraphTest { @Test void constructor() { DirectedGraph graph = new DirectedGraph(); } } ``` **Test Status:** ❌ failing, write production code # Step 2 <a name="step-2"></a> **Workflow:** The criterion in law #1 is met and now the workflow changes to add production code. A failure to compile is considered a test failure according to law #2 and so no more unit test code should be written. **Action:** Create the `DirectedGraph` class.
Most IDEs are helpful in situations where a class, method, or variable is missing. I am using IntelliJ to create the class. ```java public class DirectedGraph { } ``` **Test Status:** ✔️ passing, write failing test # Step 3 <a name="step-3"></a> **Workflow:** According to [law #3](#TDD) we should not write more production code. The only valid option is to follow [law #1](#TDD) and write a failing unit test. **Test Design:** Back to the first test, it is obviously incomplete. The purpose of a constructor is to initialize the object. We have already discussed that `DirectedGraph` will have two `Set`s. This test should show that the `graph` is initialized by checking that these two `Set`s are empty. **Action:** Add failing check for vertices. ```java @Test void constructor() { DirectedGraph graph = new DirectedGraph(); assertThat(graph.getVertices()).isEmpty(); } ``` **Test Status:** ❌ failing, write production code # Step 4 <a name="step-4"></a> **Workflow:** The `getVertices()` method does not compile and now the test is failing ([law #2](#TDD)) and production code can be added ([law #1](#TDD)). **Action:** The method can be added using the IDE but IntelliJ does not know the return type to use. Here is the code. ```java public Set<T> getVertices() { } ``` **Production Design:** In this case the IDE does not know how to design this method and I added the return value as a generic. The test is driving that these design decisions need to be made now. I am deciding that vertices must be a member variable and returned by `getVertices()`. I am also deciding that vertices are generic. Making vertices generic also implies that `DirectedGraph` is now generic. ```java public class DirectedGraph<T> { public Set<T> getVertices() { return null; } } ``` **Test Status:** ❌ failing, write production code # Step 5 <a name="step-5"></a> **Action:** The test will still fail but it does compile. At this point we need to return an empty Set for the test to pass.
```java public Set<T> getVertices() { return new HashSet<>(); } ``` **Test Status:** ✔️ passing, write failing test Returning a new `Set` seems like an odd thing to do but I believe it is in line with the laws of TDD. Vertices will likely not become a member variable until a failing test shows it is needed. # Step 6 <a name="step-6"></a> **Refactor Test:** 🔄 The test shows some warnings about the use of raw types. This can be fixed by adding a type parameter. ```java @Test void constructor() { DirectedGraph<String> graph = new DirectedGraph<>(); assertThat(graph.getVertices()).isEmpty(); } ``` **Test Status:** ✔️ passing, write failing test # Step 7 <a name="step-7"></a> **Workflow:** Now the test is passing and another failing unit test is needed (law #1). Remember the current goal is to check the constructor. There is a test that checks vertices but what about edges? I want to add an assertion that looks something like this: ```java assertThat(graph.getEdges()).isEmpty() ``` **Design:** But what is an edge? I have not decided how edges are represented in a graph. At this point I need a failing test but it doesn't have to be the constructor test. The context needs to switch to a focus on designing an edge. **DirectedEdge class** A `DirectedEdge` class should contain the **source** vertex and the **target** vertex of the edge. These are the **endpoints** of the edge. `DirectedEdge` must also use a generic type so the graph can specify different vertex types when creating an edge. Since edges will be in a `Set` they must implement `equals` and `hashCode`. I am also deciding that `DirectedEdge` should be an immutable value class which means **source** and **target** will be required constructor parameters and the class is final. This is enough information to create tests.
```java public class DirectedEdgeTest { @Test void constructor() { DirectedEdge<String> edge = new DirectedEdge<>(); } } ``` **Test Status:** ❌ failing, write production code # Step 8 <a name="step-8"></a> As was the case with `DirectedGraphTest`, the `DirectedEdge` class may be generated in an IDE. ```java public final class DirectedEdge<T> { } ``` **Test Status:** ✔️ passing, write failing test # Step 9 <a name="step-9"></a> **Action:** The test now passes but is the constructor correct? Let's add verification of the source vertex. ```java @Test void constructor() { DirectedEdge<String> edge = new DirectedEdge<>(); assertThat(edge.getSource()).isEqualTo("A"); } ``` **Test Status:** ❌ failing, write production code # Step 10 <a name="step-10"></a> The method `getSource()` does not exist but also notice that the test is expecting it to be equal to "A". First `getSource()` can be added to `DirectedEdge`. ```java public T getSource() { return null; } ``` The test will compile but still fail expecting the source to be "A". ```java public T getSource() { return (T) "A"; } ``` **Test Status:** ✔️ passing, write failing test # Step 11 <a name="step-11"></a> **Action:** Add verification for target vertex. ```java @Test void constructor() { DirectedEdge<String> edge = new DirectedEdge<>(); assertThat(edge.getSource()).isEqualTo("A"); assertThat(edge.getTarget()).isEqualTo("B"); } ``` **Test Status:** ❌ failing, write production code # Step 12 <a name="step-12"></a> **Action:** Add `getTarget()` method to `DirectedEdge`. ```java public T getTarget() { return (T) "B"; } ``` **Test Status:** ✔️ passing, write failing test # Step 13 <a name="step-13"></a> **Action:** Replace default constructor with required args constructor.
```java @Test void constructor() { DirectedEdge<String> edge = new DirectedEdge<>("A", "B"); assertThat(edge.getSource()).isEqualTo("A"); assertThat(edge.getTarget()).isEqualTo("B"); } ``` **Test Status:** ❌ failing, write production code # Step 14 <a name="step-14"></a> **Action:** Add required args constructor (removing default constructor). ```java public final class DirectedEdge<T> { public DirectedEdge(T source, T target) { } public T getSource() { return (T) "A"; } public T getTarget() { return (T) "B"; } } ``` **Test Status:** ✔️ passing, write failing test # Step 15 <a name="step-15"></a> **Action:** Add new constructor test with different source and target. ```java @Test void constructor_AB() { DirectedEdge<String> edge = new DirectedEdge<>("A", "B"); assertThat(edge.getSource()).isEqualTo("A"); assertThat(edge.getTarget()).isEqualTo("B"); } @Test void constructor_XY() { DirectedEdge<String> edge = new DirectedEdge<>("X", "Y"); assertThat(edge.getSource()).isEqualTo("X"); assertThat(edge.getTarget()).isEqualTo("Y"); } ``` **Test Status:** ❌ failing, write production code # Step 16 <a name="step-16"></a> **Action:** Add member variables and assign in constructor. ```java public final class DirectedEdge<T> { private final T source; private final T target; public DirectedEdge(T source, T target) { this.source = source; this.target = target; } public T getSource() { return (T) "A"; } public T getTarget() { return (T) "B"; } } ``` **Test Status:** ❌ failing, write production code # Step 17 <a name="step-17"></a> **Action:** Return member variables from getters. ```java public T getSource() { return source; } public T getTarget() { return target; } ``` **Test Status:** ✔️ passing, write failing test # Step 18 <a name="step-18"></a> **Action:** Add null checks. It doesn't make sense to allow a null `source` or `target`. These parameters should be checked in the constructor. A nice way to do this is with `Objects.requireNonNull`.
```java @Test void constructor_fails_on_null_source() { assertThatNullPointerException().isThrownBy(() -> new DirectedEdge<>(null, "B")); } ``` **Test Status:** ❌ failing, write production code # Step 19 <a name="step-19"></a> **Action:** Write null checks. ```java public DirectedEdge(T source, T target) { this.source = requireNonNull(source); this.target = target; } ``` **Test Status:** ✔️ passing, write failing test # Step 20 <a name="step-20"></a> **Action:** Add check for exception message. Checking for the exception is OK but there is no real way to prove that the exception is caused by a null `source`. One way to show this is to check the message for the `NullPointerException`. ```java assertThatNullPointerException() .isThrownBy(() -> new DirectedEdge<>(null, "B")) .withMessage("source must not be null"); ``` **Test Status:** ❌ failing, write production code # Step 21 <a name="step-21"></a> **Action:** Add message to NPE. `requireNonNull` has another form that allows a message to be passed to the NPE. ```java this.source = requireNonNull(source, "source must not be null"); ``` **Test Status:** ✔️ passing, write failing test # Step 22 <a name="step-22"></a> **Action:** Perform same test for `target`. ```java @Test void constructor_fails_on_null_target() { assertThatNullPointerException() .isThrownBy(() -> new DirectedEdge<>("A", null)) .withMessage("target must not be null"); } ``` **Test Status:** ❌ failing, write production code # Step 23 <a name="step-23"></a> **Action:** Add check for `target` in constructor. ```java public DirectedEdge(T source, T target) { this.source = requireNonNull(source, "source must not be null"); this.target = requireNonNull(target, "target must not be null"); } ``` **Test Status:** ✔️ passing, write failing test # Step 24 <a name="step-24"></a> **Action:** Test `equals` and `hashCode`. Since edges are stored in a `HashSet` it makes sense to implement `equals` and `hashCode`.
Testing the `equals` and `hashCode` contract is difficult even in this simple case. There are two member variables, `source` and `target`. Instead of writing all of these tests there is a tool available which can do these tests for us. This will be done using [equals-verifier](https://jqno.nl/equalsverifier/). ```java @Test void equalsContract() { EqualsVerifier.forClass(DirectedEdge.class).verify(); } ``` **Test Status:** ❌ failing, write production code # Step 25 <a name="step-25"></a> **Action:** Write `equals` and `hashCode` methods using the IDE. To generate this code in IntelliJ I used the Java 7+ format, and source and target are non-null. ```java @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; DirectedEdge<?> that = (DirectedEdge<?>) o; return source.equals(that.source) && target.equals(that.target); } @Override public int hashCode() { return Objects.hash(source, target); } ``` **Test Status:** ❌ failing, write production code The `equalsContract` test will still fail since `source` and `target` are non-null. **Action:** Suppress these warnings. ```java EqualsVerifier.forClass(DirectedEdge.class).suppress(Warning.NULL_FIELDS).verify(); ``` **Test Status:** ✔️ passing, write failing test # Step 26 <a name="step-26"></a> **Action:** Add check for edges in `DirectedGraph` constructor. Now that there is a `DirectedEdge` class, the edges in the `DirectedGraph` constructor test can be checked.
```java @Test void constructor() { DirectedGraph<String> graph = new DirectedGraph<>(); assertThat(graph.getVertices()).isEmpty(); assertThat(graph.getEdges()).isEmpty(); } ``` **Test Status:** ❌ failing, write production code # Step 27 <a name="step-27"></a> **Action:** Add `getEdges` method. ```java public Set<DirectedEdge<T>> getEdges() { return new HashSet<>(); } ``` **Test Status:** ✔️ passing, write failing test # Step 28 <a name="step-28"></a> In this step we are testing that vertices can be added to the graph. **Action:** Make test for adding vertices. ```java @Test void vertex_adds_new_vertex() { DirectedGraph<String> graph = new DirectedGraph<>(); graph.vertex("A"); } ``` **Test Status:** ❌ failing, write production code # Step 29 <a name="step-29"></a> **Action:** Add `vertex` method. ```java public void vertex(T vertex) { } ``` **Test Status:** ✔️ passing, write failing test # Step 30 <a name="step-30"></a> **Action:** Add check for added vertex. ```java @Test void vertex_adds_new_vertex() { DirectedGraph<String> graph = new DirectedGraph<>(); graph.vertex("A"); assertThat(graph.getVertices()).containsExactly("A"); } ``` **Test Status:** ❌ failing, write production code # Step 31 <a name="step-31"></a> This test proves that `DirectedGraph` needs to store vertices. **Action:** Add vertices member and change `vertex` method to add the vertex. Change `getVertices()` to return vertices. ```java public class DirectedGraph<T> { private final Set<T> vertices; public DirectedGraph() { vertices = new HashSet<>(); } public Set<T> getVertices() { return vertices; } public Set<DirectedEdge<T>> getEdges() { return new HashSet<>(); } public void vertex(T vertex) { vertices.add(vertex); } } ``` **Test Status:** ✔️ passing, write failing test # Step 32 <a name="step-32"></a> The set returned by `getVertices()` should be unmodifiable. **Action:** Add test ensuring vertices are unmodifiable.
```java @Test void vertices_are_unmodifiable() { DirectedGraph<Integer> graph = new DirectedGraph<>(); Set<Integer> vertices = graph.getVertices(); assertThatThrownBy(() -> vertices.add(10)) .isInstanceOf(UnsupportedOperationException.class); } ``` **Test Status:** ❌ failing, write production code # Step 33 <a name="step-33"></a> **Action:** Return unmodifiable set in `getVertices()`. ```java public Set<T> getVertices() { return unmodifiableSet(vertices); } ``` **Test Status:** ✔️ passing, write failing test # Step 34 <a name="step-34"></a> The same can be done for `getEdges()`. **Action:** Add test for `getEdges()` ensuring set is unmodifiable. ```java @Test void edges_are_unmodifiable() { DirectedGraph<Integer> graph = new DirectedGraph<>(); Set<DirectedEdge<Integer>> edges = graph.getEdges(); assertThatThrownBy(() -> edges.add(new DirectedEdge<>(1, 2))) .isInstanceOf(UnsupportedOperationException.class); } ``` **Test Status:** ❌ failing, write production code # Step 35 <a name="step-35"></a> **Action:** Return unmodifiable set in `getEdges()`. ```java public Set<DirectedEdge<T>> getEdges() { return unmodifiableSet(new HashSet<>()); } ``` **Test Status:** ✔️ passing, write failing test # Step 36 <a name="step-36"></a> `null` vertices should not be allowed. **Action:** `vertex` fails on null parameter. ```java @Test void vertex_fails_on_null_vertex() { DirectedGraph<Integer> graph = new DirectedGraph<>(); assertThatNullPointerException() .isThrownBy(() -> graph.vertex(null)) .withMessage("vertex must not be null"); } ``` **Test Status:** ❌ failing, write production code # Step 37 <a name="step-37"></a> **Action:** Check for null vertices. ```java public void vertex(T vertex) { requireNonNull(vertex, "vertex must not be null"); vertices.add(vertex); } ``` **Test Status:** ✔️ passing, write failing test # Step 38 <a name="step-38"></a> Now that vertices can be added to the graph, edges should be added to the graph.
This method should ensure that the vertices are added if not present in the graph. This time the tests start by checking for error conditions. The idea is to test for errors before writing the happy path code. **Action:** Add check for null source parameter to `edge` method. ```java @Test void edge_fails_on_null_source() { DirectedGraph<String> graph = new DirectedGraph<>(); assertThatNullPointerException() .isThrownBy(() -> graph.edge(null, "B")) .withMessage("source must not be null"); } ``` **Test Status:** ❌ failing, write production code # Step 39 <a name="step-39"></a> The `edge` method does not exist and needs to be added. ```java public void edge(T source, T target) { } ``` **Test Status:** ❌ failing, write production code # Step 40 <a name="step-40"></a> Now the `edge` method needs to throw the exception. **Action:** Add NPE to `edge` method. ```java public void edge(T source, T target) { throw new NullPointerException("source must not be null"); } ``` **Test Status:** ✔️ passing, write failing test # Step 41 <a name="step-41"></a> The `edge` method should also check `target` for null. **Action:** Add test for null target parameter in `edge` method. ```java @Test void edge_fails_on_null_target() { DirectedGraph<String> graph = new DirectedGraph<>(); assertThatNullPointerException() .isThrownBy(() -> graph.edge("A", null)) .withMessage("target must not be null"); } ``` **Test Status:** ❌ failing, write production code # Step 42 <a name="step-42"></a> The `edge` method now needs to actually check source for a null value. **Action:** Add null checks for source and target in `edge` method. ```java public void edge(T source, T target) { requireNonNull(source, "source must not be null"); requireNonNull(target, "target must not be null"); } ``` **Test Status:** ✔️ passing, write failing test # Step 43 <a name="step-43"></a> The `edge` method should add the source vertex if it is not already in the graph.
**Action:** Add test for source vertex in graph after calling `edge`. ```java @Test void edge_adds_source_vertex() { DirectedGraph<String> graph = new DirectedGraph<>(); graph.edge("A", "B"); assertThat(graph.getVertices()).contains("A"); } ``` **Test Status:** ❌ failing, write production code # Step 44 <a name="step-44"></a> **Action:** Add source vertex to graph in `edge` method. ```java public void edge(T source, T target) { requireNonNull(source, "source must not be null"); requireNonNull(target, "target must not be null"); vertex(source); } ``` **Test Status:** ✔️ passing, write failing test # Step 45 <a name="step-45"></a> The target vertex should also be added to the graph. **Action:** Add test for target vertex in graph after calling `edge`. ```java @Test void edge_adds_target_vertex() { DirectedGraph<String> graph = new DirectedGraph<>(); graph.edge("A", "B"); assertThat(graph.getVertices()).contains("B"); } ``` **Test Status:** ❌ failing, write production code # Step 46 <a name="step-46"></a> **Action:** Add target vertex to graph in `edge` method. ```java public void edge(T source, T target) { requireNonNull(source, "source must not be null"); requireNonNull(target, "target must not be null"); vertex(source); vertex(target); } ``` **Test Status:** ✔️ passing, write failing test # Step 47 <a name="step-47"></a> The actual edge should be added as well. **Action:** Add test checking if edge is added to the graph. ```java @Test void edge_adds_an_edge() { DirectedGraph<String> graph = new DirectedGraph<>(); graph.edge("A", "B"); assertThat(graph.getEdges()).containsExactly(new DirectedEdge<>("A", "B")); } ``` **Test Status:** ❌ failing, write production code # Step 48 <a name="step-48"></a> The test proves that `edges` needs to be a member variable which requires changes to the constructor, `getEdges()`, and the `edge` method. **Action:** Add edges member variable, use member in `getEdges` and the `edge` method.
```java public class DirectedGraph<T> { private final Set<T> vertices; private final Set<DirectedEdge<T>> edges; public DirectedGraph() { vertices = new HashSet<>(); edges = new HashSet<>(); } public Set<T> getVertices() {...} public Set<DirectedEdge<T>> getEdges() { return unmodifiableSet(edges); } public void vertex(T vertex) {...} public void edge(T source, T target) { requireNonNull(source, "source must not be null"); requireNonNull(target, "target must not be null"); vertex(source); vertex(target); edges.add(new DirectedEdge<>(source, target)); } } ``` **Test Status:** ✔️ passing, write failing test # Step 49 <a name="step-49"></a> At this point a method can be added to remove edges. **Action:** Add test for `removeEdge`. ```java @Test void removeEdge_fails_on_null_source() { DirectedGraph<String> graph = new DirectedGraph<>(); assertThatNullPointerException() .isThrownBy(() -> graph.removeEdge(null, "A")) .withMessage("source must not be null"); } ``` **Test Status:** ❌ failing, write production code # Step 50 <a name="step-50"></a> **Action:** `removeEdge` needs to be added in order to compile. ```java public void removeEdge(T source, T target) { } ``` **Test Status:** ❌ failing, write production code # Step 51 <a name="step-51"></a> **Action:** Add check for a null source. ```java public void removeEdge(T source, T target) { requireNonNull(source, "source must not be null"); } ``` **Test Status:** ✔️ passing, write failing test # Step 52 <a name="step-52"></a> **Action:** Write a failing test checking for null target. ```java @Test void removeEdge_fails_on_null_target() { DirectedGraph<String> graph = new DirectedGraph<>(); assertThatNullPointerException() .isThrownBy(() -> graph.removeEdge("A", null)) .withMessage("target must not be null"); } ``` **Test Status:** ❌ failing, write production code # Step 53 <a name="step-53"></a> **Action:** Add check in `removeEdge` for null `target`.
```java public void removeEdge(T source, T target) { requireNonNull(source, "source must not be null"); requireNonNull(target, "target must not be null"); } ``` **Test Status:** ✔️ passing, write failing test # Step 54 <a name="step-54"></a> `removeEdge` should fail if the edge is not found. **Action:** Add test expecting exception when edge does not exist. ```java @Test void removeEdge_fails_on_missing_edge() { DirectedGraph<String> graph = new DirectedGraph<>(); assertThatIllegalArgumentException() .isThrownBy(() -> graph.removeEdge("A", "B")) .withMessage("edge with source \"A\" and target \"B\" does not exist"); } ``` **Test Status:** ❌ failing, write production code # Step 55 <a name="step-55"></a> **Action:** Add check to see if edges contains the edge and throw exception if it is missing. ```java public void removeEdge(T source, T target) { requireNonNull(source, "source must not be null"); requireNonNull(target, "target must not be null"); if(!edges.contains(new DirectedEdge<>(source, target))) { throw new IllegalArgumentException(String.format("edge with source \"%s\" and target \"%s\" does not exist", source, target)); } } ``` **Test Status:** ✔️ passing, write failing test # Step 56 <a name="step-56"></a> **Action:** Add test for removing an edge from the graph. ```java @Test void removeEdge_removes_edge() { DirectedGraph<String> graph = new DirectedGraph<>(); graph.edge("A", "B"); graph.removeEdge("A", "B"); assertThat(graph.getEdges()).isEmpty(); } ``` **Test Status:** ❌ failing, write production code # Step 57 <a name="step-57"></a> Instead of calling `contains` the code can call `remove`. This will remove the edge and still throw an exception when the edge does not exist in the graph. **Action:** Use `remove` instead of `contains`.
```java public void removeEdge(T source, T target) { requireNonNull(source, "source must not be null"); requireNonNull(target, "target must not be null"); if(!edges.remove(new DirectedEdge<>(source, target))) { throw new IllegalArgumentException(String.format("edge with source \"%s\" and target \"%s\" does not exist", source, target)); } } ``` **Test Status:** ✔️ passing, write failing test # Step 58 <a name="step-58"></a> Now a method is needed to remove vertices. Let's write a test for it. **Action:** Add test for removing vertices. ```java @Test void removeVertex_removes_vertex() { DirectedGraph<String> graph = new DirectedGraph<>(); graph.vertex("A"); graph.removeVertex("A"); assertThat(graph.getVertices()).isEmpty(); } ``` The test will fail to compile. **Test Status:** ❌ failing, write production code # Step 59 <a name="step-59"></a> **Action:** Add `removeVertex` method. ```java public void removeVertex(T vertex) { } ``` **Test Status:** ❌ failing, write production code # Step 60 <a name="step-60"></a> **Action:** Remove the vertex. ```java public void removeVertex(T vertex) { vertices.remove(vertex); } ``` **Test Status:** ✔️ passing, write failing test # Step 61 <a name="step-61"></a> **Action:** Add test for null vertex parameter in `removeVertex`. ```java @Test void removeVertex_fails_on_null_vertex() { DirectedGraph<String> graph = new DirectedGraph<>(); assertThatNullPointerException() .isThrownBy(() -> graph.removeVertex(null)) .withMessage("vertex must not be null"); } ``` **Test Status:** ❌ failing, write production code # Step 62 <a name="step-62"></a> **Action:** Add null check to method. ```java public void removeVertex(T vertex) { requireNonNull(vertex, "vertex must not be null"); vertices.remove(vertex); } ``` **Test Status:** ✔️ passing, write failing test # Step 63 <a name="step-63"></a> There is one more case to handle when removing vertices. If the vertex being removed has adjacent edges, those edges should also be removed.
**Action:** Add test checking that adjacent edges are removed when vertex is removed. ```java @Test void removeVertex_removes_adjacent_edges() { DirectedGraph<String> graph = new DirectedGraph<>(); graph.edge("A", "B"); graph.removeVertex("A"); assertThat(graph.getEdges()).isEmpty(); } ``` **Test Status:** ❌ failing, write production code # Step 64 <a name="step-64"></a> **Action:** Find all adjacent edges and remove them in `removeVertex`. ```java public void removeVertex(T vertex) { requireNonNull(vertex, "vertex must not be null"); Iterator<DirectedEdge<T>> i = edges.iterator(); while(i.hasNext()) { DirectedEdge<T> next = i.next(); if(next.getSource().equals(vertex) || next.getTarget().equals(vertex)) { i.remove(); } } vertices.remove(vertex); } ``` **Test Status:** ✔️ passing, refactor 🔄 change code in a way that does not fail tests # Step 65 <a name="step-65"></a> Now is a good time to refactor. The refactor is actually suggested by IntelliJ. It replaces the iterator with a call to `removeIf` using a lambda. **Action:** Refactor `removeVertex` to use `removeIf` when removing adjacent edges. ```java public void removeVertex(T vertex) { requireNonNull(vertex, "vertex must not be null"); edges.removeIf(next -> next.getSource().equals(vertex) || next.getTarget().equals(vertex)); vertices.remove(vertex); } ``` **Test Status:** ✔️ passing # Conclusion The `DirectedGraph` class is now able to create and remove vertices and edges. This tutorial described a [design](#graph) for graphs and provided a step-by-step guide for building `DirectedGraph` using TDD. There are still several problems to solve but this is a good start in representing a graph as code.
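As a quick smoke test, the finished classes from the step listings above can be assembled into one file and exercised like this (slightly compressed — field initializers replace the explicit constructor, but the behavior is the same):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;
import static java.util.Collections.unmodifiableSet;
import static java.util.Objects.requireNonNull;

final class DirectedEdge<T> {
    private final T source;
    private final T target;

    DirectedEdge(T source, T target) {
        this.source = requireNonNull(source, "source must not be null");
        this.target = requireNonNull(target, "target must not be null");
    }

    T getSource() { return source; }
    T getTarget() { return target; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        DirectedEdge<?> that = (DirectedEdge<?>) o;
        return source.equals(that.source) && target.equals(that.target);
    }

    @Override
    public int hashCode() { return Objects.hash(source, target); }
}

class DirectedGraph<T> {
    private final Set<T> vertices = new HashSet<>();
    private final Set<DirectedEdge<T>> edges = new HashSet<>();

    Set<T> getVertices() { return unmodifiableSet(vertices); }
    Set<DirectedEdge<T>> getEdges() { return unmodifiableSet(edges); }

    void vertex(T vertex) {
        requireNonNull(vertex, "vertex must not be null");
        vertices.add(vertex);
    }

    void edge(T source, T target) {
        requireNonNull(source, "source must not be null");
        requireNonNull(target, "target must not be null");
        vertex(source);
        vertex(target);
        edges.add(new DirectedEdge<>(source, target));
    }

    void removeVertex(T vertex) {
        requireNonNull(vertex, "vertex must not be null");
        edges.removeIf(e -> e.getSource().equals(vertex) || e.getTarget().equals(vertex));
        vertices.remove(vertex);
    }
}

public class DirectedGraphDemo {
    public static void main(String[] args) {
        DirectedGraph<String> graph = new DirectedGraph<>();
        graph.edge("A", "B");  // adds vertices A and B plus the edge A -> B
        graph.edge("B", "A");  // the opposite direction is a distinct edge
        System.out.println(graph.getVertices().size()); // 2
        System.out.println(graph.getEdges().size());    // 2
        graph.removeVertex("A");  // removes A and both adjacent edges
        System.out.println(graph.getEdges().size());    // 0
    }
}
```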
moaxcp
330,316
JS Fundamentals: Object Assignment vs. Primitive Assignment
A quick look at object assignment in JavaScript aimed at newcomers to the language.
0
2020-05-08T12:32:45
https://dev.to/nas5w/js-fundamentals-object-assignment-vs-primitive-assignment-5h64
javascript, beginners, webdev, tutorial
--- title: JS Fundamentals: Object Assignment vs. Primitive Assignment published: true description: A quick look at object assignment in JavaScript aimed at newcomers to the language. tags: javascript, beginner, webdev, tutorial cover_image: https://dev-to-uploads.s3.amazonaws.com/i/t2oz7swuutjdt58jpspx.png --- # Introduction Something I wish I had understood early on in my JavaScript programming career is how object assignment works and how it's different from primitive assignment. This is my attempt to convey the distinction in the most concise way possible! # Learn JS Fundamentals Looking to learn more JS fundamentals? Consider [signing up for my free mailing list](https://buttondown.email/devtuts)! # Primitives vs. Objects As a review, let's recall the different primitive types and objects in JavaScript. **Primitive types:** Boolean, Null, Undefined, Number, BigInt (you probably won't see this much), String, Symbol (you probably won't see this much) **Object types:** Object, Array, Date, [Many others](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Objects) # How Primitive and Object Assignment Differ ## Primitive Assignment Assigning a primitive value to a variable is fairly straightforward: the value is assigned to the variable. Let's look at an example. ```javascript const a = 'hello'; let b = a; ``` In this case, `a` is set to the value `hello` and `b` is also set to the value `hello`. This means if we set `b` to a new value, `a` will remain unchanged; there is no relationship between `a` and `b`. ```javascript b = 'foobar'; console.log(a); // "hello" console.log(b); // "foobar" ``` ## Object Assignment Object assignment works differently. Assigning an object to a variable does the following: - Creates the object in memory - Assigns a reference to the object in memory to the variable Why is this a big deal? Let's explore.
```javascript const a = { name: 'Joe' }; const b = a; ``` The first line creates the object `{ name: 'Joe' }` in memory and then assigns a reference to that object to variable `a`. The second line assigns a reference **to that same object in memory** to `b`! So to answer the "why is this a big deal" question, let's mutate a property of the object assigned to `b`: ```javascript b.name = 'Jane'; console.log(b); // { name: "Jane" } console.log(a); // { name: "Jane" } ``` That's right! Since `a` and `b` are assigned a reference to the same object in memory, mutating a property on `b` is really just mutating a property on the object in memory that both `a` and `b` are pointing to. To be thorough, we can see this in action with arrays as well. ```javascript const a = ['foo']; const b = a; b[0] = 'bar'; console.log(b); // ["bar"] console.log(a); // ["bar"] ``` ## This Applies to Function Arguments too! These assignment rules apply when you pass objects to functions too! Check out the following example: ```javascript const a = { name: 'Joe' }; function doSomething(val) { val.name = 'Bip'; } doSomething(a); console.log(a); // { name: "Bip" } ``` The moral of the story: beware of mutating objects you pass to functions unless this is intended (I don't think there are many instances you'd really want to do this). # Preventing Unintended Mutation In a lot of cases, this behavior can be desired. Pointing to the same object in memory helps us pass references around and do clever things. However, this is not always the desired behavior, and when you start mutating objects unintentionally you can end up with some _very_ confusing bugs. There are a few ways to make sure your objects are unique. I'll go over some of them here, but rest assured this list will not be comprehensive. ## The Spread Operator (...) The spread operator is a great way to make a _shallow_ copy of an object or array. Let's use it to copy an object. 
```javascript
const a = { name: 'Joe' };
const b = { ...a };

b.name = 'Jane';

console.log(b); // { name: "Jane" }
console.log(a); // { name: "Joe" }
```

### A note on "shallow" copying

It's important to understand shallow copying versus deep copying. Shallow copying works well for objects that are only one level deep, but nested objects become problematic. Let's use the following example:

```javascript
const a = {
  name: 'Joe',
  dog: {
    name: 'Daffodil',
  },
};

const b = { ...a };

b.name = 'Pete';
b.dog.name = 'Frenchie';

console.log(a);
// {
//   name: 'Joe',
//   dog: {
//     name: 'Frenchie',
//   },
// }
```

We successfully copied `a` one level deep, but the properties at the second level are still referencing the same objects in memory! For this reason, people have invented ways to do "deep" copying, such as using a library like `deep-copy` or serializing and de-serializing an object.

## Using Object.assign

`Object.assign` can be used to create a new object based on another object. The syntax goes like this:

```javascript
const a = { name: 'Joe' };
const b = Object.assign({}, a);
```

Beware; this is still a shallow copy!

## Serialize and De-serialize

One method that _can_ be used to deep copy an object is to serialize and de-serialize the object. One common way to do this is using `JSON.stringify` and `JSON.parse`.

```javascript
const a = {
  name: 'Joe',
  dog: {
    name: 'Daffodil',
  },
};

const b = JSON.parse(JSON.stringify(a));

b.name = 'Eva';
b.dog.name = 'Jojo';

console.log(a);
// {
//   name: 'Joe',
//   dog: {
//     name: 'Daffodil',
//   },
// }

console.log(b);
// {
//   name: 'Eva',
//   dog: {
//     name: 'Jojo',
//   },
// }
```

This does have its downsides though. Serializing and de-serializing doesn't preserve complex objects like functions.

## A Deep Copy Library

It's fairly common to bring in a [deep copy library](https://www.npmjs.com/package/deepcopy) to do the heavy lifting on this task, especially if your object has an unknown or particularly deep hierarchy.
These libraries are typically functions that perform one of the aforementioned shallow copy methods recursively down the object tree. # Conclusion While this can seem like a complex topic, you'll be just fine if you maintain awareness about how primitive types and objects are assigned differently. Play around with some of these examples and, if you're up for it, attempt writing your own deep copy function!
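If you'd like to try, here is one possible sketch of a recursive deep copy. It is illustrative only: it handles plain objects and arrays but deliberately ignores Dates, Maps, Sets, functions, and circular references, which a real library would account for.

```javascript
// A minimal recursive deep copy (illustrative; plain objects and arrays only).
function deepCopy(value) {
  // Primitives (and null) are copied by value anyway, so return them directly.
  if (value === null || typeof value !== 'object') {
    return value;
  }
  // Arrays: copy each element recursively.
  if (Array.isArray(value)) {
    return value.map(deepCopy);
  }
  // Plain objects: copy each own enumerable property recursively.
  const copy = {};
  for (const key of Object.keys(value)) {
    copy[key] = deepCopy(value[key]);
  }
  return copy;
}

const a = { name: 'Joe', dog: { name: 'Daffodil' }, toys: ['ball'] };
const b = deepCopy(a);

b.dog.name = 'Frenchie';
b.toys.push('bone');

console.log(a.dog.name); // "Daffodil" (the original is untouched)
console.log(b.dog.name); // "Frenchie"
```

Note how mutating the nested `dog` object on `b` no longer affects `a`, unlike the shallow spread copy earlier in the article.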
nas5w
330,357
how developers can cope with stress
We are in a period where working from your living room, kitchen or garage is the norm. Remote work is...
0
2020-05-08T13:20:59
https://dev.to/brunner1000/how-developers-can-cope-with-stress-41pj
devops, productivity, codenewbie
We are in a period where working from your living room, kitchen or garage is the norm. Remote work is new to some industries, but that isn't the case for developers. All a developer needs to get to work is a laptop, a connection to the internet and a good electricity supply. There are many software systems available.

But there is a side of remote work that devs need to deal with: STRESS. It is not true that simply leaving your office and working from home will reduce your stress level.

TAKE SOME DAYS OFF

It is not a good idea to code every day. You have to take some days off. Doing so will refresh your mind. We find that the time we spend doing nothing is very important.

EXTEND YOUR PROJECT'S DEADLINE

Take a look at your schedule for your project. Are you giving it too little time? In order to deal with stress, you could extend the deadline. For example, a typical website project could take up to a month to finish the first prototype. You could extend this time to about a month and a half. The additional half is there so that you can include time to rest and enjoy some hobby.
brunner1000
342,065
BULMA - A CSS Only alternative for Bootstrap
Monotonous Bootstrap After the usage of bootstrap in multiple projects i got monotonous wi...
0
2020-05-23T08:55:10
https://medium.com/@pradeepn/bulma-a-css-only-alternative-for-bootstrap-9a977e065dcc
bulma, bootstrap, css, uiframework
## Monotonous Bootstrap ##

After using Bootstrap in multiple projects, I grew tired of it, particularly of [Bootstrap 4](https://getbootstrap.com/). I lost my fun with Bootstrap and started looking for alternatives.

The next famous UI framework that shone before me was Google's Material Design. It looks good and inspires a lot of fun, but it requires a lot of attention. There is a steep learning curve to using Material Design. So the search for alternatives continued.

## Why Include Scripts for UI Design? ##

One more thing that struck me is the amount of JavaScript that needs to be included in order to use a UI framework. Frameworks like Bootstrap, Material Design and others come with heavy JS code for their interactions.

In that search for a CSS-only alternative, the framework that got my attention is __[Bulma](https://bulma.io/)__.

## Bulma - To The Rescue ##

Bulma is a very lightweight framework with which you can build a whole website without the need for any additional UI library. You just need to link the CSS from [CDNJs](https://cdnjs.cloudflare.com/ajax/libs/bulma/0.8.2/css/bulma.min.css) and start building. Its size is very small, approximately __~190Kb__.

Bulma contains all the needed elements like

* Responsive Design
* Layout Helpers
* Grid System
* Utility Helpers
* Form Utilities
* Components

If you have already used Bootstrap, then you will find all your Bootstrap components available in Bulma with detailed [Documentation](https://bulma.io/documentation/), starting from

* Hero Component
* Themed Buttons
* Themed Alerts
* Cards & Modal Dialogs
* Menus & Dropdowns
* Tabs & Accordion
* Breadcrumbs & Paginations
* etc.

Of course, the interactions for tabs and accordion we have to write on our own. But with jQuery, we can achieve that in not more than a line. Bulma enables Rapid Application Development with very little or no learning curve.
## Sass Support ##

If you work with Sass, you can build your own theme with the help of the 419 exposed variables and mixins. With the help of Node or Webpack, we can set up a task to build your own themed CSS.

If anyone is planning a new project, then try out Bulma. I am sure you won't regret it.
pradeepn
330,360
The Superlative Guide to Big O
Is there a computer science topic more terrifying than Big O? Don’t let the name scare you, Big O is not a big deal. Learn the fundamentals in this superlative guide.
0
2021-01-13T11:49:51
https://jarednielsen.com/big-o/
career, algorithms, computerscience, beginners
--- title: The Superlative Guide to Big O published: true description: Is there a computer science topic more terrifying than Big O? Don’t let the name scare you, Big O is not a big deal. Learn the fundamentals in this superlative guide. tags: career, algorithms, computerscience, beginners canonical_url: https://jarednielsen.com/big-o/ cover_image: https://dev-to-uploads.s3.amazonaws.com/i/6l5fcszjew2mfo1xll34.png --- Is there a computer science topic more terrifying than Big O notation? Don’t let the name scare you, Big O notation is not a big deal. It’s very easy to understand and you don’t need to be a math whiz to do so. In this series, you’ll learn the fundamentals of Big O notation with side trips in dynamic programming and proof by induction, all using examples in JavaScript. _This article originally published on [jarednielsen.com](https://jarednielsen.com/big-o/)_ Will it scale? _That_ is the question. Big O helps us answer it. Programming and problem solving are both metacognitive activities. We think about thinking. Big O is one more tool in our problem solving toolbox. While Big O is specifically used to calculate the order of a function, we can extend the mindset to problem solving (and life) in general and ask ourselves, ‘how can we improve?’ I’m an entirely self-taught developer. I’m lucky. I was never asked any questions about Big O in any technical interviews. Did I need it? No. I got by just fine without it. Would it have helped? Yes. Immensely. If you didn’t take the traditional, academic route to programming, there’s no urgency to learn Big O. There are too many other things to keep up on: libraries, frameworks, the latest spec! So Big O falls by the wayside and we hope it never comes up. If we do carve out time to learn it, it’s most likely in preparation for a technical interview. Will you scale? That is the question! ## Big O: The Superlative Guide The following list is a ‘table of contents’ of my articles about Big O. 
They weren’t all written in this order, but this is the order I recommend you read them. If you think otherwise or think there’s something missing, let me know on Twitter [@jarednielsen](https://twitter.com/jarednielsen). * [What is Big O Notation?](https://jarednielsen.com/big-o-notation/) * [Big O Linear Time Complexity](https://jarednielsen.com/big-o-linear-time-complexity/) * [How to Sum Consecutive Integers 1 to n](https://jarednielsen.com/sum-consecutive-integers/) * [Big O Quadratic Time Complexity](https://jarednielsen.com/big-o-quadratic-time-complexity/) * [Big O Logarithmic Time Complexity](https://jarednielsen.com/big-o-logarithmic-time-complexity/) * [Proof by Induction](https://jarednielsen.com/proof-induction/) * [How to Sum Consecutive Powers of 2](https://jarednielsen.com/sum-consecutive-powers-2/) * [Big O Recursive Time Complexity](https://jarednielsen.com/big-o-recursive-time-complexity/) * [Big O Recursive Space Complexity](https://jarednielsen.com/big-o-recursive-space-complexity/) * [Dynamic Programming: Memoization and Tabulation](https://jarednielsen.com/dynamic-programming-memoization-tabulation/) * [Big O Log-Linear Time Complexity](https://jarednielsen.com/big-o-log-linear-time-complexity/) * [How to Calculate Permutations and Combinations](https://jarednielsen.com/calculate-permutations-combinations/) * [Big O Factorial Time Complexity](https://jarednielsen.com/big-o-factorial-time-complexity/) * [What’s the Difference Between Big O, Big Omega, and Big Theta?](https://jarednielsen.com/big-o-omega-theta/) If you want all this (and more!) in one package, pick up a copy of [The Little Book of Big O](https://gum.co/big-o).
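As a small taste of what the articles above cover, here is a sketch comparing two ways of summing the integers 1 to n: a loop that grows linearly with the input, and Gauss's closed-form formula that takes constant time. (This mirrors the "How to Sum Consecutive Integers 1 to n" article in the list; the function names here are my own.)

```javascript
// O(n): visit every integer once, so the work grows linearly with n.
function sumLinear(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// O(1): Gauss's formula n(n + 1) / 2 gives the same answer in a
// constant number of operations, regardless of how big n is.
function sumConstant(n) {
  return (n * (n + 1)) / 2;
}

console.log(sumLinear(100));   // 5050
console.log(sumConstant(100)); // 5050
```

Both functions return the same result; Big O is the vocabulary for describing why the second one scales better.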
nielsenjared
330,446
Discover 7 amazing tips and tricks about the CSS background image
This article was originally published at https://www.blog.duomly.com/css-background-image-tutorial-wi...
0
2020-05-08T15:38:33
https://www.blog.duomly.com/css-background-image-tutorial-with-examples/
css, programming, beginners, webdev
This article was originally published at <a href="https://www.blog.duomly.com/css-background-image-tutorial-with-examples/">https://www.blog.duomly.com/css-background-image-tutorial-with-examples</a>

---

The background image is probably one of the CSS properties which all of us, front-end developers, have used at least a few times in our careers. Most people think that there can't be anything unusual about background-image, but after a quick research, I came to a different conclusion.

There are lots of questions asked about the CSS background image every day in Facebook groups, and lots of unknown tricks which can help us to achieve amazing effects and make stunning apps and websites.

That's the reason I decided to create this article: to show you what magic can be done using such a simple CSS property. I gathered the seven tips and tricks I believe will be the most useful and created some code examples where you can check what's going on.

And, as usual, if you don't like reading, jump to our Youtube channel for a video version.

{% youtube bkEA_XbIQgg %}

Let's check what's behind the background!

###1. How to fit a background image perfectly to a viewport?###

Let's start with something that's more tip than trick. How often have you had to struggle with your background image to make it perfectly fitted instead of stretched and unattractive? Let me show you how to make your background image always perfectly fit your browser window!

{% codepen https://codepen.io/duomly/pen/xxwYBOE %}

###2. How to use multiple background images with CSS?###

Hm, and what if I'd like to add more than one image in the background? That's possible and not very difficult, and it can give a nice result when you mix two graphics into something beautiful. I personally think it's super useful when we want to add a pattern on top of a background image, so that's what I will show you in this example.

Let's see how it works!
{% codepen https://codepen.io/duomly/pen/eYpVoJR %}

###3. How to create a triangular background image?###

Another exciting CSS background image trick is a triangular background image. It creates a really beautiful effect, especially when we would like to show two totally different options, like day and night, or winter and summer.

It is done by creating two divs, both covering the full viewport; then a background image needs to be added to both of them, and the second div needs a clip-path property to create a triangle shape.

Let's see the code and the result!

{% codepen https://codepen.io/duomly/pen/RwWQmwW %}

###4. How to add a gradient overlay to my background image?###

The fourth trick I'd like to show you in this article is about an overlay on the background image. It can be useful when you would like to put some text on the image but it's too light and the text is not visible, and it can also improve the image itself. For example, sunset images can be strengthened by adding a pink-orange gradient or a red-to-transparent gradient.

Let's see how we can easily add a gradient overlay to the background image!

{% codepen https://codepen.io/duomly/pen/rNOJgQE %}

###5. How to create a color-changing background image animation?###

And what if you can't decide which color is the best as an overlay for your background image? Then animations on background images are really useful. Using an animated overlay can give your website a great final effect, and for sure, people will remember it.

Let's see what we can do using background images and animations in CSS!

{% codepen https://codepen.io/duomly/pen/gOavNOv %}

###6. How to make a grid background image?###

Sometimes it's a great idea to go a little bit crazy, especially if the project is about art or photography; then a nice background image can be created with CSS grid and the CSS background image. Oh, and if you don't know what CSS grid is, check it out <a href="https://www.blog.duomly.com/css-grid-tutorial/">here</a>.
Let’s take a look! {% codepen https://codepen.io/duomly/pen/MWaQNWb %} ###7. How to set a background image as a text color?### Using background image with background-clip you can achieve a beautiful effect of the background image for text. In some cases, it may be very useful, especially when you’d like to create a big text header, but not as boring as a normal color. Let’s see the stunning effect we can get! {% codepen https://codepen.io/duomly/pen/wvKyVjG %} ###Conclusion### In this article, you could see 7 different tips and tricks to make amazing things with the background image. I’m pretty sure those hints will be helpful and allow you to get amazing results on your layouts. If you’d like to check out some more interesting CSS tips and tricks, check out our latest <a href="https://www.blog.duomly.com/css-border-with-examples-tutorial/">CSS borders tips and tricks article</a> and one of the previous <a href="https://www.blog.duomly.com/12-css-tips-and-tricks-which-help-you-to-create-an-amazing-websites/">CSS tips and tricks</a>. If you have ever used any customized solution for your background let me know in the comments, I will be happy to find out what more can be done with CSS background image property. Thank you for reading, Anna from Duomly <a href="https://www.duomly.com"> ![Duomly Programming Online Courses](https://dev-to-uploads.s3.amazonaws.com/i/c8ai87nh568jrh4b1m54.jpg) </a>
duomly
330,546
Story of a self-taught web developer: humble beginnings #1
🔥 Burnout 2 years ago I found myself in a bad place. I had worked in a fintech startup as...
6,520
2020-05-08T19:10:47
https://dev.to/anzelika/story-of-a-self-taught-web-developer-humble-beginnings-1-c72
beginners, womenintech
##🔥 Burnout 2 years ago I found myself in a bad place. I had worked in a fintech startup as a customer support for 4 years, and while I adored the people, any meaningful career change within the company never happened. I had side projects, sure, but at the end of the day my worth was still calculated very plainly by the amount of calls and e-mails I handled per day. While wholesome coworkers can keep you going, deep down where it really matters, it can be... well, quite soul crushing 😑 I no longer took pride in what I was doing. It felt like anyone could pick up the phone and give a customer their status update. I wanted to do more, but I wasn't in an environment where I could actually do more. And so I kept seething in internal turmoil, secretly envying developers working one floor above mine, creating magic one pixel at a time. All my life I had been an artsy-craftsy person, and coding seemed like such a polar opposite of that. How can I write such abstract gibberish when I could create stuff from polymer clay or leather? Well, I took a leap of faith and as it turns out, you sure as hell can do **BOTH**, and they're equally creative. ##💡 Turnaround My final kick in the butt was in January 2019 when I said goodbye to the workplace and lifestyle I had for years. I was hermiting on my own and had all the time in the world to focus on self-improvement. Since I had savings to sustain me without work for 6-8 months, it was quite clear that by the time it runs out, I should be in a position where I could apply for junior/entry level dev jobs. There was simply no plan B. I couldn't go back to working in a call-centre. I couldn't just move back in with my parents and declare myself a failure. I had time and money for a limited amount of time. No distractions. No excuses of a tired brain after long day at work. **This is it**. This is where I turn my life around. 
##💻 Online learning I decided to start my journey with [Colt Steele's Bootcamp](https://www.udemy.com/course/the-web-developer-bootcamp/) and it was a great choice. Javascript was playfully etched to my mind by looping through all other dogs who are inferior to his Rusty and I was there for it 😄. Here are two examples of my take on what you build in the JS section of Colt's Bootcamp. In my defiance to Colt's dog-loving, I of course made it cat-themed. 🎨**Color game** {% codepen https://codepen.io/anzuj/pen/rRKpBQ default-tab=result %} 📝**To do list** {% codepen https://codepen.io/anzuj/pen/EzWqdJ default-tab=result %} Practicing with fun little projects like that got me the skills to make my first two solo projects with vanilla CSS, JS, jQuery and Bootstrap: 🧙**Magic School timetable generator** -> [visit](https://anzuj.github.io/czocha/) 🌲**Homepage for a small Fir tree business** ->[visit](https://anzuj.github.io/kuusetaimed/) As the bootcamp went to the backend part, I lost the willingness to follow through since Colt used a cloud-based IDE which no longer is live, and getting the out-dated code to work via alternative means was just too much of a hurdle for a newbie like me to chew through (2021 disclaimer: the course has now been updated!). At this point my artsy craftsy side kicked in as well, and I just wanted to continue experimenting with the "fun bits". I kept following Youtube code-alongs for more advanced CSS, for example I very much recommend this one that finally helped me build my first portfolio. {% youtube T7PnWnTgusc %} I'm very picky about online tutorials & mentors, since my ADHD plummets my attention span from 100 to 0 in 6 seconds. To be engaged in the learning, I need the mentor to project enthusiasm and think out loud with every step. 
Hand on my chest I can recommend these 3 guys whose material I always enjoy: * [Colt Steele](https://www.udemy.com/user/coltsteele/) * [NetNinja](https://www.youtube.com/channel/UCW5YeuERMmlnqo4oq8vwUpg) (Hi gang!) * [Maximilian Schwarzmüller](https://www.udemy.com/user/maximilian-schwarzmuller/) ##💼 Job hunting About halfway into my "hermit bootcamp" I moved countries to move in with my partner. It was more important than ever to get that first coding position for the living permit as well. But what did I really have to offer? * No bachelor's in anything * Not speaking the local language (German in this part of Switzerland) * Self-taught coding for a year * Not knowing any frameworks yet * No real work experience in web development Yeah.. not looking that great. But still, my amazing partner made sure I'd feel no financial pressure to stop my studies and encouraged me to apply to places even if the advertisement was in German and required skills I haven't even touched yet (SCSS, SASS, prototyping, React/Angular/Vue etc). My goal was to get an entry level/junior Front End web developer position. First consideration I got was from a startup who gave me a technical task of making interactive tables. It still makes me cringe, thinking back to that week I stayed up until 2-3AM, trying to chew through the algorithms of adding new data and manipulating DOM accordingly. I look back to [this](https://anzudev.blogspot.com/2020/01/technical-task-continues.html) blog post describing how I tried my hardest to do it while knowing only vanilla JS and ooff, it still hurts! I handed in something hitting only 4 of the required 7 features, so needless to say I didn't end up getting chosen. Now, at this point I've had 10 no-response applications and 1 failed technical task behind me. It was downright scary. Look at all these frameworks I don't know! What are all these acronyms? Why would anyone ever hire me? German? Now I need to fast track German too? 
I'm not exaggerating with this GIF: ![gif](https://media.giphy.com/media/cEOG7nGA7448M/giphy.gif) ##🎈 Opportunity arises Just when I was about to give up and apply at Starbucks, I got the most unexpected message in my inbox: ![email](https://i.ibb.co/b1860xQ/job.png) Will you look at that, my artsy craftsy side and plea for an internship caught their attention! This was the ray of hope I so desperately needed (it was actually my birthday) and visiting their office started a new chapter in my developer journey, 14 months after it started. *...to be continued*
anzelika
330,559
Rails 6 carrierwave production settings for digitalocean spaces with a custom subdomain
Hello, I recently worked on both Active Storage and CarrierWave to store user uploaded files and d...
0
2020-05-08T17:41:35
https://pikseladam.com/rails-6-carrierwave-production-settings-for-digitalocean-spaces
digitalocean, rails, carrierwave, storage
---
title: Rails 6 carrierwave production settings for digitalocean spaces with a custom subdomain
published: true
date: 2020-05-08 02:00:00 UTC
tags: digitalocean, rails, carrierwave, storage
canonical_url: https://pikseladam.com/rails-6-carrierwave-production-settings-for-digitalocean-spaces
---

[![rcd](https://pikseladam.com/static/33e25b834295192f0dc69c356f60fca6/935bc/rcd.jpg "rcd")](/static/33e25b834295192f0dc69c356f60fca6/4b190/rcd.jpg)

Hello,

I recently worked with both Active Storage and CarrierWave to store user-uploaded files and decided to use CarrierWave on my site. (Maybe I will tell why I chose CarrierWave over Active Storage in a different article.)

I'm using DigitalOcean Spaces since it is simply cheaper when you don't need a huge storage size. I also want to use a custom subdomain for my stored files so I don't end up with messy file URLs. I don't know why, but messy URLs look dirty to me. Like the developer doesn't even care. What a shame...

I couldn't find a good example of CarrierWave settings for DigitalOcean Spaces, so I wanted to share this info myself. I assume you have installed CarrierWave beforehand.

### TODO

Create your DigitalOcean Space.

Create a subdomain and attach SSL to it within the DigitalOcean form. It can be done with a couple of clicks.

[![CDN subdomain](https://pikseladam.com/static/8ff7422f7dd223b95ff6ede188f3ecee/935bc/CDN.jpg "CDN subdomain")](/static/8ff7422f7dd223b95ff6ede188f3ecee/ab75a/CDN.jpg)

Go to the **API** panel and create a key. Store your key info somewhere safe.

Give folder permissions for your domain.

[![permissions](https://pikseladam.com/static/d7ee09a2a272f50180fa9a112f9b142d/935bc/domain_permias.jpg "permissions")](/static/d7ee09a2a272f50180fa9a112f9b142d/90b4c/domain_permias.jpg)

Install the necessary gems to use CarrierWave with DigitalOcean.

```ruby
gem "fog-aws" # storage for AWS S3 / digitalocean
```

Go to your uploader file, in my case `app/uploaders/image_uploader.rb`.

```ruby
# storage :file.
# Change this to fog:
storage :fog
```

This is the `config/initializers/carrierwave.rb` file. It configures CarrierWave to use DigitalOcean Spaces for your website in production.

```ruby
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider: 'AWS',                          # required
    aws_access_key_id: 'your-key-id',         # required unless using use_iam_profile
    aws_secret_access_key: 'your-secret-key', # required unless using use_iam_profile
    region: 'fra1',                           # optional, defaults are different from AWS
    host: 'fra1.digitaloceanspaces.com',      # optional, defaults to nil
    endpoint: 'https://fra1.digitaloceanspaces.com'
  }
  config.fog_directory = 'nameofyourspacesfolder' # required
  config.asset_host = "https://sub.domain.com"
  config.fog_attributes = { cache_control: "public, max-age=#{365.days.to_i}" } # optional, defaults to {}
end
```

`config.asset_host` is doing the custom domain job here. Also be careful about `region`: its value here is different from the AWS defaults.

That should do it.

Best Regards,
Tuna
tcgumus
330,585
Park Street 11 - 3D CSS
Move your cursor around the screen to get a nice 3D effect, and scroll up and down to zoom in and out...
0
2020-05-08T18:32:50
https://dev.to/scriptype/park-street-11-3d-css-mjh
codepen
<p>Move your cursor around the screen to get a nice 3D effect, and scroll up and down to zoom in and out!</p> <p>The patterns on this building's facades always amazed me. So much so that I decided to re-make it with CSS! The bricks are done using grid. </p> <p>The place can be seen in Google Maps here: <a href="https://www.google.com/maps/@60.1561377,24.9508849,3a,22.1y,327.35h,101.71t/data=!3m6!1e1!3m4!1sd4qWGRwTewJ5RijP9fFKgg!2e0!7i13312!8i6656" target="_blank">https://www.google.com/maps/@60.1561377,24.9508849,3a,22.1y,327.35h,101.71t/data=!3m6!1e1!3m4!1sd4qWGRwTewJ5RijP9fFKgg!2e0!7i13312!8i6656</a></p> {% codepen https://codepen.io/pavlovsk/pen/YzyYbLV %}
scriptype
330,594
Best way to deal with immutable data in JS
Hey, Devs😎 I don't know how I miss it before, but I find out the best way to deal with immutable data...
0
2020-05-08T20:42:29
https://dev.to/vborodulin/best-way-to-deal-with-immutable-data-in-js-53l
react, webdev, javascript, redux
Hey, Devs😎

I don't know how I missed it before, but I found out the best way to deal with immutable data.

## Data and Structure types in JavaScript

1. Six primitive types, checked by the <code>typeof</code> operator
  * <code>undefined</code> - <code>typeof undefined === 'undefined'</code>
  * <code>Boolean</code> - <code>typeof true === 'boolean'</code>
  * <code>String</code> - <code>typeof 'hello' === 'string'</code>
  * <code>Number</code> - <code>typeof 10 === 'number'</code>
  * <code>BigInt</code> - <code>typeof 10n === 'bigint'</code>
  * <code>Symbol</code> - <code>typeof Symbol() === 'symbol'</code>
2. <code>null</code> - special primitive type, <code>typeof null === 'object'</code>
3. <code>Object</code> Includes <code>Array, Map, Set, WeakMap, WeakSet, Date</code> - <code>typeof {} === 'object'</code>
4. <code>Function</code> - <code>typeof (() => {}) === 'function'</code>

## Problem

JavaScript assignment works in two ways. For primitive types (Boolean, String, Number, BigInt, null, Symbol) assignment gives the variable its own value. For complex types (Object) it assigns a reference (a pointer in memory), and any change through one reference will impact all the others, because all these references point to the same object in memory.

The problem is that there is no guarantee that something will stay unchanged. The worst-case scenario is when the structure is used in different parts of the application: the mutation of this structure in one of the components can cause a bug in the whole application. And this bug is really hard to track. Where was it changed? What exactly was changed? Who else has access to the reference? The history of changes is not available, and these questions can't be easily answered.
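A few lines are enough to see this shared-reference problem in action (variable names here are just for illustration):

```javascript
// Two variables, one object in memory.
const settings = { theme: 'dark' };
const copy = settings;   // copies the reference, NOT the object

copy.theme = 'light';    // "changing the copy"...

console.log(settings.theme); // 'light' (the "original" changed too!)
```

This is exactly the kind of silent, action-at-a-distance mutation that immutable update patterns are designed to prevent.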
In the React-Redux stack we are used to handling immutable data, but sometimes it can be very tedious the native ES6 way:

```
function updateVeryNestedField(state, action) {
  return {
    ...state,
    first: {
      ...state.first,
      second: {
        ...state.first.second,
        [action.someId]: {
          ...state.first.second[action.someId],
          fourth: action.someValue
        }
      }
    }
  }
}
```

Oh yeah😱 Looks familiar?

```
switch (action.type) {
  case ADD_NEW_AVAILABLE_COLOR_TO_CAR: {
    const { color, model, manufacturer } = action.payload
    return {
      ...state,
      manufacturers: {
        ...state.manufacturers,
        [manufacturer]: {
          ...state.manufacturers[manufacturer],
          models: {
            ...state.manufacturers[manufacturer].models,
            [model]: {
              ...state.manufacturers[manufacturer].models[model],
              options: {
                ...state.manufacturers[manufacturer].models[model].options,
                colors: {
                  ...state.manufacturers[manufacturer].models[model].options.colors,
                  [color]: true
                }
              }
            }
          }
        }
      }
    }
  }
  default:
    return state
}
```

Of course, you can say "hey buddy, you forgot about **immutable-js**"

{% github immutable-js/immutable-js %}

But I don't like it this way. It's an extra abstraction in your code, with data structures that are uncommon for frontend developers. It seriously increases the entry threshold of your project for other developers. And debugging is really painful as hell: I have to click and click and click once again to expand the wrapped data in the console. However, it is just a simple nested list of objects. I can't simply find out what's inside😡

## Solution

{% github kolodny/immutability-helper %}

The **immutability-helper** library provides a simple immutable helper, **update**:

```
import update from 'immutability-helper';

const newData = update(myData, {
  x: { y: { z: { $set: 7 } } },
  a: { b: { $push: [9] } }
});
```

You can see it, right? It's really simple!
The icing on the cake is a familiar approach that we know really well from the [mongodb native driver](https://github.com/mongodb/node-mongodb-native):

```
db.products.update(
  { _id: 100 },
  {
    $set: {
      quantity: 500,
      details: { model: "14Q3", make: "xyz" },
      tags: [ "coats", "outerwear", "clothing" ]
    }
  }
)
```

List of available commands:

* {$push: array} push() all the items in array on the target.
* {$unshift: array} unshift() all the items in array on the target.
* {$splice: array of arrays} for each item in arrays call splice() on the target with the parameters provided by the item.
* {$set: any} replace the target entirely.
* {$merge: object} merge the keys of object with the target.
* {$apply: function} passes in the current value to the function and updates it with the new returned value.

And finally, my personal small example of how organically it fits into Redux reducers:

```
const reducer = (state = initialState, action: IAppAction): TState => {
  switch (action.type) {
    case CONVERSATIONS_ADD: {
      const { conversation } = action.data;
      return update(state, {
        [conversation.counterpartId]: { $set: conversation },
      });
    }
    case CONVERSATION_READ_SUCCESS:
      return update(state, {
        [action.data.counterpartId]: { unread: { $set: 0 } },
      });
    default:
      return state;
  }
};
```

You are welcome! But don't forget, it's not the tools that make you a good developer.
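For intuition, here is a hypothetical hand-rolled sketch of the idea behind a `$set`-style update. This is NOT how immutability-helper is actually implemented; it just shows how copying only the objects along the changed path keeps the original untouched.

```javascript
// Illustrative only: copy each object along `path`, replace the leaf.
// Everything off the path is shared (by reference) with the original.
function set(obj, path, value) {
  if (path.length === 0) return value;
  const [key, ...rest] = path;
  const copy = Array.isArray(obj) ? obj.slice() : { ...obj };
  copy[key] = set(obj[key], rest, value);
  return copy;
}

const state = { first: { second: { fourth: 1 } }, other: { untouched: true } };
const next = set(state, ['first', 'second', 'fourth'], 42);

console.log(next.first.second.fourth);  // 42
console.log(state.first.second.fourth); // 1 (original untouched)
console.log(next.other === state.other); // true (unchanged branch is shared)
```

The structural sharing shown on the last line is what makes this pattern cheap: only the spine of the update is copied, not the whole state tree.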
vborodulin
330,623
How to install Microsoft Office 365 on Linux (Not Online Version)?
This is a tutorial to get Microsoft Office 365 Word, Powerpoint and Excel. I have looked at other ver...
0
2020-05-08T22:16:38
https://dev.to/alex_dev123/how-to-install-microsoft-office-365-on-linux-not-online-version-426f
linux, ubuntu
This is a tutorial to get Microsoft Office 365 Word, Powerpoint and Excel. I have looked at other versions of this tutorial and they have simply found roundabout ways of using Word Online. Others just straight up recommend other tools, so if you are one of those people: people want Microsoft Office, NOT alternatives.

`Person 1: I want to fly a plane`
`Person 2: Use a bike instead :)`
`Person 1: Gee thanks *rolls eyes and walks away*`

#**Disclaimers:**

* This is a method to acquire Microsoft Office tools like Word, Powerpoint and Excel for *'free'*.
* However, a Microsoft Office 365 Subscription is assumed.
* And, we use a **cracked** version of CrossOver to do this. I do not recommend you use the cracked version if you have the money to buy the premium version. Please support the developers behind this tool as they have done an amazing job. The reason behind this post is to provide an **option** for those searching for a free solution. ***It must be highlighted that if you choose to use the cracked version, you do it at your own risk***. The same method can be carried out using the premium version of the tool.

#**Inspiration (credit to):**

* [OldTechBloke Video Representation](https://www.youtube.com/watch?v=Eo2Dz9n4X7o)
* This is essentially a written version of what he does, with a few twists and places where I add more detail (to save you time otherwise used to research)

Now that we got that out of the way, let's get on to the Method!

#**Method:**

I used Ubuntu 20.04 LTS to do this, but this process can be carried out for Manjaro, Arch and the other distros.

##1. Install Microsoft Office 365 Installer File

1. Run Windows, open the [Office 365 page](https://www.office.com/), log in to your account, and press the install Office option in the top-right corner. This option ensures you get the latest patch for Office from Microsoft.
2. *OR*: Run a Virtual Machine such as VirtualBox, install Windows, and perform step 1's operation there.
3. *OR*: Download the installer from [my MEGA link](https://mega.nz/file/bQR2wKaC#1QmJu0nd5sswEkROmfZc-hnX_QC5QPcyhM-RS9ShSPc). It must be noted this version seems to be for 64-bit systems.
4. To access the file from Linux, find the file on your Windows drive and make a copy of it somewhere on your Linux drive.

##2. Install Wine

Wine is the tool used to run Windows-native programs on Linux and macOS.

1. Select your distro from the [Wine download page](https://wiki.winehq.org/Download)
2. Download and install wine
3. Check wine is installed (open a terminal and type `winecfg`); it should display a new window with many options such as Libraries, Applications, Audio etc.
4. *OPTIONAL*: While you have the config open, go to Libraries, add a new override for `gdiplus` and set it to Native(Windows) - this should get rid of the common 'IOPL not enabled' error
5. *SITUATIONAL*: if you find you still get an 'IOPL not enabled' error after installing Word and trying to run it, read through this [forum](https://ubuntuforums.org/showthread.php?t=934720) to find possible solutions.

##3. Install CrossOver

###The Legit 'Legal' Way

1. Go to CodeWeavers' website, buy the subscription or log in
2. Download the CrossOver software

###The 'Illegal' Way

The way this works is by permanently extending the trial period of the CrossOver software, so your software is never licensed but your trial period never runs out.

1. *OPTIONAL*: Watch a [tutorial](https://www.youtube.com/watch?v=MoTfhB-1lnQ) on how to install CrossOver if you wish - bear in mind the tutorial is shown on a Russian desktop (so may be hard to follow)
2. Go to the description of the video tutorial above, download the respective filetype for your distro, and download the crack file too. I apologise for not being more descriptive for this part, but I personally used the .deb filetype, which is much simpler, and I have no idea how to manage the other types.
3. Perform the installation
4. 
Replace the downloaded cracked winewrapper.exe.so file in the following two locations: `root/opt/cxoffice/lib/wine/` and `root/opt/cxoffice/lib64/wine/` (copy the cracked file into each location and choose to replace the existing file). Note that for distros like Ubuntu you will have to open the file explorer in root mode, using commands such as `sudo nautilus`, to perform the file replacements.

##4. Install Microsoft Office

1. Open CrossOver and click the 'Install Windows Software' button
2. Search for 'office 365' and select it, click 'Continue'
3. Click 'Select Installer File' and locate the Office installer file we got from Section 1
4. Click install; if a Microsoft SP2 6.0 link error occurs, just click 'Skip this Step'. Carry on with the download till it finishes and close the installer.

The next step is necessary not because I'm imposing Internet Explorer on you. I like my memes too, but there is a line \*awkward face\*. It is because installing IE8 also installs tools that Office needs. Without one tool in particular, you are unable to authenticate your Office account and are stuck in a constant authentication loop.

##5. Install Internet Explorer 8 (Trust me on this)

1. Go to this [CodeWeavers Link](https://www.codeweavers.com/compatibility/crossover/internet-explorer-8) and select the green 'Install Now' button in the middle-right of the screen.
2. Now you should have a .tie file downloaded.
3. Open it with CrossOver; it should look like the install windows software menu you saw before when installing Office
4. Click 'Continue' on the select application menu and the select installer menu
5. *IMPORTANT*: in the select bottle menu, under compatible bottles, you should see the bottle containing your previous Office installation (most likely named 'Microsoft_Office_365'); select that and continue. This step is crucial as each bottle represents "its own Windows instance". 
Therefore, for the tools installed by the IE8 installer to be accessible to our installed Office programs, we need both programs in the same bottle.
6. Go through the steps of installing IE8, making sure you click the 'Restart Later' buttons to avoid any random restarts and crashes of the install.

It should work fine now!

#**Final Note**:

I made this post because I hate to switch to Windows to do my written assignments and I just can't use Libre or WPS. It must be noted that OneDrive, along with many other Office programs, may not work, but Word, PowerPoint, and Excel work adequately well. I hope this post was useful and helped.
alex_dev123
330,642
Bring fortune to your Linux terminal with cowsay figlet lolcat
We will cover a few interesting commands today, they are figlet toilet lolcat fortune cowsay shuf. To...
0
2020-05-08T21:07:41
https://www.chuanjin.me/2020/05/08/fortune-cowsay-lolcat/
linux
We will cover a few interesting commands today: `figlet`, `toilet`, `lolcat`, `fortune`, `cowsay`, and `shuf`. To learn a new command, I believe the best way is always to check `tldr` for the basic usage and then play with it.

![](https://www.chuanjin.me/images/figlet.png)

![](https://www.chuanjin.me/images/toilet.png)

![](https://www.chuanjin.me/images/lolcat.png)

![](https://www.chuanjin.me/images/fortune.png)

![](https://www.chuanjin.me/images/cowsay.png)

Not only *cow* and *dragon* but many other options are available to choose from. Check them with

```
ls /usr/share/cowsay/cows
```

To pick one randomly, `shuf` comes in. And to fetch only the file name without the extension, there's more than one way to do it.

![](https://www.chuanjin.me/images/shuf.png)

Now, time to put them all together; try to run it a few more times to see the results.

![](https://www.chuanjin.me/images/together.png)

{% youtube b30SXMhuyqo %}

```
echo "Enjoy and have fun\!" | lolcat -a
```
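For reference, a plain-text sketch of the pipeline shown in the screenshots above (assuming the `/usr/share/cowsay/cows` path mentioned earlier; it may differ on some distros):

```shell
# Pick a random cow file and strip the .cow extension with parameter expansion
cow=$(ls /usr/share/cowsay/cows | shuf -n 1)
cow=${cow%.cow}

# Pipe a fortune through the chosen cow, rainbow-colored
fortune | cowsay -f "$cow" | lolcat
```

`${cow%.cow}` is the shell's suffix-removal expansion, which is one of the ways to drop the file extension.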
chuanjin
330,791
Day39:Generate random value - 100DayOfRust
We can use external crate rand to generate random value. In Cargo.toml: rand="0.7.3" In main.r...
0
2020-05-09T03:53:39
https://dev.to/bitecode/day39-generate-random-value-100dayofrust-4dp8
rust
We can use the external crate `rand` to generate random values.

In `Cargo.toml`:

```
rand="0.7.3"
```

In `main.rs`:

```rust
use rand::{prelude::*, Rng, distributions::Alphanumeric};

// Rng stands for "random number generator"

fn main() {
    let num: u32 = rand::random();
    println!("generated rand integer: {}", num);

    let c: char = rand::random();
    println!("generated rand char: {}", c);

    let mut rng = rand::thread_rng();

    // gen is a generic function
    let b1: bool = rng.gen();
    println!("generated boolean is {}", b1);

    let n1: u32 = rng.gen();
    println!("generated integer is {}", n1);

    let f1: f32 = rng.gen();
    println!("generated float is {}", f1);

    // generate a number within a range
    let range_num = rng.gen_range(0, 10);
    println!("number b/t 0-10 is {}", range_num);

    let range_decimal = rng.gen_range(0.0, 1.0);
    println!("number b/t 0-1 is {}", range_decimal);

    // generate a string with a specified length
    let rand_str: String = rng.sample_iter(&Alphanumeric).take(15).collect();
    println!("generated random string: {}", rand_str);

    // shuffle an array
    let mut nums: Vec<i32> = (1..10).collect();
    // .shuffle needs rand::prelude::*
    nums.shuffle(&mut rng);
    println!("num list: {:?}", nums);
}
```

Run it with `cargo run`:

```
generated rand integer: 2802536511
generated rand char: 񭌀
generated boolean is true
generated integer is 497331788
generated float is 0.0054394007
number b/t 0-10 is 3
number b/t 0-1 is 0.23074173856274816
generated random string: 97lbEyO3SVJgt7k
num list: [8, 1, 2, 3, 5, 4, 9, 6, 7]
```

## Reference:

* https://rust-lang-nursery.github.io/rust-cookbook/algorithms/randomness.html
* https://rust-random.github.io/rand/rand/index.html
* https://rust-random.github.io/rand/rand/trait.Rng.html#method.gen
bitecode
330,805
A Story of Becoming a Web Developer
Let me tell you a story about Silvestar, a fellow who learned how to code, took some chances during h...
6,515
2020-05-09T05:43:05
https://www.silvestar.codes/articles/a-story-of-becoming-a-web-developer/
career, beginners
Let me tell you a story about Silvestar, a fellow who learned how to code, took some chances during his career, and became a solid, confident web developer.

Silvestar never coded in his life. He thought he would work as an IT engineer. But the situation in the market made him apply to a web developer job ad. His future boss provided an opportunity for him to learn about web development. If Silvestar succeeded, he could work at the company. He was not the only candidate, so he needed to work extra hard to prove his worth.

In the beginning, Silvestar was overwhelmed and didn’t know where to start. The boss helped candidates by providing learning materials. The boss also gave an assignment to every candidate. A few times a month, the boss organized a class where he showed the solution, answered questions from candidates, and talked about programming.

> Everybody has to start somewhere.
>
> _I started from zero._

After a few months, Silvestar started to like programming. He didn’t sleep many nights because he had new ideas about how to solve problems. For every issue he addressed, he thought he was ready for work. But with every new assignment, he also learned he was not ready yet. Finally, after about half a year, the boss decided to give him and a couple of other candidates a chance to prove themselves on a project. He was hired; he became a developer.

Within a few months, as project complexity increased, Silvestar figured out what his strengths and weaknesses were. His role was an all-around developer, but he liked the frontend: HTML, CSS, and jQuery. He enjoyed working on a user interface in any form, from slicing PSD to HTML, to adding interactions.

He was working on different projects for a few years. Then a new opportunity arose. A local startup was hiring a frontend developer, and Silvestar decided to give it a shot. After completing a challenging task that he thought he could not do, he was hired. 
The company was big, with more than 20 people organized in different teams. He was part of a development team working on a solid product. He was quite happy and satisfied.

![Three banners with inspirational messages: don't give up, you are not alone, and you matter.](https://dev-to-uploads.s3.amazonaws.com/i/2msccdijlztp8nj1lidk.jpg)
<small>_Image Credit: [Dan Meyers on Unsplash](https://unsplash.com/photos/hluOJZjLVXc)_</small>

During this time, Silvestar learned a lot by attending meetings, collaborating with colleagues, working in teams, and presenting his work to other team members. He worked on challenging user interface parts, and he learned a lot about different Git workflows, project management techniques and tools, and coding principles and standards. Everyone was friendly and helpful.

Then, after a couple of years, the company was acquired by a larger organization, which led to the company shutting down within a few months because the product proved not to be sustainable. Silvestar and all the workers were let go.

During his time at the local startup, Silvestar heard of [Toptal], a network of talents working remotely as freelancers. Silvestar was tempted to give it a go, but his family and friends were doubtful it was the right call. He applied nevertheless. After a couple of attempts, Silvestar got in. He was lucky, as he was one of the first talents accepted as UI developers. He got his first gig within a couple of months.

> Fear is the compass.
>
> _I made some bold choices at the given moment, but they all paid off._

It has been three years since Silvestar started with freelance work. He invested a lot of time in his professional development, and he decided to improve his skillset by learning new technologies and techniques. He decided to share his findings on his blog. He had the opportunity to prove his worth on challenging projects, and he collaborated with amazing clients and developers from all over the world. 
All in all, Silvestar is happy. He still has his doubts. He is still learning how to improve as a professional and as a developer. He still has to search for the same solutions online and still has to debug the same old bugs and issues. But he is not complaining, because he knows it is part of his job. For the most part, Silvestar is quite happy with how things turned out.

## Disclaimer

In case you were wondering, I am Silvestar. And in case you are asking yourself why I am writing this: it is because I wanted to tell you a story about how I became a freelance web developer:

- I started from zero,
- I had many doubts and fears,
- I made some bold choices at the given moment, but they all paid off,
- I never stopped learning.

I hope this story inspired you to start a new career or to start chasing your dreams. Remember, it is a process, and it takes time to succeed. But know that hard work will pay off.

## Conclusion

I would like to hear your story one day. That is why I started [The UI Development Mentoring Program]. I created a new site where you could [apply to become a mentee], or you could [find useful resources] and [tips].

If you need more inspiration, read these tips I gathered during my career:

{% speakerdeck 50f4710fdf3f4089904fc722bc48b332 %}

Happy coding!

[Toptal]: https://www.toptal.com/resume/silvestar-bistrovic#trust-nothing-but-brilliant-freelancers
[The UI Development Mentoring Program]: https://mentor.silvestar.codes
[apply to become a mentee]: https://mentor.silvestar.codes/apply/
[find useful resources]: https://mentor.silvestar.codes/resources/
[tips]: https://mentor.silvestar.codes/tips/
starbist
337,304
🦕🦀Writing WebAssembly in Rust and running it in Deno!
Requirements You need to have to follow tools installed on your machine: rustc rustup ca...
0
2020-05-17T13:17:02
https://dev.to/lampewebdev/writing-webassembly-in-rust-and-runing-it-in-deno-144j
deno, rust, webassembly, beginners
### Requirements

You need to have the following tools installed on your machine:

- rustc
- rustup
- cargo
- deno

These are standard things you usually use while developing Rust and working with Deno.

We now need to install wasm-specific tools for Rust.

First we need to add a compiler target like this:

```bash
rustup target add wasm32-unknown-unknown
```

We also need the `wasm-gc` tool:

```bash
cargo install wasm-gc
```

I'm using Visual Studio Code for development, and for this project I also installed the following extensions:

- Better Toml `bungcip.better-toml`
- Rust `rust-lang.rust`

## Creating a rust lib

For the rust part, we need to create a small cargo lib. We can do it with the following command:

```bash
cargo new --lib wasm_deno_example
cd wasm_deno_example
```

Next, we can open the project in VSCode, and we need to add the dependencies for wasm to our `Cargo.toml`.

```toml
[lib]
crate-type =["cdylib"]
```

`cdylib` makes our project usable from other languages like C, or in our case `wasm`. It also removes all the Rust-specific parts that are not needed.

## Our small rust function

We will now change the `src/lib.rs` file to the following code:

```rust
#[no_mangle]
pub extern "C" fn square(x: u32) -> u32 {
    x * x
}
```

This is a simple function that takes a number and returns a number. The important parts here are the `#[no_mangle]` attribute and the `extern` keyword, so this function keeps its name and can be imported in our Deno code. If you are reading posts like this, you should understand the `x * x` 😉

## Compiling the rust to wasm

Now we can compile our rust code to wasm code. We first need to build it with the following command:

```bash
$ cargo build --target wasm32-unknown-unknown
```

And we also need to strip out all the stuff we don't need. This will remove all the code unneeded for `wasm` and make the file way smaller.

```bash
wasm-gc target/wasm32-unknown-unknown/debug/wasm_deno_example.wasm
```

That's it, we now have a `wasm` binary ready to be loaded into Deno and executed. 
## Run it with Deno

We now need to create a `main.ts`. The name of the file does not really matter; it's just the one I will use here. Then we need to add the following code to the file.

```ts
const wasmCode = await Deno.readFile("./target/wasm32-unknown-unknown/debug/wasm_deno_example.wasm");
const wasmModule = new WebAssembly.Module(wasmCode);
const wasmInstance = new WebAssembly.Instance(wasmModule);

const {
  square,
} = wasmInstance.exports;

console.log(square(1));
console.log(square(2));
console.log(square(3));
console.log(square(4));
```

Let us go through the steps.

1. Simply loads the raw wasm file.
2. Makes a wasm module out of our file so that we can work with it.
3. Creates an instance of our module so that we can use the functions.
4. Imports our `wasm` `square()` function into our Deno code.
5. Console-logs the squares of different numbers.

Now let's execute this code with

```bash
deno run --allow-read main.ts
```

The output should be the following:

```bash
1
4
9
16
```

## The Repo

You can find the code in the following [Repo](https://github.com/lampewebdev/wasm_deno_example)

Would you like to see more Deno content? Please let me know! I would like to make more posts and content about Deno!

**👋Say Hello!** [Instagram](https://www.instagram.com/lampewebdev/) | [Twitter](https://twitter.com/lampewebdev) | [LinkedIn](https://www.linkedin.com/in/michael-lazarski-25725a87) | [Medium](https://medium.com/@lampewebdevelopment) | [Twitch](https://dev.to/twitch_live_streams/lampewebdev) | [YouTube](https://www.youtube.com/channel/UCYCe4Cnracnq91J0CgoyKAQ)
lampewebdev
338,736
Junior Developers Checklist for Landing a Remote Job
As more and more companies go remote, these positions were once the holy grail among developers. The competition is fierce, but it's not impossible to succeed. This post is to help you land a job. This is how I did it a year ago.
0
2020-05-19T09:34:13
https://dev.to/ugglr/junior-developers-checklist-for-landing-a-remote-job-2ldb
remote, webdev, beginners, career
---
cover_image: https://images.unsplash.com/photo-1499951360447-b19be8fe80f5?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=3450&q=80
title: Junior Developers Checklist for Landing a Remote Job
published: true
description: As more and more companies go remote, these positions were once the holy grail among developers. The competition is fierce, but it's not impossible to succeed. This post is to help you land a job. This is how I did it a year ago.
tags: remote, webdev, beginners, career
---

*TLDR Alert: Skip to the tips at the bottom*

## I can work from anywhere?

That's what I asked myself two years ago, and it ultimately changed my life's path forever. After meeting during my exchange studies, moving back together to Sweden, getting married and having a child, my wife wanted to move back to her home city in China. We had been living in Sweden for many years and we had always talked about moving east, but when we finally decided, oh boy... time to hustle.

## Changing Career Path

I was working as a Hardware engineer at the time at one of the largest surveillance camera companies in the world. The pay was actually quite sub-par, not enough for a single-income household, but it was stable and the Swedish government does provide many benefits to families with small children.

I've always been building websites, I got the highest grades on the subject in high school, and 3 years prior to our move I found a site called CodeCademy. Without knowing the ultimate benefit of it, I completed the free Python course, and later I found freeCodeCamp, you might have heard of it? (lol). I quickly ran through many of the courses on fcc, and also made my first portfolio site, just for fun. With that I also started following more and more dev people on twitter, I liked the energy. I had a short journey on teamtreehouse as well. As my day job's workload fluctuated and my son was born, progress on the coding sites stagnated for years. 
## Quitting my job

It was not easy to resign: uncertainty lay ahead, we had little savings, and a small boy to support. However, I was optimistic. I had 3 years of experience as an Electronics Engineer, and I had also nearly completed a Bachelor's degree in Mandarin at university; at least I would be able to find a job, right....?

## A long way from home

After selling most of our possessions back in Sweden, getting on a plane with 3x20kg 3x10kg 3xback-packs and a full size baby stroller, we landed in Shanghai. Two months went by while we settled in; in the meantime I was looking for hardware engineering positions. As time went by it became increasingly clear to me that getting into a Chinese company as a Hardware R&D Engineer was going to be tough. I was directly told by many that they don't allow foreigners to enter the company. It did not matter that I was qualified with more than 3 years experience, a Master's degree, and good spoken Mandarin. But during the time looking at all the job adverts I came to a realisation:

> The number of software-related positions outnumbered hardware positions 5 to 1. And they were paying double on average.

Career switcher in China? They are not taking that bet, and my Chinese friends looked at me like I was crazy. OK... I started to look into it, and as the dev community had changed since my previous stint with software, my eyes opened up to remote working. Unlike before, people had started teaching programming on youtube, remote communities were popping up, and sites dedicated to remote workers were available.

*I can work remotely?!*

Not only that, there were success stories from self-taught programmers everywhere.

*Not only was I in need of re-inventing myself professionally, but I was also so excited about the possibility of working from anywhere that I became obsessed.*

I looked at the remote job listings and determined that the number of React positions was far greater than those of Angular or Vue. Done deal, off to the races we go! 
I started researching everything I could get my hands on: youtube, medium, twitter, anything that might be useful. I joined online communities, found mentors online, and went into tutorial purgatory.

## Tutorial Hell

For me YouTube tutorials became the way to learn. I could code along with the teacher, and after slowly starting to understand the best way for me to learn, I could go back, re-engineer the projects, and understand the parts that were important. I opened my Github account where everything would go. It did not matter what I was learning, it would be on GitHub so I could refer back to my old code. I understood quickly that consistency was key, and I was pushing code every day.

## Standing on my own

They call it tutorial hell or tutorial purgatory for a reason. It's hard to stop and build something by yourself. Even the smallest things would make me choke, and flush me with self-doubt. Maybe I'm not ready? Maybe just one more tutorial and things click into place? Forget about it...

I went back to the drawing board, and asked myself:

> How can I get real programming experience, without a job?

And did the following thought experiment:

> If someone puts founder on their CV and builds software, does that experience count? Given that he/she does not (probably) get paid for it? Does the money matter at all? Is it only valid experience if we are getting paid for it?

I answered myself: **It's all about the work you produce**

So I started thinking: If I'm working for myself (unpaid lol), what can I build that gives me real experience?

**This was the most crucial conclusion to take me forward:** build genuine competency!

## Build things like your life depends on it

I cut tutorials out of my life and went to work. My first PR was a new portfolio starter for the Gatsby project. I still remember the feeling today when it got accepted. I asked several people to review my design before finalising it, and I shared my success with the people who were rooting for me. 
> No matter how small, move forward. No matter how small your success is, celebrate it with people who believe in you.

## Your biggest supporter is a stranger.

The fact that I was engaged in the online community made me succeed. But you have to be brave enough to ask for support. There are so many awesome people out there and they will help you; they won't pull you down, and they won't heckle you for asking trivial questions. But you also want to be respectful. People have jobs and other engagements, you still have to do the work by yourself, and no one wants to share their energy with someone who demands attention or comes off as lazy.

## I wanted to quit and got lucky

Nine months in, I was close to burning out; I had two job interviews left, with two take-home coding challenges. I think I got lucky when I needed it the most, because the hiring engineer I met took his time with my application and appreciated my struggle and consistency. He hired me onto the team, and on August 1, 2019 I started working full-time remotely. Without getting into numbers, I also earn more, with what I think is a fair market salary that can support my family. I also see more room for development in this area than I previously did.

After talking to him and discussing what made me stand out, here's my ultimate checklist for getting ahead of your competition. I was thinking of juniors, but honestly it applies to anybody who wants to go remote.

## The List

| Todo | why |
| ---- | --- |
| GitHub | Those green tiles matter: I was able to show consistent code pushing for 9 months straight. There were projects related to the position I wanted (react), but also branching out with backend and other languages |
| Contribute to Open-Source | This might sound daunting for anyone to get into, but it really makes all the difference. It does not matter how small of a contribution you are making; correcting some docs or fixing grammar problems are all things that you could do right off the bat. 
You can make repos where you collect resources etc. |
| Personal Site | I made sure that my personal site looked alright: all links working, no typos, an easy structure for hiring parties to find the information they are looking for. All projects link to a hosted version and to the source code. I linked to Github right at the top, plus other small things, like: my email looks professional, an up-to-date CV etc. |
| Start blogging | Writing about your daily progress is a win-win situation. You help others struggling with the same thing, you help yourself understand it better, and you take steps towards building your own developer brand. A potential hiring party can go in and see your progress, see how you communicate ideas or code to others, and further catch a glimpse of who you are and build a perception of you as a person. |
| Stable internet | It sounds like a no-brainer, but when working remotely it becomes very clear if your connection is not great. Would you hire someone to work for you remotely if they keep disconnecting? Probably not, right? |
| Comfortable to Share screen | As a remote developer you'll be sharing your screen a lot. |
| Clear communication | Being able to explain code and talk about complex topics in a clear and concise way. Code is sometimes difficult to explain, because we are not used to talking while we code, and our mind-maps of how things fit together will be very different. No one will hire you if you cannot explain what you are doing. |
| Be in the now | Be alert, be present, answer questions within a reasonable time on Slack or email. And turn everything off during your interview. |
| Calm environment | If you have constant background noise, like motorcycles, trucks, vacuum cleaners, screaming (you get the point), you will not be liked in your everyday meetings. Dare I say low-key hated. So find a quiet spot to work. |
| Voice Quality | This ties into the row above, but if your microphone is bad, buy a new one. 
Record yourself, listen to it, and you'll understand what the other end hears. |
| Energy | You want to send a lot of positive energy to the person you are talking to. You have to be interested in what you do and like to talk about things related to the field. Don't be a bigot, an asshole, a racist, or otherwise negative. At least pretend... |
| Know what you know, and that you want to continue learning | If there's something you don't know, come clean and say that you have not used said thing yet. It shows that you know what you don't know. Explain that you are willing to learn it asap if it's a required skill. Maybe fire up a new repo, do something with it, and send it soon after the meeting has ended. It shows you are a self-starter and can learn new things when required. |

IMO if you do these things and can present them through your online presence, then you are way ahead of the competition. Further to note, much of the list is not even related to coding skills. They are what's called soft skills, and many companies are realising that they are waaay more important than your technical skill. Chances are that the project they are hiring for is using new technologies anyway, and they are looking for a really solid team player.

**Shameless plug**

Here's what I collected during the time I was searching 1.5 years ago. Some stuff might be out of date, but in general I think it's still valid information: it lists companies who hire remotely, resources, communities etc., all for getting a remote job.

[Remote-Junior-Developer-jobs-directory](https://github.com/ugglr/Remote-Junior-Developer-jobs-directory)

If you want me to write more about this topic let me know. I think I'm going to write some more posts about working remotely and what I have picked up along the way.

**Hope I could help!**
ugglr
338,742
Serverless Cobol
I'm currently in the process of writing a post covering how to run Laravel Serverless on Google Cloud...
0
2020-05-19T08:32:25
https://dev.to/atymic/serverless-cobol-22dh
serverless, docker, showdev, php
I'm currently in the process of writing a post covering how to run Laravel Serverless on Google Cloud. There I was, sitting there hashing out the post, and I wrote this:

> This pretty much means you can run anything that responds to HTTP requests, hell you could even run Cobol if you wanted!

Well, I can't really claim that without actually trying it, can I? Well, an hour or so later and we're running serverless cobol!

https://serverless-cobol-max7gzuovq-uc.a.run.app/cobol+dev.to=%3C3

It was actually surprisingly easy to build a docker container to run cobol, despite it being a 60+ year old language. There's a modern re-implementation called [GnuCobol](https://en.wikipedia.org/wiki/GnuCOBOL) which supports modern OSes like Ubuntu.

Here's the [dockerfile](https://github.com/atymic/serverless-cobol/blob/master/Dockerfile) which compiles the cobol source code & sets up apache (which interfaces with the cobol program as a CGI script).

```docker
FROM ubuntu:bionic

RUN apt-get update
RUN apt-get install software-properties-common -y
RUN add-apt-repository ppa:lud-janvier/gnucobol
RUN apt-get update
RUN apt-get install gnucobol apache2 -y

COPY . /var/www/public
WORKDIR /var/www/public

RUN cobc -x -free serverless.cbl -o the.app
RUN chmod +x ./the.app

COPY docker/apache.conf /etc/apache2/sites-available/000-default.conf
EXPOSE 8080

RUN echo "Listen 8080" >> /etc/apache2/ports.conf && \
    chown -R www-data:www-data /var/www/ && \
    a2enmod rewrite && \
    a2enmod cgid

CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
```

Building the container & getting the cobol code to run was the hard part; deploying to Cloud Run is incredibly easy - two commands and a few minutes later and we're running cobol in the cloud!

Is it useful? Definitely not. Was it fun? Sure was 🙃

[Source code on Github](https://github.com/atymic/serverless-cobol).

Now to do some actual work 😬
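For the curious, a CGI program only has to print an HTTP header, a blank line, and a body on stdout. A minimal GnuCOBOL sketch of that idea (a hypothetical example, not the actual `serverless.cbl` from the repo) might look like this, compiled the same way as in the dockerfile with `cobc -x -free`:

```cobol
*> Hypothetical minimal CGI-style response in GnuCOBOL free format.
*> The x"0a" literal appended to the header line yields the blank
*> line that separates HTTP headers from the body.
identification division.
program-id. hello-cgi.

procedure division.
    display "Content-Type: text/plain" & x"0a"
    display "Hello from COBOL in the cloud!"
    stop run.
```

Apache's `cgid` module then takes care of turning that stdout stream into an HTTP response.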
atymic
338,804
Migrate Azure DevOps Repos to GitHub in 8 PROBLEMS
Migrating your Git repository from Azure DevOps Repos to GitHub should be easy. But it is not always...
0
2020-05-19T10:03:54
https://dev.to/n3wt0n/migrate-azure-devops-repos-to-github-in-8-problems-2d87
github, devops, git, migration
Migrating your Git repository from Azure DevOps Repos to GitHub should be easy. But it is not always like that. We need to take care of Authentication, Clone, Branches, Tags, History... and much more...

In this video we will tackle all these problems and we will successfully migrate our repo to GitHub (__including full history, branches and tags__)!

Don't worry if you are not overly familiar with Git and its command line, I will try to be as clear as possible. And stay with me until the end for a bonus content.

### Video

If you are a __visual learner__ or simply prefer to watch and listen instead of reading, here you have the video with the whole explanation, which to be fair is much more complete than this post.

{% youtube SR0L6czMr1A %}

If you prefer reading, well... let's just continue :)

### Problem 1: How to get the code from Azure DevOps

This is fairly easy to do, you just need to have the URL of the repository you want to move over to GitHub. And you need to have access to that repo, of course. The URL is something like in this format:

```
https://dev.azure.com/ORGANIZATION/PROJECT/_git/REPONAME.git
```

To retrieve that, just head over to your project repo and you can use the Clone button there. Now that we have the URL, we can clone the code.

### Problem 2: How to authenticate to your Azure DevOps Git Repo

There are different ways to do so. You could use the same credentials you use to authenticate to the Azure DevOps portal itself, but this would open a browser window for you to insert username and password. If you have to migrate only one repo then I guess this is fine, but if you want to perform multiple migrations, or you are on a machine that is not yours, you may not want this.

Another option is to generate new Git Credentials (_in the Clone dialog we used before to get the URL_), and use those instead. Again, not very suitable if you want to automate migrations, also because the credentials may vary from repo to repo.
The way I prefer doing this is with a PAT, a __Personal Access Token__, which basically replaces both username and password.

To create a PAT simply access your Azure DevOps portal, go to the small icon with the user image next to your picture, and click on Personal Access Tokens. Here you can create a new Token, and assign specific permissions to it. For the scope of Git Repo migration we'd just need the "Read" permission under "Code".

So now we are ready to get the code.

### Problem 3: How to clone a repository using a PAT?

This is rather easy, just prepend the PAT to the "dev.azure.com" in your url, like this:

```PowerShell
$AzDOPAT = 'PeRsOnAl_AcCeSs_ToKeN_fOr_AzUrE_dEvOpS'
$AzDOOrg = 'ORGANIZATION_NAME'
$AzDOPrj = 'SOURCE_PROJECT_NAME'
$AzDORepo = 'SOURCE_REPOSITORY_NAME'

git clone https://$AzDOPAT@dev.azure.com/$AzDOOrg/$AzDOPrj/_git/$AzDORepo .
```

In this case I use variables to make it more re-usable, but I think you get the general meaning. Ok, now we have the code... at least.

### Problem 4: How to clone all the rest (branches, Tags, etc)?

We could actually do it manually, but let's instead use the git command that does it for us:

```PowerShell
git clone --mirror https://$AzDOPAT@dev.azure.com/$AzDOOrg/$AzDOPrj/_git/$AzDORepo .
```

This is almost the same command we used before, with the exception of the ___--mirror___ flag. What this does is clone the repo in a "special state" called mirroring, which copies every object in it. Indeed, the result of that command looks pretty different than the original repo. Cool, now we have everything we need: code, branches, tags...

### Problem 5: How to link it to the GitHub destination repository?

The cloned repo now only has the reference to the source repository in Azure DevOps. It's called origin. We need to add another remote repo, which will be the one in GitHub.
First of all we need the URL for GitHub, which is in this format:

```
https://github.com/USERNAME/REPONAME.git
```

or

```
https://github.com/ORGANIZATION/REPONAME.git
```

And this is the same for both Public and Private repositories, no differences. Next we need to add this as a new remote.

```PowerShell
$GHUser = 'GITHUB_USERNAME'
$GHRepo = 'GITHUB_TARGET_REPOSITORY_NAME'

git remote add GHorigin "https://github.com/$GHUser/$GHRepo.git"
```

To do so we can use the "git remote add" command. It needs a name for it, in my case I decided to call it "GHorigin", but it can be anything you want. And we need to pass to it the GitHub URL we got before. Again, here I'm using variables to compose the URL but you can pass it directly.

### Problem 6: How to push all the objects to the target repo?

As in one of the previous steps, we could push code, branches, tags, etc. separately with different commands. But once again, git comes to help:

```PowerShell
$GHPAT = 'pErSoNaL_aCcEsS_tOkEn_FoR_gItHuB'
$GHUser = 'GITHUB_USERNAME'
$GHRepo = 'GITHUB_TARGET_REPOSITORY_NAME'

git push --mirror GHorigin

git push --mirror "https://$GHPAT@github.com/$GHUser/$GHRepo.git"
```

In fact, the "--mirror" switch can be applied not only to the "git clone" command, as we have seen before, but also to "git push", as you can see here.

If you have the GitHub credentials stored in your machine, or you want to insert them interactively, then you can use the first command and push to the new origin, which in my case is this _GHorigin_. If instead you want to do this more programmatically, or you don't want to save your credentials, you can use the second command which uses the Personal Access Token instead.

### Problem 7: How to get a Personal Access Token in GitHub?

To create a PAT in GitHub, go to Settings, then Developer Settings, and finally Personal Access Tokens. Here you can Generate a new PAT. For migration purposes, we need to assign it proper permissions.
If your repository is public, then select just "_public_repo_". If, instead, you want to push to a private repository, you'd need to select the whole "repo" section.

When you have your PAT, you can use it in that command. We are almost done, just one more thing.

### Problem 8: This leaves the local repo in an "unusable state"

That's right, the "_git clone --mirror_", as we have seen, clones the repository in its "RAW" format, which then cannot be used as a normal working copy. If you want to migrate the repo AND then use it as a working copy, I got you covered. In fact I have created the [Azure DevOps To GitHub Repo Migrator - GitHub Repository](https://github.com/n3wt0n/AzureDevOpsToGitHubRepoMigrator) which contains a few utilities to migrate your Azure DevOps repository to GitHub.

{% github n3wt0n/AzureDevOpsToGitHubRepoMigrator %}

Starting from the Scripts folder, we have the "migrate-mirror.ps1" script which performs the migration as we have seen before. But we also have the "migrate.ps1" script which instead migrates the repository and leaves it in a "usable" state.

Instead of using the __--mirror__ switch, we first clone the code, then we clone all the remote branches (excluding the HEAD and the master, which we already have). Then we add the new origin as we have seen before. After doing that, we push first the code and all the branches with the "--all" switch, and then we push the tags using the "--tags" switch. Finally, we remove the source origin and optionally rename the GitHub origin, which in my case is "GHorigin", to just origin. And that's it.

In my GitHub repository there are also some other implementations which allow you to execute your migration in a Docker container, and even run it inside Azure Container Instances. Take a look at the video at the top of this post ([here for simpler reference](https://youtu.be/SR0L6czMr1A)) to see those examples more in depth.
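The "usable state" sequence described above can be sketched roughly like this. This is my own minimal approximation of what "migrate.ps1" does, not the actual script; the URLs, the `feature` branch name and the "GHorigin" remote name are placeholders:

```shell
# Clone normally (a working copy, not --mirror); URLs are placeholders.
git clone https://dev.azure.com/ORGANIZATION/PROJECT/_git/REPONAME.git repo
cd repo

# Create a local tracking branch for every remote branch,
# skipping HEAD and master, which the clone already gave us.
for branch in $(git branch -r | grep -vE 'HEAD|master'); do
  git branch --track "${branch#origin/}" "$branch"
done

# Add the GitHub remote, then push code, branches and tags.
git remote add GHorigin https://github.com/USERNAME/REPONAME.git
git push --all GHorigin    # every branch (and the commits they point to)
git push --tags GHorigin   # every tag

# Drop the Azure DevOps remote and optionally rename the new one.
git remote remove origin
git remote rename GHorigin origin
```

After this, the local clone is a normal working copy whose `origin` points at GitHub.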
### References and Links

- [GitHub repo with all the examples](https://github.com/n3wt0n/AzureDevOpsToGitHubRepoMigrator)
- [Video with all the explanation and examples](https://youtu.be/SR0L6czMr1A)

__Like, share and follow me__ 🚀 for more content:

📽 [YouTube](https://www.youtube.com/CoderDave)
☕ [Buy me a coffee](https://buymeacoffee.com/CoderDave)
💖 [Patreon](https://patreon.com/CoderDave)
🌐 [CoderDave.io Website](https://coderdave.io)
👕 [Merch](https://geni.us/cdmerch)
👦🏻 [Facebook page](https://www.facebook.com/CoderDaveYT)
🐱‍💻 [GitHub](https://github.com/n3wt0n)
👲🏻 [Twitter](https://www.twitter.com/davide.benvegnu)
👴🏻 [LinkedIn](https://www.linkedin.com/in/davidebenvegnu/)
🔉 [Podcast](https://geni.us/cdpodcast)

<a href="https://www.buymeacoffee.com/CoderDave" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 30px !important; width: 108px !important;" ></a>
n3wt0n
338,814
Constructor Overloading in C++
Constructor Overloading in C++ A constructor can be overloaded in a similar way to functions. Overloaded constructors have the same name...
0
2020-05-19T10:29:01
https://dev.to/nikhilk29693498/constructor-overloading-in-c-3858
<a href="https://www.chlopadhe.com/constructor-overloading-in-c/">Constructor Overloading in C++</a>

A constructor can be overloaded in a similar way to overloading ordinary functions. Overloaded constructors have the same name (the name of the class) but a different number of arguments. A specific constructor is called depending on the number and type of arguments passed. Since multiple constructors are present, the matching arguments must be passed when constructing an object.

Constructors can be overloaded just like other member functions. If you have both a default and a parameterized constructor defined in your class, you have overloaded constructors: one without parameters and one with parameters. In a class, you can have any number of constructors, as long as they differ in their parameter lists.
nikhilk29693498
338,822
The Knowledge Base: A Podcast on Personal Productivity Tools
The Why Today, the landscape of productivity tools is vast and vibrant. Each one of us is...
6,777
2020-05-19T11:16:04
https://dev.to/bozho/the-knowledge-base-a-podcast-on-personal-productivity-tools-2j53
discuss, productivity, tools, digitalgarden
# The Why

Today, the landscape of productivity tools is vast and vibrant. Each of us uses a different set of tools to manage our own personal knowledge base. We also vary in which parts of our knowledge base we manage: some annotate books, others stick to their to-do lists, bookmarks and/or personal finances. I think that there's tremendous value in mapping this landscape.

# What is it?

**The Knowledge Base is a podcast and a community dedicated to reviewing the tools in the ecosystem and finding the best workflows.**

The podcast will be used to interview some of the builders of the best apps and experts on the topic. The community will be used to share knowledge and experience.

# The Goal

Working on improving our productivity results in developing good habits which, we all know, are essential to self-improvement. When these are shared in the community, it's tremendously easier to find the best patterns. Our goal is to make this improvement 100x easier.

# The Next Steps

I want to hear your opinions before I share the next steps with you. Are you with me? Do you have any suggestions to make this better?

In the meantime you can follow this [Twitter account](https://twitter.com/theknowledgeba5).

Let's build The Knowledge Base together!
bozho
338,904
Google Maps & Google Places in React Tutorial
A post by Leigh Halliday
0
2020-05-19T11:41:29
https://dev.to/leighhalliday/google-maps-google-places-in-react-tutorial-npk
tutorial, video, react, googlemaps
{% youtube WZcxJGmLbSo %}
leighhalliday
338,923
Closure in Javascript
The closure in javascript is one of the main concepts which each javascript developer needs to grasp....
0
2020-05-18T00:00:00
https://dev.to/marekdano/closure-in-javascript-1gn6
javascriptfundamentals, interviewing
---
slug: 'closure-in-javascript'
title: 'Closure in Javascript'
draft: false
published: true
path: '/posts/closure-in-javascript'
layout: post
description:
tags:
  - 'Javascript fundamentals'
  - 'Interviewing'
category: Coding
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/4f40d80big6b5d0ik5e5.jpg
date: '2020-05-18'
author: 'Marek Dano'
---

The closure in javascript is one of the main concepts which each javascript developer needs to grasp. It also comes up in interviews for frontend developers.

So, what's a **closure**? We can understand it through the great definition from the [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures) documentation, which states:

> A closure is a special kind of object that combines two things: a function, and the environment in which the function was created. The environment consists of local variables that were in-scope at the time that the closure was created.

In other words, a closure is created when a function is returned from another function and that returned function has access to the outer function's scope. The closure is created at function creation time.

Let's say we have the following function:

```javascript
function getFamily(familyName) {
  return firstName => `${firstName} ${familyName}`
}
```

We can create a `family` function by calling the function `getFamily` and passing `familyName` into that function. Calling the function `getFamily` returns a function, and a closure is created with its scope defined. The scope contains `familyName`.

If we call that returned function (the closure) from `getFamily`, in our case `family`, and pass `firstName` into it, we can get the full name. The reason we can get `familyName` is that we have access to the outer scope of the returned function, where `familyName` exists. Remember, the variable `familyName` was captured when the function `family` was created.
Hopefully what I said now makes sense when we execute the following code:

```javascript
const family = getFamily('Smith')

const fatherFullName = family('John')
const motherFullName = family('Emma')

console.log(fatherFullName) // John Smith
console.log(motherFullName) // Emma Smith
```

A closure is also used when we want to keep the variables defined in a function private, so they are not accessible outside of the scope. The variables can then only be modified inside of the scope. Consider this extended code:

```javascript
function getFamily(familyName) {
  const familyMembers = []

  function addMember(firstName) {
    familyMembers.push(firstName)
  }

  const listOfFamilyMembers = () => familyMembers.toString()
  const getFamilyName = () => familyName

  return {
    addMember,
    listOfFamilyMembers,
    getFamilyName,
  }
}
```

The variable `familyMembers` won't be accessible from outside. It can only be accessed or modified by the functions which are defined in the scope when the function `getFamily` is called. If we want `familyMembers` to be accessible from `getFamily`, we can add it to the object returned from that function, but in this case the variable won't be private anymore.

Now please follow the code and let me know what will be logged. Try to execute the code in your head first before testing it in your preferred javascript console.

```javascript
const family = getFamily('Smith')

family.addMember('John')
family.addMember('Emma')
family.addMember('Josh')

console.log(family.listOfFamilyMembers()) // ???
console.log(family.getFamilyName()) // ???
```
marekdano
338,935
Building my new site with gridsome(vue.js)
Originally published here. I'm really excited to finally launch my new website 🥳. It's been a labor...
0
2020-05-19T13:07:34
https://lewiskori.com/blog/building-my-new-site-with-gridsome-vue-js/
vue, showdev, growth, webdev
Originally published [here](https://lewiskori.com/blog/building-my-new-site-with-gridsome-vue-js/).

I'm really excited to finally launch [my new website](https://lewiskori.com/) 🥳. It's been a labor of love and in terms of growth, I must say I really enjoyed working on it.

For the tech stack, I went out of my comfort zone as I am majorly a backend developer. So I used the opportunity to polish my frontend skills. I utilized my favorite javascript framework, vue.js, and used its static site generator, gridsome. [Bulma](https://bulma.io/) was used for CSS.

In this article, I'll explain how this decision came to be, what I was using before, and my thoughts on gridsome.

- [What I was using before](#what-i-was-using-before)
- [Why I switched to gridsome](#why-i-switched-to-gridsome)
- [Benefits of gridsome](#benefits-of-gridsome)
- [Extra features](#extra-features)
- [Challenges of gridsome](#challenges-of-gridsome)
- [Was it worth the switch to gridsome](#was-it-worth-the-switch-to-gridsome)
- [What next](#what-next)
- [credits](#credits)

### What I was using before

As aforementioned, I am primarily a backend developer, so the first version of my website wasn't up to date with the modern web trends. I did this on purpose because at the time my main aim was to perfect my backend skills, and so I heavily concentrated on that aspect. I used Django (a python web framework), Postgresql, and a template from colorlib which I extended and modified to suit my needs. With time, I wrapped that with docker and redeployed the entire site.

I used that project as a learning opportunity. You can read all about the lessons I learned [here](https://lewiskori.com/blog/lessons-learnt-from-building-and-deploying-a-portfolio-website/). [Here's version one of the site](https://v-one.lewiskori.com/) for comparison.

### Why I switched to gridsome

So my site was working fine and I absolutely loved it.
With time however, and as I became more experienced in the backend, that curiosity bug that most developers come shipped with 😅 began nudging at me. Since I'd been learning vue.js and came to love it, I thought this would be a great opportunity to flex my frontend muscles a bit. Besides, what better way to learn than doing?

Other than these reasons, it's important as a developer to keep up to date with the ever-changing tech field.

My old site was missing two important features that I really wanted: continuous deployment and better code highlighting in markdown. I saw netlify as an easy solution to the continuous deployment challenge.

For context, here's a snapshot of my previous syntax highlighting:

![old_code_highlight](https://res.cloudinary.com/lewiskori/image/upload/v1589888452/old-syntax_m5efn6.png)

### Benefits of gridsome

![gridsome_advantages](https://res.cloudinary.com/lewiskori/image/upload/v1589888523/Screenshot_2020-05-19_Modern_Site_Generator_for_Vue_js_-_Gridsome_rlv8my.png)

As highlighted above, gridsome comes with a plethora of advantages. Building on the awesome vue framework, it manages to be simple to understand, its documentation is exceptional to say the least, and I got to solve the two challenges I had mentioned.

To deploy to netlify, all you have to do is link your GitHub repo to netlify. From there, netlify will monitor for changes and update your site automatically. The [gridsome docs](https://gridsome.org/docs/deploy-to-netlify/) offer more on this.

For code highlighting, I could now embed from various sources including gists and codepen. As a bonus, the new site has the capability to embed Spotify content for music lovers 🕺🏼.
This aside, the basic syntax highlighting came to this:

```python
class moviesCrawl(Spider):
  name="movies"
  url_link="https://www.themoviedb.org/movie?page=1"
  page_number=15

  start_urls=['http://api.scraperapi.com/?api_key='+ API_KEY + '&url=' + url_link + '&render=true']
```

#### Extra features

Some additional features that were implemented for the new site are:

1. The site is now a PWA! So awesome.
2. Improved SEO by utilizing Vue Meta.
3. Writing content in Markdown.

### Challenges of gridsome

The development process was fairly fun as the documentation was well written and thought out. However, learning material beyond that was scarce, which is not a bad thing in itself, as it forces you to figure things out on your own.

I'm keen to write a comprehensive tutorial on using gridsome with the lessons I learned. In case you're interested, [subscribe to my newsletter](https://mailchi.mp/c42286076bd8/lewiskori) and you'll get the content as soon as it's out.

### Was it worth the switch to gridsome

Without a shadow of a doubt, yes!! The site took me a little over a month, working tirelessly in my off-work hours. But in the end, the effort was worth it. In the process, I've come to appreciate the modern web, and I'm extremely curious to explore graphql, which gridsome utilizes.

### What next

This won't be the end, as no project is ever complete. I'll be making a few modifications and I'd appreciate any input on the design.

In the coming days, I'll make the entire codebase completely open-source for use by anyone who may want such a site.

In terms of content, be sure to [watch out](https://mailchi.mp/c42286076bd8/lewiskori) as I'll double down on more backend tutorials with python and golang.

Thanks for reading this post. Should you have any questions, feel free to leave a comment below. My [twitter dm](https://twitter.com/lewis_kihiu/) is always open as well.

#### credits

1. The design was highly inspired by [Brittany Chiang's](https://brittanychiang.com/) Gatsby site.
2.
The [gridsome starter blog source code](https://github.com/lewis-kori/gridsome-starter-blog) gave me a lot of insight into where the documentation was lacking.
lewiskori
338,956
Web Development Firms: The Role Of Website In Business
The role of a website in your business is great. That is why you need one of the web development firm...
0
2020-05-19T12:44:39
https://dev.to/mssvs/web-development-firms-the-role-of-website-in-business-19jf
<em><strong>The role of a website in your business is great. That is why you need one of the web development firms.</strong></em>

Did you know that <a href="https://www.pymnts.com/news/retail/2018/omichannel-ecommerce-consumer-habits/" rel="nofollow">88% of customers</a> spend time evaluating online products before deciding to make a purchase? This has a vital implication. In doing business online, it is important that your business website provides the most essential information and content to searchers. If the website lacks substantial facts and information, the tendency is that your business will perform weakly. And you don’t want this to happen, do you? That is why you need to understand the role of a website in your business success.

In today’s business undertakings, it is necessary to have a website. No business should exist without one. Why? A site serves as a platform where owners are able to showcase and offer the products under their brand. And because of this necessity, it is advised that you get the services of one of the <a href="https://medium.com/theymakedesign/front-end-development-companies-984ed848c39b">web development firms</a> today.

Why not create a website by yourself? Yes, there are do-it-yourself procedures on how to design and develop a website. But when your brand site is created by a professional, it can provide more opportunities and possibilities for your business to grow and prosper. Having a professional website is ultimately a must.

In the past, you had to establish a physical store, also known as a brick-and-mortar store, for the purpose of retailing or wholesaling your brand products. But today, there’s a shift from that traditional fashion. Nowadays, it’s more important to establish a strong online presence. Failure to do this can have a detrimental impact on your business startup. Even if you’re an existing venture, it is necessary to have a website that can serve as your online store.
<h3>The need for a digital marketing strategy</h3>

In the past, businesses chose to pay for ads on TV, prints, billboards, and other physical ways and means. But at present, there is what we call “digital marketing.” This marketing strategy is far different from past techniques. Today, it’s more vital to establish the presence of your business site on the web because more customers can be reached digitally. In other words, the present wonders of marketing are on the Internet. So, every piece of digital marketing content is quite necessary.

And to make sure that your website is going to stand out, of course, it is imperative that you hire one of the <a href="https://www.ramotion.com/agency/web-design/">top web development companies</a>, like Ramotion. This web design firm has the tools, know-how, and resources to ensure the conversions of a website.

If in the past the advertisements were done on prints and televisions, today they are done on YouTube, Facebook, Twitter, Instagram, and of course, Google (the most popular search engine). Because of this reality, business marketers have shifted their focus. Nowadays, Internet marketers are popular. They are professional marketers who can work with businesses in increasing the potential of a digitized business endeavor. In other words, marketing at present is all about digitization.

The need to strengthen the Internet marketing aspect of your business lies in the fact that most buyers prefer to buy through the web. Internet shopping, or online shopping, is the present practice that people observe. So, if you don’t have a website, you will surely be left behind by those competitors who have a site for their business.

Remember that a website should respond to the needs and demands of the customers. The published content on the site should be relevant, substantial and informative. The site visitors and users must find your brand as a source of important solutions.
And there are steps you should observe and perform in order to achieve success. To some extent, every piece of information the users get from your site must provide a vivid idea about your solution.

<h3>The solution should be effective</h3>

The products or services you’re offering to potential customers must be effective. It means that the offers, whatever they are, must be able to address the concerns and issues of the customers. Once you can provide them with what they really want to have, you will be appreciated as one of the best brands in your respective biz category. But if you fail to please and satisfy the audience, it can cause a number of drawbacks which can eventually lead to business failure.

Increasing the engagement of potential customers is a must. How to do it? Well, you need to go back to the different online platforms. Aside from your website (where of course you can do on-site Internet marketing), you also have to consider the other channels such as Facebook, YouTube and Google. There you can provide content and you can pay influencers to promote your brand products or services.

<h3>The solution must be visible</h3>

Your brand website must be visible to the target audiences. Otherwise, you’re not going to make a profit out of your business. It means your site has to rank on the different search engines. How to achieve this? You need to do proper SEO (search engine optimization), wherein every page on your site is optimized according to the rules and algorithms of Google, Yahoo!, Bing, and even Facebook.

When your brand solution becomes eye-catching to a lot of people, you’re going to have more conversions. Why? Because more people are able to find your company website. They are able to read your informative blog posts and to see your listed products. They can appreciate you as a provider of true and legitimate solutions to their existing problems.
And every time they need products or services, they automatically resort to finding your website again and buying your brand offers repeatedly. It is not hard to understand, right? If in the past all commodities and services were found in the malls and supermarkets, today they can be found in the different online stores globally. Look at Amazon, an ecommerce giant. Everything is there, from books to car accessories and food supplies. You can buy whatever you want from Amazon.

The implication is simple. You need to boost your online presence through a business website. This is a great chance available for you if you want to become successful. Gone are the days when you had to build a store in a particular land area. Today, the business landscape and commerce activities are digitized. All things are marketed through the Internet. So, you should get the vital services of web development firms if you want your <a href="https://www.practicalecommerce.com/how-do-you-measure-ecommerce-success" rel="nofollow">business to become successful</a>.

<img src="https://dev-to-uploads.s3.amazonaws.com/i/8zprjobun7n254crzvjz.jpg">

<b>Read more:</b>

<a href="https://www.bloglovin.com/@110186/front-end-development-company-explaining">Front End Development Company: Explaining The Context Of Tech-Based Web</a>

<a href="https://theymakedesign.hatenablog.com/entry/front-end-website-developer">Front End Website Developer Cites Ways To Boost Website Performance</a>

<a href="http://theymakedesign.mystrikingly.com/blog/top-front-end-developers">Reasons Why You Need Top Front End Developers</a>

<a href="http://theymakedesignreal.tilda.ws/top-development-companies-for-front-end">Top Development Companies: A Responsive Website Is Solution To Achieve Success</a>

<a href="https://web-developers.webflow.io/front-end-web-developer">Being A Front End Web Developer: Things To Know</a>
mssvs
339,081
Home Surveillance System With Node and a Raspberry Pi
Have you ever wondered how to build a home surveillance system? Perhaps to monitor your children,...
0
2020-05-19T14:07:46
https://www.nexmo.com/blog/2020/05/19/home-surveillance-system-with-node-and-a-raspberry-pi
sms, video, node, raspberrypi
---
title: Home Surveillance System With Node and a Raspberry Pi
published: true
date: 2020-05-19 13:31:31 UTC
tags: sms,video,node,raspberrypi
canonical_url: https://www.nexmo.com/blog/2020/05/19/home-surveillance-system-with-node-and-a-raspberry-pi
---

Have you ever wondered how to build a home surveillance system? Perhaps to monitor your children, supervise vulnerable people in their home, or to be your home security system? This tutorial will guide you through the introductory process of building one.

In this tutorial, you get to build a small and cheap home surveillance system using a Raspberry Pi 4 with a Raspberry Pi Camera module and motion sensor. The software side of this will be using [Vonage Video API](https://www.vonage.com/communications-apis/video/) (formerly TokBox OpenTok) to publish the stream and [Vonage Messages API](https://developer.nexmo.com/messages/overview) to notify the user by SMS when motion is detected.

Here are some of the things you’ll learn in this tutorial:

- How to set up a Raspberry Pi,
- Install a Raspberry Pi camera and motion sensor,
- How to use [Vonage Messages API (formerly Nexmo)](https://dashboard.nexmo.com/getting-started/messages) to send SMS,
- How to use [Vonage Video API (formerly TokBox OpenTok)](https://tokbox.com/developer/) to create and view a live stream.
## Prerequisites

- Raspberry Pi 4
- Raspberry Pi Camera module
- Motion Sensor (HC-SR501 PIR)
- [Vonage account](https://dashboard.nexmo.com/sign-up?utm_source=DEV_REL&utm_medium=blog&utm_campaign=home-surveillance-system-with-node-and-a-raspberry-pi)
- [TokBox Account](https://tokbox.com/account/user/signup?utm_source=DEV_REL&utm_medium=blog&utm_campaign=home-surveillance-system-with-node-and-a-raspberry-pi)
- Node & NPM installed on the Raspberry Pi

## Raspberry Pi Installation and Setup

The Raspberry Pi Foundation is a UK-based charity enabling people worldwide to solve technological problems and express themselves creatively using the power of computing and digital technologies for work.

On their site is a great [step by step guide](https://projects.raspberrypi.org/en/projects/raspberry-pi-setting-up) on what each part of the Raspberry Pi device is, how to get the Operating System installed, and how to get started with using a Raspberry Pi. There are also many other resources to help with troubleshooting any issues you may be having, and lots of other projects that may interest you.

## Camera and Motion Sensor Installation

### Installing Raspberry Pi Camera Module

This tutorial uses a Raspberry Pi 4 and the official Raspberry Pi Camera module, although there should be no issues using other cameras. The photograph below is of the Raspberry Pi and a Camera Module used in this article:

![Raspberry Pi](https://www.nexmo.com/wp-content/uploads/2020/05/raspberry-pi.jpeg "Raspberry Pi")

Connect the Camera Module via the ribbon cable into the Raspberry Pi’s Camera Module port. The photograph below shows where you should install the Camera Module ribbon:

![Raspberry Pi with Camera](https://www.nexmo.com/wp-content/uploads/2020/05/raspberry-pi-camera-ribbon.jpeg "Raspberry Pi with Camera")

### Enabling SSH and Camera

[Secure Shell (SSH)](https://www.ssh.com/ssh/) is a software package that enables a secure connection and control of a remote system.
The Raspberry Pi in this tutorial will run in headless mode, which means without a monitor, keyboard or mouse. With SSH enabled, you will be able to connect to the device remotely on your computer or phone. To enable SSH, in the Raspberry Pi terminal, run: ```bash sudo raspi-config ``` You will see a screen similar to the image shown below: ![Enable SSH & Camera](https://www.nexmo.com/wp-content/uploads/2020/05/raspi-config.png "Raspi-config") Choose option 5 – `Interfacing Options` - From the next menu, choose Option P1 for `Camera`, then select `Yes`, - Following this choose Option P2 for `SSH`, again select `Yes`. You have now enabled the Camera module and SSH on your Raspberry Pi. ### Installing the Motion Sensor The next step is to wire the Raspberry Pi to a motion sensor. This tutorial uses the HC-SR501 PIR motion sensor; however, other motion sensor modules should work fine. Please refer to their wiring guides for wiring them to your Raspberry Pi. First, take the sensor and connect three wires to it. I’ve used red for the live, blue for the GPIO, and black for ground. For the sensor in this example, the first pin is ground, second GPIO, and third live, as shown: ![Wiring Sensor to Raspberry Pi Pt1](https://www.nexmo.com/wp-content/uploads/2020/05/sensor-wiring-pt1.jpeg "Wiring a Motion Sensor Pt 1") A great reference describing each of the pins on the Raspberry Pi is on [the Raspberry Pi website](https://www.raspberrypi.org/documentation/usage/gpio/). The diagram illustrates the layout of the GPIO pins, as shown below: ![GPIO Pinout Diagram](https://www.nexmo.com/wp-content/uploads/2020/05/GPIO-Pinout-Diagram-2.png "GPIO Pinout Diagram") The final part is connecting the wires to the Raspberry Pi. The live (red) wire needs to be connected to one of the `5V power` pins on the Pi; referring to the diagram above, I used pin 2. The ground (black) wire needs to be connected to one of the `GND` pins on the Pi; again referring to the diagram, I used pin 6. 
The final wire to join is the GPIO (blue) wire, which needs to connect to one of the `GPIO` pins. In this example, I used pin 12, labelled “GPIO 18”. The final wiring setup is shown below: ![Wiring Sensor to Raspberry Pi Pt2](https://www.nexmo.com/wp-content/uploads/2020/05/sensor-wiring-pt2.jpeg "Wiring a Motion Sensor Pt 2") ### Testing Motion Detection Now all the hardware is installed and configured, and it’s time to build the code for the project. First, though, a Node project needs creating to test the motion sensor and prepare for the project ahead. This project is where you will write all of the motion detection and video streaming code. To create a new Node project, make a new directory, change to that directory and run `npm init`. Running the commands listed below does all three of these: ```bash mkdir /home/pi/pi-cam/ cd /home/pi/pi-cam/ npm init ``` Follow the instructions requested, set a name for the project and leave the rest of the inputs as defaults. The following commands create a new `index.js`, which will store the majority of your code, and install a new package called `onoff` that allows the controlling of the GPIO pins: ```bash touch index.js npm install onoff ``` Inside your new `index.js` file, copy the following code, which watches GPIO pin 18 and alerts when motion has been detected or when the movement has stopped. ```javascript const gpio = require('onoff').Gpio; const pir = new gpio(18, 'in', 'both'); pir.watch(function(err, value) { if (value == 1) { console.log('Motion Detected!') } else { console.log('Motion Stopped'); } }); ``` Time to check whether the code above and the installation of the motion sensor were successful. Run: ```bash node index.js ``` Wave your hand in front of the motion sensor, then watch the Terminal to see “Motion Detected!”. A few seconds later you’ll see “Motion Stopped” output. ### Testing the Camera In your Raspberry Pi command line, type the following command to take a still photo of the camera’s view. 
**NOTE** If you have logged in as a user other than the default `pi`, replace `pi` with your username. ```bash raspistill -o /home/pi/cam.jpg ``` Looking in the directory `/home/pi/` you’ll now see `cam.jpg`. Opening it will show you a photo of your Raspberry Pi’s current camera view. ### Node and NPM Check that both are installed and at the correct version: ```bash node --version npm --version ``` > Both Node and NPM need to be installed and at the correct version. [Go to nodejs.org](https://nodejs.org/), download and install the correct version if you don’t have it. ### Our CLI To set up your application, you’ll need to install [our CLI](https://www.npmjs.com/package/nexmo-cli). Install it using NPM in the terminal. ```bash npm install -g nexmo-cli@beta ``` You can check you have the correct version with this command. At the time of writing, I was using version `0.4.9-beta-3`. ```bash nexmo --version ``` Remember to [sign up for a free Vonage account](https://dashboard.nexmo.com/sign-up?utm_source=DEV_REL&utm_medium=blog&utm_campaign=home-surveillance-system-with-node-and-a-raspberry-pi) and configure the CLI with the API key and API secret found on your dashboard. ```bash nexmo setup <your_api_key> <your_api_secret> ``` ### Git (Optional) You can use git to clone the [demo application](https://github.com/nexmo-community/home-surveillance-with-raspberry-pi) from GitHub. > For those uncomfortable with git commands, don’t worry, I’ve got you covered. Follow this [guide to install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). ### Install a MySQL Server On the Raspberry Pi, run the following command to install the MySQL database server: ```bash sudo apt install mariadb-server ``` By default, the MySQL server gets installed with the `root` user having no password. You need to rectify this to ensure the database isn’t insecure. On the Pi run the command below and follow the instructions. 
```bash sudo mysql_secure_installation ``` Now that the `root` user’s password is set, it’s time to create a database and a user to access that database. Connect to the MySQL server: ```bash sudo mysql -u root -p ``` ```mysql -- Creates the database with the name picam CREATE DATABASE picam; -- Creates a new database user "camuser" with a password "securemypass" and grants them access to picam GRANT ALL PRIVILEGES ON picam.* TO `camuser`@localhost IDENTIFIED BY "securemypass"; -- Flushes these updates to the database FLUSH PRIVILEGES; ``` Your Raspberry Pi is now set up and ready for the code part of this tutorial. ## Building the Application ### Installing an SSL Certificate Vonage Video API requires HTTPS, so an SSL certificate is needed, even if it’s self-signed. In your Raspberry Pi’s Terminal, change directory to your project path and run the following command to generate a self-signed SSL certificate: ```bash openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 ``` Two files get created, `key.pem` and `cert.pem`; move these to a location your code can access. For this tutorial, they’re in the project directory. ### The Web Server [Express](https://expressjs.com/) is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications. This lightweight framework is exactly what you need in this project to provide endpoints for accessing your video stream. Install Express into your application with the following command: ```bash npm install express --save ``` At the top of the `index.js` file, you need to import the packages `https`, `fs` and `express`. 
Make the following changes: ```diff + const express = require('express'); + const https = require('https'); + const fs = require('fs'); const gpio = require('onoff').Gpio; + const app = express(); const pir = new gpio(18, 'in', 'both'); pir.watch(function(err, value) { if (value == 1) { console.log('Motion Detected!') - } else { - console.log('Motion Stopped'); } }); ``` You don’t need the `else` part of the motion detection for this tutorial, so remove that part too, as shown above. You need a web server to access your video stream over the network or Internet. Time to create a method to initiate a new server with an example endpoint. Above `pir.watch(function(err, value) {`, add: ```javascript async function startServer() { const port = 3000; app.get('/', (req, res) => { res.json({ message: 'Welcome to your webserver!' }); }); const httpServer = https.createServer({ // The key.pem and cert.pem files were created by you in the previous step; if the files are not stored in the project root directory // make sure to update the two lines below with their correct paths. key: fs.readFileSync('./key.pem'), cert: fs.readFileSync('./cert.pem'), // Update this passphrase with whatever passphrase you entered when generating your SSL certificate. passphrase: 'testpass', }, app); httpServer.listen(port, (err) => { if (err) { return console.log(`Unable to start server: ${err}`); } return true; }); } ``` A way to access this function is now needed; below your `startServer()` function, add a call to it as shown: ```javascript startServer(); ``` To test this is working, in your Terminal, run: ```bash node index.js ``` > **_Note:_** If you’re connected to your Raspberry Pi via SSH or keyboard/tv, in the Terminal type: `ifconfig` to find out your Raspberry Pi’s local IP address. 
Accessing your Raspberry Pi’s IP address in your browser: `https://<ip address>:3000/` will return ```json {"message":"Welcome to your webserver!"} ``` ### Installing Sequelize [Sequelize](https://sequelize.org/) is a powerful library for Node to make querying a database easier. It is an Object-Relational Mapper (ORM), which maps objects to database schemas. Sequelize supports various databases, such as Postgres, MySQL, MariaDB, SQLite, and Microsoft SQL Server. This tutorial uses MariaDB server because that's the SQL server available on the Raspberry Pi. ```bash # DotEnv is used to access your .env variables # Sequelize is an ORM for your DATABASE # mysql2 is the MySQL/MariaDB driver. Sequelize needs this to connect to your database. npm install --save dotenv sequelize mysql2 # Sequelize-cli allows you to generate models, migrations and run these migrations. npm install -g sequelize-cli # Initializes Sequelize into the project, creating the relevant files and directories sequelize init ``` Inside your project directory, create a new file `.env`, and update the values below with the correct credentials for your database. ```env DB_NAME=<database name> DB_USERNAME=<database username> DB_PASSWORD=<database password> DB_HOST=127.0.0.1 DB_PORT=3306 ``` Within the `config` directory create a new file called `config.js`. 
This file is where the project’s database settings are stored, and being JavaScript, it can access the `.env` file: ```javascript require('dotenv').config(); module.exports = { development: { database: process.env.DB_NAME, username: process.env.DB_USERNAME, password: process.env.DB_PASSWORD, host: process.env.DB_HOST, port: process.env.DB_PORT, dialect: 'mysql', operatorsAliases: false }, } ``` Now in `models/index.js`, find and replace: ```diff - const config = require(__dirname + '/../config/config.json')[env]; + const config = require(__dirname + '/../config/config.js')[env]; ``` Back in your main `index.js` file, import the `models/index.js` file for your application to access your database models: ```javascript const db = require('./models/index'); ``` ### Generating and Running a Migration When a Vonage Video session gets created, a session ID gets returned; this session ID needs to be stored somewhere for you to connect to it remotely. The best way to do this is a database table. Using the recently installed Sequelize CLI, run the command below. It generates a new model called Session with two attributes: - sessionId (which is a string), - active (which is a boolean). ```bash # Generate yourself a Session model; this is going to be used to store the sessionId of the video feed sequelize model:generate --name Session --attributes sessionId:string,active:boolean ``` Two new files get created when this command succeeds; these are: - `models/session.js` - `migrations/<timestamp>-Session.js` The new model, `session.js`, defines what the database expects in terms of column names, data types, among other things. The new migrations file defines what is to be persisted to the database when the migration is successful. 
In this instance, it creates a new database table called `sessions` with five new columns: - id - sessionId - active - createdAt - updatedAt Run this migration using the Sequelize CLI command with the parameters `db:migrate`: ```bash sequelize db:migrate ``` The output will be the same as the example below: ```bash == 20200504091741-create-session: migrating ======= == 20200504091741-create-session: migrated (0.051s) ``` You now have a new database table that you will later use to store the session ID. ## Vonage Video You’re about to install two libraries the project needs, Vonage Video (formerly TokBox OpenTok), and Puppeteer. Vonage Video (formerly TokBox OpenTok) is a service that provides live interactive video sessions to people globally. The Vonage Video API (formerly TokBox OpenTok) uses the WebRTC industry standard. It allows people to create custom video experiences across billions of devices, whether it be mobile, web or desktop applications. Puppeteer is a Node library that provides a method to control Chrome or Chromium programmatically. By default, Puppeteer runs in a headless mode, but can also run in a non-headless mode of Chrome or Chromium. A headless browser is a browser without a graphical user interface, (such as no monitor for the user to see). Install both of these libraries by running the command below: ```bash npm install opentok puppeteer ``` Copy the additions to the code in your `index.js` as shown below. This code imports three libraries into your project. - OpenTok (To publish/subscribe to video stream with Vonage Video) - Puppeteer (For your Raspberry Pi to open a browser in headless mode to publish the stream) - DotEnv (To access the .env variables) An OpenTok object gets initialized using your Vonage API Key and Secret .env variables you have yet to add. 
```diff const gpio = require('onoff').Gpio; + const OpenTok = require('opentok'); + const puppeteer = require('puppeteer'); + const dotenv = require('dotenv'); const app = express(); const pir = new gpio(18, 'in', 'both'); + dotenv.config(); + const opentok = new OpenTok( + process.env.VONAGE_VIDEO_API_KEY, + process.env.VONAGE_VIDEO_API_SECRET, + ); ``` You’ll need your Vonage Video API key and API secret. You can find these by logging into your [Vonage Video API account](https://tokbox.com/account). Next, create a new Project. Once created, you will see your project’s dashboard, which contains the API key and API secret. Inside your `.env` file add the Vonage Video credentials as below (updating the values inside `<` and `>` with your credentials): ```env VONAGE_VIDEO_API_KEY=<tokbox api key> VONAGE_VIDEO_API_SECRET=<tokbox api secret> ``` ### Creating a Vonage Video Session In your `index.js` file, find the part of the code that initializes the OpenTok object, and add three variables: - `canCreateSession` determines whether your project can create a session (it is false while a session is already active), - `session` holds the current session object, - `url` keeps the current URL of the session (in this case, an Ngrok URL) ```diff const opentok = new OpenTok( process.env.VONAGE_VIDEO_API_KEY, process.env.VONAGE_VIDEO_API_SECRET, ); + let canCreateSession = true; + let session = null; + let url = null; ``` Time to create a session and store the returned session ID in the database for use when the user clicks on the link to view the published stream. 
Copy the code below to add the functions that achieve this: ```javascript async function createSession() { opentok.createSession({ mediaMode: 'routed' }, (error, session) => { if (error) { console.log(`Error creating session:${error}`); return null; } createSessionEntry(session.sessionId); return null; }); } function createSessionEntry(newSessionId) { db.Session .create({ sessionId: newSessionId, active: true, }) .then((sessionRow) => { session = sessionRow; return sessionRow.id; }); } ``` The session watcher part of the project needs updating to check whether `canCreateSession` is true; if it is, set it to false (so no other streams get created while this one is active), then create the session by calling the `createSession` method previously added to the project. This is done by updating the following code: ```diff pir.watch(function(err, value) { - if (value == 1) { + if (value === 1 && canCreateSession === true) { + canCreateSession = false; console.log('Motion Detected!'); + createSession(); } }); ``` ### Creating a Publisher and Subscriber A new directory is needed to hold the front-facing pages: one for the Pi to publish its stream, and one for the client (you) to subscribe to a stream. Create a new `public` directory with its accompanying `css`, `js`, and `config` directories with the commands below: ```bash mkdir public mkdir public/css mkdir public/js mkdir public/config ``` You’re going to need some styling for the page that the client sees, so create a new `app.css` file inside `public/css/` and copy the code below into this file. The CSS below ensures the size of the content is 100% in height, the background colour is grey, and the video stream is full screen for maximum visibility. 
```css body, html { background-color: gray; height: 100%; } #videos { position: relative; width: 100%; height: 100%; margin-left: auto; margin-right: auto; } #subscriber { position: absolute; left: 0; top: 0; width: 100%; height: 100%; z-index: 10; } #publisher { position: absolute; width: 360px; height: 240px; bottom: 10px; left: 10px; z-index: 100; border: 3px solid white; border-radius: 3px; } ``` Next, you will need to create a new JavaScript file that gets used on the client’s side (so in your browser as the subscriber). This file will initialize a Vonage Video session and get the session details from the backend with a GET request; if the URL path is `/serve`, it publishes the stream, and if the URL path is `/client`, it subscribes to the currently active video stream. In `public/js/` create a new `app.js` file and copy the following code into it: ```javascript let apiKey; let sessionId; let token; let isPublisher = false; let isSubscriber = false; let url = ''; // Handling all of our errors here by logging them function handleError(error) { if (error) { console.log(error.message); } } function initializeSession() { const session = OT.initSession(apiKey, sessionId); let publisher = null; // Subscribe to a newly created stream if (isSubscriber === true) { session.on('streamCreated', (event) => { session.subscribe(event.stream, 'subscriber', { insertMode: 'append', width: '100%', height: '100%', }, handleError); }); } if (isPublisher === true) { // Create a publisher (declared above so the connect callback below can see it) publisher = OT.initPublisher('publisher', { insertMode: 'append', width: '100%', height: '100%', }, handleError); } // Connect to the session session.connect(token, (error) => { // If the connection is successful, publish to the session if (error) { handleError(error); } else if (isPublisher === true) { session.publish(publisher, handleError); } }); } function setDetails(details) { apiKey = details.apiKey; sessionId = details.sessionId; token = details.token; initializeSession(); } async function getDetails(publisher, 
subscriber, url) { const request = await fetch(url); const response = await request.json(); if (publisher === true) { isPublisher = true; } if (subscriber === true) { isSubscriber = true; } setDetails(response); } function fetchUrl() { return fetch('/config/config.txt') .then( r => r.text() ) .then( t => { url = t} ); } ``` Two new `HTML` files are needed for these two new endpoints `/serve` and `/client`, these make use of the Vonage Video client-side javascript library to publish or subscribe to current active sessions. Create a new `server.html` file inside the `public/` directory with the following contents: ```html <html> <head> <link type="text/css" rel="stylesheet" href="/css/app.css"> <script src="https://static.opentok.com/v2/js/opentok.min.js"></script> <script src="https://unpkg.com/axios/dist/axios.min.js"></script> </head> <body> <h1>Publisher view</h1> <div id="videos"> <div id="publisher"></div> </div> <script type="text/javascript" src="/js/app.js"></script> <script type="text/javascript"> getDetails(true, false, 'https://localhost:3000/get-details'); </script> </body> </html> ``` For the `/client` endpoint, create a new `client.html` file inside the `public/` directory and copy the following code: ```html <html> <head> <link type="text/css" rel="stylesheet" href="/css/app.css"> <script src="https://static.opentok.com/v2/js/opentok.min.js"></script> <script src="https://unpkg.com/axios/dist/axios.min.js"></script> </head> <body> <h1>Subscriber view</h1> <div> <button onclick="getDetails(false, true, url + 'get-details')">Watch Video Stream</button> </div> <div id="videos"> <div id="subscriber"></div> </div> <script type="text/javascript" src="/js/app.js"></script> </body> </html> ``` You don’t have the endpoints defined yet in your backend code (`index.js`), so time to build those! Find the original endpoint you created: ```javascript app.get('/', (req, res) => { res.json({ message: 'Welcome to your webserver!' 
}); }); ``` Replace it with the following code: ```javascript // Adds the public directory to a publicly accessible directory within our new web server app.use(express.static(path.join(`${__dirname}/public`))); // Creates a new endpoint `/serve` as a GET request, which provides the contents of `/public/server.html` to the users browser app.get('/serve', (req, res) => { res.sendFile(path.join(`${__dirname}/public/server.html`)); }); // Creates a new endpoint `/client` as a GET request, which provides the contents of `/public/client.html` to the users browser app.get('/client', (req, res) => { res.sendFile(path.join(`${__dirname}/public/client.html`)); }); // Creates a new endpoint `/get-details` as a GET request, which returns a JSON response containing the active Vonage Video session, the API Key and a generated Token for the client to access the stream with. app.get('/get-details', (req, res) => { db.Session.findAll({ limit: 1, where: { active: true, }, order: [['createdAt', 'DESC']], }).then((entries) => res.json({ sessionId: entries[0].sessionId, token: opentok.generateToken(entries[0].sessionId), apiKey: process.env.VONAGE_VIDEO_API_KEY, })); }); ``` If you look carefully in the above code, you’re using a new library called `path`. So at the top of the `index.js` file, include path as shown below: ```javascript const path = require('path'); ``` Nothing happens until you publish the display on the Raspberry Pi. Inside `.env` add another variable (60000 milliseconds is the equivalent to 60 seconds): ```env VIDEO_SESSION_DURATION=60000 ``` Back inside `index.js` add functionality that will close the stream when the function `closeSession()` is called: ```javascript async function closeSession(currentPage, currentBrowser) { console.log('Time limit expired. 
Closing stream'); await currentPage.close(); await currentBrowser.close(); if (session !== null) { session.update({ active: false }); } } ``` Now it’s time to publish the stream. The function below does the following, all in headless mode: - Creates a new browser instance, - Opens a new page / tab, - Overrides permissions for the camera and microphone on the browser, - Directs the page to the `/serve` endpoint to publish the video stream, - Creates a new timer to stop the video stream after a certain length of time, - Creates another timer to provide a buffer between the stream ending and when another is allowed to start. Copy the code below into your `index.js` file: ```javascript async function startPublish() { // Create a new browser using puppeteer const browser = await puppeteer.launch({ headless: true, executablePath: 'chromium-browser', ignoreHTTPSErrors: true, args: [ '--ignore-certificate-errors', '--use-fake-ui-for-media-stream', '--no-user-gesture-required', '--autoplay-policy=no-user-gesture-required', '--allow-http-screen-capture', '--enable-experimental-web-platform-features', '--auto-select-desktop-capture-source=Entire screen', ], }); // Creates a new page for the browser const page = await browser.newPage(); const context = browser.defaultBrowserContext(); await context.overridePermissions('https://localhost:3000', ['camera', 'microphone']); await page.goto('https://localhost:3000/serve'); let sessionDuration = parseInt(process.env.VIDEO_SESSION_DURATION, 10); let sessionExpiration = sessionDuration + 10000; // Closes the video session / browser instance when the predetermined time has expired setTimeout(closeSession, sessionDuration, page, browser); // Provides a buffer between the previous stream closing and when the next can start if motion is detected setTimeout(() => { canCreateSession = true; }, sessionExpiration); } ``` Time to make use of the function you’ve just put into your project; find and add 
`startPublish()` to your code: ```diff createSessionEntry(session.sessionId); + startPublish(); ``` You’re almost at the point where you can test your code! You’ve created new endpoints, accessible either as a publisher or a subscriber to the video. Next, you want to have a URL to access the stream if you’re in a remote location. ### Ngrok If you wish to connect to the camera stream remotely, outside of the network the Raspberry Pi is connected to, you’ll need to expose your web server to the Internet. It’s time to install and use [Ngrok](https://ngrok.com/). By running the command below, Ngrok will only be installed locally for the project: ```bash npm install ngrok ``` You now need to implement the usage of Ngrok in your project. So at the top of the `index.js` file include the `ngrok` package: ```javascript const ngrok = require('ngrok'); ``` Now you need to create a function that connects to Ngrok. When successful, it saves the returned URL into the file `public/config/config.txt`, which gets read by the `public/client.html` file created in previous steps. In your `index.js` file add the following: ```javascript async function connectNgrok() { let url = await ngrok.connect({ proto: 'http', addr: 'https://localhost:3000', region: 'eu', // The below examples are if you have a paid subscription with Ngrok where you can specify which subdomain //to use and add the location of your configPath. 
For me, it was gregdev which results in //https://gregdev.eu.ngrok.io, a reserved subdomain // subdomain: 'gregdev', // configPath: '/home/pi/.ngrok2/ngrok.yml', onStatusChange: (status) => { console.log(`Ngrok Status Update:${status}`); }, onLogEvent: (data) => { console.log(data); }, }); fs.writeFile('public/config/config.txt', url, (err) => { if (err) throw err; console.log('The file has been saved!'); }); } ``` Now that this has all been configured, you can start Ngrok by calling the `connectNgrok()` function as shown below: ```diff httpServer.listen(port, (err) => { if (err) { return console.log(`Unable to start server: ${err}`); } + connectNgrok(); return true; }); ``` You can now test your stream. Run the following, while in the Raspberry Pi Terminal: ```bash node index.js ``` After around 10 seconds (for the service to initialize), wave your hand in front of the motion sensor. If successful, you will see a `Motion Detected!` output in your Terminal window. Now go to the file on your Raspberry Pi `public/config/config.txt`, copy this URL and paste it into your browser. Append `/client` to the end of the URL. For me, this was `https://gregdev.eu.ngrok.io/client`. Your browser will now show the published stream from your Raspberry Pi, which has opened a headless Chromium browser instance and navigated to `https://localhost:3000/serve`. ### Installing Vonage Messages To use the new Vonage Messages API, which sends SMS messages whenever motion gets detected, you’ll need to install the beta version of our Node SDK. Run the following command: ```bash npm install nexmo@beta ``` The Messages API requires you to create an application on the Vonage Developer portal, and an accompanying `private.key`, which gets generated when creating the app. Running the command below creates the application, sets the webhooks (which aren’t required right now, so leave them as quoted), and finally generates a key file called `private.key`. 
```bash nexmo app:create "My Messages App" --capabilities=messages --messages-inbound-url=https://example.com/webhooks/inbound-message --messages-status-url=https://example.com/webhooks/message-status --keyfile=private.key ``` Now that you’ve created the application, some environment variables need setting. You will find your `API key` and `API secret` on the [Vonage Developer Dashboard](https://dashboard.nexmo.com/getting-started-guide). The `VONAGE_APPLICATION_ID` is the application ID output when you ran the `app:create` command above; the code below reads it from the environment. The `VONAGE_APPLICATION_PRIVATE_KEY_PATH` is the location of the file you generated in the previous command. This project had it stored in the project directory, so for example: `/home/pi/pi-cam/private.key` The `VONAGE_BRAND_NAME` doesn’t get used in this project, but you are required to have one set for the Messages API; I’ve kept it simple: `HomeCam`. Finally, the `TO_NUMBER` is the recipient that receives the SMS notification. ```env VONAGE_API_KEY= VONAGE_API_SECRET= VONAGE_APPLICATION_ID=<application id> VONAGE_APPLICATION_PRIVATE_KEY_PATH= VONAGE_BRAND_NAME=HomeCam TO_NUMBER=<your mobile number> ``` At the top of your `index.js` file import the Vonage package: ```javascript const Vonage = require('nexmo'); ``` To create the Vonage object which is used to make the API requests, under the definition of the OpenTok object, add the following: ```javascript const vonage = new Vonage({ apiKey: process.env.VONAGE_API_KEY, apiSecret: process.env.VONAGE_API_SECRET, applicationId: process.env.VONAGE_APPLICATION_ID, privateKey: process.env.VONAGE_APPLICATION_PRIVATE_KEY_PATH, }); ``` Inside, and at the end of your `connectNgrok()` function, add functionality that updates your Vonage application with webhooks to handle inbound-messages and the message-status with the correct URL (the Ngrok URL): ```javascript vonage.applications.update(process.env.VONAGE_APPLICATION_ID, { name: process.env.VONAGE_BRAND_NAME, capabilities: { messages: { webhooks: { inbound_url: { address: `${url}/webhooks/inbound-message`, http_method: 'POST', }, status_url: { address: 
`${url}/webhooks/message-status`, http_method: 'POST', }, }, }, }, }, (error, result) => { if (error) { console.error(error); } else { console.log(result); } }); ``` ### Sending an SMS The notification method of choice for this tutorial is SMS, sent via the Messages API. The Vonage library has already been installed into this project, so there is nothing more to configure. In the `index.js` file, add a new function called `sendSMS()`. This takes the URL and the number you’re expecting to receive the SMS on, then, using the Messages API, sends an SMS notification that the camera has detected motion. ```javascript function sendSMS() { const message = { content: { type: 'text', text: `Motion has been detected on your camera, please view the link here: ${url}/client`, }, }; vonage.channel.send( { type: 'sms', number: process.env.TO_NUMBER }, { type: 'sms', number: process.env.VONAGE_BRAND_NAME }, message, (err, data) => { console.log(data.message_uuid); }, { useBasicAuth: true }, ); } ``` Now call the `sendSMS()` function by adding: ```diff createSessionEntry(session.sessionId); + sendSMS(); ``` There we have it! All you have to do now is SSH into your Raspberry Pi and start the server within your project directory by running: ```bash node index.js ``` Your server is now running, and your Raspberry Pi is ready to detect motion. When it does, it will do the following: - Start an OpenTok session, - Save the Session ID to the database, - Send an SMS to your predetermined phone number with a link to the stream, - Start a publishing stream from the Raspberry Pi. You’ve now built yourself a home surveillance system in a short time, which can be accessed anywhere in the world! The finished code for this tutorial can be found on the [GitHub repository](https://github.com/nexmo-community/home-surveillance-with-raspberry-pi). 
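As a recap, the whole motion-to-stream flow hinges on the `canCreateSession` flag. The gating logic can be boiled down to a few lines of plain JavaScript; the sketch below is illustrative (the `createSessionGate` helper is not part of the tutorial's code) and mirrors how the flag stops overlapping streams:

```javascript
// Illustrative sketch (not from the tutorial's codebase) of the session gate:
// it mirrors how the canCreateSession flag stops overlapping streams.
function createSessionGate() {
  let canCreateSession = true;
  return {
    // Called on each PIR "motion" event; runs the callback only when idle.
    onMotion(startStream) {
      if (!canCreateSession) return false; // a stream is active or cooling down
      canCreateSession = false;
      startStream(); // in the tutorial: createSession(), sendSMS(), startPublish()
      return true;
    },
    // Called once the stream duration plus the buffer has elapsed.
    release() {
      canCreateSession = true;
    },
  };
}

// Two motion events in quick succession trigger only one stream.
const motionGate = createSessionGate();
motionGate.onMotion(() => console.log('stream started'));
motionGate.onMotion(() => console.log('stream started')); // ignored: gate is closed
```

In the tutorial itself, `release()` corresponds to the second `setTimeout` in `startPublish()` that flips `canCreateSession` back to true after the stream duration plus the 10-second buffer.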
Below are a few other tutorials we’ve written implementing the Vonage Video API into projects:

- [Stream a Video Chat With Vonage Video API](https://www.nexmo.com/blog/2020/04/28/stream-a-video-chat-with-vonage-video-api-dr)
- [Add Texting Functionality to a Video Chat With Vonage Video API](https://www.nexmo.com/blog/2020/04/21/video-with-text-chat)
- Real-Time Face Detection in .NET with OpenTok and OpenCV

Don’t forget, if you have any questions, advice or ideas you’d like to share with the community, then please feel free to jump on our [Community Slack workspace](https://developer.nexmo.com/community/slack) or pop a reply below ![👇](https://s.w.org/images/core/emoji/11.2.0/72x72/1f447.png). I’d love to hear back from anyone that has implemented this tutorial and how your project works.

The post [Home Surveillance System With Node and a Raspberry Pi](https://www.nexmo.com/blog/2020/05/19/home-surveillance-system-with-node-and-a-raspberry-pi) appeared first on [Vonage Developer Blog](https://www.nexmo.com).
gregholmes
339,107
Embracing the Chaos
So I’ve done quite a few posts recently about resiliency. And it’s a topic that more and more is very...
0
2020-05-21T20:28:48
https://dev.to/documentednerd/embracing-the-chaos-12mn
technology, engineering, practices
---
title: Embracing the Chaos
published: true
date: 2020-05-15 02:52:27 UTC
tags: Technology,engineering,practices
canonical_url:
---

So I’ve done quite a few posts recently about resiliency. And it’s a topic that, more and more, is very important to everyone as you build out solutions in the cloud. The new buzzword that’s found its way onto the scene is Chaos Engineering. Really, this is a practice of building out solutions that are more resilient: solutions that can survive faults and issues that arise, and ensure the best possible delivery to end customers. The simple fact is that software solutions are absolutely critical to every element of most operations, and having them go down can ultimately break a whole business if this is not done properly.

At its core, Chaos Engineering is about pessimism :). Things are going to fail. Much like every other movement, such as Agile and DevOps, Chaos Engineering embraces a reality. In this case that reality is that failures will happen, and should be expected. The goal is that you assume there will be failures, and architect to support resiliency.

So what does that actually mean? It means that you determine the strength of the application by doing controlled experiments designed to inject faults into your applications, and seeing the impact. The intention is that the application grows stronger, able to handle faults and issues while maintaining the highest resiliency possible.

### How is this something new?

Now a lot of people will read the above and say that “chaos engineering” is just the latest buzzword to cover something everyone’s doing. And there is an element of truth to that, but the details are what matters. What I mean by that is that there is a defined approach to doing this, and doing it in a productive manner. Much like Agile and DevOps.
In my experience, some are probably doing elements of this, but by putting a name and methodology to it, we are calling attention to the practice for those who aren’t, and providing a guide of sorts to how we approach the problem.

There are several key elements that you should keep in mind as you find ways to grow your solution by going down this path:

- Embrace the idea that failures happen.
- Find ways to be proactive about failures.
- Embrace monitoring and visibility.

Much as Agile embraced the reality that “requirements change”, and DevOps embraced that “all code must be deployed”, Chaos Engineering embraces that the application will experience failures. This is a fact. We need to assume that any dependency can break, or that components will fail or be unavailable.

So what do we mean at a high level for each of these?

### Embrace the idea… failure happens

The idea is that elements of your solution will fail, and we know this will happen. Servers go down, service interruptions occur, and to steal a quote from Batman Begins, “Sometimes things just go bad.” I was in a situation once where an entire network connection was taken down by a squirrel. So we should build our code and applications in a way that accepts that failures will eventually occur, and build resiliency into our applications to accommodate that. You can’t solve a problem until you know there is one.

How do we do that at a code level? Really this comes down to looking at your application, or microservice, doing a failure mode analysis, and taking an objective look at your code while asking key questions:

- What is required to run this code?
- What kind of SLA is offered for that service?
- What dependencies does the service call?
- What happens if a dependency call fails?

That analysis will help to inform how you handle those faults.

### Find ways to be proactive about failure

In a lot of ways, this results in leveraging tools such as patterns and practices to ensure resiliency.
After you’ve done that failure mode analysis, you need to figure out what happens when those failures occur:

- Can we implement patterns like circuit breaker, retry logic, and load leveling, and libraries like Polly?
- Can we implement multi-zone, multi-region, cluster-based solutions to lower the probability of a fault?

Also at this stage, you can start thinking about how you would classify a failure. Some failures are transient, others are more severe, and you want to make sure you respond appropriately to each. For example, a momentary network outage is very different from a database being down for an extended period. So another key element to consider is how long the fault lasts.

### Embrace monitoring and visibility

Now based on the above, the next question is: how do I even know this is happening? With microservice architectures, applications are becoming more and more decentralized, which means there are more moving parts that require monitoring. So for me, the best next step is to go over all the failures and identify how you will monitor and alert for those events, and what your mitigations are.

Say for example you want to do a manual failover for your database; you need to determine how long you return failures from a dependency service before it notifies you to do a failover. Or how long does something have to be down before an alert is sent? And how do you log these so that your engineers will have visibility into the behavior? Sending an alert after a threshold does no one any good if they can’t see when the behavior started to happen.

Personally I’m a fan of the concept here, as it calls out a very important practice that I find gets overlooked more often than not.
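To make the retry and circuit-breaker patterns mentioned above concrete, here is a minimal sketch in plain JavaScript. It is synchronous for brevity and every name is illustrative — a production system would use asynchronous calls and a hardened library (Polly in .NET, or something like opossum in Node) rather than this hand-rolled version:

```javascript
// Minimal retry + circuit-breaker sketch (synchronous for brevity).
// All names are illustrative, not from a specific library.
function createBreaker({ maxFailures = 3, retries = 2 } = {}) {
  let failures = 0; // consecutive failures seen so far

  return function call(fn) {
    // Circuit open: stop hammering a dependency we believe is down.
    if (failures >= maxFailures) {
      throw new Error('circuit open');
    }
    let lastError;
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        const result = fn();
        failures = 0; // a success closes the circuit again
        return result;
      } catch (err) {
        lastError = err;
        failures++;
        if (failures >= maxFailures) break; // trip the breaker early
      }
    }
    throw lastError;
  };
}

// A transient fault is absorbed by the retry; repeated faults trip the breaker
// so later calls fail fast instead of piling onto a struggling dependency.
const callDependency = createBreaker({ maxFailures: 3, retries: 1 });
```

The key classification from the section above is encoded in the two knobs: `retries` handles transient faults, while `maxFailures` decides when a fault is severe enough to fail fast.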
documentednerd
339,122
Learn Angular 9 with Tailwind CSS by building a banking app - Lesson 1: Start the project
This article was originally published at https://www.blog.duomly.com/angular-course-building-a-bankin...
6,834
2020-05-21T10:37:53
https://www.blog.duomly.com/angular-course-building-a-banking-application-with-tailwind-css-lesson-1-start-the-project/
angular, webdev, tutorial, beginners
This article was originally published at https://www.blog.duomly.com/angular-course-building-a-banking-application-with-tailwind-css-lesson-1-start-the-project/

---

A few days ago we posted an <a href="https://www.blog.duomly.com/sql-injection-attack-tutorial-for-beginners/">SQL injection tutorial</a> where you were able to hack the banking app which we’ve created.

We decided to create a full Angular 9 course, where we’d like to teach you how to create the front-end for a fin-tech application like the one in the SQL injection example.

We are going to start by creating a new project with the Angular CLI, installing Tailwind CSS for the UI, setting up a proxy, and building the Login component UI.

In the next lessons, we will create the logic for the login component and other features for our application. Also, you are welcome to share your ideas in the comments, so we can add them to our app, show you how to write them, and build it together.

We are also going to create two separate courses where we will be building a backend for this application, one in Go Lang and the other one in Node.js, so follow us to stay updated.

P.S. As always, for the ones who like watching instead of reading, we have a video version of this article, so join us on our Youtube channel.

{% youtube heKE_eN5gxE %}

Let’s crash Angular 9!

###Installing Angular 9###

Let’s start by creating a new Angular 9 project. For this, we’ll use the <a href="https://cli.angular.io">Angular CLI</a>. If you don’t have it installed yet, here’s an <a href="https://www.blog.duomly.com/angular-tutorial/">article</a> where we did it some time ago, so feel free to jump there and check how to do it.

To create a new project, we will use the following command:

```
ng new banking-app-frontend
```

We’ll be using Scss and routing, so please make sure you select them when prompted.

When your empty Angular project is ready, let’s install the Tailwind framework.

###2. Installing Tailwind CSS###

To be able to use Tailwind CSS we need to install a few additional packages, but it’s worth it. To be able to add Tailwind to our build we are going to use `@angular-builders/custom-webpack`.

Let’s install Tailwind and the other necessary packages:

```
npm i tailwindcss postcss-import postcss-loader postcss-scss @angular-builders/custom-webpack -D
```

When the installation is done, open your style.scss file and add the Tailwind imports:

```CSS
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';
```

Now, we can initialize Tailwind CSS using the following command:

```
npx tailwind init
```

This command will create the `tailwind.config.js` file in our root folder. In this file, we can add custom settings, like additional colors or font properties.

We are almost ready with Tailwind CSS. The last thing we need to do is set up a custom webpack config and make a few changes in the `angular.json` file. Let’s start by creating a webpack.config.js file in our root folder and adding the following code:

```javascript
module.exports = {
  module: {
    rules: [
      {
        test: /\.scss$/,
        loader: 'postcss-loader',
        options: {
          ident: 'postcss',
          syntax: 'postcss-scss',
          plugins: () => [
            require('postcss-import'),
            require('tailwindcss'),
            require('autoprefixer'),
          ]
        }
      }
    ]
  }
}
```

We can now add the following code to the angular.json file.
```javascript
"build": {
  "builder": "@angular-builders/custom-webpack:browser",
  "options": {
    "customWebpackConfig": {
      "path": "./webpack.config.js"
    }
  }
},
"serve": {
  "builder": "@angular-builders/custom-webpack:dev-server",
  "options": {
    "customWebpackConfig": {
      "path": "./webpack.config.js"
    }
  }
}
```

Let’s start our project using the following command:

```
ng serve
```

When it’s ready you should see the default screen of the empty Angular app:

![Duomly - Programming Online Courses](https://dev-to-uploads.s3.amazonaws.com/i/d71oiepkvfpzysu8twlb.png)

Let’s clean the app.component.html file to prepare it for the work. Here’s how your file should look after cleaning:

```html
<div id="app">
  <router-outlet></router-outlet>
</div>
```

###3. Setting proxy###

The next step of this tutorial is setting the proxy config in Angular. It will be useful when we connect to the backend, to avoid CORS issues. Let’s create a new file in the **src** folder, call it proxy.conf.json, and place the following code there:

```javascript
{
  "/login/*": {
    "target": "http://localhost:8888",
    "secure": false,
    "logLevel": "debug"
  }
}
```

Keep in mind that if your backend is on a different host or on a different port you need to change the **target** property.

Let’s add this configuration to the **angular.json** file right now, inside the **serve** command, so it should now look like the following code:

```javascript
"serve": {
  "builder": "@angular-builders/custom-webpack:dev-server",
  "options": {
    "browserTarget": "banking-app-frontend:build",
    "customWebpackConfig": {
      "path": "./webpack.config.js"
    },
    "proxyConfig": "src/proxy.conf.json"
  },
  "configurations": {
    "production": {
      "browserTarget": "banking-app-frontend:build:production"
    }
  }
}
```

###4. Creating login component UI###

Now it’s time for the biggest fun. Let’s use `ng generate component <componentName>` to create a Login component, where we will be building the UI.

When it’s done, open the `app-routing.module.ts` file and we are going to add a route.
```javascript
import { LoginComponent } from './login/login.component';

const routes: Routes = [
  { path: '', component: LoginComponent }
];
```

Great, now we can display our LoginComponent. Let’s open the `login.component.html` file and remove the existing paragraph. As the next step, we will use two ready components from TailwindCSS: a form and a notification. Let’s add them to our template with a few small changes. Here is the ready code:

```HTML
<div id="login-container" class="flex container mx-auto items-center justify-center">
  <div class="w-full max-w-xs">
    <form class="bg-white shadow-md rounded px-8 pt-6 pb-8 mb-4">
      <img src="../../assets/logo.png" class="logo" />
      <div class="mb-4">
        <label class="block text-gray-700 text-sm font-bold mb-2" for="username">
          Username
        </label>
        <input class="shadow appearance-none border rounded w-full py-2 px-3 text-gray-700 leading-tight focus:outline-none focus:shadow-outline" id="username" type="text" placeholder="Username">
        <p class="text-red-500 text-xs italic">Username can consist of letters and numbers only!</p>
      </div>
      <div class="mb-6">
        <label class="block text-gray-700 text-sm font-bold mb-2" for="password">
          Password
        </label>
        <input class="shadow appearance-none border rounded w-full py-2 px-3 text-gray-700 mb-3 leading-tight focus:outline-none focus:shadow-outline" id="password" type="password" placeholder="******************">
      </div>
      <div class="flex items-center justify-between">
        <button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded focus:outline-none focus:shadow-outline" type="button">
          Sign In
        </button>
        <a class="inline-block align-baseline font-bold text-sm text-blue-500 hover:text-blue-800" href="#">
          Forgot Password?
        </a>
      </div>
    </form>
    <p class="text-center text-white text-xs">
      &copy;2020 Banking App by Duomly. All rights reserved.
    </p>
  </div>
  <div class="notification bg-indigo-900 text-center py-4 lg:px-4">
    <div class="p-2 bg-indigo-800 items-center text-indigo-100 leading-none lg:rounded-full flex lg:inline-flex" role="alert">
      <span class="flex rounded-full bg-indigo-500 uppercase px-2 py-1 text-xs font-bold mr-3">ERROR</span>
      <span class="font-semibold mr-2 text-left flex-auto">Error message here</span>
    </div>
  </div>
</div>
```

We are almost there; we just need some custom styles. First, let’s open the `app.component.scss` file and add the following code:

```css
#app {
  min-height: 100vh;
  min-width: 100vw;
  background-color: #f7f7fc;
}
```

Next, let’s add custom styles to the login component. Open the `login.component.scss` file and add this code:

```scss
#login-container {
  min-height: 100vh;
  min-width: 100vw;
  color: white;
  position: relative;
  background-image: url('../../assets/background.png');
  background-size: cover;
  background-repeat: no-repeat;
  background-position: center;

  .logo {
    max-height: 60px;
    margin: auto;
    margin-bottom: 30px;
  }

  .notification {
    position: absolute;
    bottom: 0;
    left: 0;
    right: 0;
  }
}
```

I used two images in the UI, one for the background and one as a logo. You can use any images you like, or you can check out our Github where you can find the code we are writing right now and also get the images!

Here is the result I’ve got:

![Duomly - Programming Online Courses](https://dev-to-uploads.s3.amazonaws.com/i/3oqg51y82gpv6d5gl970.png)

In the next lessons, we are going to create some logic for this login component, and we will also learn how to prevent the SQL injection which you could see in the tutorial my friend published a few days ago.

###Conclusion###

In this article, we took the first step towards building a banking app using Angular 9 and TailwindCSS. Today we’ve created a simple UI for our form, and in the next parts of this course we will build the logic for this component and lots more features.
If you have any idea for an interesting feature, let us know and we will implement it. Let’s build this course together!

In the meantime, we will be showing you how to create a backend for an application like this in two different technologies, Go Lang and Node.js. Besides that, there will be a series of web security tutorials based on this application.

Stay with us for the next parts, and if you missed some code, check it here:
https://github.com/Duomly/angular9-tailwind-bank-frontend/tree/Angular9-TailwindCSS-Course-Lesson1

Thank you for reading,
Anna from Duomly

<a href="www.duomly.com"> ![Duomly - Programming Online Courses](https://dev-to-uploads.s3.amazonaws.com/i/ogfuw76l9dyg1xa4jdwd.jpg) </a>
duomly
339,164
Democratizar el lenguaje de signos
Primera clase en todos los videos Mientras pensaba en mi post previo sobre capacidades de...
0
2020-05-19T17:03:50
https://dev.to/carloshm/democratizar-el-lenguaje-de-signos-1lk8
text2sign, asl, lse, video
## First class in every video

While thinking about my [previous post](https://dev.to/carloshm/como-publicar-traducciones-de-microsoft-translate-en-microsoft-stream-4naf) on translation capabilities in videos, and while watching the [Microsoft Build](https://build.microsoft.com) sessions, I noticed a new icon in the session streams for viewing an American Sign Language (ASL) translation of the session. Incredible work by #MSBuild

![ASL sign language in videos](https://raw.githubusercontent.com/carloshm/devto-articles/master/images/ASL%20in%20video.png)

You can watch a clip of the translation of the Imagine Cup presentation

<video width="640" controls> <source src="https://github.com/carloshm/devto-articles/blob/master/images/ASL_Video.mp4?raw=true" type="video/mp4"> Your browser does not support the video tag. </video>

## Sign language for everyone

Wouldn't it be fantastic to always be able to include this feature in all your videos? There is a project on Github that uses the service https://www.signingsavvy.com/search/hello

{% github https://github.com/Waasi/text_2_sign %}

which we could use to include the result in Stream videos.

## Learning material

I'll start by reading a bit of [Wikipedia](https://es.wikipedia.org/wiki/Lengua_de_se%C3%B1as) to learn about the diversity of sign languages, with the goal of focusing on Speech2Text2Sign. There are interesting projects for real-time translation using DNN-trained models, but I see starting from text as the more feasible route.

**Note** Project added to my list: a prototype to include sign language and mix it in
carloshm
339,180
useReducer for the win
Hey, how are you there? Well, here is a story. It's quite small, but it can save your time and health...
0
2020-05-25T18:42:58
https://dev.to/viscoze/usereducer-for-the-win-413f
react
Hey, how are you there? Well, here is a story. It's quite small, but it can save your time and health. So keep reading.

We wanted to have a sequence of steps in our application that changes depending on the user’s answers. Take a look:

```
step with yes/no question -> if yes: Step 1 -> if yes: Step 2 -> Step 3 -> Step 4 -> if no: skip -> if no: skip -> Step 3 -> Step 4
```

The logic is the following:

1. User picks an answer in a form
2. The form sends the data to an API – the API persists the answer
3. On success we change the state of the redux store
4. We change the flow of steps depending on the answers
5. Go to the next step according to the flow
6. Profit

Disclaimer 1: there is a pretty nice library that can help manage sophisticated flows – [xstate](https://xstate.js.org). For this case it'd be overkill, so we created our small, simple, homemade solution 😌

Disclaimer 2: the code presented here is simplified to focus on the issue. Please, don't judge.

And here is the code:

```javascript
function useSteps(flow) {
  const [step, setStep] = useState(_.first(flow))

  const goBack = () => {
    const prevStep = _.nth(flow, flow.indexOf(step) - 1)
    setStep(prevStep)
  }

  const goForward = () => {
    const nextStep = _.nth(flow, flow.indexOf(step) + 1)
    setStep(nextStep)
  }

  return { current: step, goForward, goBack }
}

function LeComponent() {
  const entity = useEntity()

  const flow = [
    STEP_1,
    entity.yesOrNo === 'Yes' && STEP_2,
    entity.yesOrNo === 'Yes' && STEP_3,
    STEP_4,
  ].filter(Boolean)

  const steps = useSteps(flow)

  return pug`
    if steps.current === STEP_1
      LeForm(
        onCancel=steps.goBack
        onSubmitSuccess=steps.goForward
      )
    if steps.current === STEP_2
      .........
  `
}
```

And it won't work. Every time we run it, `onSubmitSuccess` is called with the old `steps.goForward`, so even if the user answered 'yes', we redirect them to `Step 3`. Meh. Worth mentioning: the `entity` and the `flow` are updated correctly before the action of going forward.

It. Must. Work. Except it doesn't.
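What we were fighting is the classic stale-closure trap, which can be reproduced without React at all. A minimal sketch (all names here are illustrative, not from the real component):

```javascript
// A callback built from a flow keeps seeing the array it closed over,
// even after a newer flow has been computed elsewhere.
function makeGoForward(flow, step) {
  return () => flow[flow.indexOf(step) + 1];
}

const oldFlow = ['STEP_1', 'STEP_4'];               // before the 'yes' answer
const goForward = makeGoForward(oldFlow, 'STEP_1'); // captured early

const newFlow = ['STEP_1', 'STEP_2', 'STEP_3', 'STEP_4']; // built after the answer

// The stale callback still walks the old flow:
console.log(goForward()); // 'STEP_4', not 'STEP_2'
```

Building a fresh `newFlow` changes nothing for `goForward`: the callback holds a reference to `oldFlow`, exactly like the `onSubmitSuccess` handler held the old `steps.goForward`.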
Ok, an overengineered solution to help. Every time the user updates the value in the form we update the state of the parent component using `redux-form`'s `onChange`. Also, we have to sync the state of our component with the state that has been persisted on the API in case the page reloads – so we have this `useEffect` there. Shit is getting crazy. Take a look:

```javascript
function LeComponent() {
  const entity = useEntity()
  const [yesOrNo, setYesOrNo] = useState(null)

  const handleYesOrNo = formData => setYesOrNo(formData.yesOrNo)

  useEffect(() => {
    setYesOrNo(entity.yesOrNo)
  }, [entity.yesOrNo])

  const flow = [
    STEP_1,
    entity.yesOrNo === 'Yes' && STEP_2,
    entity.yesOrNo === 'Yes' && STEP_3,
    STEP_4,
  ].filter(Boolean)

  const steps = useSteps(flow)

  return pug`
    if steps.current === STEP_1
      LeForm(
        onCancel=steps.goBack
        onSubmitSuccess=steps.goForward
        onChange=handleYesOrNo
      )
    if steps.current === STEP_2
      .........
  `
}
```

Perfect! I'm being paid for a reason, definitely. But no, come on, we can't leave it like that. What if we need to track more answers? So we started to investigate whether there was something wrong with `redux-form`. Every value around is new, but `onSubmitSuccess` is living in the past. And we didn't find what really happened. Instead we decided: why not use `useReducer` in `useSteps`? How? Take a look:

```javascript
function useSteps(flow) {
  function reducer(step, action) {
    switch (action.type) {
      case 'goBack':
        return _.nth(flow, flow.indexOf(step) - 1)
      case 'goForward':
        return _.nth(flow, flow.indexOf(step) + 1)
      default:
        return step
    }
  }

  const [current, dispatch] = useReducer(reducer, _.first(flow))

  const goBack = () => dispatch({ type: 'goBack' })
  const goForward = () => dispatch({ type: 'goForward' })

  return { current, goForward, goBack }
}
```

Sweet! Now `goForward` just pushes an action without relying on the closure, so we can remove all of that state-keeping stuff from the component and do it in the _react way_, so to say.
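A nice side effect of the reducer version: because the reducer is a pure function of `(step, action)`, it can be exercised completely outside React. A quick sketch, with lodash's `_.nth` swapped for plain indexing (note that `_.nth` treats negative indices as counting from the end, which plain indexing does not — fine here, since we never go back from the first step):

```javascript
// Pure step reducer mirroring the useReducer version above,
// with _.nth replaced by plain array indexing.
function makeReducer(flow) {
  return function reducer(step, action) {
    const i = flow.indexOf(step);
    switch (action.type) {
      case 'goBack':
        return flow[i - 1];
      case 'goForward':
        return flow[i + 1];
      default:
        return step;
    }
  };
}

// A 'No' answer filters STEP_2 out of the flow, exactly as in the component:
const reducer = makeReducer(['STEP_1', 'STEP_3', 'STEP_4']);
console.log(reducer('STEP_1', { type: 'goForward' })); // 'STEP_3'
```

Because `dispatch` only sends `{ type }` objects, the flow is read fresh on every reduction instead of being frozen into a captured callback.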
And it worked out :rocket:

And this is a nice practice to have in your toolkit for creating flows with conditional showing of steps.

Be happy. Cheers!
viscoze
342,096
Reducing our Carbon Docker image size further!
This article is a direct follow-up to my last article : Reducing Docker's image size while creating...
0
2020-05-23T09:43:33
https://lengrand.fr/reducing-our-docker-image-size-further/
docker, containers, node, development
---
title: Reducing our Carbon Docker image size further!
published: true
date: 2020-05-22 23:43:15 UTC
tags: docker,containers,node,development
canonical_url: https://lengrand.fr/reducing-our-docker-image-size-further/
---

![Reducing our Carbon Docker image size further!](https://lengrand.fr/content/images/2020/05/carbon--5-.png)

This article is a direct follow-up to my last article: [Reducing Docker's image size while creating an offline version of Carbon.now.sh](https://lengrand.fr/ghost/#/editor/post/5ebe91f3e7e3d36793661f53). I was still unsatisfied with the final result of 400Mb for our Carbon Docker image and kept diving a little further. Let's see what additional tricks we have up our sleeves to do just that.

## Removing all unnecessary files from the node_modules

During our last experiment, we got rid of all development dependencies before creating our final Docker image. Turns out, even the leftover modules contain clutter such as documentation, test files, or definition files. [**node-prune**](https://www.npmjs.com/package/node-prune) can help us solve that problem. We can fetch it during compilation and run it after having removed our development dependencies.

Now, it can be considered [bad practice](https://codefresh.io/containers/docker-anti-patterns/) to fetch files from the big bad internet to create a Docker file, for multiple reasons (security and reproducibility mainly), but given that we use the file in our builder container I'll accept that limitation for now. Our Dockerfile becomes:

<!--kg-card-begin: markdown-->
```
FROM mhart/alpine-node:12 AS builder

RUN apk update && apk add curl bash

WORKDIR /app

COPY package*.json ./
RUN yarn install
COPY . .
RUN yarn build
RUN npm prune --production
RUN curl -sfL https://install.goreleaser.com/github.com/tj/node-prune.sh | bash -s -- -b /usr/local/bin
RUN /usr/local/bin/node-prune

FROM mhart/alpine-node:12

WORKDIR /app
COPY --from=builder /app .
EXPOSE 3000

CMD ["yarn", "start"]
```
<!--kg-card-end: markdown-->

There are three main changes:

- We fetch the node-prune script during building
- We run it at the end of the build process
- Because curl and bash are not available by default on alpine, we have to install them!

**The resulting image is 361Mb, so we shaved another 30Mb off our container size.** Good news.

<!--kg-card-begin: markdown-->
```
➜ carbon git:(feature/docker) docker images
REPOSITORY                      IMAGE ID        SIZE
julienlengrand/carbon.now.sh    535581c57ed5    361MB
```
<!--kg-card-end: markdown-->

## Diving into our image

We see that the wins we are getting are becoming marginally smaller. So we'll have to look deeper into what strategic improvements we can make next. Let's look at our image, and more specifically what is taking up space. For this, we'll use the awesome tool [**dive**](https://github.com/wagoodman/dive).

<!--kg-card-begin: image-->
![Reducing our Carbon Docker image size further!](https://lengrand.fr/content/images/2020/05/image-3.png)<figcaption>A screenshot of the current version of the image using dive</figcaption>
<!--kg-card-end: image-->

Alright, this view gives us some interesting information:

- The OS layer is 80Mb. Not sure how much we can do about this
- We still have 281(!)Mb of stuff needed to run the app
- But we also see lots of useless things in there! .git and .idea folders, docs, ...
- No matter what we do, there is still 235Mb (!!!) of node_modules left to be dealt with

**So in short, we can save another 30ish MB by removing some auxiliary folders, but the bulk of the work will have to be done in the node_modules.**

We'll modify the Dockerfile to copy just the files required to run the app (it's probably possible to do a bulk copy; I haven't found an answer I liked just yet).

<!--kg-card-begin: markdown-->
```
FROM mhart/alpine-node:12 AS builder

RUN apk update && apk add curl bash

WORKDIR /app

COPY package*.json ./
RUN yarn install
COPY . .
RUN yarn build
RUN npm prune --production
RUN curl -sfL https://install.goreleaser.com/github.com/tj/node-prune.sh | bash -s -- -b /usr/local/bin
RUN /usr/local/bin/node-prune

FROM mhart/alpine-node:12

WORKDIR /app
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/components ./components
COPY --from=builder /app/lib ./lib
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/pages ./pages
COPY --from=builder /app/public ./public
COPY --from=builder /app/next.config.js ./next.config.js
COPY --from=builder /app/LICENSE ./LICENSE
COPY --from=builder /app/package.json ./package.json

EXPOSE 3000

CMD ["yarn", "start"]
```
<!--kg-card-end: markdown-->

We save some more space, as expected:

<!--kg-card-begin: markdown-->
```
➜ carbon git:(feature/docker) docker images
REPOSITORY                      IMAGE ID        SIZE
julienlengrand/carbon.now.sh    a672815ed93f    343MB
```
<!--kg-card-end: markdown-->

## Checking production node modules

The next thing I did was look at the leftover `node_modules` dependencies that make it into the production build. Here are the top 5 biggest dependencies, sorted by size:

<!--kg-card-begin: image-->
![Reducing our Carbon Docker image size further!](https://lengrand.fr/content/images/2020/05/image-4.png)<figcaption>5 biggest dependencies of carbon, ordered by size</figcaption>
<!--kg-card-end: image-->

Some quick observations:

- **Firebase is responsible for a whopping 60Mb in our image**
- Next is large, but required to run the app.
- All of the others, especially prettier, seem like they should be dev dependencies.

We'll have to investigate this further.

- The application uses Firebase. [Looking at the documentation](https://www.npmjs.com/package/firebase), you can indeed import only what you need, but the library will download everything anyway, so there is not much we can do there.
- It looks like prettier is actually used in production, so we can't do anything about that.
- The application is a Next.js app, so it sounds logical that it needs `next`.

We don't see any mention of the other dependencies in the `package.json` file. Let's use `$ npm ls` on the production dependencies to see where they're coming from.

<!--kg-card-begin: markdown-->
```
carbon@4.6.1 /Users/jlengrand/IdeaProjects/carbon
├─┬ ...
├─┬ next@9.4.1
│ ├─┬ ...
│ ├─┬ @babel/core@7.7.7
├─┬ ...
├─┬ next-offline@5.0.2
│ ├─┬ ...
│ └─┬ workbox-webpack-plugin@5.1.3
│   ├── ....
│   └─┬ workbox-build@5.1.3
│     ├─┬ @babel/core@7.9.6
```
<!--kg-card-end: markdown-->

So it seems like babel and workbox are also coming from the `next` framework. We may have reached a dead end.

## Back to Docker: Docker squash

We've looked into the application itself and decided we couldn't get clear wins any more. Let's move back to Docker. Can we pass the 300MB barrier with some extra steps?

When building an image, it is possible to tell Docker to squash all the layers together. **Mind that it is a one-way operation; you won't be able to go back.** Also, it might be counter-productive in case you run a lot of containers with the same base image. But it allows us to save some extra space. The only thing we have to do is add the `--squash` option to our Docker build command. In our case, I deem this acceptable because we don't run any other node apps in our cluster and this is a one-time experiment. Here is the result:

<!--kg-card-begin: markdown-->
```
$ docker build --squash -t julienlengrand/carbon.now.sh.squashed .

➜ carbon git:(feature/docker) ✗ docker images
REPOSITORY                              IMAGE ID        SIZE
julienlengrand/carbon.now.sh.squashed   b09b0e3206f8    297MB
julienlengrand/carbon.now.sh            a672815ed93f    343MB
```
<!--kg-card-end: markdown-->

Well that's it, we made it! We are under 300MB! But I'm sure we can do even better.

## Back to Docker: Docker slim

There are many tools I had never learnt about before starting this fun quest.
A few of them have been suggested to me by friends on [Linkedin](https://www.linkedin.com/feed/update/urn:li:activity:6667338283846045697/). One of those is [**Docker-slim**](https://github.com/docker-slim/docker-slim). Docker-slim claims to optimize and secure your containers, without you having to do anything. Have a look at the project; some of the results are quite surprising indeed.

To work with docker-slim, you first have to install the tool on your system and then ask it to run against your latest Docker image. Of course there are many more options available to you. Docker-slim will run your container, analyze it, and come out with a slimmed-down version of it.

When I ran it the first time, I got extremely good results, but docker-slim deleted the whole app from the container XD. [I opened an issue about it.](https://github.com/docker-slim/docker-slim/issues/149) Manually adding the app path to the configuration fixes the issue, but I guess it also prevents most of the optimizations.

Running docker-slim leads to the following results:

<!--kg-card-begin: markdown-->
```
$ docker-slim build --include-path=/app julienlengrand/carbon.now.sh.squashed:latest

➜ carbon git:(feature/docker) ✗ docker images
REPOSITORY                                   IMAGE ID        SIZE
julienlengrand/carbon.now.sh.squashed.slim   8c0d8ac87f74    273MB
julienlengrand/carbon.now.sh.squashed        a672815ed93f    297MB
```
<!--kg-card-end: markdown-->

Not amazing, but hey, we're still shaving another 20MB off with a pretty strong limitation on our end, so it's still quite something.

## Other ideas I looked into:

- **Next.js** has a packaging tool called [**pkg**](https://github.com/zeit/pkg) that allows for creating executables and getting rid of the whole node ecosystem in the process. It looked interesting but requires the application to run on a custom server, which carbon does not.
Given that I wanted to keep the node application as-is and simply create a layer on top of it, that rules out this solution.
- Similarly, I looked into [**GraalVM**](https://www.graalvm.org/docs/reference-manual/languages/js/), and specifically [GraalJS](https://github.com/graalvm/graaljs). Using a polyglot GraalVM setup should produce optimized, small executables. I even got quite some starting help on [Twitter](https://twitter.com/jlengrand/status/1262128264094646278) for it. I easily managed to run carbon on the GraalVM npm, but my attempts to create a native image of the project have been a failure so far. I probably should look at it again in the future.

## Conclusion

We started our first post with a 'dumb' Dockerfile and a 2.53GB image. With some common sense, we were able to quickly tune it down to less than 400MB. But diving even further, we see that we can go beyond that and **reach just over 270MB**. **I find that interesting because on my local machine, that's about exactly the size of the node\_modules for the project!**

<!--kg-card-begin: image-->
![Reducing our Carbon Docker image size further!](https://lengrand.fr/content/images/2020/05/image-5.png)<figcaption>Final list of all images created sorted by size</figcaption>
<!--kg-card-end: image-->

I learnt a few things:

- As we write code and build new applications every day, it is important to keep size and performance in mind. It is impressive to see how quick it was to reduce the size of the final deliverable by a factor of 10! How many containers today could still be optimized?
- **Some tools and languages seem less container-friendly than others.** It is likely that a Go or Rust software would have a much lower footprint. We have seen how heavy our node\_modules folder was here. It makes sense for the Carbon project to have gone the serverless route.
- **More and more technologies seem to offer 'native' compilation, and this should help reduce the memory cost of running applications**.
I named only two here (**GraalVM** and pkg), but there are more. We hear about them a lot lately, but I wonder how widespread their adoption is in the wild today. It can only improve.

That's it! I hope you enjoyed the ride, and see you another time!
jlengrand
339,219
Folding@home: a project that lends computing power to the fight against COVID-19!
Folding@home (FAH, F@h) is nothing new. FAH is a distributed computing project (d...
0
2020-05-19T17:35:27
https://dev.to/ph9/folding-home-covid-19-1j07
# Folding@home

Folding@home (FAH, F@h) is nothing new. FAH is a distributed computing project for simulating protein dynamics, including computations that model the folding and movement of proteins. Anyone with a spare computer can lend it to the project's number crunching, and the results help scientists understand biology better, which may in turn lead to discovering treatments for various diseases.

## Download

When you're ready, go download it at [https://foldingathome.org/start-folding/](https://foldingathome.org/start-folding/). Since I'm on macOS anyway, I chose the geeky route with `brew cask install folding-at-home`.

## Community

In Thailand, people have gathered in the Facebook group [Folding@Home Thailand](https://www.facebook.com/groups/263948448063747/). I didn't create it myself; [someone already did](https://medium.com/@udkgab/75fa46be4708) << click through gently if you want more details. In fact, someone [wrote an explanation](https://cryptominingman.blogspot.com/2016/10/foldinghome-v7.html) of FAH as far back as October 2016, and it even made it onto pantip.com in [April 2017](https://pantip.com/topic/36358948).

## Setup

If you don't care about getting any credit, you can hit Fold right after installing. But at least enter the Team Number `261333`, which runs under the name [Folding@Home Thailand](https://stats.foldingathome.org/team/261333).

![FAHControl](https://dev-to-uploads.s3.amazonaws.com/i/uqcxwrney9r5834nsdxr.png)

- If you want to choose which project we support, you can pick it at Configure > Advanced > Cause Preference. I chose High Priority; the default will take any project (Any).
- Under Folding Power, you can adjust how much processing power to contribute: Light, Medium or Full.
- Folding Slots normally shows only the CPU, followed by the number of threads; in the screenshot you can see that all 12 threads are in use.
- You can add a GPU from Configure at the top left. I'm not sure whether multiple CPUs have to be added manually or show up by default.
- The Finish ▷| button: once pressed, the program keeps working until the current work unit is done, and only then stops.

## Registration

If you want credit, you can register at https://apps.foldingathome.org/getpasskey. You'll receive a username and passkey by email to enter into the program. I've seen some people set their crypto wallet address as their alias; I wonder if anyone ever donates, haha.

## Limitations

- On macOS, folding on the GPU is not yet supported.
- If you want to help out on ARM, e.g. a Raspberry Pi or a mobile phone, look at the [Rosetta@home](https://boinc.bakerlab.org) project instead.
- If you expect to see your score right after finishing a work unit, you may not see it immediately; the system refreshes scores every hour.

## [Bottleneck](https://youtu.be/HaMjPs66cTs)

On March 27, 2020, it was reported that the number of participants [jumped from about 30,000 to more than 400,000](https://twitter.com/drGregBowman/status/1243405484289228801), which caused a [bottleneck at the servers](https://youtu.be/KU4qOebhkfs): they couldn't distribute and collect work fast enough for all the computing power now available. This problem should have been resolved by now.

## Further reading

- FAQ: https://foldingathome.org/support/faq/project-details/

I didn't want to write anything too long; installation and use are easy. I just wanted you to know this exists. **Cheers!**
ph9
339,264
Introducing Vue Formulate — truly delightful form authoring.
Vue Formulate has been in the wild for 2 months now, and with the latest release (v2.3) the project h...
6,796
2020-05-19T19:38:53
https://dev.to/justinschroeder/introducing-vue-formulate-truly-delightful-form-authoring-56f5
vue, forms, webdev, javascript
[Vue Formulate](https://vueformulate.com/) has been in the wild for 2 months now, and with the latest release (v2.3) the project has enough momentum to warrant a post from its creator (me, [Justin Schroeder](https://twitter.com/jpschroeder)) on why it exists, what it does, and where it is going.

![Quick example of Vue Formulate](https://assets.wearebraid.com/vue-formulate/formulate-simple.gif)

### The problem with forms

When you're learning to program, one of the most exciting early progressions is when you make your "Hello World" app _interactive_ by prompting a user for their name. Take those mad I.O. skills to the web and it gets even easier! Just plop an `<input>` tag into your markup and you're off to the races, right? Well...not so fast.

Over the past two months, I've gotten a lot of questions about Vue Formulate. Unsurprisingly, one of the most frequent ones is, "What's wrong with HTML?".

{% twitter 1236275673108557824 %}

There's nothing _wrong_ with HTML, of course, just like there was nothing wrong with JavaScript before Vue and React (I know, I know, Vanilla purists' blood is boiling out there). HTML, React, Vue... it doesn't matter — the reality is: creating high-quality forms requires a lot of consideration. Labels, help text, validation, inline file uploads, and accessibility are just a few of the items a developer will need to address. This almost inevitably amounts to gobs of copy/paste and boilerplate markup littered throughout your codebase.

There are other issues too. HTML validation, for example, is pretty limited. What if you want to asynchronously check if a username is already taken? What if you want to have well-styled validation errors? What if you want to offer the ability for someone to add more attendees on their ticket purchase? None of these are available in native HTML/React/Vue without considerable effort.
Furthermore, maintaining a high level of quality while working on such disparate features becomes secondary to just making the form _work_. This is fertile ground for a library to help increase developer happiness while pushing quality and accessibility. ### Why is Vue Formulate different? Vue Formulate is far from the first library to address these concerns. Our long-time friends in the community have been fighting these battles for ages: vue-forms, VeeValidate, Vuelidate, and even some UI frameworks like Vuetify aim to help developers author better forms. These are great packages and I wouldn’t discourage you from using them if they’re appropriate for your project. However, Vue Formulate approaches the same problems with two specific objectives: 1. Improve the developer experience of form authoring. 2. Increase the quality of forms for end-users. In order to provide a great developer experience, Vue Formulate needs to focus on being a _comprehensive form authoring_ solution. It cannot just be a validator and doesn’t aspire to become a full UI library. Instead, these guiding principles have resulted in a highly consistent component-first API focused solely on first-class form authoring. To that end, every single input in Vue Formulate is authored with the same component `<FormulateInput>`, smoothing out the inconsistencies in HTML’s default elements such as `<input>`, `<textarea>`, `<select>` and others. In Vue Formulate you simply tell the `<FormulateInput>` what type of input it should be — a text input (`<FormulateInput type="text">`) and a select input (`<FormulateInput type="select">`) can even be dynamically exchanged by changing the `type` prop on the fly. Why is this better you ask? It’s better because it’s easy to remember, fast to compose, and reduces mistakes. We absolutely shouldn’t discount those very real quality of life improvements... but of course that’s not all. 
By ensuring all inputs conform to a single component interface we allow for more powerful enhancements like automatic labels, declarative validation, form generation, automatic accessibility attributes, and support for complex custom inputs. This allows a `FormulateInput` component to maintain an easy-to-use API while being endowed with super powers. Consider how similarly these two inputs are authored using Vue Formulate and yet how different their actual HTML implementation is:

```vue
<FormulateInput
  type="email"
  name="email"
  label="Enter your email address"
  help="We'll send you an email when your ice cream is ready"
  validation="required|email"
/>

<FormulateInput
  type="checkbox"
  name="flavor"
  label="Pick your 2 favorite flavors"
  validation="min:2,length"
  :options="{
    vanilla: 'Vanilla',
    chocolate: 'Chocolate',
    strawberry: 'Strawberry',
    apple: 'Apple'
  }"
/>
```

{% codepen https://codepen.io/justin-schroeder/pen/dyYQZgr %}

Now, notice some of the things we _didn't_ have to deal with in that example:

- `<label>` elements were automatically generated and linked to the `<input>` element via auto-generated ids (specify your own if you want).
- Help text was generated in the proper location and the input was linked to it with `aria-describedby`.
- We added real-time input validation without having to explicitly output errors.
- Multiple checkboxes were rendered with their values linked together.
- The labels for the checkboxes automatically adjusted their position.

By consolidating inputs into a single `FormulateInput` component, we drastically improve the quality of life for developers, and simultaneously create a powerful hook for adding new features and functionality to those inputs. As a bonus, when it comes time to upgrade to Vue 3's Composition API, Vue Formulate's component-first API means developers won't need to refactor anything in their template code.

### Neato, but where's my form?
I've explained Vue Formulate's purpose and its unique approach to inputs, but how about the form itself? Let's consider the purpose of the native `<form>` element: to transmit input from a user to a server by aggregating the values of its input elements. What does that look like in Vue Formulate? Pretty much exactly what you would expect:

```vue
<template>
  <FormulateForm
    @submit="login"
  >
    <FormulateInput
      type="email"
      name="email"
      label="Email address"
      validation="required|email"
    />
    <FormulateInput
      type="password"
      name="password"
      label="Password"
      validation="required"
    />
    <FormulateInput
      label="Login"
      type="submit"
    />
  </FormulateForm>
</template>

<script>
export default {
  methods: {
    login (data) {
      /* do something with data when it passes validation:
       * { email: 'zzz@zzz.com', password: 'xyz' }
       */
      alert('Logged in')
    }
  }
}
</script>
```

Great, so data aggregation works just like a normal form, but there isn't anything "reactive" here yet. Ahh, let's slap a `v-model` onto that form — and — presto! We have a fully reactive object with all the data in our form.

```vue
<template>
  <FormulateForm
    @submit="login"
    v-model="values"
  >
    <FormulateInput
      type="email"
      name="email"
      label="Email address"
      validation="required|email"
    />
    <FormulateInput
      type="password"
      name="password"
      label="Password"
      validation="required"
    />
    <FormulateInput
      label="Login"
      type="submit"
    />
    <pre>{{ values }}</pre>
  </FormulateForm>
</template>

<script>
export default {
  data () {
    return {
      values: {}
    }
  },
  methods: {
    login (data) {
      /* do something with data:
       * { email: 'zzz@zzz.com', password: 'xyz' }
       */
      alert('Logged in')
    }
  }
}
</script>
```

{% codepen https://codepen.io/justin-schroeder/pen/VwvVyZo %}

And yes, `v-model` means it's _two-way_ data binding. You can write values into any input in your form by changing properties on a single object. Aim small, miss small — so let's shoot for making "it just works" the default developer experience.
{% codepen https://codepen.io/justin-schroeder/pen/xxwQymd %} ### Slots, custom inputs, plugins — oh my! This article is just an introduction — not a substitute for the full documentation — but it wouldn’t be fair to leave out some of my favorite extensibility features. Form authoring tools need to be flexible — there’s an edge case for everything right? Vue Formulate’s highly opinionated component-first API may seem at odds with flexibility, but in reality that consistent API is the core behind a highly flexible architecture. Slots are a great example of how consistency pays the bills. Central to Vue Formulate’s inputs is a [comprehensive `context` object](https://vueformulate.com/guide/inputs/slots/#context-object) that dictates virtually everything about an input. The model, validation errors, label value, help text, and lots (lots!) more are members of this object. Because every input has a consistent API, every input has a consistent context object. {% codepen https://codepen.io/justin-schroeder/pen/rNOQQww %} While the flexibility to use scoped slots is great — they can hurt the consistency and readability of our form’s code. To address this, Vue Formulate also includes the ability to override the default value of every slot. We call these [“Slot Components”](https://vueformulate.com/guide/inputs/slots/#slot-components), and they’re fundamental to maintaining a clean consistent authoring API. Want to add that example tooltip to every label? No problem. You can replace the default value in the label slot on every input in your project without having to use scoped slots or wrap your components in your templates at all. If you decide you’re better off creating your own custom input type, you can do that too! Custom inputs keep form authoring buttery-smooth, just pick your own input `type` and register it with Vue Formulate. Your custom input will get validation, labels, help text, model binding, and more out of the box. 
Even better, once you've created a custom input you can easily turn it into a plugin to share with your team members or the larger community.

### Where you go is where I wanna be...

In the excellent Honeypot [Vue documentary](https://www.youtube.com/watch?v=OrxmtDw4pVI), Thorsten Lünborg summed up what I consider to be the number one reason for Vue's spectacular success:

> The focus in Vue.js from the get-go was always that the framework is more than just the code. It's not like, "this is the library, this is the documentation, of how it works, and now you solve the rest."

In essence, the Vue core team was willing to go where developers were feeling pain points the most. As a result they have created a framework that isn't just elegant — it's delightful for real-world developers to use.

Vue Formulate maintains this spirit: to meet developers' pain points with delightful form authoring solutions. We believe we've now paved the road for 90% of users — but if your road is less traveled and you find yourself at an edge case — please shout it out. We're listening.

----

If you're intrigued, check out [vueformulate.com](https://vueformulate.com/). You can follow me, [Justin Schroeder](https://twitter.com/jpschroeder), on twitter — as well as my co-maintainer [Andrew Boyd](https://twitter.com/BoydDotDev).
justinschroeder
339,271
customizing Chakra UI theme in a Gatsby project
So this is going to be my first post on DEV.to 🎉 In this article, I'm going to explain how to add yo...
0
2020-05-19T18:49:38
https://dev.to/jesuissuyaa/customizing-chakra-ui-theme-in-a-gatsby-project-3jmc
gatsby, react, webdev, beginners
So this is going to be my first post on DEV.to :tada:

In this article, I'm going to explain how to add your own custom themes to your Gatsby project.

## TL;DR

1. create a new file in `src/gatsby-plugin-chakra-ui/theme.js`
2. import the original theme from `@chakra-ui/core` & add your own properties
3. restart the server

## prerequisites

- gatsby project is set up
- `gatsby-plugin-chakra-ui` is added to your project

If you haven't added the plugin yet, check out the [docs](https://www.gatsbyjs.org/packages/gatsby-plugin-chakra-ui) on how to do so.

## step 1: add a theme.js file

Create a `theme.js` file under `src/gatsby-plugin-chakra-ui/`. (Most likely you need to add the `gatsby-plugin-chakra-ui` folder under `src`.)

This will allow Gatsby to **shadow** the `theme.js` file. **Shadowing** is a concept introduced by Gatsby so users can use their own themes. What it does is replace a file in the webpack bundle with a file in the `src` directory.

For example, if you have a plugin named `gatsby-plugin-awesome` and you want to replace `awesomeFile.js` with your own version, you would create a new file in `src/gatsby-plugin-awesome/awesomeFile.js`. Then you can use your own version of `awesomeFile.js` in your project instead of the default version provided by the plugin.

[This comment on Github Issues](https://github.com/chakra-ui/chakra-ui/issues/347#issuecomment-579620254) is also another explanation of shadowing.

## step 2: edit `theme.js`

This is where we write our custom theme. I'm going to add a custom color called "brandPurple" that has a value of "#673FB4".

First, we'll copy & paste the code from the [plugin docs](https://www.gatsbyjs.org/packages/gatsby-plugin-chakra-ui/).

```javascript
// src/gatsby-plugin-chakra-ui/theme.js
const theme = {};

export default theme;
```

This code overwrites the default theme provided by the plugin with an empty theme.
:warning: Don't try to run `gatsby develop` with this code yet; you're going to see a bunch of errors because the theme object is `{}`, and none of the previously available values can be accessed.

Next, we're going to add the default theme provided by Chakra UI to our custom theme.

```javascript
// src/gatsby-plugin-chakra-ui/theme.js
import { theme as defaultTheme } from "@chakra-ui/core"

const theme = {
  ...defaultTheme
};

export default theme;
```

We rename `theme as defaultTheme` because we don't want Chakra UI's theme to clash with our own `theme` variable. You can run `gatsby develop` with this code now, but you won't see any changes, because we haven't added anything to our theme yet.

Finally, we add our own "brandPurple" color like so:

```javascript
// src/gatsby-plugin-chakra-ui/theme.js
import { theme as defaultTheme } from "@chakra-ui/core"

const theme = {
  ...defaultTheme,
  colors: {
    ...defaultTheme.colors,
    brandPurple: "#673FB4",
  },
}

export default theme
```

### final code

```javascript
// src/gatsby-plugin-chakra-ui/theme.js
import { theme as defaultTheme } from "@chakra-ui/core"

const theme = {
  ...defaultTheme,
  colors: {
    ...defaultTheme.colors,
    brandPurple: "#673FB4",
  },
}

export default theme
```

## step 3: restart server

In order for `theme.js` to be shadowed, we need to restart the server. Go ahead and hit Ctrl+C (or other shortcut keys depending on your computer), and enter `gatsby develop`.

At this point, we're all set and we can use our new "brandPurple" color just like any other theme color provided by Chakra UI. Here's some example test code.

```javascript
// src/pages/testPage.js
import React from "react"
import { Box } from "@chakra-ui/core"

const TestPage = () => (
  <Box bg="brandPurple">
    here's the new color!
  </Box>
)

export default TestPage
```

see also: [plugin docs](), [Chakra UI docs on custom themes]()

Feel free to leave a comment or hit me up on Twitter if you have any questions.
jesuissuyaa
339,279
Announcing Torah && Tech; The Book.
About a year and a half ago, my friend Ben Greenberg and I were trying to come up with a project that...
6,785
2020-05-19T19:38:25
https://blog.yechiel.me/announcing-torah-tech-the-book-2cf2e9c91d82
showdev, books, learning, coding
---
title: Announcing Torah && Tech; The Book.
published: true
date: 2020-05-19 19:01:30 UTC
tags: showdev,books,learning,coding
canonical_url: https://blog.yechiel.me/announcing-torah-tech-the-book-2cf2e9c91d82
series: Torah && Tech
cover_image: https://res.cloudinary.com/practicaldev/image/fetch/s--wMEq9tYJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AxL9qpvQ2KX4ynJuZ.png
---

About a year and a half ago, my friend [Ben Greenberg](https://dev.to/benhayehudi) and I were trying to come up with a project that would allow us to keep in touch as his family made Aliyah and moved halfway across the planet.

![The Torah and Tech logo.](https://cdn-images-1.medium.com/max/257/1*OhjCxTFwZ6sQcywFBTR_ZQ.png)

The result was **Torah && Tech**, a weekly newsletter that covers the two interests we both have in common: tech and Jewish laws and ethics.

The following 75 weeks (and counting!) have been loads of fun! We grew our readership from just the two of us (and our supportive spouses) to close to 200 weekly subscribers. The newsletters covered topics ranging from office etiquette and mentorship to the future of AI and the use of pointers in Golang to coping strategies during COVID-19 imposed quarantine.

As we hit the one-year mark, the realization hit that this wasn't just a side project that would last a month or two and that we were in this for the long haul. The idea came up to collect the first year's worth of Torah thoughts and publish them in a book.

The idea sounded intimidating at first, but thanks to some help from a few good friends, we are happy to announce, just in time for Shavuot, that the first volume of Torah && Tech is available for pre-order!

![](https://cdn-images-1.medium.com/max/1024/0*xL9qpvQ2KX4ynJuZ.png)

Unfortunately, due to coronavirus-related delays, the print version won't be available for another few weeks.
Still, the e-book version is available for pre-order at most retailers and will be delivered to your favorite e-reader device on June 1st.

Just head over to [**torahandtech.dev**](https://torahandtech.dev/) and order your copy today. Of course, while you're there, you can use the opportunity to sign up for the newsletter so you can get a preview of volume two before anyone else. You will get a weekly Torah thought in your inbox every week, just in time to share it at your Shabbat table.

If you have any comments or feedback about the book, please don't hesitate to reach out to Ben or me on our social media. We LOVE talking about anything Torah/tech-related!

* * *
yechielk
339,332
What skill did you never expect to be useful that you can't live without now?
A post by Zigmas Slusnys
0
2020-05-19T21:01:39
https://dev.to/slushnys/what-skill-you-never-thought-to-be-useful-which-you-can-t-live-without-now-30k9
skill, useful, development, programming
slushnys
339,351
Post Graduation, week 2
My trials and triumphs in searching for my first software engineering role.
6,612
2020-05-19T21:43:32
https://dev.to/jbshipman/post-graduation-week-2-5735
jobsearch, career, productivity, codenewbie
---
title: Post Graduation, week 2
published: true
description: My trials and triumphs in searching for my first software engineering role.
tags: jobsearch, career, productivity, codenewbie
series: Software Engineer Job Searching in a Pandemic
cover_image: https://thumbs.dreamstime.com/z/job-search-concept-chart-keywords-icons-gray-background-job-search-concept-chart-keywords-icons-gray-102351681.jpg
---

_cover image credit [image url](https://thumbs.dreamstime.com/z/job-search-concept-chart-keywords-icons-gray-background-job-search-concept-chart-keywords-icons-gray-102351681.jpg)_

# TL;DR

I have been doing lots of company research to find a small handful of organizations I want to / would love to work for. This includes seeing if these specific companies have openings for a Junior Software Engineer. If they do, fantastic; if not, then I am still doing the research.

I am also brainstorming ideas for my next project, and I think I have figured out what I want to build. I have spent a number of nights now on paper, toying around with the algorithms and logic needed to make my idea work. I have to say I am very fascinated by Sudoku, and implementing the game is so much more complex than I thought it would be; it's a challenge I want to take on.

# What has been happening this week
jbshipman
339,391
a Weather App with Monetization Feature
[Instructions]: While I am still waiting for my Coil free-trial account, I am trying to build someth...
0
2020-05-20T00:17:57
https://dev.to/peterhychan/a-weather-app-with-monetization-feature-517g
gftwhackathon
[Instructions]: While I am still waiting for my Coil free-trial account, I am trying to build something basic but meaningful to experiment with the Monetization API.

## What I built

- a Simple Weather App with a Monetization Feature
- Currently, if the Monetization API (with the Coil plugin on the client side) is properly implemented, a "Thank you" message will be displayed
- If the client does not implement the Monetization feature in his/her browser, the "Monetization Feature Available" message will be shown instead

### Submission Category: Exciting Experiments

## Demo

![App](https://upload.cc/i1/2020/05/20/1NjWZQ.png)

## Link to Code

[Repo](https://github.com/peterhychan/mt-weatherapp-2020)

## How I built it

I implemented this application with React.js and Hooks. Realtime weather info is fetched from the OpenWeather API. The application is styled with CSS along with Bootstrap.

## Additional Resources/Info

[Deployed Demo on now.sh](https://mt-weatherapp-2020.hoychanan.now.sh/)
peterhychan
340,708
Enhanced Analytic System for Smart University Assistance
An integration specific system development based on Agile Methodology to provide an easy solution for...
0
2020-05-21T08:49:25
https://dev.to/rahulbordoloi/enhanced-analytic-system-for-smart-university-assistance-1hlk
datascience, webdev, machinelearning, computerscience
An integration-specific system developed with the Agile Methodology to provide an easy solution for "newcomers" in college. It helps to allocate hostel rooms, suggest a suitable stream, and provide daily class schedules and pending tasks. Moreover, it makes it easy for students to complete a few tasks without the help of their mentors. It aims to ease the daily routine queries faced by students new to campus, to predict the professions that aspiring engineering students can pursue after completing the course, and to help automate the counselling process.

Website: http://befriend-minor.herokuapp.com
Repository: https://github.com/rahulbordoloi/Enhanced-Analytic-System-for-Smart-University-Assistance

Technologies Being Used: Data Mining, Machine Learning, Full Stack Development, Flutter.
Tools and Languages Being Used: Python, PHP, MongoDB, MySQL, Flask, AWS, EC2, Restful API, CSS, JavaScript, Apache Server, PuTTY.

1. Introduction

Often, after higher secondary results, students find themselves in turmoil when deciding what stream of engineering is best suited for them. BEFRIEND aims to be a guide. It will provide students with an easy and user-friendly platform, available in both web and mobile application formats, where students can enter their marks and their entrance exam rank. Implementing data analytics on the input, it will provide the best choice for the student. Moreover, pre-final and final year students face difficulty in choosing the right field or domain for their career paths to grow. BEFRIEND will help them make this decision. The classical process of allocating rooms to boarders is completely manual, and it takes a lot of time and effort to provide confirmed rooms to boarders, due to which the students face a lot of problems. BEFRIEND simplifies the process.
Here the students can enter their preferences (2/3-bedded, AC/Non-AC, attached/non-attached washroom) and, based on the availability of rooms, hostels will be allocated to them. Its main aim is to be a 24x7 guide for students. To facilitate this, the portal will have an event scheduler where a student can keep track of his/her due academic projects, assignments, quizzes and also non-academic activities. In this way, they can efficiently manage their time between the two. Along with a lot of other functionalities, it will also contain a notice board displaying daily notifications and important announcements so that students can keep track of the latest happenings around the campus.

2. Mission Statement

In student life, time management plays a crucial role. With the rapidly changing needs of a dynamic world market, a student must develop laterally. However, due to a lack of planning and relevant information, the student loses track in the vast, overflowing data present all around him/her. Being new to college, first-year students are unable to cope with the unfamiliarity of their new surroundings. This leads to improper utilization of their full potential, as a good amount of time is wasted on building faith in information sources and checking their relevance and validity. A few years back, a computer engineer's task was to manage and deal in hardware and software components. However, with the passage of time and the advancement of technology, the task has diversified. Due to the presence of varied domains in the IT sector, many final year students get deviated and are unable to decide which domain is best suited for them to pursue as a career.

3. Mission Objective

Our aim is to build a complete self-adapted system that deals with several problems together without manual interference. Integrating all sub-systems in a single platform will make it more robust and efficient, both in terms of technology and time management.
The auto-correlated system we aim to build provides a solution by feeding the output of one sub-system as the input to another.

4. Project Goal

➢ Suggesting freshers the branch that is most suitable for them
➢ Simplifying the process of hostel allotment for students
➢ A virtual static Mentor Bot to reduce workload
➢ Career recommendations for undergraduate engineering students considering higher education
➢ A to-do list for assignments and other tasks
➢ A self-monitoring system, to check progress and the learning curve after each semester
➢ Notifying students about all the latest notices and events hosted by KIIT and KISS
➢ A daily schedule to keep up students' learning pace, including class routines, class tests and quizzes
➢ Voice detection to ensure the student's emotional state is stable
➢ Creation of a platform where alumni of KIIT can participate in discussions with their batch mates

5. Features

Completely self-adapted system. Multiple-utility platform. Secure and reliable. State-of-the-art tools and platforms are used to get consistent and accurate results every time. User-friendly interface. Available in both web and mobile application interfaces.

Deployment

This API can be hosted on platforms like Heroku, AWS, and others. MongoDB Atlas or mLab can be used as a remote database. For instance, the application can be deployed on Heroku by creating and registering an account. Then create a new app, choose a deployment method (terminal or GitHub) and follow the instructions there. A remote database can be created using MongoDB Atlas or mLab. For MongoDB Atlas, you just need to create your account, make a new cluster and link the cluster to your application through a URL. Following the given steps, you will have a remote application up and running.
rahulbordoloi
342,038
Need help deploy Docker that needs storage on GCP
For example, https://github.com/umputun/remark42 Internal BoltDB Uses docker-compose.yml h...
0
2020-05-23T06:51:07
https://dev.to/patarapolw/need-help-deploy-docker-that-needs-storage-on-google-2l7n
docker, devops, googlecloudplatform, help
For example,

- https://github.com/umputun/remark42
  - Internal BoltDB
  - Uses `docker-compose.yml`
- https://posativ.org/isso/
  - Internal SQLite
- I also like Discourse, but [it seems expensive](https://www.discourse.org/pricing).

I need persistent storage, but I cannot use an external MongoDB, MySQL or Postgres. Or, I could just use DigitalOcean and SSH tunnel in to do everything, but I am not sure about the security.
patarapolw
342,057
Flood Fill Algorithm
Description Flood fill also known as Seed Fill algorithm helps us to find connected area t...
0
2020-05-23T08:26:27
https://dev.to/ashishpanchal/flood-fill-algorithm-2blj
algorithms, python, programming, recursion
---
title: Flood Fill Algorithm
published: true
description:
tags: #algorithms, #python, #programming, #recursion
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/g3e4p5u1qhr0jy66443m.jpg
---

Description
------------

Flood fill, also known as the seed fill algorithm, helps us find the area connected to a given node in a multi-dimensional array. It is popularly known for its use in the bucket fill tool of paint programs to fill similarly colored areas, and it is also used in games like Minesweeper, Go, etc. The following animation shows how the flood fill algorithm fills the area connected to a node with a similar color.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/jugf21414se3y6onwaee.gif)

Algorithm
----------

In order to achieve this, the algorithm works as follows:

*Input parameters: source_node, color_to_replace, new_color*

Step 1: If color_to_replace is equal to new_color then return.
Step 2: Else if the color of source_node is not equal to color_to_replace then return.
Step 3: Else set the source_node color to new_color.
Step 4: Recursively start performing flood fill on the following nodes:
- Left node of the source_node, color_to_replace, new_color
- Right node of the source_node, color_to_replace, new_color
- Upper node of the source_node, color_to_replace, new_color
- Node below the source_node, color_to_replace, new_color
Step 5: Return.

Implementation
---------------

Let's solve a problem to understand it better. You're given an image represented as a 2D array, the row and column of a source node, and a new color. You need to color the source node and all of its adjacent nodes which are colored similarly to the source node.
Example 1:
Input: [[1,0,1],[1,1,0],[1,1,1]], source row = 1, source column = 1 and new color = 2
Output: [[2,0,1],[2,2,0],[2,2,2]]

Example 2:
Input: [[0,0,0],[1,1,0],[1,0,1]], source row = 1, source column = 1 and new color = 2
Output: [[0,0,0],[2,2,0],[2,0,1]]

Note: Try implementing the algorithm on your own before moving to the solution, it'll surely help 🙂

Solution:

```python
class Solution(object):
    def color_it(self, image, sr, sc, new_color):
        """
        :type image: List[List[int]]
        :type sr: int
        :type sc: int
        :type new_color: int
        :rtype: List[List[int]]
        """
        # Initiate an array to keep the track of visited elements
        # and color of the elements
        self.image = []
        for i in range(len(image)):
            self.image.append([])
            for j in range(len(image[i])):
                # Keep the track of color and if the node is already visited
                value = {"color": image[i][j], "visited": False}
                self.image[i].append(value)

        current_color = image[sr][sc]
        self.flood_fill(image, sr, sc, current_color, new_color)

        for i in range(len(image)):
            for j in range(len(image[i])):
                image[i][j] = self.image[i][j]["color"]
        return image

    def flood_fill(self, image, row, col, current_color, new_color):
        if row < 0 or row >= len(image) or col < 0 or col >= len(image[0]):
            return
        if image[row][col] != current_color:
            return
        elif image[row][col] == current_color and self.image[row][col]["visited"]:
            return
        elif image[row][col] == current_color and not self.image[row][col]["visited"]:
            self.image[row][col]["color"] = new_color
            self.image[row][col]["visited"] = True

            # Recursively call flood_fill for adjacent nodes
            self.flood_fill(image, row + 1, col, current_color, new_color)
            self.flood_fill(image, row - 1, col, current_color, new_color)
            self.flood_fill(image, row, col + 1, current_color, new_color)
            self.flood_fill(image, row, col - 1, current_color, new_color)
```
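The recursive solution can hit Python's recursion limit on large images. As a comparison, here is a minimal iterative sketch of the same algorithm using an explicit stack (the function name `flood_fill_iterative` is mine, not from the article):

```python
def flood_fill_iterative(image, sr, sc, new_color):
    """Color every node connected to (sr, sc) that shares its color."""
    current_color = image[sr][sc]
    if current_color == new_color:  # Step 1: nothing to do
        return image
    stack = [(sr, sc)]
    while stack:
        row, col = stack.pop()
        # Skip out-of-bounds nodes and nodes of a different color
        if 0 <= row < len(image) and 0 <= col < len(image[0]) \
                and image[row][col] == current_color:
            image[row][col] = new_color
            # Visit the four adjacent nodes next
            stack.extend([(row + 1, col), (row - 1, col),
                          (row, col + 1), (row, col - 1)])
    return image

print(flood_fill_iterative([[1, 0, 1], [1, 1, 0], [1, 1, 1]], 1, 1, 2))
# [[2, 0, 1], [2, 2, 0], [2, 2, 2]]
```

Note that no separate visited structure is needed here: because we bail out early when `current_color == new_color`, recoloring a node already marks it as done.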
ashishpanchal
342,113
Why No Modern Programming Language Should Have a 'Character' Data Type
Photo by Henry &amp; Co. from Pexels Standards are useful. They quite literally allow us to commu...
0
2020-05-27T21:52:42
https://dev.to/awwsmm/why-no-modern-programming-language-should-have-a-character-data-type-51n
history, javascript, healthydebate
Photo by [Henry & Co.](https://www.pexels.com/@hngstrm?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels) from [Pexels](https://www.pexels.com/photo/gray-concrete-wall-2599543/?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels)

---

Standards are useful. They quite literally allow us to communicate. If there were no standard grammar, no standard spelling, and no standard pronunciation, there would be no language. Two people expressing the same ideas would be unintelligible to one another. Similarly, without standard encodings for digital communication, there could be no internet, no world-wide web, and no DEV.to.

When digital communication was just beginning, competing [encodings abounded](https://superuser.com/q/537229/728488). When all we can send along a wire are `1`s and `0`s, we need a way of _encoding_ characters, numbers, and symbols within those `1`s and `0`s. [Morse Code](https://en.wikipedia.org/wiki/Morse_code) did this, [Baudot codes](https://en.wikipedia.org/wiki/Baudot_code) did it in a different way, [FIELDATA](https://en.wikipedia.org/wiki/Fieldata) in a third way, and dozens -- if not hundreds -- of other encodings came into existence between the middle of the 19th and the middle of the 20th centuries, each with their own method for grouping `1`s and `0`s and translating those groups into the characters and symbols relevant to their users.

Some of these encodings, like Baudot codes, used 5 _bits_ (binary digits, `1`s and `0`s) to express up to `2^5 == 32` different characters. Others, like FIELDATA, used 6 or 7 bits. Eventually, [the term _byte_](https://en.wikipedia.org/wiki/Byte) came to represent this grouping of bits, and a byte reached the modern _de facto_ standard of the 8-bit _octet_. Books could be written about this slow development over decades (and many surely have been), but for our purposes, this short history will suffice.
It was this baggage that the [ANSI committee](https://en.wikipedia.org/wiki/American_National_Standards_Institute) (then called the American Standards Association, or ASA) had to manage while defining their new American Standard Code for Information Interchange (ASCII) encoding in 1963, as computing was quickly gaining importance for [military, research, and even civilian use](https://en.wikipedia.org/wiki/Personal_computer#History). ANSI decided on a 7-bit, 128-character ASCII standard, to allow plenty of space for the 52 characters (upper and lowercase) of the English language, 10 digits, and many control codes and punctuation characters.

> Even though ASCII was defined as a 7-bit encoding, the popularity of 8-bit bytes meant that ASCII characters commonly included a high 8th bit which went unused. In some applications, that bit acted as [a toggle to make text italic](https://en.wikipedia.org/wiki/ASCII#Bit_width).

In spite of this seeming embarrassment of riches with regards to defining symbols and control codes for English typists, there was one glaring omission: the rest of the world's languages. And so, as computing became more widespread, computer scientists in non-English-speaking countries needed their own standards. Some of them, like [ISCII](https://en.wikipedia.org/wiki/Indian_Script_Code_for_Information_Interchange) and [VISCII](https://en.wikipedia.org/wiki/VISCII), simply extended ASCII by tacking on an additional byte, while keeping the original 128 ASCII characters the same. [Logographic](https://en.wikipedia.org/wiki/Logogram) writing systems, like Mandarin Chinese, require thousands of individual characters. Defining a standard encompassing multiple logographic languages could require multiple additional bytes tacked onto ASCII.

Computer scientists realised early on that this would be a problem. On the one hand, it would be ideal to have a single, global standard encoding.
On the other hand, if 7 bits worked fine for all English-language purposes, those additional 1, 2, or 3 bytes would simply be wasted space most of the time ("zeroed out"). When these standards were being created, disk space was at a premium, and spending three quarters of it on zeroes for a global encoding was out of the question. For a few decades, different parts of the world simply used different standards.

But in the late 1980s, as the world was becoming more tightly connected and global internet usage expanded, the need for a global standard grew. [What would become the Unicode consortium](https://en.wikipedia.org/wiki/Unicode#History) began at Apple in 1987, defining a 2-byte (16-bit) standard character encoding as a "wide-body ASCII":

> Unicode aims in the first instance at the characters published in modern text... whose number is undoubtedly far below 2^14 = 16,384. Beyond those modern-use characters, all others may be defined to be obsolete or rare; these are better candidates for private-use registration than for congesting the public list of generally useful Unicodes.

And so Unicode fell into the same trap as ASCII in its early days: by over-narrowing its scope (focusing only on "modern-use characters") and prioritising disk space, Unicode's opinionated 16-bit standard -- declaring by fiat what would be "generally useful" -- was predestined for obsolescence.

This [2-byte encoding, "UTF-16"](https://en.wikipedia.org/wiki/UTF-16), is still used for many applications. It's the `string` encoding in JavaScript and the `String` encoding in Java. It's used internally by Microsoft Windows. But even 16 bits' worth (65,536) of characters quickly filled up, and Unicode had to be expanded to include "generally useless" characters. The encoding transformed from a fixed-width one to a variable-width one as new characters were added to Unicode.
Modern Unicode consists of over [140,000 individual characters](https://en.wikipedia.org/wiki/Unicode), requiring at least 18 bits to represent. This, of course, creates a dilemma. Do we use a [fixed-width 32-bit (4-byte)](https://en.wikipedia.org/wiki/UTF-32) encoding? Or a [variable-width](https://en.wikipedia.org/wiki/UTF-8) encoding? With a variable-width encoding, how can we tell whether a sequence of 8 bytes is eight 1-byte characters or four 2-byte characters or two 4-byte characters or some combination of those?

> UTF-8, the modern, variable-width incarnation of Unicode, is actually a code-within-a-code. The bit sequence in the first byte of a multi-byte character [encodes within it the number of bytes in that sequence](https://stackoverflow.com/a/44568131/2925434).

This is a complex problem. Because of its UTF-16 encoding, [JavaScript will break apart multibyte characters](https://blog.jonnew.com/posts/poo-dot-length-equals-two) if they require more than two bytes to encode:

{% codepen https://codepen.io/awwsmm/pen/mdevqqG %}

Clearly, these are "characters" in the lay sense, but not according to UTF-16 `string`s. The entire body of terminology around characters in programming languages has now gotten so overcomplicated that we have [characters, code points, code units, glyphs, and graphemes](https://stackoverflow.com/q/27331819/2925434), all of which mean slightly different things, except sometimes they don't. Thanks to combining marks, a single grapheme -- the closest thing to the non-CS-literate person's definition of a "character" -- can contain a virtually unlimited number of UTF-16 "characters". There are [multi-thousand-line libraries dedicated _only_ to splitting text into graphemes](https://github.com/orling/grapheme-splitter). Any single emoji is a grapheme, but a single emoji can sometimes consist of 7 or more individual UTF-16 characters.
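To make this concrete, here is a small illustration in Python (whose strings index by code point; the byte counts below assume the standard UTF-8 and UTF-16 encodings):

```python
# The "family" emoji is a single grapheme built from 7 code points:
# four person emoji joined by three zero-width joiners (U+200D).
family = "\U0001F468\u200D\U0001F469\u200D\U0001F467\u200D\U0001F466"

print(len(family))                           # 7 code points
print(len(family.encode("utf-8")))           # 25 bytes in UTF-8
print(len(family.encode("utf-16-le")) // 2)  # 11 UTF-16 code units
```

Yet on screen this renders as one "character" -- exactly the grapheme/code-point gap described above.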
In my opinion, the only sensibly-defined entities in character wrangling as of today are the following:

- "byte" -- a group of 8 bits
- "code point" -- this is just a number, contained within the Unicode range `0x000000 - 0x10FFFF`, which is mapped to a Unicode element; a code point requires between 1 and 3 bytes to represent
- "grapheme" -- an element which takes up a single horizontal "unit" of space to display on a screen; a grapheme can consist of 1 or more code points

A code point encoded in UTF-32 is always four bytes wide and uniquely maps to a single Unicode element. A code point encoded in UTF-8 can be 1-4 bytes wide, and can compactly represent any one Unicode element. If there were no such thing as combining marks, either or both of those two standards should be enough for the foreseeable future. But the fact that combining marks can stack Unicode elements on top of each other in the same visual space blurs the definition of what a "character" really is. You can't expect a user to know -- or care about -- the difference between a character and a grapheme.

So what are we really talking about when we define a `character` data type in a programming language? Is it a fixed-width integer type, like in Java? In that case, it can't possibly represent all possible graphemes and doesn't align with the layperson's understanding of "a character". If an emoji isn't a single character, what is it? Or is a `character` a grapheme? In which case, the memory set aside for it can't really be bounded, because any number of combining marks could be added to it. In this sense, a grapheme is just a `string` with some unusual restrictions on it.

Why do you need a `character` type in your programming language anyway? If you want to loop over code points, just do that. If you want to check for the existence of a code point, you can also do that without inventing a `character` type.
If you want the "length" of a `string`, you'd better define what you mean -- do you want the horizontal visual space it takes up (number of graphemes)? Or do you want the number of bytes it takes up in memory? Something else maybe? Either way, the notion of a "character" in computer science has become so confused and disconnected from the intuitive notion, I believe it should be abandoned entirely. Graphemes and code points are the only sensible way forward.
awwsmm
342,118
Convince Your Boss to Pay for Coding Courses
While learning to code is important to get a job, keeping up with your learning after you get that po...
0
2020-05-23T10:55:05
https://dev.to/adriantwarog/convince-your-boss-to-pay-for-coding-courses-363
career, motivation, productivity
While learning to code is important to get a job, keeping up with your learning after you get that position can lead to things like pay rises, higher positions, and even the chance to grow outside of your skillset/department. If we are lucky enough to be in a position of employment, we should ask our current employers/bosses to help us develop our skills further. Doing so leads to a number of benefits, not just for us, but for them too. In this video, I wanted to share some ways you can encourage your current company to pay for coding courses for you.

TLDR?

- Bring up the idea of learning more and improving with your boss
- Create an email to formalise this request
- Review different courses that are in line with the company's requirements
- Be realistic about the costs and budget for training, and forward a few examples
- Organise an in-person meeting with your boss to request learning
- Review your company's policies, as larger companies often include professional development as a way to boost employees' skills

{% youtube mFZ_djyYBhk %}

<center>[Youtube: Convince Your Boss to Pay for Coding Courses](https://youtu.be/mFZ_djyYBhk)</center>

## Follow and support me:

Special thanks if you subscribe to my channel :)

* [🎞️ Youtube](https://www.youtube.com/channel/UCvM5YYWwfLwpcQgbRr68JLQ?sub_confirmation=1)
* [🐦 Twitter](https://twitter.com/adrian_twarog)

## Want to see more:

I will try to post new great content every day. Here are the latest items:

* [Adobe XD to Fully Responsive WordPress Site](https://dev.to/adriantwarog/adobe-xd-to-fully-responsive-wordpress-site-16e0)
* [Adobe XD to HTML Full Process](https://dev.to/adriantwarog/adobe-xd-to-html-full-process-ao6)
* [Full Tutorial on how to use SASS to improve your CSS](https://dev.to/adriantwarog/full-tutorial-on-how-to-use-sass-to-improve-your-css-57on)
* [Creating a Mobile Design and Developing it](https://dev.to/adriantwarog/creating-a-mobile-design-and-developing-it-5c4o)
adriantwarog
342,146
A Contacts App on Kubernetes
How I found out about Kubernetes? During my last internship, I was assigned a pilot projec...
0
2020-05-23T11:54:47
https://dev.to/namitdoshi/a-contacts-app-on-kubernetes-1h1l
octograd2020, docker, kubernetes
---
title: A Contacts App on Kubernetes
published: true
description:
tags: octograd2020, docker, kubernetes
---

# How I found out about Kubernetes

During my last internship, I was assigned a pilot project: a contacts app. I finished building my contacts app, and then they wanted me to Dockerize the application. That was the first time I heard about containerization and Docker. After facing some difficulties, I was able to Dockerize the application.

## A new challenge awaits

Just after I finished with Docker, a new task was waiting for me, even more challenging and exciting. I was told to learn about Kubernetes and deploy the Dockerized app on a Kubernetes deployment. What the hell is that? Is that even possible for me? (These were my thoughts when I heard about the task.) I started learning and doing things by trial and error xD; that's how I got introduced to Kubernetes.

## About the project

The project comprised three main stages (leaving out error handling and debugging). The first step was to build the contacts application, the second step was to Dockerize the application, and the last but mightiest step was to deploy the application on a Kubernetes cluster.

## Links

[Live Project Link](https://go.aws/2zCvAj0)
[Github Link](https://github.com/namitdoshi/phonebook)
[Docker Image Link](https://hub.docker.com/r/namitdoshi/web-php)

## The festivities that followed

During my internship, I was introduced to a lot of new technologies. This made me even more curious, and I researched further into the topics. As a result, I ran into CI/CD (Continuous Integration/Continuous Deployment) and tools like Jenkins and CircleCI; furthermore, I completed a Cloud DevOps Engineer Nanodegree.
namitdoshi
342,179
YEW Tutorial: 04 ...and services for all!
Never break a promise (Photo by LexScope on Unsplash) In this fourth part we are going first to do s...
5,838
2020-05-23T13:25:11
https://dev.to/davidedelpapa/yew-tutorial-04-and-services-for-all-1non
rust, yew, webassembly
Never break a promise (Photo by LexScope on Unsplash)

In this fourth part we are first going to make some "minor" improvements, and hopefully show a little more of the potential of the [Yew framework](https://yew.rs). In this article we will be tinkering around, as usual; after that, we'll start to see the light and the usefulness at the end of the tunnel (still, the exit is far away).

### Code to follow this tutorial

The code has been tagged with the relative tutorial and part.

```Bash
git clone https://github.com/davidedelpapa/yew-tutorial.git
cd yew-tutorial
git checkout tags/v4p0
```

## Part 0: Cleanup

First of all, let's remove some clutter from our _src/app.rs_

```Rust
use yew::prelude::*;

pub enum Msg {
    AddOne,
    RemoveOne,
}

pub struct App {
    items: Vec<i64>,
    link: ComponentLink<Self>,
}

impl Component for App {
    type Message = Msg;
    type Properties = ();

    fn create(_: Self::Properties, link: ComponentLink<Self>) -> Self {
        App {
            link,
            items: Vec::new(),
        }
    }

    fn update(&mut self, msg: Self::Message) -> ShouldRender {
        match msg {
            Msg::AddOne => {
                self.items.push(1);
            }
            Msg::RemoveOne => {
                let _ = self.items.pop();
            }
        }
        true
    }

    fn view(&self) -> Html {
        let render_item = |item| {
            html! {
                <>
                    <tr><td>{ item }</td></tr>
                </>
            }
        };
        html! {
            <div class="main">
                <div class="flex three">
                    <div class="card">
                        <header>
                            <h2>{"Items: "}</h2>
                        </header>
                        <div class="card-body">
                            <table class="primary">
                                { for self.items.iter().map(render_item) }
                            </table>
                        </div>
                        <footer>
                            <button onclick=self.link.callback(|_| Msg::AddOne)>{ "Add 1" }</button>
                            {" "}
                            <button onclick=self.link.callback(|_| Msg::RemoveOne)>{ "Remove 1" }</button>
                        </footer>
                    </div>
                </div>
            </div>
        }
    }
}
```

Away with consoles and dialogs, and even random numbers. We are just adding `1` and removing `1`, literally.

## Part 1: Being persistent

#### Code to follow this part

```Bash
git checkout tags/v4p1
```

After cleaning up we can now bring back a service, the [storage](https://developer.mozilla.org/en-US/docs/Web/API/Storage).
```Rust
use yew::services::storage;
```

What is the storage, and how does it work? Well, first of all MDN has got a [better explanation](https://developer.mozilla.org/en-US/docs/Web/API/Storage) than what I could ever provide. Let's say that we are using a persistent Key-Value store that any modern browser provides for us, in order to save data.

There are really two kinds, [session storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage) and [local storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage). We are using the local storage, which is saved across browser sessions.

Since it is a Key-Value store, we can either save all our data as Key-Value pairs, or set a single key and dump all the data in it. The lazy bum in me always prefers the dump-and-forget approach, obviously.

We therefore need to establish a main key under which all our data will live.

```Rust
const KEY: &'static str = "yew.tut.database";
```

Now we also need to create a virtual database with a custom data structure. We will use [serde](https://serde.rs/), one of the most mature crates in all of Rust's ecosystem, to come to our rescue. Don't forget to update the _Cargo.toml_ with:

```Toml
serde = "1.0"
```

### Setup

This is all the `use` area of the _src/app.rs_, with our KEY right after it.

```Rust
use serde::{Deserialize, Serialize};
use wasm_bindgen::prelude::*;
use yew::format::Json;
use yew::prelude::*;
use yew::services::storage::Area;
use yew::services::StorageService;

const KEY: &'static str = "yew.tut.database";
```

It is a lot more than just serde! In order:

- we have serde's _Serialize_ and _Deserialize_ to dump the data and load it back
- we have wasm_bindgen to go lower in the stack. We're getting more hardcore here. Explanations later on.
- from Yew we import _Json_ so we can create and dump data using JSON, because if we are lazy, why not also be classy while doing it?
- of course we still need Yew's prelude
- then we also import Yew's mod that refers to the storage area onto which we will save data (the local or session area we were discussing before), and the storage service itself

(If you look at the code you'll see I have implemented right away the hardcore stuff needed with wasm_bindgen, but we'll talk about it later on)

### Custom Data Structures

Now we can start to build our custom data structures!

```Rust
#[derive(Serialize, Deserialize)]
pub struct Database {
    tasks: Vec<Task>,
}
```

Our virtual database is a vec of tasks. Sometimes trying to be as simple as possible is just right.

I like to _impl_ my structs right away, just to keep together code that belongs together, and that I might want to refactor out later on:

```Rust
impl Database {
    pub fn new() -> Self {
        Database { tasks: Vec::new() }
    }
}
```

Nothing too fancy, just a _new_ constructor. Following this, we can declare our _Task_:

```Rust
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Task {
    title: String,
    description: String,
}
```

The `Debug` derive is there also if you want to call back the console and resume logging... Go ahead, I won't judge you. Instead, notice the `Clone` derive: if we want to copy a task and bring it out of any function or callback, or if we want to just save a new task inside our database, most probably we will need to clone a task somewhere. Otherwise be prepared to face the compiler's wrath, because you will end up mutably borrowing some immutable Task at some point, or making a Task outlive its scope!
And now a beautiful impl for our _Task_:

```Rust
impl Task {
    pub fn new() -> Self {
        Task {
            title: "".to_string(),
            description: "".to_string(),
        }
    }
    pub fn is_filledin(&self) -> bool {
        (self.title != "") && (self.description != "")
    }
}
```

- We have here an empty _new_ constructor
- We also have a check to know whether the Task is filled in or not, which we'll end up using later on

### Msg and App

We need to update our _Msg_ to manage our _Task_:

```Rust
pub enum Msg {
    AddTask,
    RemoveTask(usize),
    SetTitle(String),
    SetDescription(String),
}
```

We add and remove tasks; however, we will not add them all at once: since we will have a form with a field for the _title_ and a field for the _description_, we need a way to collect those two separately. We therefore have a _SetTitle_ message, containing the title in a string, and a _SetDescription_ message, containing the description. The _AddTask_ will be fired by the submit button: when the message arrives, we will collect the title and description and save them in a new task. More on this mechanism later on.

Now, in our app, we can take away the _items_ and add _storage_ and _database_

```Rust
pub struct App {
    link: ComponentLink<Self>,
    storage: StorageService,
    database: Database,
    temp_task: Task,
}
```

We added a temporary task as well, to hold information on the task being filled in the form, before it is stored in the database.
Let's move on to the 3 functions in the impl of our _App_

### fn create

```Rust
fn create(_: Self::Properties, link: ComponentLink<Self>) -> Self {
    let storage = StorageService::new(Area::Local);
    let Json(database) = storage.restore(KEY);
    let database = database.unwrap_or_else(|_| Database::new());
    App {
        link,
        storage,
        database,
        temp_task: Task::new(),
    }
}
```

The `let` lines are those of concern here:

- First we need to create a new storage object, and we are referencing the local, not the session, storage (specified by passing `Area::Local` to the constructor)
- We make a JSON out of the database, and we create it using the restore function. That is, we restore the database by loading it as JSON from the storage connected to the KEY (if it exists).
- Of course the first time around the store area will be empty: in this case the _restore_ will return an error, which is why we unpack it in the next line with an _unwrap_or_else_: if there is something to restore, we reassign the unwrapped content to the database; otherwise we create a new database by calling its constructor inside the closure of the _unwrap_or_else_.

### fn update

The _update_ function is where most of the magic usually happens, so we'll examine it carefully:

```Rust
fn update(&mut self, msg: Self::Message) -> ShouldRender {
    match msg {
        Msg::AddTask => {
            if self.temp_task.is_filledin() {
                self.database.tasks.push(self.temp_task.clone());
                self.storage.store(KEY, Json(&self.database));
                self.temp_task = Task::new();
                refreshform("taskform");
            }
        }
        Msg::RemoveTask(pos) => {
            let _ = self.database.tasks.remove(pos);
            self.storage.store(KEY, Json(&self.database));
        }
        Msg::SetTitle(title) => {
            self.temp_task.title = title;
        }
        Msg::SetDescription(description) => {
            self.temp_task.description = description;
        }
    }
    true
}
```

We go message by message. Remember that we have a temporary _Task_ that we are using as a buffer in order to insert the title and description separately, with two different form input fields.
`Msg::AddTask`: What we are doing here is simple. After the user has filled in the form, containing a field for the title and a field for the description, the user will click on the submit button (which we will call _Add task_). However, before adding this temporary Task permanently to our storage, we need to check that both the title and description have been inserted. We do it by calling the _is_filledin()_ function we implemented for the _Task_ struct earlier.

After this simple integrity check on the temporary _Task_, we are going to push it into the virtual database. Here we need to _clone()_ it; that is, we are saving a copy of the temporary task inside our virtual DB.

We update the storage by re-dumping the whole content of the virtual DB inside the value corresponding to our key: that is the purpose of the `self.storage.store(KEY, Json(&self.database))`; otherwise the virtual database and the storage of the page will be out of sync. Admittedly it is a lazy workaround, but the other option, that is, to implement a proper serialize-deserialize method in order to save each task as a Key-Value pair, takes much more coding effort. Of course, for a serious project, that should be the way; but for a tutorial's purposes (and small quick projects) I think this is sufficient.

Next we call the mysterious function `refreshform("taskform")`. This is the function we created with `wasm-bindgen`. Let's analyze its purpose first and its implementation next.

**Purpose**: When the user fills out a form in a page and clicks on submit, usually the page sends information to another page, which is loaded if the form's `action` field is set, or it refreshes the same page with updated information. What happens here instead is that when we fire the _Add Task_ button, the message is sent to the WASM app we are writing. This app has to refresh the form in some way, if we want to reset the _title_ and _description_ inputs.
As of today there is not a straightforward way of doing that with Yew. However, we are using Yew on top of the _web_sys_ stack. That means that we can use it to accomplish this task. We can use `#[wasm-bindgen]` to bind a function created with JavaScript to Rust. It is just [FFI](https://doc.rust-lang.org/1.9.0/book/ffi.html) in action.

The _wasm-bindgen_ offers us two ways of doing this: we can either write a `.js` file with a function that does this, and bind it to a Rust function interface, or we can write an inline JavaScript function right away. Of course, for a simple function it is OK to write inline JavaScript, while for a complex one we will need to write a separate file and bundle it in this project.

```Rust
#[wasm_bindgen(
    inline_js = "export function refreshform(form) {
        document.getElementById(form).reset();
    }"
)]
extern "C" {
    fn refreshform(form: &str);
}
```

**Implementation**: What we are doing here is to define the `inline_js` function first, and its Rust interface next; the interface is simply `fn refreshform(form: &str)`: it takes a string and does not have a return type. The inline JavaScript function is simple as well:

```javascript
export function refreshform(form) {
    document.getElementById(form).reset();
}
```

We pass the form name to `document.getElementById` and we use the `reset` method on it in order to reset the fields of the form. Of course you are allowed and encouraged to reuse this snippet in your own code.

After this brief detour into the `wasm_bindgen` lowlands, we can analyze the rest of the messages.

`Msg::RemoveTask`: we use the position inside the vector in order to pop that element out of the database's _tasks_ vector with `self.database.tasks.remove(pos)`; after this we dump the database into the storage again, in order to update it.

Wait a moment: how do we get the position inside the vector using a message from the `view` of the element?
Well, we will see later on how to implement this trick, and I'll let you judge whether it is really a trick or not.

`Msg::SetTitle` and `Msg::SetDescription`: both messages are self-explanatory; we just set the `self.temp_task` _title_ and _description_ respectively, no big deal.

### fn view

Finally, we can analyze the `view`.

```Rust
fn view(&self) -> Html {
    let render_item = |(idx, task): (usize, &Task)| {
        html! {
            <>
                <div class="card">
                    <header><label>{ &task.title }</label></header>
                    <div class="card-body"><label>{ &task.description }</label></div>
                    <footer><button onclick=self.link.callback(move |_| Msg::RemoveTask(idx))>{ "Remove" }</button></footer>
                </div>
            </>
        }
    };
    html! {
        <div>
            <h2>{"Tasks: "}</h2>
            { for self.database.tasks.iter().enumerate().map(render_item) }
            <div class="card">
                <form id="taskform">
                    <label class="stack"><input placeholder="Title" oninput=self.link.callback(|e: InputData| Msg::SetTitle(e.value)) /></label>
                    <label class="stack"><textarea rows=2 placeholder="Description" oninput=self.link.callback(|e: InputData| Msg::SetDescription(e.value))></textarea></label>
                    <button class="stack icon-paper-plane" onclick=self.link.callback(|_| Msg::AddTask)>{ "Add task" }</button>
                </form>
            </div>
        </div>
    }
}
```

The `render_item` closure has changed a little; however, we left the original name just to keep the structure. We are not creating rows of a _table_ as before, but creating new [picnic card](https://picnicss.com/documentation#card) items. We also have to annotate the variables we are capturing with our closure, because it's getting more complicated for the compiler to keep track of types. We are passing to it a `usize` as index, called _idx_, and a `Task`.

We are using the _task.title_ in the card's _header_, and the _task.description_ in the card's _body_ (watch out for it, it is a custom style we created inside _index.html_), but the real magic is now in the code of the _footer_.
In it we are creating a button to remove the task, with a callback that fires `Msg::RemoveTask`, passing it the `usize` _idx_, which holds the index of the task, given by its position inside the _tasks_ vector. This index is used to remove the task from the database. But where did we get that index from? Again: good things come to those who are patient.

The `html!` macro is simple; however, it is a little cluttered with many `<div>` and `class` attributes for presentation purposes. The structure is simple enough once the clutter is removed: there is really only the list of the tasks, and the submission form to enter a new task.

We use the `for` syntactic sugar to call the _render_item_ closure. The thing to notice here is that before `map()` we call `enumerate()`. This trick is what allows us to know the index that each task has inside the _tasks_ vector! Beware that `enumerate()` creates a tuple in the form of `(index, iter() content)`, and that same tuple is what we pass to `map()`. This is the reason we had to annotate our closure with a `(usize, &Task)` tuple.

The `form` field is straightforward:

- we have an input with an `oninput` callback to update the temporary Task with the message `Msg::SetTitle`
- likewise, we have a textarea with an `oninput` callback, used to update the temporary Task's description
- the button _Add task_ completes the form, firing the message `Msg::AddTask`.

What really deserves notice is the `<form id="taskform">`: here we are setting the id of the form, which we pass to the inline javascript function inside the `Msg::AddTask` handling logic. Now we can understand the purpose of `refreshform("taskform")`.

Let's give it a run, and a try.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ebp890haio0wdrl72v9v.jpg)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/etlit022uc1xo6i6lp3h.jpg)

Not bad at all.
Now if you are playing with it, you can notice that the task list persists even if you refresh the page. It stays the same also when you close the server and restart it (easy, since it's not server-side storage).

We can inspect the storage with the browser tools. Here are some images with Firefox and Chrome.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/5xz3zi1ptncg06976xd6.jpg)

In Firefox it is under the **Storage** tab of the dev tools, in the Local Storage.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/6coexicb3ckb01fyvqpu.jpg)

You can inspect the JSON stored in the value. Likewise you can inspect the JSON with Chrome's dev tools:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/55c44sk75y4aammnxc6s.jpg)

However, here the Local Storage is found under the _Application_ tab.

## Part 2: Being practical

You didn't think I would ever get to this point, did you? Well, check out my [yew-todolist](https://github.com/davidedelpapa/yew-todolist). Live at [yew-todolist.surge.sh](http://yew-todolist.surge.sh). It has got a _manifest.json_ to make it a real PWA. Well, not really, because to make a PWA we need `https`, but that is easy to get on premises.

#### Code to follow this part

It is another repository:

```Bash
git clone https://github.com/davidedelpapa/yew-todolist.git
cd yew-todolist
```

In order to make the manifest, I used this [firebase app](https://app-manifest.firebaseapp.com/), then you just link it back from the _index.html_:

```html
<link rel="manifest" href="manifest.json" />
```

The rest is needed for iOS compatibility.

One thing you should really notice is that I made a separate _deploy/_ directory in order to be able to run [surge](https://surge.sh/), and deploy there. Of course you can't get a real PWA without **https** support, and surge with https cost$ (quite so). Maybe with some other provider?

An interesting thing is that I refactored the code to separate the virtualdb out of _app.rs_ into its own _database.rs_.
Moreover, I implemented a different status system for each todo: `Active`, `Completed`, `Archived`. For now there is no turning back from completed to active (if someone completed it by error), and the archived ones are removed from the interface but sit on the database.

In the future I should make filters for views (active, completed, archived), ways to restore tasks, and add beginning and completion dates, plus ways to completely remove old archived tasks. An export function could be useful too. Some of these things we could implement with the knowledge gained so far; for some others we really have to learn more concepts in order to implement them; so stay tuned for more!

PS: If you want to tinker with it, pass it to some other provider that guarantees https, try out implementing new functions... That repo is there as a study/starting point. Take it as a homework source, play with it, and let me know how it goes.

Stay tuned for the next installment, _Drive through libraries_, where we will see how to interface with online APIs, NPM libraries, and more...
davidedelpapa
342,202
#My Final Year Project
My Final Project Every student in his final year of graduation is tasked with submitting a...
0
2020-05-23T17:25:16
https://dev.to/blank1611/my-final-year-project-1c25
octograd2020
## My Final Project

Every student in his final year of graduation is tasked with submitting a final year project; this project is a measure of the knowledge attained and how well we can put it to use. Before we start making a project we have to think about what purpose it serves and what problems it solves. To understand my project we must first understand the situation for which the project was developed.

What better way to utilize my knowledge than to make a project which solves an existing problem in my college? With that in mind, after some surveying around I came upon a problem faced by the Water Maintenance personnel in my college.

My college has a vast campus area with buildings spread throughout the campus, and there are overhead tanks on each of these buildings. These tanks are supplied water from one main sump and there is only one motor to supply water to all these tanks. Now one might wonder how water goes to all the tanks if there's only one motor? Well, there is a network of pipes with valves to channel water to the desired tank.

So the problem was the monitoring of these tanks: each time, a worker had to go to each block/building, climb all the floors, check the status of these tanks and relay the information to the Water Maintenance Cell. Automating the process of monitoring and filling up the tanks would have required major changes to the Water Supply system, which could have resulted in inconvenience to the people using it.
So the project I made addressed the following issues:

* Monitoring Overhead Water Tanks
* Relaying the information of the tanks' statuses to the Water Maintenance Cell

This project monitors the tanks and, when there is a change in the status of a tank (say the tank is empty or is full), it notifies the Maintenance Cell of this change on a display panel.

My project has two units:

* Tank Unit: This unit is deployed in individual tanks. It monitors the tank and relays the information to the Central Unit.
* Central Unit: This unit is set up in the Water Maintenance Cell. It collectively displays the information on a display panel and also sounds a buzzer to alert the personnel of the change in the tanks' statuses.

The display panel has two columns of LEDs: one column of red LEDs which signify the tanks as being "Empty", and the other of green LEDs which signify the tanks as being "Full". Rows on this panel signify the tanks.

Here is an image for a better understanding of the proposed model and situation (the situation is oversimplified in the image):

![Proposed Model](https://dev-to-uploads.s3.amazonaws.com/i/8db4381y4zsortqwiqgm.png)

Display Panel Prototype:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/bx02d6f8angz7srg52sk.jpeg)

Block Diagram for Tank Unit:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/mkfvtujsjq13td62kpcm.png)

## Demo Link

[here](https://drive.google.com/open?id=1FBkwt90udUY51aoznhnOivtQRZVpmil_)

## Link to [Central Unit code gist](https://gist.github.com/Blank1611/531ef665546a138589a6e2ff5970be32) & [Tank Unit code gist](https://gist.github.com/Blank1611/23e6d414cbbd100db08ba4fb71c08ab4)

## How I built it
### The stack or components used in this project:

* Arduino Nano (Tank Unit)
* Arduino Mega (Central Unit)
* GSM SIM800A
* Float Switch Sensor
* Buzzer
* 12VDC to 5VDC MB102 converter
* Power Adapter
* LEDs (Display Panel)
* Jumper Cables

In software:

* Arduino IDE (to program the Arduino)
* Embedded C as the programming language
* SoftwareSerial library (to communicate with the GSM module)
* Usage of AT commands to control the GSM module

### Issues I ran into:

#### Choosing a technology for relaying information:

* My college does not have a WiFi facility, so I could not implement IoT in this project.
* ZigBee and Bluetooth have very short range.

So I ended up using a GSM module.

#### Reading the messages received on the GSM module:

The GSM module needs a SIM card to send and receive messages, using the AT commands.

* I was able to display these messages on the Serial Monitor but was unable to store the message body in a variable to take action / process it further.
* Also, the GSM module has different modes to operate in for reading the text messages. The easiest mode is like a pipeline of sorts where, every time a new message is received, the message is not stored but forwarded to the device communicating with it, in our case the Arduino. So our program has to continuously monitor the stream coming from this pipeline and store it in a variable. To manage this problem I used codes like "@T1F#", "@T1E" where "@" signifies the start of the message and "#" signifies the end. But this mode had a disadvantage: if two messages were to arrive simultaneously, there is a possibility that one message is dropped.

So I thought of using another mode, a bit more complex, but it solves the problem. In this mode the messages are stored in the specified storage location, on the SIM card by default. This mode sends a notification of the newly arrived message; this notification contains the location where the message is stored. Reading the message at this location gives the body of the message.
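The framing idea behind those "@…#" codes can be sketched in a few lines. This is a hypothetical JavaScript illustration of the protocol only (the function name is mine); the real implementation is Embedded C reading from the GSM serial stream, available in the code gists linked above.

```javascript
// Hypothetical sketch of the "@...#" framing used in the project.
// A message like "@T1F#" marks Tank 1 as Full; "@T1E#" marks Tank 1 as Empty.
function parseTankMessage(stream) {
  const start = stream.indexOf('@');         // '@' signifies the start of the message
  const end = stream.indexOf('#', start);    // '#' signifies the end
  if (start === -1 || end === -1) return null; // incomplete frame: keep buffering
  const body = stream.slice(start + 1, end);   // e.g. "T1F"
  return {
    tank: parseInt(body.slice(1, -1), 10),   // tank number between 'T' and the status letter
    full: body.endsWith('F'),                // 'F' = full, 'E' = empty
  };
}

console.log(parseTankMessage('noise@T1F#noise')); // { tank: 1, full: true }
```

Framing with explicit start/end markers is what lets the receiver pick a complete message out of a continuous serial stream, which is exactly the problem described above.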
### New things I learnt from this project:

I discovered and learnt a lot of new things that go into deploying a real-time project. I learnt how to use a GSM module to send and receive information, and how to use AT commands to control the GSM module's behaviour. I also learnt how proper selection of components is crucial to a project's performance.

## Additional Thoughts / Feelings / Stories

This project was made in a way which was less costly and simple for the given situation. I know it's not something new, but it is something new in the college and it makes the life of the workers there a little easier.
blank1611
342,207
Handshaking lemma / Degree sum formula
Exploring the degree sum formula and what it tells us about simple graphs.
0
2020-05-23T20:39:37
https://dev.to/adnauseum/handshaking-lemma-degree-sum-formula-419a
math, graphtheory
---
title: Handshaking lemma / Degree sum formula
published: true
description: Exploring the degree sum formula and what it tells us about simple graphs.
tags: math, graph theory
---

Behold, the degree sum formula:

![The degree sum formula](https://dev-to-uploads.s3.amazonaws.com/i/hfb5y4dzy8aaqg5w86k3.png)

The degree sum formula states that, given a graph `G = (V,E)`, the sum of the degrees is twice the number of edges.

Let's look at K<sub>3</sub>, a complete graph (with all possible edges) with 3 vertices.

![K3 graph](https://dev-to-uploads.s3.amazonaws.com/i/ni1c40rv2u7t6j7tk6ry.png)

First, recall that _degree_ means the number of edges that are _incident_ to a vertex. A vertex is _incident_ to an edge if the vertex is one of the two vertices the edge connects. In the case of K<sub>3</sub>, each vertex has two edges incident to it. Actually, for all K graphs (complete graphs), each vertex has degree `n-1`, `n` being the number of vertices. Dope.

So, for each vertex in the set `V`, we increment our sum by the number of edges incident to that vertex. Or, in another way, construct a degree sequence for the graph and sum it: `sum([2, 2, 2]) # 6`. This sum is twice the number of edges. Our graph should have `6 / 2 = 3` edges.

The "twice the number of edges" bit may seem arbitrary. But each edge has **two** vertices incident to it. In the degree sum formula, we are summing the degree, the number of edges incident to each vertex. A degree is a property involving edges. Edges are connections between two vertices. Summing the degrees of each vertex will inevitably count every edge twice, once for each of its endpoints.

## Properties we can derive from this formula

Anything multiplied by 2 is even. Since the sum of degrees is twice the number of edges, the sum of degrees must always be even. (It follows that the number of vertices with odd degree must be even — this is the handshaking lemma.)

With the above knowledge, we can know if the description of a graph is possible.
This is useful in a puzzle such as the one I found in [this book](http://discrete.openmathbooks.org/dmoi3.html):

> At a recent math seminar, 9 mathematicians greeted each other by shaking hands. Is it possible that each mathematician shook hands with exactly 7 people at the seminar?

Each mathematician would shake the hand of 7 others, which amounts to shaking hands with every mathematician minus yourself and one other person. A graph may not have jumped out at you, but this puzzle can be solved nicely with one. Think of each mathematician as a vertex and a handshake as an edge. Can we have a graph with 9 vertices, each of degree 7? Applying the degree sum formula, we can say no. When we sum the degrees of all 9 vertices we get 63, since `9 * 7 = 63`. Since the sum of degrees is twice the number of edges, we know that there would have to be `63 ÷ 2` edges, or 31.5 edges. Since half a handshake is merely an awkward moment, we know this graph is impossible. I hate telling mathematicians that they can't shake hands.

Can we have 9 mathematicians shake hands with 8 other mathematicians instead? Can we have a graph with 9 vertices, each of degree 8? Summing 8 degrees 9 times results in 72, meaning there are 36 edges.
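The parity check above can be sketched in a few lines of code. This is my own illustration (the function name is not from the book): it sums a degree sequence and uses the degree sum formula to reject impossible descriptions.

```javascript
// Degree sum formula: sum of degrees = 2 * |E|.
// If the sum is odd, no graph with that degree sequence can exist.
// Note: an even sum is necessary but not sufficient for a sequence to be graphical.
function edgeCount(degrees) {
  const sum = degrees.reduce((acc, d) => acc + d, 0);
  return sum % 2 === 0 ? sum / 2 : null;
}

console.log(edgeCount([2, 2, 2]));        // 3 edges, our K3 example
console.log(edgeCount(Array(9).fill(7))); // null: 63 is odd, no such handshake graph
console.log(edgeCount(Array(9).fill(8))); // 36 edges
```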
adnauseum
342,437
Aiding your Research Internship's exploration
During my four years of engineering, I have seen friends who were interested in doing research intern...
0
2020-05-23T17:54:23
https://dev.to/littlestar642/aiding-your-research-internship-s-exploration-1i6g
angular, research, career, college
During my four years of engineering, I have seen friends who were interested in doing research internships under a qualified professor. This made them frantically search for professors across an entire plethora of institutions and their faculty lists. I saw this as a pain point that I could solve and hence built this application called

# lighthouse

![lighthouse image from unsplash](https://dev-to-uploads.s3.amazonaws.com/i/vnc21r8chhyjat0kj8t8.jpg)

The application is actually named **lighthouse** because it aims to guide students to the faculty of their choice across colleges and across departments.

Here is the stack of the project:

#### Frontend
1. Angular
2. HighCharts for dynamic graphs

#### Backend
1. NodeJs
2. MongoDB
3. NLP processing using a third-party API

## Usefulness of the Application:

As already stated, the app is trying to bridge the gap between students seeking research internships and professors who are highly qualified in their fields. It is a *click-click-click* resource to find your professor of interest and get in contact with them.

Here are some of the pages of the application developed so far.

### The home page:
![The Home Page](https://dev-to-uploads.s3.amazonaws.com/i/pojyhl4wy2r5xx066if9.png)

### The page with the details of the professor:
![Table representation for quick reference](https://dev-to-uploads.s3.amazonaws.com/i/uejkzmt429rw6zv7i3wd.png)
![Graphs for the data obtained](https://dev-to-uploads.s3.amazonaws.com/i/aa3fcmxb5iqkq7n0o827.png)

### The summary of the research papers
![The summary using a Third party api](https://dev-to-uploads.s3.amazonaws.com/i/rukw9hdxk0yd1cjivla1.png)

### The trends page:
![Trends of Learning](https://dev-to-uploads.s3.amazonaws.com/i/m6hfssujwg5rimr34ik1.png)

## Future Developments

There is so much that we can still develop into the application. Some of the points that I feel should be implemented are:

1. Building a trusted data pool.
   Presently the data is being scraped from Google Scholar using a script. Google Scholar tends to block you when you make too many requests. A workaround is needed to solve this.
2. Calculating a proper credibility score to rate the professor on the basis of their citations and work.
3. Making the user interface better with useful additions.

If you like my idea and want to work along in developing the project, you can always fork my repository [here](https://github.com/littlestar642/lighthouse) and join forces with me.

The four years of my engineering life have been full of surprises and learning. I kind of enjoyed this time as a vacation. I think it is important for an individual to learn contemporary resources and then build useful solutions using them. Such practices will help society as well as one's own personality. I did my best to enjoy these college years and learnt things from places where my heart took me.

You can connect with me on [LinkedIn](https://www.linkedin.com/in/littlestar642/). Always there for a quick chat!

Signing off!
Littlestar642
littlestar642
342,468
The Purpose of Programming
The purpose of programming is to create value.
0
2020-05-23T19:26:07
https://dev.to/aceafrica/the-purpose-of-programming-17lc
<p>The purpose of programming is to create value.</p>
aceafrica
342,494
All about console logging in JavaScript
In this article I want to collect all the information about logging in the console. Do you want to pu...
0
2020-05-23T20:17:35
https://dev.to/s0xzwasd/all-about-console-logging-in-javascript-588
javascript, webdev, beginners, tutorial
In this article I want to collect all the information about logging in the console. Do you want to pump up your skill in this and surprise your fellow developers? Then let's get started! ✨

## console.log()

This is probably one of the most frequent commands that we use when debugging an application. However, even this command has tricks you may not be aware of. For example, you can pass several arguments:

```jsx
const address = 'dev.to';
const protocol = 'HTTPS';

console.log(protocol, address);
```

In addition, you can pass objects, arrays or functions:

```jsx
const obj = { name: 'John', lastname: 'Doe' };

console.log(obj);
```

## console.warn() & .error() & .debug() & .info()

This is a very interesting logging feature, which should not be overlooked and can be used daily. Some of its most important advantages: the entries are visually distinct, warnings or errors can be recognized in the log immediately, and you can filter by the desired message type.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/vj205l64i3it6m7togj8.png)

### console.warn() ⚠

Using `console.warn()`, you can display a warning if something suddenly goes wrong but, for example, it does not greatly affect the main operation of the application.

```jsx
const a = 3;
const b = -5;

const multiply = (a, b) => {
  if (b < 0) console.warn('Ooops. b is less than zero!');
  return a * b;
}
```

### console.error() 🌋

Obvious use: when something seriously went wrong. More informative and immediately visible, unlike `console.log()`.

```jsx
const firstName = 'John';
const secondName = undefined;

const concatNames = (firstName, secondName) => {
  return firstName && secondName
    ? `${firstName} ${secondName}`
    : console.error('Something went wrong!');
}
```

### console.debug() 🐛

It is very convenient to use for debugging an application, since you can later go through the entire code and remove the calls when you are done.
🧹

```jsx
const car = {
  model: 'Audi',
  year: 2020,
  price: 132043
}

console.debug(car);
```

### console.info() ℹ

It can be used to display some kind of reference information, for example, the execution time of a specific piece of code / function.

```jsx
console.info(`Code executed in ${time} seconds`);
```

## Variables and special values

When using string values, you can specify placeholders that are filled by the following arguments. Be sure to specify the type of the value: string, number, and so on.

```jsx
console.log("Hello, my name is %s and I'm %d years old!", "Daniil", 22);
```

In the example above, we passed two values with different types: a string and a number. In addition, you can use tabs or newlines. The main thing is not to forget to escape the character :)

```jsx
console.log("\tThat tabs!");
```

Now you can smoothly switch to using CSS in the console and create beautiful output 🎉

## Using CSS in console! 🎨

We have come to the most interesting part. Using the special directive `%c`, you can set CSS properties for a string. Pass the styling itself as the second argument (most styles are supported). Let's look at an example right away.

```jsx
console.log("This is %cCSS", "color: aqua");
```

Now in the example, «CSS» will be displayed in aqua color. For those who use VS Code, there is a special extension that will help with this.

[VS Code Extension: Colored Console Log](https://marketplace.visualstudio.com/items?itemName=rehfres.colored-console-log)

## Output grouping

For ease of reading the logs, they can be grouped. This helps if you have a large amount of data that can be combined.

```jsx
console.group("That console group!");
console.log("Info in group #1");
console.groupEnd();
```

## «Assert» values

In short, `console.assert()` displays an error message if the assertion is false. Let's see an example.
```jsx const foo = undefined; console.assert(foo, "Foo is not set now"); // Assertion failed: Foo is not set now ``` ## Stack tracing 🏎 ```jsx function foo() { function bar() { console.trace(); } bar(); } foo(); ``` In the console, we get the function call stack in the following order: ``` bar foo ``` ## Try it out! I prepared a small sandbox where you can test all of the listed use cases (grouping in this sandbox is not supported at the time of writing). [Codesandbox](https://codesandbox.io/s/console-advanced-logging-jmiww) I will be glad to see examples of your use of logging in the comments :)
s0xzwasd
342,522
Dart and C : how to ffi and wasm (4) buffer pointer
This document is a continuation of the previous one. In This document, I introduce to how to use pon...
10,918
2020-05-23T21:48:00
https://dev.to/kyorohiro/dart-and-c-how-to-ffi-and-wasm-4-buffer-pointer-1n8e
dart, c, ffi, webassembly
This document is a continuation of [the previous one](https://dev.to/kyorohiro/dart-and-c-how-to-ffi-and-wasm-3-int-doube-buffer-pointer-832). In this document, I introduce how to use pointers and buffers with ffi and wasm.

# How to use Pointer and Buffer

### Create the C function

```ky.c
#include <stdio.h>
#include <stdlib.h>

// [Linux]
// find . -name "*.o" | xargs rm
// gcc -Wall -Werror -fpic -I. -c ky.c -o ky.o
// gcc -shared -o libky.so ky.o
// [Wasm]
// find . -name "*.o" | xargs rm
// find . -name "*.wasm" | xargs rm
// emcc ky.c -o ky.o
// emcc ky.o -o libky.js -s EXTRA_EXPORTED_RUNTIME_METHODS='["ccall", "cwrap"]' -s EXPORTED_FUNCTIONS="['_new_buffer','_init_buffer','_destroy_buffer']"
// cp libky.js ../web/libky.js
// cp libky.wasm ../web/libky.wasm

char* new_buffer(int size) {
  char* ret = malloc(sizeof(char)*size);
  return ret;
}

char* init_buffer(char* buffer, int size) {
  for(int i=0;i<size;i++) {
    buffer[i] = i;
  }
  return buffer;
}

void destroy_buffer(char* p) {
  free(p);
}
```

### Call from a Linux server with dart:ffi

```main.dart
import 'dart:ffi' as ffi;
import 'dart:typed_data' as typed;

// dart ./bin/main.dart
ffi.DynamicLibrary dylib = ffi.DynamicLibrary.open('/app/libc/libky.so');

// char* new_buffer(int size)
typedef NewBufferFunc = ffi.Pointer<ffi.Uint8> Function(ffi.Int32 size);
typedef NewBuffer = ffi.Pointer<ffi.Uint8> Function(int size);
NewBuffer _new_buffer = dylib
    .lookup<ffi.NativeFunction<NewBufferFunc>>('new_buffer')
    .asFunction<NewBuffer>();
ffi.Pointer<ffi.Uint8> newBuffer(int length) {
  return _new_buffer(length);
}

// char* init_buffer(char*, int size)
typedef InitBufferFunc = ffi.Pointer<ffi.Uint8> Function(ffi.Pointer<ffi.Uint8> buffer, ffi.Int32 size);
typedef InitBuffer = ffi.Pointer<ffi.Uint8> Function(ffi.Pointer<ffi.Uint8> buffer, int size);
InitBuffer _init_buffer = dylib
    .lookup<ffi.NativeFunction<InitBufferFunc>>('init_buffer')
    .asFunction<InitBuffer>();
ffi.Pointer<ffi.Uint8> initBuffer(ffi.Pointer<ffi.Uint8> buffer, int
length) {
  return _init_buffer(buffer, length);
}

// void destroy_buffer(char* p)
typedef DestroyBufferFunc = ffi.Void Function(ffi.Pointer<ffi.Uint8> buffer);
typedef DestroyBuffer = void Function(ffi.Pointer<ffi.Uint8> buffer);
DestroyBuffer _destroy_buffer = dylib
    .lookup<ffi.NativeFunction<DestroyBufferFunc>>('destroy_buffer')
    .asFunction<DestroyBuffer>();
void destroyBuffer(ffi.Pointer<ffi.Uint8> buffer) {
  _destroy_buffer(buffer);
}

void main(List<String> args) {
  // pointer and buffer
  var buffer = newBuffer(20); // new pointer
  for(var i=0;i<20;i++){
    print(buffer.elementAt(i).value); // random value or 0
  }

  // pointer -> pointer
  buffer = initBuffer(buffer, 20);
  for(var i=0;i<20;i++){
    print(buffer.elementAt(i).value); // 0, 1, 2, 3, 4, ....19
  }

  // pointer -> uint8list
  // 0, 1, 2, 3, 4, ....19
  typed.Uint8List bufferAsUint8List = buffer.asTypedList(20);
  for(var i=0;i<bufferAsUint8List.length;i++){
    print(bufferAsUint8List[i]);
  }

  // set value into buffer
  bufferAsUint8List[0] = 110;
  print(buffer.elementAt(0).value); // 110
}
```

### Call from a web browser with dart:js

```main.dart
import 'dart:js' as js;
import 'dart:typed_data' as typed;

// webdev serve --hostname=0.0.0.0
js.JsObject Module = js.context['Module'];
var HEAP8 = Module['HEAP8'];

js.JsFunction _new_buffer = Module.callMethod('cwrap',['new_buffer','number',['number']]);
int newBuffer(int length) {
  return _new_buffer.apply([length]);
}

js.JsFunction _init_buffer = Module.callMethod('cwrap',['init_buffer','number',['number','number']]);
int initBuffer(int buffer, int length) {
  return _init_buffer.apply([buffer, length]);
}

js.JsFunction _destroy_buffer = Module.callMethod('cwrap',['destroy_buffer','void',['number']]);
int destroyBuffer(int buffer) {
  return _destroy_buffer.apply([buffer]);
}

js.JsFunction _to_uint8list = js.context['to_uint8list']; // from util.js
typed.Uint8List toUint8List(int buffer, int length) {
  return _to_uint8list.apply([buffer, length]);
}

void main() {
  // new pointer
  var buffer = newBuffer(20);
  for(var i=0;i<20;i++){
    print('${HEAP8[i+buffer]}'); // random value or 0
  }

  // pointer -> pointer
  buffer = initBuffer(buffer, 20);
  for(var i=0;i<20;i++){
    print('${HEAP8[i+buffer]}'); // 0, 1, 2, 3, 4, ....19
  }

  // pointer -> uint8list
  // 0, 1, 2, 3, 4, ....19
  typed.Uint8List bufferAsUint8List = toUint8List(buffer, 20);
  for(var i=0;i<bufferAsUint8List.length;i++){
    print(bufferAsUint8List[i]);
  }

  // set value into buffer
  bufferAsUint8List[0] = 110;
  print('${HEAP8[buffer]}'); // 110
  print("${HEAP8.runtimeType}");

  typed.Uint8List bufferAsUint8List2 = (HEAP8 as typed.Int8List).buffer.asUint8List(buffer, 20);
  for(var i=0;i<bufferAsUint8List2.length;i++){
    print(bufferAsUint8List2[i]);
  }
}
```

```util.js
to_uint8list = function(index, len) {
  var v = new Uint8Array(Module.HEAP8.buffer, index, len);
  return v;
}
```

# Explanation

### Pointer

A C pointer is used as a `Pointer<Uint8>` object in dart:ffi. A C pointer is used as a number in dart:js.

If you want to use the value behind the pointer:

```dart:io
buffer.elementAt(i).value
```

```dart:js
js.JsObject Module = js.context['Module'];
var HEAP8 = Module['HEAP8'];
typed.Uint8List bufferAsUint8List2 = (HEAP8 as typed.Int8List).buffer.asUint8List(buffer, 20);
bufferAsUint8List2[i]
```

---

However, I couldn't find any documentation to back up the dart:js code above, so I recommend using the following js helper function instead:

```
to_uint8list = function(index, len) {
  var v = new Uint8Array(Module.HEAP8.buffer, index, len);
  return v;
}
```

### About Uint8List

You can handle the buffer as a `Uint8List`. In the case of dart:io, `buffer.asTypedList(20)`. In the case of dart:js, `(HEAP8 as typed.Int8List).buffer.asUint8List(buffer, 20)`.

And, if you change the `Uint8List`, the C buffer will also change.

### Note!!

The C buffer is released by the `free` function,
but it is not recommended to access a released buffer.

# Next Time

About C objects.

# PS

Here's the code for this installment:
https://github.com/kyorohiro/dart_clang_codeserver/tree/03_buffer_int_double
kyorohiro
342,697
Important CSS Concepts To Learn.
CSS(Cascading Style Sheets) is a rule-based language. It's used to style and lay out pages by definin...
0
2020-05-24T12:53:01
https://dev.to/frontenddude/important-css-concepts-to-learn-57j3
css, codenewbie, 100daysofcode, beginners
CSS (Cascading Style Sheets) is a rule-based language. It's used to style and lay out pages by defining specific groups of styles that get applied to elements or groups of elements. Many people find themselves learning CSS in conjunction with HTML. Both languages work in unison (CSS rules style HTML elements), but due to its various concepts, CSS can often be frustrating and confusing. If you are just starting out, learn the following CSS concepts to gain a strong foundation and understanding of the rule-based language.

>Please note: The descriptions below are a brief overview of each concept. Read the recommended reading to get in-depth explanations of each CSS concept.

---

### Cascading, Inheritance & Specificity

The first step to gaining a stronger understanding of CSS is to learn how these three concepts together control which CSS rule applies to what element.

##### Cascading

The cascade is the fundamental concept CSS was built around. As the name suggests, stylesheets cascade (top to bottom). This means that the order of CSS rules matters: when two rules with equal specificity apply, the one that comes last in the CSS is the one that will be used.

##### Inheritance

Some CSS property values set on parent elements are inherited by their child elements, and some aren't. This can often be confusing, but the principle behind it is actually designed to allow us to write fewer CSS rules. Properties such as `color` and `font-family` are inherited, which is why we often assign them to the BODY element. It is also worth knowing that every CSS property accepts four values to control inheritance (`inherit`, `initial`, `unset` and `revert`), essentially being able to turn inheritance "on and off".

##### Specificity

When multiple rules apply to an element, conflicting rules are sorted and applied by specificity. Each kind of selector has a different specificity ranking:

1. IDs
2. Classes and pseudo-classes
3. Element selectors

As rules conflict, CSS determines the rule with the highest specificity and applies it to the element.

##### Recommended Reading

+ [MDN's - Cascade and Inheritance](https://developer.mozilla.org/en-US/docs/Learn/CSS/Building_blocks/Cascade_and_inheritance)
+ [Simmons - CSS Inheritance, Cascade, and Specificity](http://web.simmons.edu/~grabiner/comm244/weekfour/css-concepts.html)
+ [Emma Bostian's - CSS Specificity](https://dev.to/emmawedekind/css-specificity-1kca)
+ [Specificity Calculator](https://specificity.keegan.st/)

---

### !important Declarations

The !important declaration in CSS overrides the usual specificity rules and makes sure the rule denoted by !important is applied. Without understanding the three concepts above, it is easy to get into the habit of using !important on every property that doesn't get applied as expected. However, it's important to understand that most developers consider the use of !important an anti-pattern. Read the articles recommended below to grasp a better understanding of when and how to use !important.

##### Recommended Reading

+ [CSS Trick's - When Using !important is The Right Choice](https://css-tricks.com/when-using-important-is-the-right-choice/)
+ [UX Engineer's - Avoid Using !important](https://uxengineer.com/css-specificity-avoid-important-css/)

---

### Media Queries

CSS Media Queries are used to change the style of your site depending on what screen resolution or device is being used. Media Queries can be combined to create specific scenarios for when you want to apply certain rules. This is what allows responsive and adaptive design to work coherently in the browser. If you'd like to learn how to define, use and understand CSS Media Queries, check out the recommended reading below.
##### Recommended Reading

+ [Web.Dev's - Responsive Basics](https://web.dev/responsive-web-design-basics/)
+ [Udacity's - Responsive Web Design Fundamentals](https://www.udacity.com/course/responsive-web-design-fundamentals--ud893)
+ [MDN's - Using Media Queries](https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queries/Using_media_queries)
+ [CSS Trick's - CSS Media Queries](https://css-tricks.com/css-media-queries/)

---

### Flexbox & Grid

Over the years it's become apparent that CSS isn't easy to grasp or master. Thankfully, as the language has evolved, concepts like Flexbox and Grid have been introduced. Both make positioning and page layout much easier, faster and more responsive. CSS Grid Layout is a two-dimensional layout system: it lets you lay content out in rows and columns, and has many features that make building complex layouts straightforward. Flexbox is a one-dimensional, direction-based layout system: it gives you the ability to alter its items' width, height and order to best fill the available space.

##### Recommended Reading

+ [MDN's - Grids](https://developer.mozilla.org/en-US/docs/Learn/CSS/CSS_layout/Grids)
+ [Grid Garden](https://cssgridgarden.com/)
+ [CSS Trick's - A Complete Guide to Flexbox](https://css-tricks.com/snippets/css/a-guide-to-flexbox/)
+ [FreeCodeCamps - How Flexbox Works](https://www.freecodecamp.org/news/an-animated-guide-to-flexbox-d280cf6afc35/)
+ [Flexbox Froggy](https://flexboxfroggy.com/)

---

### Let's Connect!

If you enjoyed this article or found it helpful, let's stay connected. You can find me on Twitter sharing tips and code snippets [@frontenddude](https://twitter.com/frontenddude)
frontenddude
343,738
I'm watching you 👀
A post by Webtechno-G
0
2020-05-26T01:10:06
https://dev.to/webtechnog/i-m-watching-you-1p2p
codepen
{% codepen https://codepen.io/webtechno-g/pen/RwWOyOm %}
webtechnog
342,711
My first React App: Nüte
Nüte is an application that makes me smile, because I didn't know anything about React, but I had to...
0
2020-05-24T10:14:49
https://dev.to/hectormtz22/my-first-react-app-nute-29ie
devgrad2020, octograd2020, showdev, githubsdp
Nüte is an application that makes me smile, because I didn't know anything about React, but I had to do an entrepreneurial project. As a developer I decided to make a notes web application in React, because I was interested in learning how to build a Progressive Web App. On top of that, I was just learning NodeJS, so it was a good way to have a frontend with React and a backend with NodeJS.

## Demo Link

[Nüte](https://nutenotes.web.app/)

https://nutenotes.web.app/

# Link to Code

## Code of frontend: React

{% github HectorMtz22/nutefrontend no-readme %}

## Code of backend: NodeJS

{% github HectorMtz22/nutebackend no-readme %}

## Additional Thoughts / Feelings / Stories

I'm Mexican, which is why this app is in Spanish, but in the near future I'll have all my stuff in both languages to ensure that everyone can understand it.
hectormtz22
342,749
Using Retrospective to Improve Yourself
Every Sunday I take half an hour to review my week. I learned this thing in college as CS people use...
0
2020-05-24T11:49:44
https://dev.to/lankinen/using-retrospective-to-improve-yourself-44ek
productivity
Every Sunday I take half an hour to review my week. I learned this in college, as CS people often use Scrum as a project management tool and it includes these retrospectives. One day I realized that if this works in teams, why couldn't I use it alone?

There are a few questions that I answer:

- How well did I achieve the goal (from the previous retrospective)?
- What did I learn?
- What worked well?
- What needs improvement? / How do I fix problems for the next sprint?

I also create a simple bullet-point timeline of the last week, so I don't need to pull that information from some other place later. And I set myself a goal for the next week, plus some subgoals that I also review.

I have done this for probably a month or a little more now, and I feel like it has helped a lot. I have noticed some things, like losing concentration when doing the same thing for too long, and I have reduced the amount of free time I spend daily.

Props for the cover image to [Jill Heyer](https://unsplash.com/@jillheyer)
lankinen
342,771
CSS – An Introduction to Flexbox
The Flexible Box Layout Module (Flexbox) is a useful and easy-to-use CSS module that helps us to make...
0
2020-05-24T12:42:12
https://rajeshdhiman.in/an-introduction-to-flexbox/
css, webdev, codenewbie, beginners
The [Flexible Box Layout](https://www.w3.org/TR/css-flexbox-1/) Module (Flexbox) is a useful and easy-to-use CSS module that helps us make our content responsive. Flexbox takes care of any spacing calculations for us, and it provides a bunch of ready-to-use CSS properties for structuring content. You can play with some sample [code](https://codepen.io/rajeshdh/pen/ExVrNPz) here.

Flexbox has two components: a parent container and child items. The Flexbox module provides us with a set of CSS properties. We apply some of those properties to the parent container, also called the flex container, and others to its collection of children, the flex items. For example, consider the following block:

```html
<div class='container'>
  <div>item 1</div>
  <div>item 2</div>
</div>
```

To apply a flexbox layout to the above block, we just need to add the CSS property `display: flex` to it:

```css
.container {
  display: flex;
}
```

When we set the display property to flex for any element, it becomes a flex container and all child items of the container become flex items. This applies a few defaults to the items inside the container:

- The default direction is **row**.
- Items start from the **starting edge of the main axis**.
- Items **do not stretch** on the main dimension **but can shrink**.
- Items **will stretch to fill the size of the cross axis**.
- `flex-basis` is set to `auto`, and `flex-wrap` is set to `nowrap`.

**Properties for the flex container:**

**flex-direction** - the direction of the content (horizontal or vertical)
**flex-wrap** - controls wrapping of items onto one line or multiple lines
**flex-flow** - shorthand for `flex-direction` and `flex-wrap`
**justify-content** - controls layout positioning on the main axis (horizontal positioning)
**align-items** - controls layout positioning on the cross axis (vertical positioning)
**align-content** - controls layout alignment when there is multiline content.
**Properties for the flex items:**

**order** - controls the order of appearance of the items inside the container.
**flex-grow** - controls the stretch behaviour of items relative to other items, or how much space an item can take up to fill the available space.
**flex-shrink** - controls the shrink behaviour of items relative to the other items, or how much smaller an item can become when adjusting for space.
**flex-basis** - specifies the initial length of a flex item.
**flex** - shorthand for `flex-grow`, `flex-shrink`, and `flex-basis`
**align-self** - specifies the alignment of an individual item inside the flex container; this overrides the default alignment set by `align-items`.

**Flexbox** is meant for **one-dimensional layout**, meaning you can lay your content out in one direction at a time, either horizontal or vertical. To understand more about directions, we need to learn about the **concept of axes** that the flexbox layout follows. The axes are core to understanding flexbox. In flexbox we have:

- **main axis:** horizontal, left to right or right to left.
- **cross axis:** vertical, top to bottom or bottom to top.

We can change the direction by using the `flex-direction` property. The default value for `flex-direction` is `row`, which means the content is displayed horizontally. To lay content out along the vertical axis instead, we can assign it the `column` value.

**flex property** - We can use the `flex` property on **flex items** to make them responsive. The `flex` property is a shorthand for three other properties: **flex-grow, flex-shrink, flex-basis.** It is better than setting a percentage width on the items, e.g. `width: 33.333%`.

**Ordering items:** We can change the order in which items appear with the `order` property. The default is 0. I have created a [codepen](https://codepen.io/rajeshdh/pen/ExVrNPz) with some typical usage examples of the flex layout.
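To make the container and item properties above concrete, here is a minimal, illustrative stylesheet (the class names are invented for this sketch, not taken from the codepen):

```css
.container {
  display: flex;
  flex-flow: row wrap;            /* shorthand: flex-direction + flex-wrap */
  justify-content: space-between; /* main-axis positioning */
  align-items: center;            /* cross-axis positioning */
}

.item {
  flex: 1 0 200px;                /* shorthand: flex-grow flex-shrink flex-basis */
}

.item.first {
  order: -1;                      /* appears before items with the default order: 0 */
}
```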
You can also check out [A Complete Guide to Flexbox](https://css-tricks.com/snippets/css/a-guide-to-flexbox/) and [Basic Concepts of flexbox](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox) for more details about the flexbox layout.
paharihacker
342,812
Swift Debugging: better printing with a simple trick
Add timestamps, source, and emoji to Swift logs.
0
2020-05-24T13:42:53
https://dev.to/ccheptea/swift-debugging-better-printing-with-a-simple-trick-4khi
swift, xcode, debugging, logging
---
title: Swift Debugging: better printing with a simple trick
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/00b8w7z76pvvv80ezloo.png
published: true
description: Add timestamps, source, and emoji to Swift logs.
tags:
- swift
- Xcode
- debugging
- logging
---

Coming from Android development to iOS/Swift, one might expect debugging to be somewhat similar. It is, to some extent. One thing that I miss while doing iOS development is the logging capabilities Android Studio offers out of the box.

I use `print()` a lot during development. I like that it is so basic and simple to use, but oftentimes I find myself doing extra work to make the logs easier to follow and dissect. Things like timestamps and types (debug/error/warning/info) make it a lot easier to filter and analyze the logs. The same goes for metadata such as class/filename, function, and line where the logs were created.

This made me do a little research that resulted in a few lines of code that define the following methods:

```swift
info("Some butterflies are blue.")
warning("Careful, something's cooking!")
debug("Here's a list of primes:", 2, 3, 5, 7)
error("Ooops, you've got errors!")
```

which print logs like this:

```
15:38:21.816 🦋 TabViewScreen.init():46 Some butterflies are blue.
15:38:21.816 ⚠️ TabViewScreen.init():47 Careful, something's cooking!
15:38:21.816 🦎 TabViewScreen.init():48 Here's a list of primes: 2 3 5 7
15:38:21.816 ❌ TabViewScreen.init():49 Ooops, you've got errors!
```

The idea was to preserve the `print` signature, since that's what I was used to, while using more descriptive names. Here's a gist with the code I'm using, but feel free to customize it to your own needs.
[_logging.swift](https://gist.github.com/ccheptea/324e40dc905c961d87a62f65f7ba0462)

```swift
import Foundation

fileprivate let infoMarker = "🦋"
fileprivate let debugMarker = "🦎"
fileprivate let warningMarker = "⚠️"
fileprivate let errorMarker = "❌"

func info(_ items: Any..., separator: String = " ", terminator: String = "\n", file: String = #file, line: Int = #line, function: String = #function) {
    log(items, separator: separator, terminator: terminator, marker: infoMarker, file: file, function: function, line: line)
}

func debug(_ items: Any..., separator: String = " ", terminator: String = "\n", file: String = #file, line: Int = #line, function: String = #function) {
    log(items, separator: separator, terminator: terminator, marker: debugMarker, file: file, function: function, line: line)
}

func warning(_ items: Any..., separator: String = " ", terminator: String = "\n", file: String = #file, line: Int = #line, function: String = #function) {
    log(items, separator: separator, terminator: terminator, marker: warningMarker, file: file, function: function, line: line)
}

func error(_ items: Any..., separator: String = " ", terminator: String = "\n", file: String = #file, line: Int = #line, function: String = #function) {
    log(items, separator: separator, terminator: terminator, marker: errorMarker, file: file, function: function, line: line)
}

fileprivate var formatter: DateFormatter = {
    let _formatter = DateFormatter()
    _formatter.dateFormat = "H:m:ss.SSS"
    return _formatter
}()

fileprivate func log(_ items: [Any], separator: String = " ", terminator: String = "\n", marker: String, file: String, function: String, line: Int) {
    let lastSlashIndex = (file.lastIndex(of: "/") ?? String.Index(utf16Offset: 0, in: file))
    let nextIndex = file.index(after: lastSlashIndex)
    let filename = file.suffix(from: nextIndex).replacingOccurrences(of: ".swift", with: "")

    let dateString = formatter.string(from: NSDate.now)
    let prefix = "\(dateString) \(marker) \(filename).\(function):\(line)"
    let message = items.map { "\($0)" }.joined(separator: separator)
    print("\(prefix) \(message)", terminator: terminator)
}
```

**Note\***: unfortunately Xcode doesn't allow text coloring in the console (or I haven't found out how to do it yet).

**Note\*\***: The above code is a simple, lightweight and straightforward solution that doesn't require including any external package. If you want something more sophisticated, you can check out other tools like this one: https://github.com/SwiftyBeaver/SwiftyBeaver
ccheptea
342,869
Dive into Ruby
Features ? and ! in method names. 1_000_000. local, Readonly, $global, @instance and @...
0
2020-05-24T15:14:47
https://mmap.page/dive-into/ruby/
---
title: Dive into Ruby
published: true
date:
tags:
canonical_url: https://mmap.page/dive-into/ruby/
---

## Features

- `?` and `!` in method names.
- `1_000_000`.
- `local`, `Readonly`, `$global`, `@instance` and `@@class`.
- Structs as lightweight classes.
- `class SameName` merges definitions, including for core classes.
- `method_missing`, `define_method` and other meta-programming.

## Quirks

### `begin ... end while`

```
begin
  "code executed"
end while false

"code not executed" while false
```

This is really counter-intuitive. And the creator of Ruby said not to use it.

> Don't use it please. I'm regretting this feature, and I'd like to remove it in the future if it's possible.
> Because it's hard for users to tell that `begin [code] end while [cond]` works differently from `[code] while [cond]`.
>
> ```
> loop do
>   ...
>   break if [cond]
> end
> ```

– [matz.](http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/6745)

### `Proc.new`

The `return` statement in a proc created by `Proc.new` returns control not only from the proc itself, but **also from the method enclosing it**.

```
def some_method
  myproc = Proc.new { return "End." }
  myproc.call
  # Any code below will not get executed!
  # ...
end
```

Well, you can argue that `Proc.new` inserts code into the enclosing method, just like a block does. But `Proc.new` creates an object, while blocks are _part of_ an object.

And there is another difference between lambdas and `Proc.new`: their handling of (wrong) arguments. A lambda complains about them, while `Proc.new` ignores extra arguments and treats missing arguments as nil.
```
irb(main):021:0> l = -> (x) { x.to_s }
=> #<Proc:0x8b63750@(irb):21 (lambda)>
irb(main):022:0> p = Proc.new { |x| x.to_s }
=> #<Proc:0x8b59494@(irb):22>
irb(main):025:0> l.call
ArgumentError: wrong number of arguments (0 for 1)
	from (irb):21:in `block in irb_binding'
	from (irb):25:in `call'
	from (irb):25
	from /usr/bin/irb:11:in `<main>'
irb(main):026:0> p.call
=> ""
irb(main):049:0> l.call 1, 2
ArgumentError: wrong number of arguments (2 for 1)
	from (irb):47:in `block in irb_binding'
	from (irb):49:in `call'
	from (irb):49
	from /usr/bin/irb:11:in `<main>'
irb(main):050:0> p.call 1, 2
=> "1"
```

BTW, `proc` in Ruby 1.8 creates a lambda, while in Ruby 1.9+ it behaves like `Proc.new`, which is really confusing.

### `def` does not create closures

Closures are simple in Python:

```
def a(x):
    def b():
        return x
    b()
```

This won't work in Ruby:

```
def a(x)
  def b
    x
  end
  b
end
```

In Ruby, `def` starts a new scope, without access to outer local variables. Only `@var` and `$var` can be accessed. And there is no `extern` keyword like in C. In Ruby, lambdas create closures:

```
def a(x)
  b = -> { x }
  b.call
end
```

Or `define_method`:

```
def a(x)
  define_method(:b) { x }
  b
end
```

In Ruby 1.9, `define_method` is not available on the main object; you can use `define_singleton_method` instead.

## Tutorial

Ruby advertises its "Least Surprise" principle, so I hope a basic understanding of the above features and quirks is enough to dive into Ruby. If you are unsure about something in Ruby, you can guess and try, and it usually works, except for the quirks mentioned above.

If you prefer reading a tutorial before diving into Ruby, I would recommend [why's guide to Ruby](http://poignant.guide), a poignant introduction to a shiny language.

## REPL

Use [pry](https://mmap.page/dive-into/pry/).

## Make

Use `rake`. `rake` is a popular choice in Ruby, and it is generally nice. But `pathmap` in `rake` uses rules that are hard to remember:

```
SOURCE_FILES.pathmap("%{^sources/,outputs/}X.html")
```

WTF is `%{^sources/,outputs/}X`?
And what is the difference if we replace `X` with one of `p`, `f`, `n`, `d`, `x`?
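For the curious, rake's documentation does spell out what each token expands to. A quick sketch, with a path invented for the example (this assumes the `rake` gem, which ships with Ruby, is available so that `String#pathmap` is defined):

```ruby
require 'rake' # adds String#pathmap

path = "sources/posts/a.md"

path.pathmap("%p")  # the complete path
path.pathmap("%f")  # the base file name
path.pathmap("%n")  # the file name without its extension
path.pathmap("%d")  # the directory part
path.pathmap("%x")  # the file extension
path.pathmap("%X")  # everything but the extension

# The example above: apply the ^sources/ -> outputs/ substitution
# to the %X expansion, then append ".html".
path.pathmap("%{^sources/,outputs/}X.html")
```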
weakish
342,878
Notes on A little Java, a Few Patterns
Advantages of Java a small core language with a simple semantic model gc Desig...
0
2020-05-24T15:26:29
https://mmap.page/java/a-little/
--- title: Notes on A little Java, a Few Patterns published: true date: tags: canonical_url: https://mmap.page/java/a-little/ --- ## Advantages of Java - a small core language with a simple semantic model - gc ## Design Patterns Design patterns are used: - to organize your code - to communicate with others ## Introductory Language Recommended introductory language: Scheme or ML. ## Too Much Coffee **Q:** Is it confusing that we need to connect `int` with `Integer` (e.g. `Integer(5)`) and `boolean` with `Boolean` (e.g. `Boolean(false)`)? **A:** Too much coffee does that. Note the book uses an early version of Java.Current Java can do auto boxing and unboxing for primitive types. Also, in this book, if the method specified its return type as `Object` in interface,then its implementation also annotated as returning `Object`,even when the implementation in fact always returns `Boolean`. **Q:** There is no number `x` in the world for which ``` x = x + 1 ``` So why should we expect there to be a Java `p` such that ``` p = new Top(new Anchovy(), p) ``` **A:** That’s right. But that’s what happens when you have one too many double espresso. ## Semicolon on Its Own Line for Mutability ``` class PutSemicolonOnItsOwnLineForMutability { Pie p = new Crust(); Pie p = new Top(new Anchovy(), p) ; // the future begins, i.e. from this line on, references to `p` reflect the change Pieman yy = new Bottom(); yy.addTop(new Anchovy()) ; // same as above } ``` ## LittleJava.java ``` // `D` means this is a data class. abstract class Numᴰ {} class Zero extends Numᴰ {} class OneMoreThan extends Numᴰ { Numᴰ predecessor; // Constructor OneMoreThan(Numᴰ _p) { predecessor = _p; } } /* We did not tell you these are Peano axioms. We din not give formal definition. So you can form your own definition. Try to give it a name by yourself. This helps you to remember and understand it better. */ /* This book shows how to protect direct access to property without the `private` access modifier. 
Wrap it into an interface. In that interface, only expose public methods, not the "private" property. */ /* Visitor pattern */ abstract class Shishᴰ { OnlyOnionsⱽ ooFn = new OnlyOnionsⱽ(); abstract boolean onlyOnions(); } class OnlyOnionsⱽ { boolean forSkewer() { return true; } boolean forOnion(Shishᴰ s) { return s.onlyOnions(); } boolean forLamb(Shishᴰ s) { return false; } boolean forTomato(Shishᴰ s) { return false; } } class Skewer extends Shishᴰ { boolean onlyOnions() { return ooFn.forSkewer(); } } class Onion extends Shishᴰ { Shishᴰ s; Onion(Shishᴰ _s) { s = _s; } boolean onlyOnions() { return ooFn.forOnion(s); } } class Lamb extends Shishᴰ { Shishᴰ s; Lamb(Shishᴰ _s) { s = _s; } boolean onlyOnions() { return ooFn.forLamb(s); } } class Tomato extends Shishᴰ { Shishᴰ s; Tomato(Shishᴰ _s) { s = _s; } boolean onlyOnions() { return ooFn.forTomato(s); } } /* Before introducing visitor pattern, every subclass of Shishᴰ need to contain the logic of `onlyOnions()` in its definition. And the book asked "Wasn't this overwhelming?" I had thought it would introduce generics next. But it turned out to be the visitor pattern. Oh, I forgot Java's generics are not reified. If Java had reified generics: class Shishᴰ { <S: Shishᴰ> boolean onlyOnions(S s) { if (s instanceof Sewer) { return false; } else if (s instanceof Onion) { return onlyOnions(s.s); } else { return false; } } } */ /* This is for the loyal Schemers and MLers. */ interface Tᴵ { // It seems Java does not allow unicode arrows in identity name. // So we use the Chinese character `一` (one), which has a similar shape. 
o一oᴵ apply(Tᴵ x); } interface o一oᴵ { Object apply(Object x); } interface oo一ooᴵ { o一oᴵ apply(o一oᴵ x); } interface oo一oo一ooᴵ { o一oᴵ apply(oo一ooᴵ x); } class Y implements oo一oo一ooᴵ { public o一oᴵ apply(oo一ooᴵ f) { return new H(f).apply(new H(f)); } } class H implements Tᴵ { oo一ooᴵ f; H(oo一ooᴵ _f) { f = _f; } public o一oᴵ apply(Tᴵ x) { return f.apply(new G(x)); } } class G implements o一oᴵ { Tᴵ x; G(Tᴵ _x) { x = _x; } public Object apply(Object y) { return (x.apply(x)).apply(y); } } class MKFact implements oo一ooᴵ { public o一oᴵ apply(o一oᴵ fact) { return new Fact(fact); } } class Fact implements o一oᴵ { o一oᴵ fact; Fact(o一oᴵ _fact) { fact = _fact; } public Object apply(Object i) { int inti = ((Integer) i).intValue(); if (inti == 0) { return new Integer(1); } else { return new Integer( inti * ((Integer) fact.apply(new Integer(inti - 1))).intValue()); } } } // Try to figure out how the above code works. // First the concrete one `Fact`. // To construct a `Fact`, we need a `fact`. // Suppose we already have `fact`, then we call `fact.apply(n - 1)`. // To successfully continue the recursion, // `fact.apply(n - 1)` should be equivalent to something like `New Fact(...).apply(n - 1)`. // Oh! We need to construct a new `Fact`, which requires a `fact` again. // But wait. We already have `fact`, so we can reuse it. // That's it -- self reference. class Dummy implements o一oᴵ { public Object apply(Object x) { return new Fact(this).apply(x); } } // It works. // And it also what `MKFact.apply` needs. // // Let's move on. // `Fact` implements `o一oᴵ`, and `MKFact` implements `oo一ooᴵ`. // `o一oᴵ` is like a constructor, // and `oo一ooᴵ` is like a higher-order function taking a function and returning a function. // Similarly, `oo一oo一ooᴵ` is like a function taking a higher-order function that takes a function and returning a function. // Also `Tᴵ` is like a higher-order function returning a function. // // Now is the `Y`, `H`, `G` classes. 
// Our `Dummy` class works, but it makes `MKFact` redundant. // We need to find a way to produce `Fact` from `MKFact` without defining extra classes. // Let's look at the types. // `Fact` implements `o一oᴵ`, and `MKFact` implements `oo一ooᴵ`. // So we need something that takes `oo一ooᴵ` and returns `o一oᴵ`, a.k.a `oo一ooᴵ -> o一oᴵ`. // `Y.apply` happens to have such a signature. // Thus probably we can get a `Fact` through `new Y().apply(new MKFact())`? // And it works. // Why? // Revisit our `Dummy` class. // In `New Fact(new Dummy())`, `(new Dummy).apply` calls back `New Fact(this)`. // Next we demonstrate `new Y().apply(new MKFact())` is an equivalent form without `this`. // And by the definition of `Y`, `new Y().apply(new MKFact())` is `new MKFact().apply(new G(new H(new MKFact())))`. // Let `x = new G(new H(new MKFact()))`, we have `new MKFact().apply(x)`. // By the definition of `MKFact`, it is `new Fact(x)`. // Then we check what is `new Fact(x).apply(n)`. // Fill in the value of `x`, it is `new Fact(new G(new H(new MKFact())).apply(n)`. // By the definition of `Fact`, it is `new G(new H(new MKFact())).apply(n)`. // By the definition of `G`, it is `new H(new MKFact()).apply(new H(new MKFact())).apply(n)`. // By the definition of `H`, it is `new MKFact().apply(new G(new H(new MKFact()))).apply(n)`. // By the definition of `MKFact`, it is `new Fact(x).apply(n)`. // That is it. Self-referring to `Fact` itself without using `this`! // // This is the mighty Y combinator. // The scheme version is in The Little Schemer, chapter 9. // // Let's walk through the reinvention of it in Java. // // First let's write a straightforward recursion version of `fact`. class StaticFact { static int fact(int n) { if (n == 0) { return 1; } else { return n * fact(n - 1); } } } // Hmm, we haven't introduced static method in this book. // We did mention using static method like `Math.max` in footnotes, // but we never explain how to *declare* a static method. 
// Thus we changed it to a non static version. // The definition is almost the same. class NormalFact { int fact(int n) { if (n == 0) { return 1; } else { return n * fact(n - 1); } } } // We refer `fact` itself in function body, which is not possible in lambda calculus. // So we pass a function as parameter instead. // Oh, no! Java dose not support first class function. // Hmm, in fact we could pass a function via the reflection API, or as a lambda in Java 8. // But none of them is available when this book is written. // Thus we wrap the function in a class. // Note that we do not need a separate class. class Fact1 { int apply(Fact1 f, int n) { if (n == 0) { return 1; } else { return n * f.apply(f, n - 1); } } } // Nice! // But lambda can only accept one parameter. // We need to change `Fact1.apply` to return a function. // Again in Java we return a wrapped class instead. // To make future changes easier, // we declare an additional interface instead of using hard-coded class. interface Fact2ᴵ { int apply(int n); } class NClosure implements Fact2ᴵ { Fact2 f; NClosure(Fact2 _f) { f = _f; } public int apply(int n) { if (n == 0) { return 1; } else { return n * (f.apply(f)).apply(n - 1); } } } class Fact2 { Fact2ᴵ apply(Fact2 f) { return new NClosure(f); } } // This is the poor man's Y combinator. // Look at this line `return n * (f.apply(f)).apply(n - 1);`, // if it is `g.apply(n - 1)` then it would be similar to the original recursion version. // Let's `g = f.apply(f)`: class GG implements Fact2ᴵ { Fact3 f; GG(Fact3 _f) { f = _f; } public int apply(int n) { return (f.apply(f)).apply(n); } } class Fact3 { Fact2ᴵ apply(Fact3 f) { return new GClosure(f); } } class GClosure implements Fact2ᴵ { Fact3 f; GG g; GClosure(Fact3 _f) { f = _f; g = new GG(f); } public int apply(int n) { if (n == 0) { return 1; } else { return n * g.apply(n - 1); } } } // We are almost done. // `GClosure.apply()` is the recursion definition we want. 
// Let's create something that takes a `GClosure`. class YY { Fact2ᴵ f; YY(Fact2ᴵ _f) { f = _f; } int apply(int n) { return f.apply(n); } } // Hmm, the problem is we still need `Fact3`, whose definition hard coded reference to `GClosure`. // We could made the constructor of `Fact3` taking `GClosure` as a parameter. // That is, to construct a `Fact3`, we need a `GClosure`, // and to construct a `GClosure`, we need a `Fact3`. // Now if we make `Fact3` and `GClosure` one thing, it will be self-referral. class Fact0 implements Fact2ᴵ { Fact2ᴵ g; Fact0(Fact2ᴵ _g) { g = _g; } public int apply(int n) { if (n == 0) { return 1; } else { return n * g.apply(n - 1); } } } // Now the tricky part is constructing `Fact0(_f)`. // Again this smells like self-reference. // Just as we introduced an additional function as parameter before, // to construct `Fact0(_f)` we probably need an additional function. class MKFact0 { Fact2ᴵ apply(Fact2ᴵ fact2ᴵ) { return new Fact0(fact2ᴵ); } } // Now we need to define `Y0` such that `new Y0().apply(new MKFact0())` will construct a `Fact0`. // It is hard to write the `apply` method of `Y0`. // Let's go back to last iteration. // `GG` is not abstract. It has `Fact3` hard coded in its definition. // Let's refactor it. class G0 implements Fact2ᴵ { Fact2ᴵ g; G0(Fact2ᴵ _g) { g = _g; } public int apply(int n) { return (g.apply(g)).apply(n); } } // Oops! `g.apply(g)` is invalid. // From here we may try to rewrite `G0`. // Or we may try to rewrite `Fact2ᴵ`, `int -> int` is too restrictive, // i.e. if we cannot solve a specific problem, // try to solve its more general form. // Rewrite `Fact2ᴵ`, substitute `int` with `Object`, we get `o一oᴵ`. // Change `Fact0` and `MKFact0` accordingly, we get `Fact` and `MKFact`. // `Fact` implements `o一oᴵ`, we need to define an interface for `MKFact` as well. That is `oo一ooᴵ`. // Now we try to rewrite `G0`. // We find that `g.apply(g)` still cause type mismatch. 
// `g` cannot be an ordinary `Object`, which may no have an `apply` method defined. // Also `g` cannot be an `o一oᴵ`, which cannot take itself (`o一oᴵ`) as a parameter. // Thus we need a new type (interface). // An interface whose `apply` method takes anything (an `Object`) and returns an `o一oᴵ`. // Thus we have `Tᴵ`. // Then we look at `Y`. // We still dose not know how to write its `apply` method, but we know the signature of it. // It takes `MKFact`, a.k.a. `oo一ooᴵ`, and returns `Fact`, a.k.a. `o一oᴵ`. // Thus we have `oo一oo一ooᴵ`. // // We try to pass `MKFact` to `G`, `new G(new MKFact())`. // Not possible. We need a bridge to connect `oo一ooᴵ` and `Tᴵ`, // i.e. something takes a `MKFact` a.k.a. `oo一ooᴵ` and produces a `Tᴵ` for `G`. class H implements Tᴵ { oo一ooᴵ f; H(oo一ooᴵ _f) { f = _f; } public o一oᴵ apply(Tᴵ x) { return ...; } } // Now we need to fill in the missing `apply` definition for `H`. // It returns `o一oᴵ`. Do we have something returns `o一oᴵ`? // `f.apply` happens to produce `o一oᴵ`. class H implements Tᴵ { oo一ooᴵ f; H(oo一ooᴵ _f) { f = _f; } public o一oᴵ apply(Tᴵ x) { return f.apply(...); } } // `f.apply()` takes an `o一oᴵ`. Do we have something to transform `Tᴵ` to `o一oᴵ`? // The `apply` method of `Tᴵ`. // Do we have some class implements `Tᴵ`, except `H` itself? // No. // So we need to implement one? // Wait. `G` implements `o一oᴵ` and its construction method takes `Tᴵ` as parameter. // So `new G(x)` fills in the gap. class H implements Tᴵ { oo一ooᴵ f; H(oo一ooᴵ _f) { f = _f; } public o一oᴵ apply(Tᴵ x) { return f.apply(new G(x)); } } // In fact, this is the very definition given in the book. // // At last, let's complete `Y`. class Y implements oo一oo一ooᴵ { public o一oᴵ apply(oo一ooᴵ f) { return ...; } } // We need something takes `oo一ooᴵ` and produces `o一oᴵ`. // `H` takes `oo一ooᴵ` in its construction method, and produces `o一oᴵ` by its `apply` method. 
class Y implements oo一oo一ooᴵ { public o一oᴵ apply(oo一ooᴵ f) { return new H(f).apply(...); } } // The parameter of `H(f).apply()` need to be a `Tᴵ`. // Do we have something implements `Tᴵ`? // `H` itself. // So class Y implements oo一oo一ooᴵ { public o一oᴵ apply(oo一ooᴵ f) { return new H(f).apply(new H(f)); } } // The very definition given in the book. // If you still have difficulties to understand it in Java, // try use another language. // For example, // // ```js // // Poor man's Y combinator // ((f => n => n == 0 ? 1 : n * f(f)(n - 1)) // (f => n => n == 0 ? 1 : 0 * f(f)(n - 1)) // 3) // ``` // // It would be perfect if it is `f(n -1)` instead of `f(f(n)(n - 1))`. // Let `g = f(f)`: // // ```js // ((f => ((g => n => n == 0 ? 1 : n * g(n - 1)) f(f)) // # same as above // 3) // ``` // // Hmm, `(g => n => n == 0 ? 1 : n * g(n - 1)` is just what we want! // Let `h = (g => n => n == 0 ? 1 : n * g(n - 1)`: // // ```js // (((h => // (f => h(f(f))) // (f => h(f(f)))) // (g => n => n == 0 ? 1 : n * g(n - 1)) // 3) // ``` // // Look! The factorial logic is factor out: `(g => n => n == 0 ? 1 : n * g(n - 1))`. // And this is the Y combinator. // // ```js // Y = // (h => // (f => h(f(f))) // (f => h(f(f)))) // ``` // // To construct `fact` with Y combinator: `Y(g => n => n == 0 ? 1 : n * g(n - 1))`. // // Hope this helps to understand those `Y`, `H`, `G` classes in Java. 
public class LittleJava { public static void main(String[] args) { Fact fact = new Fact(new Dummy()); System.out.println(fact.apply(new Integer(5))); MKFact mKFact = new MKFact(); Fact newFact = (Fact) mKFact.apply(new Dummy()); System.out.println(newFact.apply(new Integer(5))); Fact yFact = (Fact) new Y().apply(new MKFact()); System.out.println(yFact.apply(new Integer(5))); System.out.println("static recursion version"); System.out.println(StaticFact.fact(5)); System.out.println("normal version"); NormalFact normalFact = new NormalFact(); System.out.println(normalFact.fact(5)); System.out.println("Fact1"); Fact1 fact1 = new Fact1(); System.out.println(fact1.apply(fact1, 5)); System.out.println("Fact2: poor man's Y combinator"); Fact2 fact2 = new Fact2(); System.out.println((fact2.apply(fact2)).apply(5)); System.out.println("Fact3: g = f(f)"); Fact3 fact3 = new Fact3(); System.out.println((fact3.apply(fact3)).apply(5)); System.out.println("YY"); YY yy = new YY(new GClosure(new Fact3())); System.out.println(yy.apply(5)); } } ```
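The interfaces above mirror the untyped lambda calculus, which is why everything is passed around as `Object`. For comparison, on Java 8+ the same fixed-point trick can be expressed with generics and lambdas. This sketch is not from the book (which predates Java 8); it is just an illustration of the same idea:

```java
import java.util.function.Function;

public class YCombinator {
    // A function type that can receive itself as an argument,
    // playing the role of the Tᴵ interface above.
    interface Self<T> extends Function<Self<T>, T> {}

    // y(f) builds the fixed point of f without any named recursion.
    static <A, B> Function<A, B> y(Function<Function<A, B>, Function<A, B>> f) {
        // The eta-expansion `a -> x.apply(x).apply(a)` plays the role of
        // class G: it delays the self-application so evaluation terminates.
        Self<Function<A, B>> h = x -> f.apply(a -> x.apply(x).apply(a));
        return h.apply(h);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> fact =
                y(self -> n -> n == 0 ? 1 : n * self.apply(n - 1));
        System.out.println(fact.apply(5)); // prints 120
    }
}
```

Note how `self -> n -> ...` corresponds to `MKFact`, and `y` corresponds to the `Y`/`H`/`G` machinery collapsed into two lines.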
weakish
342,950
Automating a conference submission workflow: deploying to production
In the first post of this series, we detailed the setup of a software to automate submissions to conf...
0
2020-05-24T16:23:18
https://blog.frankel.ch/automating-conference-submission-workflow/3/
automation, workflow, deployment, production
In the [first post](https://blog.frankel.ch/automating-conference-submission-workflow/1/) of this series, we detailed the setup of software to automate submissions to conferences. In the second one, we configured the [integration endpoints](https://blog.frankel.ch/automating-conference-submission-workflow/2/). This third post is dedicated to the deployment of the solution to production.

## To Cloud or not to Cloud?

To decide what to do, the first step is to ask oneself where to host:

1. On-premise
2. In the Cloud
3. Or even use my own machine

First, let's remove on-premise from the options. It wouldn't make sense, as I'm the only user. Instead, there are two reasons to choose the Cloud:

1. The app needs to react to events: the event being moving a card on the Trello board. Hence, it needs to be up at all times, as well as its endpoint. Using a Cloud provider allows that. The alternative would be to start the app each time on my machine, interact with Trello, and stop the app again.
2. The app needs to be accessible from Trello. I already wrote how it's possible to configure `ngrok` to receive HTTP requests from the Web on one's machine. While that's fine for debugging purposes, it's not a great idea for production.

## Which Cloud platform to choose?

Now that the option of hosting in the Cloud has been settled, it's time to choose one's <abbr title="Platform-as-a-Service">PaaS</abbr> provider. Several choices are available:

- Google Cloud Platform
- Microsoft Azure
- Amazon Web Services
- IBM Cloud
- Oracle Cloud
- And a couple of others

Additionally, a build tool pipeline is also required: I need a full-fledged <abbr title="Continuous Integration">CI</abbr> job. It would also be a nice-to-have to benefit from implemented Continuous Deployment. The idea is to automatically build and deploy the app at each commit.
My main requirement, however, is ease of use: I'd like to avoid setting up a stack which requires a full-time Software Reliability Engineer and 24/7 monitoring. My understanding of the big providers' stacks is that they are quite complex.

## Heroku to the rescue

Fortunately, besides those, I knew of a provider that fits: [Heroku](https://www.heroku.com/).

> Heroku is a platform as a service based on a managed container system, with integrated data services and a powerful ecosystem, for deploying and running modern apps.
> The Heroku developer experience is an app-centric approach for software delivery, integrated with today's most popular developer tools and workflows.

Within Heroku, the basic building block is known as a _dyno_. Heroku is built on containerization: a dyno is an isolated, containerized process dedicated to executing code with a specific amount of RAM. Depending on the load, one can add/remove dynos to scale up/down horizontally. Some higher pricing plans allow the scaling to be automatic depending on the load.

As the sole user, a single dyno is enough for my needs. In that case, the free plan fits the requirements. Here are some of its features:

- Switches to sleep mode after 30 minutes of inactivity
- Monthly 1,000 hours of activity
- Custom subdomain

I have two main usages of the app:

1. When I submit to conferences, I do that in "bursts". I block a half-a-day timeslot in my calendar, and submit to multiple conferences during that time. After each submission, I move the Trello card from the _Backlog_ column to the _Submitted_ one as explained in the [first post of this series](https://blog.frankel.ch/automating-conference-submission-workflow/1/).
2. When I receive a conference update, I just move the card from the _Submitted_ column to the relevant one - _Accepted_ or _Refused_.

I can cope with the sleeping behavior in both cases, as there's no requirement for either the calendar or the sheet to be updated in a specific timeframe.
However, the biggest strength of Heroku is <abbr title="In My Humble Opinion">IMHO</abbr> its embedded Continuous Deployment model, associated with its dedicated Git repository. That allows every push to the `master` branch to trigger a build that creates the package, and to deploy it to production. Let's see how it can be done.

## 5-minutes crash course on Heroku

Heroku comes with a web interface, as well as a dedicated Command-Line Interface. I'd suggest installing the <abbr title="Command-Line Interface">CLI</abbr>:

```bash
brew tap heroku/brew && brew install heroku
```

It's now possible to authenticate to one's account:

```bash
heroku login
```

From that point on, let's create an app.

```bash
heroku create dummy
```

The app is bound to a subdomain. The `dummy` app is accessible from <https://dummy.herokuapp.com>. In addition, the underlying Git repository is hosted on <https://git.heroku.com/dummy.git>. By running the previous command from the app's root folder, the newly-created remote Git repo should have been added as the `heroku` remote. Now, each push to `master` should **build the artifact and deploy it**:

```bash
git push heroku master
```

Heroku does it by inferring the tech stack and the build tool. For example, it recognizes a Maven POM located at the app's root. The previous push displays something like this:

```
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Java app detected
remote: -----> Installing JDK 1.8... done
remote: -----> Executing Maven
remote:        $ ./mvnw -DskipTests clean dependency:list install
remote:        [INFO] Scanning for projects...
remote:        [INFO]
remote:        [INFO] ----------------< ch.frankel.conftools:conf-automation >----------------
remote:        [INFO] Building conf-automation 0.0.1-SNAPSHOT
remote:        [INFO] --------------------------------[ jar ]---------------------------------
remote:        [INFO]
remote:        [INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ conf-automation ---
remote:        [INFO]
remote:        [INFO] --- maven-dependency-plugin:3.1.1:list (default-cli) @ conf-automation ---
remote:        [INFO]
remote:        [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ conf-automation ---
remote:        [INFO] Using 'UTF-8' encoding to copy filtered resources.
remote:        [INFO] Copying 2 resources
remote:        [INFO] Copying 2 resources
remote:        [INFO]
remote:        [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ conf-automation ---
remote:        [INFO] Nothing to compile - all classes are up to date
remote:        [INFO]
remote:        [INFO] --- kotlin-maven-plugin:1.3.50:compile (compile) @ conf-automation ---
remote:        [INFO] Applied plugin: 'spring'
remote:        [WARNING] /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/src/main/kotlin/ch/frankel/conf/automation/TriggerHandler.kt: (25, 14) Parameter 'request' is never used
remote:        [WARNING] /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/src/main/kotlin/ch/frankel/conf/automation/action/AddSheetRow.kt: (51, 31) Unchecked cast: Any! to Collection<Any>
remote:        [INFO]
remote:        [INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ conf-automation ---
remote:        [INFO] Using 'UTF-8' encoding to copy filtered resources.
remote:        [INFO] skip non existing resourceDirectory /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/src/test/resources
remote:        [INFO]
remote:        [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ conf-automation ---
remote:        [INFO] Changes detected - recompiling the module!
remote:        [INFO]
remote:        [INFO] --- kotlin-maven-plugin:1.3.50:test-compile (test-compile) @ conf-automation ---
remote:        [INFO] Applied plugin: 'spring'
remote:        [INFO]
remote:        [INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ conf-automation ---
remote:        [INFO] Tests are skipped.
remote:        [INFO]
remote:        [INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ conf-automation ---
remote:        [INFO] Building jar: /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/target/conf-automation-0.0.1-SNAPSHOT.jar
remote:        [INFO]
remote:        [INFO] --- spring-boot-maven-plugin:2.2.0.RELEASE:repackage (repackage) @ conf-automation ---
remote:        [INFO] Replacing main artifact with repackaged archive
remote:        [INFO]
remote:        [INFO] --- maven-install-plugin:2.5.2:install (default-install) @ conf-automation ---
remote:        [INFO] Installing /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/target/conf-automation-0.0.1-SNAPSHOT.jar to /app/tmp/cache/.m2/repository/ch/frankel/conftools/conf-automation/0.0.1-SNAPSHOT/conf-automation-0.0.1-SNAPSHOT.jar
remote:        [INFO] Installing /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/pom.xml to /app/tmp/cache/.m2/repository/ch/frankel/conftools/conf-automation/0.0.1-SNAPSHOT/conf-automation-0.0.1-SNAPSHOT.pom
remote:        [INFO] ------------------------------------------------------------------------
remote:        [INFO] BUILD SUCCESS
remote:        [INFO] ------------------------------------------------------------------------
remote:        [INFO] Total time: 21.087 s
remote:        [INFO] Finished at: 2020-04-13T08:17:18Z
remote:        [INFO] ------------------------------------------------------------------------
remote: -----> Discovering process types
remote:        Procfile declares types -> web
remote:
remote: -----> Compressing...
remote:        Done: 86.1M
remote: -----> Launching...
remote:        Released v41
remote:        https://dummy.herokuapp.com/ deployed to Heroku
remote:
remote: Verifying deploy... done.
```

Deployment parameters can be configured through a dedicated `Procfile` located at the root of the repo.
An alternative is to use a `heroku.yml` file. In my case, here is the `Procfile`:

```
web: java $JAVA_OPTS -jar target/conf-automation-0.0.1-SNAPSHOT.jar --spring.profiles.active=production --server.port=$PORT
```

1. `web` configures the app to be accessible via HTTP
2. The other part is the actual command line to launch the application. It's pretty recognizable if you've already launched a Spring Boot application through the CLI
3. The only gotcha is the `--server.port=$PORT` parameter. Heroku decides which port the application should bind to, and exports it in the `$PORT` environment variable. This parameter makes Spring Boot receive HTTP requests on it.

The deployment stops the running JVM, deploys the new JAR, and starts the JVM again with this new JAR. With one single dyno, downtime is to be expected; the duration is the time it takes for the JVM to start.

Finally, the application requires a database. By default, Spring Boot uses the H2 in-memory database. That means that when the application goes down, _e.g._ because of sleeping, all data is lost. The database is used by Camunda under the cover for all workflow-related data. For that reason, I configured Spring Boot to use PostgreSQL instead, to be able to access the persisted data in case there was an issue. Heroku allows setting up additional services, called _add-ons_, for applications. Add-ons come in different kinds, such as data stores, logging, monitoring, search, etc. [One add-on](https://elements.heroku.com/addons/heroku-postgresql) wraps PostgreSQL and makes it available to the app.

The free plan limits the storage to 10,000 rows. Hence, I need to regularly reset the stored data manually when I receive an email warning about approaching the limit. I'm at the point of getting back to H2, as I didn't need to debug: everything works nicely, as expected.

## Conclusion

In this series, I described how boring administrative tasks around conference submission could be automated.
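For illustration, here is a minimal sketch of what the `production` profile's datasource wiring could look like. This is my assumption, not the project's actual configuration: the property names are standard Spring Boot, and `JDBC_DATABASE_URL`/`JDBC_DATABASE_USERNAME`/`JDBC_DATABASE_PASSWORD` are the environment variables Heroku's Java support derives from the Postgres add-on.

```properties
# src/main/resources/application-production.properties (hypothetical sketch)
# Heroku exposes the add-on's connection details as environment variables
spring.datasource.url=${JDBC_DATABASE_URL}
spring.datasource.username=${JDBC_DATABASE_USERNAME}
spring.datasource.password=${JDBC_DATABASE_PASSWORD}
```

With `--spring.profiles.active=production` in the `Procfile`, this file would be picked up on Heroku while local runs keep the default H2 setup.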
First, I showed both the context and the requirements, as well as a way to test locally. Then, I went on to detail the necessary steps to integrate the application with Google Calendar and Google Sheets. Finally, I described how to deploy the application on Heroku.

Once a developer, always a developer. Once you've learned how to program, repetitive manual work becomes just a problem to code away.
nfrankel
342,958
Behind the scenes: From the moment you enter a URL
Recently in a job interview, I was asked- "What happens from the moment you enter a URL in the browse...
0
2020-05-27T17:43:35
https://dev.to/salyadav/behind-the-scenes-from-the-moment-you-enter-a-url-1img
webdev, beginners, computerscience, architecture
Recently in a job interview, I was asked: "What happens from the moment you enter a URL in the browser?". Although I had an overall idea, I was quite unable to walk through the entire flow fluently. This article is meant to give you (and me) a seamless flow chart of what happens from top to bottom until you see the very webpage. It covers both the browser components and the server-side resolution of domains. So without further ado, let's dive in...

## The overall picture

Although a webpage loads in a matter of seconds, there is quite a lot going on in the background. For simplicity, we will split it into three major flows:

#### 1. Get the IP address of the server that your domain name refers to

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/0x0qzu9xi1av6ct2dnmm.png)

#### 2. Hit the server to fetch what is to be rendered on the UI

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/px0cu6wb80fh107gv40p.png)

#### 3. Construct, paint, and render the page

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/s25rhzjnd32gt9rpk968.png)

This is it. It's as simple as three steps. However, the complex part kicks in when we dive deeper into the two black boxes:

1. IP address resolution.
2. Constructing and rendering the webpage.

If you are a backend developer, the first will be of prime concern to you, and for frontend folks it's the browser rendering that takes precedence. Anyway, let's look into both of them.

## Domain name to IP address using the Domain Name Server (DNS)

Although this article is not meant for heavy theory (there is plenty on the internet), I will give a small summary of why this block is important. We as human beings don't retain long numbers, and machines don't understand our sophisticated language. As a win-win solution, we give **names** (domain names) to our servers while they have their identity as IP addresses (numbers). So how do we bridge the gap and communicate? The **Domain Name Server** acts as our mediator!
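To make that very first hop concrete, here is a small illustration of my own (not from the article) showing, in standard-library Python terms, how a program goes from the URL you typed to the hostname that actually gets handed to the DNS resolver:

```python
from urllib.parse import urlparse

# The browser first splits the URL into its components...
url = "https://dev.to/some/article?ref=home"
parts = urlparse(url)

print(parts.scheme)    # https  -> decides the protocol (and the default port, 443)
print(parts.hostname)  # dev.to -> the name the DNS lookup must resolve
print(parts.path)      # /some/article -> what to request once connected

# ...and only the hostname goes to the resolver; in Python that step
# would be something like socket.getaddrinfo(parts.hostname, 443).
```

Everything after the hostname (path, query string) stays on the client side until the HTTP request is sent to the resolved IP.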
This is what goes on behind the curtains:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/nmgeburj0q5o1vazjkxr.png)

*Sorry if the image is too small on your device, I had to fit a lot of content into one flow chart. I urge you to download it and analyse each component. They are quite self-explanatory.*

## How the user finally gets a fancy webpage

Once we have resolved the IP address of the server that has our relevant data (the webpage), all that is left is to actually hit it and fetch what we wanted. Most of the time, we get an HTML page in response, but there are also instances when it is a PDF, or other content types like image, JSON, XML, etc. In this section, we will see how the browser converts an HTML file (a bunch of nodes, scripts, and stylings) into a full-fledged viewable page.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/yn1m5ni71ykq992o6z2b.png)

This is only an overview. But if you want to dig deeper into how each and every browser component works, refer [here](https://www.html5rocks.com/en/tutorials/internals/howbrowserswork/#Introduction). This site is pretty much the magnum opus of how a browser renders the HTML with embedded scripts and painted stylings. However, I would like to mention a couple of important points here:

1- Your **Browser Engine** holds your JS environment like V8 (for Chrome) that has the call stack, memory heap, event loop, Web APIs... yada yada.

2- It's the **Render Engine** that parses the HTML nodes into a DOM tree and then further into a painted (CSS-applied) render tree to display.

3- Every time your HTML parser encounters a **script** tag, it **PAUSES PARSING DOM elements** (IMPORTANT!!!) and synchronously downloads all scripts first.

## Conclusion

Again, the agenda of this article was to help you articulate the big picture into a consolidated 3-min answer if anybody ever asks you: *"What happens when you enter a URL?"*. Of course, there is a lot to explore here, and there are brilliant sources online to do so.
Mentioning some of them in the references below.

## Reference

[The Big Picture](https://medium.com/@maneesha.wijesinghe1/what-happens-when-you-type-an-url-in-the-browser-and-press-enter-bb0aa2449c1a)

[How DNS works](https://www.youtube.com/watch?v=mpQZVYPuDGU)

[Rendering HTML into the browser](https://www.html5rocks.com/en/tutorials/internals/howbrowserswork/#Introduction)

Thank you for reading. Hope this helps! 🦄🦄🦄
salyadav
342,998
The for range loop in Go
Today we're going to talk a bit more about loops. Have you ever heard of for range? Don't worry,...
0
2020-05-24T21:23:26
https://dev.to/linivecristine/loop-for-range-em-go-39n7
go, beginners, tutorial
Today we're going to talk a bit more about loops. Have you ever heard of ``for range``?

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/r8muo9kpyug58c3qroba.png)

*Relax, it's not that kind of ranger...*

It is another version of the for loop, widely used with slices, arrays, maps and strings.

*What do these types have in common?* They are all composed of a group of other elements.

We haven't covered slices, arrays and maps yet; we only know that they are composite types, that is, types built from primitive types. But we already know the string type well. We know that a string is a chain of characters, or runes, and that a rune is of type ``int32`` and can take up to 4 bytes. We have already learned how to turn characters into a slice of bytes, and we know about the ASCII table, UTF-8 and Unicode, among other things.

**That's why we will learn ``for range`` with strings.**

**This is the ``for range``:**

```golang
serie := "modern family"

for indice, valor := range serie {
	fmt.Printf("índice: %v - valor: %v\n", indice, valor)
}

/* resultado:
índice: 0 - valor: 109
índice: 1 - valor: 111
índice: 2 - valor: 100
índice: 3 - valor: 101
índice: 4 - valor: 114
índice: 5 - valor: 110
índice: 6 - valor: 32
índice: 7 - valor: 102
índice: 8 - valor: 97
índice: 9 - valor: 109
índice: 10 - valor: 105
índice: 11 - valor: 108
índice: 12 - valor: 121
*/
```

The result was numeric. If you read the post about type conversion 👇🏽:

{% link https://dev.to/linivecristine/conversao-de-tipos-em-golang-198g %}

you probably know exactly what happened: the type is no longer string, it became int32, which is the type of a character. The numbers in the output can be found in the ASCII table; the number 32, for example, corresponds to the space in the string.

Let's get to know the ``for range`` piece by piece.

```golang
serie := "modern family"

for indice, valor := range serie {...}
```

The ``range`` walks the whole length of the string, jumping from letter to letter on each iteration of the loop.
Every time it moves to the next letter, it returns two values: the index and the value of that character.

The ``indice`` indicates the position of the character in the string, starting from zero until the end of the variable. The space is at position 6; remember that we always start counting from zero. *Many people call the index ``i``.*

The value, in turn, receives the value of that character. *We can call it just ``v``.*

Putting the two pieces of information ``i`` and ``v`` together, we can show both the position and the value of the letters: ``fmt.Printf("indice: %v, valor: %v", i, v)``.

If we don't need one of these values, we can simply ignore it using the underscore ``_``.

```golang
serie := "modern family"

for _, valor := range serie {...} // I'm ignoring the index
```

To display the letters as characters, we can convert them back to strings.

```golang
serie := "modern family"

for i, v := range serie {
	fmt.Printf("índice: %v - valor: %v - letra: %s\n", i, v, string(v))
}

/* resultado:
índice: 0 - valor: 109 - letra: m
índice: 1 - valor: 111 - letra: o
índice: 2 - valor: 100 - letra: d
índice: 3 - valor: 101 - letra: e
índice: 4 - valor: 114 - letra: r
índice: 5 - valor: 110 - letra: n
índice: 6 - valor: 32 - letra:
índice: 7 - valor: 102 - letra: f
índice: 8 - valor: 97 - letra: a
índice: 9 - valor: 109 - letra: m
índice: 10 - valor: 105 - letra: i
índice: 11 - valor: 108 - letra: l
índice: 12 - valor: 121 - letra: y
*/
```

Many of the things we do with for range can also be done with the normal for, and this example is one of them. We just need to know the **``len``** function. Length means extent, size. It takes the length of the string, so we can write a normal for over that length, just like the for range.
```golang
serie := "modern family"

for i := 0; i < len(serie); i++ { // while i is less than the length of the string
	fmt.Printf("índice: %v - valor: %v - letra: %s\n", i, serie[i], string(serie[i]))
}
```

Another difference is this ``serie[i]`` thing. The normal for will not jump from letter to letter automatically like range does, so we have to indicate somehow that on each iteration we move to the next letter. That's why we use ``serie[i]``.

This structure is widely used with arrays and slices: we write the name of the variable, and the index we want goes inside the brackets. Since ``i`` starts at zero and is incremented on each iteration of the loop, it serves as our index.

The result of the two ``for`` loops will be exactly the same, **but it won't always be like that**. Let's see what happens when a character takes up more than one byte.

```golang
sdds := "São João"

for i, v := range sdds {
	fmt.Printf("índice: %v - valor: %v - letra: %s\n", i, v, string(v))
}

/* resultado:
índice: 0 - valor: 83 - letra: S
índice: 1* - valor: 227 - letra: ã
índice: 3* - valor: 111 - letra: o
índice: 4 - valor: 32 - letra:
índice: 5 - valor: 74 - letra: J
índice: 6 - valor: 111 - letra: o
índice: 7* - valor: 227 - letra: ã
índice: 9* - valor: 111 - letra: o
*/
```

In the ``for range``, nothing changed. But notice that the indices skip some numbers: they went from 1 to 3 and from 7 to 9. Let's see the ``for``:

```golang
sdds := "São João"

for i := 0; i < len(sdds); i++ {
	fmt.Printf("índice: %v - valor: %v - letra: %s\n", i, sdds[i], string(sdds[i]))
}

/* Resultado:
índice: 0 - valor: 83 - letra: S
índice: 1 - valor: 195 - letra: Ã*
índice: 2 - valor: 163 - letra: £*
índice: 3 - valor: 111 - letra: o
índice: 4 - valor: 32 - letra:
índice: 5 - valor: 74 - letra: J
índice: 6 - valor: 111 - letra: o
índice: 7 - valor: 195 - letra: Ã*
índice: 8 - valor: 163 - letra: £*
índice: 9 - valor: 111 - letra: o
*/
```

The indices didn't skip any numbers, but on the other hand some strange characters showed up.
That happened because there are accented letters, and they take up 2 bytes, unlike the others.

**``for range`` walks character by character (``int32``), no matter how many bytes each one takes. The ``for`` goes byte by byte (``uint8``)**, so if a letter takes up more than one byte, the for will show it.

It's a subtle difference, but sometimes the result doesn't come out as expected and we don't know why.

Today we got to know ``for range`` and worked a bit more with ``for``. I hope you understood it, and try to practice and experiment more with both.

If you want to follow my studies: [Just click here](https://youtu.be/WiGU_ZB-u0w) and be happy.

**Today's mission was accomplished. See you tomorrow.**

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/165vilkos0yzcttia7y1.gif)
linivecristine
343,013
Why Contribute to Open Source
Contributing to open source projects can be a rewarding way to learn, teach, and build experience in...
0
2020-05-24T19:13:32
https://dev.to/caelinsutch/why-contribute-to-open-source-225j
opensource, contributing, software, community
Contributing to open source projects can be a rewarding way to learn, teach, and build experience in any skill. Even better, there's a special type of satisfaction that comes from helping out the broader programming community with your skills. Why do people contribute to open source projects?

# Improve software you rely on

Lots of open source contributors start by using the software they contribute to. When you run into bugs that impede your development or features that would help it, you may want to look at the source to see if you can squash that bug or add that feature yourself. That way, the entire software community will benefit from your contribution. The code you add will be used by thousands or even millions of other developers, all of whom will benefit from your contribution.

# Improve existing skills

Whether it's coding, user interface design, graphic design, writing, or organizing, there's a task that covers every skill on open source projects. Open source projects offer opportunities to work on massive code bases. You'll reach a higher level of expertise, something far beyond what you'll gain by simply reading books or making small projects.

Normally, there are quite a few steps to contributing to an open source project:

1. Determining what is worth contributing
2. Studying the contribution guidelines of the project
3. Building the project locally
4. Extracting the relevant code
5. Adapting the code and integrating changes
6. Providing test cases and documentation
7. Filing an issue
8. Submitting a PR and working with reviewers

Once you've gone through all these steps, you'll gain a much deeper understanding of the project and the principles behind it. Yes, it may be a lot of steps, but there's a lot of overlooked benefit.

# Meet other developers

The welcoming community of open source is what keeps people coming back. You have the opportunity to develop relationships with developers from around the world.
Even better, at open source conferences, you may even meet some of these developers in person!

# Find mentors and teach others

By contributing to open source, you'll explain how you do things as well as ask others for help. Learning and teaching is fulfilling for all involved. By working with others in your field, you'll learn and teach plenty of new ideas that can help your future development.

# Build your reputation as a developer

Your contributions are all public, which means other developers and companies can see how you're giving to the community with your skills. When hiring developers, companies often look for more than just a CV and a degree. There are people out there with massive and impressive CVs who have likely barely touched the surface of a lot of technologies; they're all talk, no game. A strong list of open source contributions goes a long way to proving your expertise in software. Platforms like GitHub are an easy way to show employers what your interests and skills are. Open source contributions emphasize your expertise and skill far more than any certificate or degree ever will.

# It's empowering to help

You don't have to be a lifelong or constant contributor to enjoy participating in open source. You get to make your contribution to a project that's often far bigger than yourself, and one that will likely impact tons of people. Yes, it may be challenging, but the contribution you're making is well worth it.
caelinsutch
343,031
How to sell your idea to the team
The key is to be prepared. Everything can be changed or improved. Sooner or later, you will have id...
0
2020-06-04T18:19:34
https://dev.to/andsmile/how-to-sell-your-idea-to-the-team-pal
career, personalgrowth, softskills, advice
The key is to be prepared. Everything can be changed or improved.

Sooner or later, you will have ideas for improving something in the project you work on. Maybe you read something, watched something, or just had time to think about the current situation, and an idea came to you. It can be anything:

- how to improve the code
- how to improve processes
- how to improve the design
- etc.

It can be small or big. It is not so hard to convince someone to make some minor changes, but it is more challenging when it requires fundamental changes. Don't be afraid to present your idea to the team; just prepare.

![Be prepared](https://dev-to-uploads.s3.amazonaws.com/i/kxk9pmditbu39xaq8ll0.jpg)

Ok, let's go. Imagine that you have an idea of how to improve the current software architecture, and you think it is good and brings lots of advantages. Now the time has come for the hard task: remember all your technical and soft skills and convince others that it makes sense.

## How to promote your idea?

### Analyze the current approach

First of all, you should analyze the current approach and understand how and why it was implemented or designed. Most probably, this knowledge will help you convince others.

### Find the pros and cons of the current approach

![Find the pros and cons](https://dev-to-uploads.s3.amazonaws.com/i/3vpy20lgd37fux4aupi0.jpg)

You can't just say that your idea is better than the current process/approach/architecture. It does not work like that. Prepare a list of the advantages and disadvantages of the current implementation. This list will help you and others make a decision.

### Prepare the pros and cons of your idea

Yes, your approach can also have drawbacks; nothing is ideal. But your approach can have fewer drawbacks, or less critical ones. This will show which problems of the current process would be fixed by the proposed approach.
### Prepare a step-by-step plan

![Prepare step by step plan](https://dev-to-uploads.s3.amazonaws.com/i/bmjjxx6d0aj40vhff93v.jpg)

Prepare a list of steps or tasks that should be done to implement your idea. It helps others see the scope of changes and shows that you know what you are talking about.

### Don't forget about estimation

Once you have a plan, you should estimate how much time it will take: how much time will be taken away from implementing real business tasks for this improvement. It should not be a precise estimation, but it should show the numbers you are talking about. It's essential; without it, nothing can be decided.

### Make a presentation

You definitely need to present your approach to the team/team leader somehow, and it is impossible to do without any prepared materials: pictures, a presentation, etc. So yes, prepare a presentation or anything that helps you keep the presentation's structure and give more visual information to the team.

### Be prepared for questions

There will definitely be some. When your team sees your approach, they will ask questions and challenge your proposal to understand whether it will cover their needs and improve the process. You should test your own approach as well and have your answers to such questions ready.

### Don't give too many options

![Don't give too many options](https://dev-to-uploads.s3.amazonaws.com/i/lrp1m0h57wcxn8urgx3e.jpg)

You should analyze and present only one or two options to the team, as you have already done the analysis and made the better decision. Otherwise, it will be harder to decide and will take ages.

### Give people time to think

Give people time to think about your proposal. Maybe they will find something that needs to be improved, some new cases that should be covered, or perhaps they will just realize that yes, it is good and should be implemented. But don't give an unlimited amount of time; organize a follow-up meeting with them in a week or two.
### Feel satisfied that you did it

Even if your idea isn't implemented, you already have a "+" to your karma.

![Feel satisfied that you did it](https://dev-to-uploads.s3.amazonaws.com/i/0nuu0agikq5s5vypeg6b.jpg)

## In the end

Don't be afraid to improve processes, architecture, etc. It will show others that you are not only a code writer but can also do more complicated things. Presentation and soft skills will definitely be useful for your career.

__P.S.__ Have you ever tried to present your idea to the team? Did it work? What other tips can you add?
andsmile
343,195
TRICK: Easy requirements build
A few days ago I was doing a project in Python and wanted to make it practical for anyone who wanted to...
0
2020-05-25T03:52:00
https://dev.to/lucs1590/trick-easy-requirements-build-k1h
python, pip, help, productivity
A few days ago I was working on a project in Python and wanted to make it available for anyone who wanted to try it, and one of the steps for that was to build the requirements.txt, the file that commonly lists the packages needed to run a Python project. The easiest alternatives are: - viewing the packages used in the project; - selecting the packages after executing: ```bash $ pip freeze ``` But that's not practical at all, so I looked for an alternative that would meet my need, and found a project that does just that. Its goal is to generate requirements.txt based on the imports in the project. ----- To install this package, just run: ```bash $ pip install pipreqs --user ``` or, if you use Python 3: ```bash $ pip3 install pipreqs --user ``` To automatically build your requirements.txt, just run the following command in the project directory: ```bash $ pipreqs ``` or ```bash $ pipreqs /project/location ``` And the magic will happen!! ![Magic](https://dev-to-uploads.s3.amazonaws.com/i/ma3ana2a15bjb3ymz7a3.jpg) I hope this post helped; feel free to get in touch! ;) Thanks for reading! ----- This post is inspired by the following repository: {% github bndr/pipreqs %}
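Under the hood, the idea behind a tool like pipreqs is simple: walk the project's .py files and collect the top-level imports. As a rough illustration of that concept (not pipreqs' actual implementation; the `find_imports` helper name is made up), here is a minimal sketch using Python's standard `ast` module:

```python
import ast

def find_imports(source):
    """Collect the top-level module names imported by a piece of Python source."""
    tree = ast.parse(source)
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # "import numpy as np" or "import os.path" -> keep only "numpy" / "os"
            for alias in node.names:
                modules.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            # "from requests.adapters import HTTPAdapter" -> "requests";
            # skip relative imports like "from . import local"
            if node.module and node.level == 0:
                modules.add(node.module.split(".")[0])
    return sorted(modules)

source = "import numpy as np\nfrom requests.adapters import HTTPAdapter\nfrom . import local"
print(find_imports(source))  # ['numpy', 'requests']
```

Running this over every file in a project and mapping module names to PyPI package names (they don't always match, which is part of what pipreqs handles for you) would get you most of the way to a requirements.txt.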
lucs1590
343,057
Placeholder title
What I built Submission Category: Demo Link to Code...
0
2020-05-24T20:51:41
https://dev.to/elsaxo/placeholder-title-45nk
gftwhackathon
[Instructions]: # (To submit to the Grant For The Web x DEV Hackathon, please fill out all sections.) ## What I built ### Submission Category: [Note]: # (Foundational Technology, Creative Catalyst, or Exciting Experiments) ## Demo ## Link to Code [Note]: # (Our markdown editor supports pretty embeds. If you're sharing a GitHub repo, try this syntax: `{% github link_to_your_repo %}`) ## How I built it [Note]: # (For example, what's the stack? did you run into issues or discover something new along the way? etc!) ## Additional Resources/Info [Reminder]: # (We hope you consider expanding your submission into a full-on application for the Grant for the Web CFP, due on June 12.)
elsaxo
343,068
How I make my npm package conformable to TypeScript?
Last time we made an NPM package with JavaScript. How I publi...
6,900
2020-05-24T23:00:21
https://en.taishikato.com/posts/how-i-make-my-npm-package-conformable-to-typescript
npm, typescript, node, productivity
Last time we made an NPM package with JavaScript. {% link taishi/how-i-published-my-first-npm-package-28hi %} Yes. It’s great! We made it😎. BUT, there is one problem. We cannot use it with TypeScript projects out of the box, because there is no type definition file, so a TS project can’t know any of the types of this NPM package. This time we will make a TypeScript file and generate a type definition file. Don't worry. It's just a piece of cake🍰. ## Change your index.js file to index.ts Just change the extension of the file and update the source code. **JavaScript** ```javascript import { v4 as uuidv4 } from 'uuid'; const generateSlug = (target, hasUuidSuffix = false) => { const text = target.toLowerCase(); const filteredText = text.replace(/[^a-zA-Z0-9]/g, ' '); let textArray = filteredText.split(/\s|\n\t/g); textArray = textArray.filter(text => { return text !== ''; }); const slug = textArray.join('-'); if (hasUuidSuffix) return `${slug}-${uuidv4().split('-')[0]}`; return slug; }; export default generateSlug; ``` **TypeScript** ```typescript import { v4 as uuidv4 } from 'uuid'; const generateSlug = (target: string, hasUuidSuffix = false): string => { const text = target.toLowerCase(); const filteredText = text.replace(/[^a-zA-Z0-9]/g, ' '); let textArray = filteredText.split(/\s|\n\t/g); textArray = textArray.filter(text => { return text !== ''; }); const slug = textArray.join('-'); if (hasUuidSuffix) return `${slug}-${uuidv4().split('-')[0]}`; return slug; }; export default generateSlug; ``` They are almost the same this time😅. ## Initialize with the tsc command Let’s initialize your project with the `tsc` command, which generates a tsconfig.json file. ```shell $ tsc --init message TS6071: Successfully created a tsconfig.json file. ``` ## Add `"declaration": true` to your `tsconfig.json` We do this so that the corresponding .d.ts file (type definition file) is generated when we execute `yarn build`. Your tsconfig.json looks like below.
```json { "compilerOptions": { "target": "es5", "module": "commonjs", "declaration": true, "strict": true, "esModuleInterop": true }, "exclude": [ "node_modules", "dist" ] } ``` ## Add `"types": "index.d.ts"` to your `package.json` By adding this, the package tells consumers that its type definitions live in index.d.ts. So your package.json looks like below. ```json { "name": "@taishikato/slug-generator", "version": "2.2.0", "description": "generate slug string", "main": "index.js", "types": "index.d.ts", "repository": "https://github.com/taishikato/slug-generator", "author": "taishikato", "license": "MIT", "private": false, "scripts": { "build": "tsc" }, "dependencies": { "uuid": "^7.0.2" }, "keywords": [ "slug", "npm", "package", "taishikato", "slug generator" ], "devDependencies": { "@types/uuid": "^7.0.2", "typescript": "^3.8.3" } } ``` ## Add .npmignore This file is the key. The npm command usually checks the .gitignore file to see which files should be excluded from the package. You need to add a .npmignore when the files that should be excluded are different from those in .gitignore. In that case, npm does not check .gitignore; it checks only .npmignore. Your .npmignore looks like below ``` .gitignore yarn.lock node_modules index.ts ``` That’s it! Easy peasy! [taishikato/slug-generator: Slug generator for blog posts or any other contents](https://github.com/taishikato/slug-generator) Thank you for reading ![Cat](https://dev-to-uploads.s3.amazonaws.com/i/fd31k2clxd8k6uignwxq.png)
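As an aside, the slug algorithm itself is language-agnostic. Purely as an illustration (this is not part of the package; the function name simply mirrors the TypeScript one), the same logic could be sketched in Python:

```python
import re
import uuid

def generate_slug(target, has_uuid_suffix=False):
    """Lowercase, replace non-alphanumerics with spaces, join the words with dashes."""
    text = target.lower()
    filtered_text = re.sub(r"[^a-z0-9]", " ", text)
    words = [w for w in filtered_text.split() if w]
    slug = "-".join(words)
    if has_uuid_suffix:
        # Append the first group of a random UUID, as the TypeScript version does.
        return f"{slug}-{str(uuid.uuid4()).split('-')[0]}"
    return slug

print(generate_slug("How I make my npm package conformable to TypeScript?"))
# how-i-make-my-npm-package-conformable-to-typescript
```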
taishi
343,119
Let's Build an Ubuntu Remix - The Easiest Yet Most Difficult Job [Short Tutorial/Semi-Rant]
All remixes and official flavors have different workflows, teams, and purpose. We are not building an...
0
2020-05-24T23:59:37
https://dev.to/kailyons/let-s-build-an-ubuntu-remix-the-easiest-yet-most-difficult-job-short-tutorial-semi-rant-1m28
ubuntu, linux, bash, desktop
All remixes and official flavors have different workflows, teams, and purposes. We are not building an Ubuntu Studio but more of an Ubuntu MATE: changing the desktop, and maybe adding a couple of custom additions. I will describe this through my methods of work on Ubuntu Lumina so far. We will be using Lumina as an example, as it requires a little more work than others might. The main difference is that I, the distro maintainer, need to package the desktop and other included applications. While that might seem difficult, it is most likely the easiest part of my job. This is going to be a step-by-step process of how I am developing an Ubuntu Linux remix. > DO NOT TAKE ANYTHING IN THIS AS LEGAL ADVICE, ANY INFORMATION WITH LEGALITY IS SHOWING HOW I DEAL WITH IT BUT DOES NOT RECOMMEND ANYTHING. I AM NOT A LAWYER! - Just to cover my butt # What the Remix Should Have The things your remix will need are: 1. A desktop environment (or window manager) of choice 2. An icon + Ubuntu **x** ***Remix*** - Icon gives you brand - The remix part might make Canonical not want to kill you for Trademark infringement 3. Installation tools 4. General tools (like browsers, file managers, etc) 5. Deb Packaging Knowledge 6. A Launchpad account 7. Some programming knowledge 8. ISO Builder 9. All required dependencies for the above These nine are simple, easy-to-acquire tools and skills. You don't need to know too much about programming either, just Debian packaging. The need for programming skills is mostly for debugging errors. For 1 we are picking Lumina, as this is going by *my* methods, but of course, there are many. This one is easy, but note that there is no Debian package for it. I made one; it's a bit janky but works rather well. For 2, it's really up to you. Just follow Ubuntu's logo and the logos of other remixes and you should be fine in that respect. 3 is where difficulty hits.
I am still struggling, but for example with Calamares, what I did was fork Lubuntu's Calamares settings and modify them to Ubuntu Lumina Remix's branding. In general, see what others change, and change those things. If Ubiquity is a better fit, see how that works. If there is another you have in mind, roll with it. 4 can be simple, but usually pick apps that can either be modified to your theme or have a very similar theme. Keep Qt apps with Qt desktops, and GTK apps with GTK desktops. If it fits, GTK can work with Qt. 5 mostly pertains to 1, 3, and 4, just so you can run wild; others will also need assistance with this plan. 6 is for if you package anything. 7 is for fixing bugs and errors more easily. 8 is finally back to something interesting. You have multiple vendors for your ISO builder. There is elementary's offering, Ubuntu Budgie's offering (which forks elementary's builder), or there are others. I use a fork of Ubuntu Budgie's, but use whatever you like. This one recommends the use of Docker/VM, but overall it is not a bad option and is fairly easy to understand. 9 is self-explanatory. ---- So why is this difficult? I summed it up in maybe 3 or 4 minutes, so why is this *"The Easiest Yet Most Difficult Job"*? That's easy to answer. It's easy in the sense that not as much actual effort needs to go into this, while difficult in the sense that the freedom to do it is severely limited. The main issue with making a Remix is that there are legal hoops and ropes to deal with, plus the fact that you have requirements, and overall can create messes for everyone. It's a fun and enjoyable project, but you will need to keep your eye on every last detail and convince a technical board to bring you on officially, whom I have never talked to. It can also easily just take up far more time than you would ever imagine. I *technically* started the Ubuntu Lumina project back in late 2019, and still only have produced what is not even a solid beta.
Heck, most of the actual work has happened since February, and I am still behind. It takes time, effort, and overall tons of patience. Would I recommend you make a remix? Yes, it is fun. Just keep in mind that if you are working alone (like me), you should prepare for a ton of work.
kailyons
343,156
Programming for Beginners: The Real 101
# Introduction Hi there, how are you doing? If you reading this, you have voided your warr...
0
2020-05-25T02:15:22
https://dev.to/jaovitorm/the-real-101-41io
beginners, codenewbie
# # Introduction Hi there, how are you doing? If you're reading this, you have voided your warranty, please contact customer support. Wait. Wrong article. Sorry, let me start again. Hi there, how are you doing? If you're reading this, you probably never wrote a computer program, or have written one computer program. Or, as we usually say, programs == 0 || programs == 1. In any case, now you want to write computer programs, or you just want to read an article about programming and have an intense need for nostalgia in your life. Whichever it is, let's get started! # # Background At college, we had a teacher who was famous for mockingly asking if his students "knew how to program". I'll be honest: I don't. There. Said it. But I've got something else: I understand how computers work and can talk to them. Sort of. But there was a time when I didn't. I was around 14 when I wrote my first program. You can probably guess what it is: Hello World. It was in Java. I copy-pasted it into Notepad, then ran the javac command in the CMD like they tell you to, and then... **it threw an error**. I tried running it again: it still didn't work (dev rule #1: try running again). I closed the CMD, told myself that I wasn't smart enough and that coding wasn't for me, and went back to playing my guitar. But now I'm here, sharing what I know. And now I know that programming is hard when you don't know what to know. Even when you know what to know, it's still hard. But it can be easy, it can be better. Does the onboarding experience to programming really have to be "paste this in a file and run it. congratulations, you have learned how to program"? I promise I'll try to keep it short. **Try**. # # Why? ![Ryan Reynolds asking "But Why?"](https://media.giphy.com/media/s239QJIh56sRW/giphy.gif) The approach to learning how to program is usually harsh. When you're just starting out, people will tell you: > "go learn Python/HTML/CSS/JavaScript/C, they're really easy to learn".
In my opinion, this is as useful as telling a newborn: > "JUST STAND UP AND MOVE YOUR LEGS!! IT'S, LIKE, BASIC!!" (no pun intended). Sure, you can declare a variable, write a for loop, maybe classify species of Iris flowers. But what good does this do for you? There are three questions you must always keep in mind when programming: * **Why**: you're gonna spend a lot of time doing it, so make sure you're motivated (a.k.a. have a reason to). It can be for fun, to get a job, to brag to your friends, to mod your favorite game, to cure diseases, etc. * **What**: understand the context of what you're trying to achieve. If learning for fun, what fun things do you want to build? If to get a job, what language will you choose and how's the job market for it? If to brag, why do you even? If modding, how do other modders do it? If to cure diseases, how can a computer help? * **How**: what are the steps for solving the problem you chose. For example, if you want to make a mod that sends a tweet every time a player character levels up, what would be the steps (algorithm) to do it? Spoiler: you'd need to get that information from the game, process it, and send it to Twitter. That, in turn, would raise questions like: "how do I get that information?" or "how do I send it to Twitter?". All of us devs have to do this process. That's what programming is about. Also, Googling basic stuff. --- # # What? ![Dog saying "What?", even though dogs don't speak English](https://media.giphy.com/media/3o7527pa7qs9kCG78A/giphy.gif) --- ## # The Context Let's do some storytelling! > Your name is Alice (congratulations if your name is actually Alice). You're an assistant at a company that sells post-its. In this company, there are three departments: * **Sales**: they sell the post-its to customers. You don't like talking to them because they spit while talking. Gross. * **Customer Support**: they handle customer issues.
It's almost impossible to talk to them as they're always busy on the phone with a yelling customer. * **Production**: they print and pack the post-its. They think they own the company (spoiler: they don't, they just work there, same as you). --- ## # The Crisis Every day your boss, Bob, tells you: > Alice, I need to know three things: * how many post-its Sales sold yesterday; * how many complaints Customer Support received and solved yesterday; * and how many post-its Production put in stock. And every day, you silently cry for 5-10 minutes in the bathroom while smoking (average time to finish a cigarette/questioning the value of your job/both at the same time 'cause you're efficient). Afterwards, you spend 1 hour getting spit on in a meeting with Sales; 30 minutes waiting for someone in CS to notice you; and 45 minutes listening to Production's Lead blabber about how amazing he is (spoiler: he's a douche). During these 5-10 minutes, you wonder how it would be if you didn't have to go through this. What if your boss asked them himself? Or maybe they could report it directly. What if you quit and got another job? --- ## # The Hope > "But... what if... **IF**... HYPOTHETICALLY SPEAKING ABOUT A HYPOTHETICAL HYPOTHESIS... What if this wasn't necessary? What if this process was... **automated**? What if there was an **automaton**, or a **robot**, that could do this work for me EVERY DAY?" Dream job, am I right? Well, dear Alice, turns out there is! It's called a computer, you're using one right now to read this! > "But can it, like, do this? Like, ACTUALLY do this?" Of course, as long as you tell it to. Which is the whole point in programming: how the f*** do I tell the computer to do this? --- # # How ![Willem Dafoe desperately saying "Tell me How", you should help him quickly as he's quite disturbed](https://media.giphy.com/media/10yIEN8cMn4i9W/giphy.gif) > "OK Jo-" I'm Charlie, forgot to tell you. My name's Charlie now.
> "OK Charlie, then how the f*** do I tell the computer to do this? Can't you do it for me?" First: write a program. Second: I can, but I don't work for free. --- ## # What even are computers? > "Write a what?" A P-R-O-G-R-A-M. You can think of computers as persons that only understand 2 things: 1's and 0's. A program is a sequence of 1's and 0's. > "I know that, I've watched Matrix, it's those floating green things on the background, right? But if there are only 2 things it understands, how do they turn into other things like this website?" Well, some people found a way to turn those 2 things into a set of **instructions**. They group the bits together and each group corresponds to an instruction, or as I like to call them, **words**. Then they create **central processing units** that read and interpret these words. And like a dog that sits when it's told to sit, when the computer reads these words, it'll do ONE SPECIFIC THING and ONLY THAT THING. This is what we call an **Architecture**. There are many ways to organize and process these words, you have 32-bit CPU, 64-bit CP- > "That's too much." Sorry. > "But I don't see people writing words of 1's and 0's all the time. In those hacker movies, people are writing, like, ACTUAL words, you know?" Well, some *other* people found a way to **translate** computer words into more human words. We call these **programming languages**, because they're, well, languages used to program computers. So they write a bunch of words of a programming language in a sequence, sometimes indented (looking at you, Python), save it (always remember to save) and call it a **program**. > "Even HTML?" Let's not do this, ok? Anyway, these languages have grammars like any other languages such as English. You learn the **syntax** (or just copy-paste from Google), write it in a document with the correct file extension, run the compiler/interpreter and then- > "DO WHAT NOW??" When you want to communicate with someone, you say words, right? 
If you wanted to say hello to me, you'd say "Hello, Charlie". My brain would, in a split second, think: *"this is English, what words do I know for English?"* Then it would think about the meaning of the sentence: > * I know 'hello', it is a **greeting**. * I know that 'hello' can be followed by another word: a **name**. * 'Charlie' follows 'hello', therefore it is a name. * Charlie is my name. * Therefore, Alice is greeting me. If you had told me *"Gkgdfkgd, otrpr"*, I surely wouldn't get it, because I don't know what language that is, but maybe someone in (or out) of this planet does. This is what a computer does, but instead of English, it's Python, for example. This is known as **compilation**, which is actually much more complex and can be used to **transform** any kind of text into another. > "But there's like a trillion programming languages..." Yes. For that, people write compilers specific to a language. They compile that program's language to another, like machine (computer) language or Assembly or C or LISP. > "What?? Anyway, can I just write an English compiler that compiles to machine language?" No one's stopping you, honey... But ask yourself why programming languages exist. --- ## # And programs are...? > "That was rude. But then how do I automate my work?" Well, you can pick up a language and watch some tutorials on YouTube, then write a program. > "Still rude. Also, you said that's not good." I didn't say it. I'm Charlie. I agree with you, though. A good start is describing what you want to build by writing an **algorithm**. > "Oh I know this one!!! It's one of those **Google Interview** things I read about, right!?" Well, yeah, you're not wrong. An algorithm is a **high-level step-by-step description of a process** that achieves something by the end. In your case, it would be: 1. Talk to Sales. 2. Talk to CS. 3. Talk to Production. 4. Talk to Bob, your boss. > "You mean people at Google just sit there and write algorithms like psychopaths?
And get paid for it? Well, WHAT AM I DOING HERE?" Nice meme but no, not really. Programmers write algorithms in a programming language. Like I said, the point of programming is how to tell the computer how to do something. But as computers only understand machine language, and therefore, programming languages, the hard work is **translating your algorithm to programming languages**. That's literally all you have to do. There's no magic behind it (spoiler: there's A TON of magic and Googling behind it). --- ## # Always focus on the basics. > "OK, I have my algorithm written down! How do I start programming?" There are a few fundamental concepts of programming that you must understand before creating a new startup. > "Awwwww..." Yes. They are **variables**, **control flow**, **loops** and **input and output**. * **Variables** are where you store your data. Like in basic math, you can assign a value to X. Later, you can use it for doing other calculations **(processing)**, like, say, calculating the distance between X and Y. For example, your bank account has a variable **amount** with **value 0**, because you're here talking to me instead of working. * **Control Flow** is how the program runs. For example, if you go to CS and they're not giving you attention, you don't wanna stay there all day waiting. Maybe go get a coffee and come back later. In this case, the **control** of your algorithm would **flow** elsewhere (to the coffee machine). * **Loops** are repetitions of a set of instructions. In this case, your boss telling you to talk to these departments is a loop that repeats everyday. * **Input and output** are respectively the data that a program receives and the data it gives you back after processing it. For you, it means that after **receiving** an order from Bob you're going to process it and **return the reports** he asked for. These are very common nicknames for input and output you'll see around: **receive** and **return**. 
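To make those four concepts concrete, here is a toy Python sketch of Alice's situation (the data and function name are invented for illustration): a **variable** accumulates report lines, a **loop** visits each department, **control flow** skips missing data, and the function **receives** input and **returns** output.

```python
def daily_summary(reports):
    """Receive a dict of department -> yesterday's number; return a report string."""
    lines = []  # a variable: storage for the data we build up
    for dept, count in reports.items():  # a loop: repeat for every department
        if count is None:  # control flow: skip departments with no data yet
            continue
        lines.append(f"{dept}: {count}")
    return "\n".join(lines)  # output: the result handed back to the caller

print(daily_summary({"sales": 120, "support": 7, "production": None}))
# sales: 120
# support: 7
```

Instead of a hardcoded dict, a real version would read these numbers from files or a database that each department updates, which is exactly the kind of design decision programming is about.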
--- ## # Building stuff > "Whew. That's a lot. Still, I don't understand how I would write my algorithm using a programming language." There are some common **patterns** in computing for solving general problems. Suppose you want to write letters to a friend, but it takes too much time for both of you to receive the letter and send an answer back. What would you do to solve this problem? > "That's an e-mail, right? You mean e-mails are programs!?" Yes, dear Alice, e-mails are programs. > "Wait, is Instagram a program!?" Yep. > "Then is this website a program too!?" Bingo. > "So you're saying this is all 1's and 0's?" That's right! And each one of them solves a different problem. * E-mails solve the mailing problem. * Instagram solves social networking problems. * Websites solve content distribution problems. And when these **solutions** become problems, we create new solutions. For instance, WhatsApp solves e-mail problems (sort of). > "Then, what is my problem?" Well, you already know. Your problem is: * My boss wants to know some info from the departments. I have to ask people questions, but sometimes they take too long to answer and I hate interacting with these people. That's an information problem. > "And how do I solve it?" The simple way would be writing a **program** that **receives** a question like "how many post-its were sold yesterday?" and **returns** the number of post-its sold yesterday. Then every time your boss wants to know something, he'd **run** that program instead of asking you. The departments can **store their data** in a text file every day so your program can **read** it when your boss runs it. But you must remember: your boss' computer must be able to **understand the language of your program**, which means it'll need a **compiler or interpreter** or an **executable file** to actually work. Actually, this could be a website. Or a chatbot. Or you can just e-mail spreadsheets around. It's up to you, really. > "Seems simple, yet complicated."
That's programming. --- # # Conclusion > "So how do I build my robot? Or automaton. Or program. I don't know what to call it anymore." You write a program. > "And how do I write a program?" Use a **programming language**. Write its words in a **sequence** respecting the **syntax** and following your **algorithm**. > "Which language should I learn?" I don't know, they say Python is good for beginners. > "YOU SAID THAT'S NOT GOOD." Honey, I didn't say anything. I'm Charlie, remember? > "WELL THEN HOW AM I SUPPOSED TO WRITE A PYTHON PROGRAM THEN?" Learn its syntax (grammar). > "AND HOW DO I LEARN IT?" Watch tutorials on YouTube, read tutorials on this website or read the language's documentation, and always __**Google your questions**__. The last one helps a lot more than the others. And remember: **programming/coding is just communicating to a computer how to do something using a language it can understand**. > "Are you avoiding my question? It doesn't seem to me like you know how to program..." Yeah, I began the text saying that. I don't know how to program, I just talk to computers using programming languages. > "Wait, aren't you Charlie?" Well, dear, **aren't we all Charlie**? We all know how to program and, at the same time, we don't. Welcome to the club.
jaovitorm
343,179
Developer Fears: Legacy Code
About the series This is part of a series of posts dedicated to talk about the biggest fea...
6,895
2020-05-26T01:50:52
https://dev.to/viguza/developer-fears-legacy-code-2dol
fears, legacycode, career, coding
## About the series This is part of a series of posts dedicated to talking about the biggest fears that we face as developers. ## What is legacy code? I’m sure that most of us have heard of legacy code, usually associated with something bad. You probably have your own definition for it, just like everyone else. But let’s (try to) put a face on it. I went through a lot of different pages looking for a standard way to define what legacy code is, and I came across [this definition](https://understandlegacycode.com/blog/what-is-legacy-code-is-it-code-without-tests/) and immediately identified with it: > Legacy Code is valuable code you’re afraid to change. or written differently > Legacy Code is the code you need to change and you struggle to understand. The important thing about this definition is to understand that every person has their own way of seeing legacy code, and that it will depend on how familiar you are with the code and how you feel about changing it. ## Why does code become legacy? ### No longer maintained There are many reasons to stop maintaining a piece of software: - It was successfully delivered - The business priorities changed - Limited budgets Regardless of the reason, it’s impossible to keep code that is no longer maintained up to date, and it’s bound to become legacy. ### Has no tests This is a tricky one. It’s clear that anyone would be horrified to change a piece of code that has no tests, especially if it’s a sensitive part of the software. And based on the initial definition, if you don’t feel comfortable, then it’s legacy code to you. However, tests can give a false sense of confidence if we assume that they are as good as they should be. And as I see it, that’s even worse than coding worried because there are no tests. ### Developer is not around The reality is, software is built over extended periods of time, and usually it involves a lot of people working on the same code. On one hand, that’s something good!
The more people involved with the code, the more people who can help to work on it. However, that’s not always the case. Sometimes parts of the software, or even all of it, were the responsibility of a single person, and guess what? That person is no longer around, the code is not documented, and it has no tests... should I keep going? ## What can we do? ### It’s not always bad Even though legacy code is considered to be a bad thing, that’s not always the case. Most of the time, legacy code is still production code and it’s doing its job. The only problem is that no one wants to touch it. What happens when the legacy code is no longer working as expected? That’s a whole different deal! ### Your code will be legacy some day The truth is, you probably won’t be in the same place you are now forever. And even if you are, you won’t remember every single piece of code you have written. Here are some things you can do to avoid nightmares for the developers coming after you: - **Write tests:** that will give people some confidence when changing the code later. - **Follow standards:** understanding code written with the same standards makes the task way easier. - **Document:** yeah, you probably hate documenting too. But being honest, there’s no perfect code, and sometimes it’s just hard to read. ### Reinvent the wheel or not? Some people, especially on the business side, might think that re-writing a piece of software will cost the exact same effort and money as the first time. That’s not always the case. You might want to weigh both options; each case is different, and redoing things is sometimes not that bad. In summary: legacy code is there and will continue to be there, it’s relative to each person, and we can just do small things to make it a little less painful. How do you deal with legacy code?
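One concrete way to build the confidence those tests are meant to provide is a *characterization test*: before changing legacy code, pin down what it does **today** with assertions, then refactor. A minimal Python sketch (the `legacy_discount` function is entirely made up for illustration):

```python
def legacy_discount(total):
    # Pretend this is the scary legacy function nobody wants to touch.
    if total > 100:
        return total - total * 10 // 100  # 10% off, in integer math
    return total

# Characterization tests: they record the current behavior, right or wrong,
# so any refactoring that accidentally changes it fails loudly.
def test_no_discount_at_threshold():
    assert legacy_discount(100) == 100

def test_ten_percent_off_above_threshold():
    assert legacy_discount(200) == 180
```

The point is not that the recorded behavior is correct; it's that it is *preserved* while you clean things up.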
viguza
343,242
I made a free theme 👨‍💻🍣
Lasagna free restaurant website
0
2020-05-25T06:31:48
https://dev.to/atulcodex/i-made-a-free-theme-1j0f
html, css, javascript, frontend
--- title: I made a free theme 👨‍💻🍣 published: true description: Lasagna free restaurant website tags: html, css, javascript, frontend cover_image: https://dev-to-uploads.s3.amazonaws.com/i/emopniqykxue1gcdufr5.jpg --- As you might know, guys, I am a junior frontend developer working at an agency. When I started to learn to code, I dreamed of making some website templates from scratch on my own. And today I made this 😎 restaurant website template, and you can download it from here: [DOWNLOAD](https://imojo.in/2hnrreb). Guys, I ask you to download this theme and give your suggestions in a comment so I can make some improvements to this theme. Thanks in advance 💋💐 --- ![Lasagna restaurant website](https://dev-to-uploads.s3.amazonaws.com/i/sgu6j59xiciufo0ituqm.jpg) --- [![Download theme](https://dev-to-uploads.s3.amazonaws.com/i/2myivsjsjxh0nh0vl14o.png)](https://imojo.in/2hnrreb) The Lasagna restaurant theme is a one-page template for any type of restaurant: bistros, sushi bars, fast food, casual dining, fast casual, buffets, pop-up restaurants, and much more. The Lasagna theme is a clean and modern HTML, CSS, and JavaScript theme for cafe and restaurant websites and any food-related business, built with the core HTML, CSS, and JavaScript tech stacks. The Lasagna theme supports a responsive layout, so it looks great on all devices. It has predefined styling for modern cuisine restaurants, Asian food restaurants, and elegant food restaurants, and is currently set up as a sushi bar restaurant 🍣🍥, but you can customize it according to your business and restaurant.
[Live Demo](https://atulprajapati.in/lasagna/) 💻 [Live Documentation](https://atulprajapati.in/lasagna/doc/) 📄 [![Download theme](https://dev-to-uploads.s3.amazonaws.com/i/2myivsjsjxh0nh0vl14o.png)](https://imojo.in/2hnrreb) ### ***Features*** - Clean & commented modern code - 100% responsive - Free fonts, icons, and images - Documentation included, both online and offline - Unique design - Parallax scrolling - Single page - Easy to customize - Header side navigation - Developer friendly - [4-second load time](https://atulprajapati.in/wp-content/uploads/2020/05/Lasagna-restaurant-theme-speed-report.pdf) - Support for all browsers - Page loader - 24/7 support [![Download theme](https://dev-to-uploads.s3.amazonaws.com/i/2myivsjsjxh0nh0vl14o.png)](https://imojo.in/2hnrreb)
atulcodex
343,281
Sharing an SSH Session Across Networks
Kicking off dev.to with this post. Background (skippable): there's a pain point when having to suppor...
0
2020-05-25T08:53:06
https://dev.to/kapong/ssh-session-1md0
linux, ssh, tmate
Kicking off my dev.to with this post.

## Background (skip if you like)

There's a pain point when supporting a lot of customers: I need to SSH into their target servers, but the customers don't want to give me direct access to the IP or let me know the username/password (and I'd rather not know them either, to stay out of trouble later). The quick workaround was Anydesk (or TeamViewer) into the customer's machine, with the customer already SSH'd into the target server. That works, but it uses a fair amount of bandwidth, and the screen is tiny on top of that. After a long search I finally found [Tmate](https://tmate.io). The beauty of it: easy to install, easy to use (even easier if you already use Tmux). A few commands, then copy a URL to the other side, and you're done.

## Tmate

[Tmate](https://tmate.io) is an open-source tool built on top of Tmux that lets you share a session across networks (HTTP, SSH) through an intermediary server (still free for now). It's available on many OSes, with the exception of Windows. Installation is like any other software (it's in the official repos of many OSes already), e.g. on Ubuntu:

```
apt-get update && apt-get install -y tmate
```

or on macOS via Homebrew:

```
brew install tmate
```

(You should upgrade to version 2.4 or later for more features, but you may have to compile it yourself.)

## Usage

Run the `tmate` command in the terminal session **of the user you want to share**.

![CMD](https://dev-to-uploads.s3.amazonaws.com/i/mhjj04o6u8mh9ia3jo4v.png)

Connection information will appear, both for read-only connections and for connections where participants can also type commands. Copy this information and send it to whoever you want to join. When ready, press `q` or `Ctrl + c` to leave this screen and start the session.

![Connection](https://dev-to-uploads.s3.amazonaws.com/i/8w51edxgpcpyy2xdj39c.png)

The Session ID is randomized every time, and there are two ways to join: SSH and HTTP.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/wvxxmy58n447b3dg9spo.png)

If you missed the message, run `tmux show-message` to see the connection information again.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/7uuy7wqjrdefiek89rmk.png)

Caveats:

1. Detaching tmux-style (Ctrl + b, d) exits tmate, but the session is not terminated, and you cannot attach back to it (I tried; there's no way). If you can't get back in and want to close the session, try killing it by pid, found via `ps aux | grep tmate`.
2. The session only stops once everyone has run `exit`; everyone still connected will then be dropped automatically. **But don't forget**
kapong
343,321
Animation libraries ReactJs
Hi! Does anyone have experience with animation libraries for ReactJs? For a school project I'm looki...
0
2020-05-25T09:40:55
https://dev.to/janessalabeur/animation-libraries-reactjs-gpo
animation, react, discuss, javascript
Hi! Does anyone have experience with animation libraries for ReactJs? For a school project I'm looking into react-spring or anime.js. What are your experiences with these? Thanks in advance!
janessalabeur
343,403
[Bahasa] Implementing a Draggable View in Android
Once upon a time, while developing a UI prototype for the ReCharge power-bank vending machine that could display...
0
2020-05-25T12:24:56
https://dev.to/hyuwah/implementasi-draggable-view-di-android-576d
android, bahasa
Once upon a time, while developing a UI prototype for [ReCharge](https://recharge.id/)'s power-bank vending machine that could display full-screen ads, we needed an element/widget that could be moved around the screen. What came to mind at the time were Facebook's *chat heads* and Tokopedia's *lucky egg*.

![Mockup prototipe UI layar penuh](https://miro.medium.com/max/233/1*519vvfzJ2JTpWrSzzCV36g.png)
<figcaption>Mockup of the full-screen UI prototype</figcaption>

![Facebook Chat Heads](https://miro.medium.com/max/472/0*XiSPZBnjQtsyTGZp.jpg)
<figcaption>Facebook Chat Heads. Back when the Holo theme was "trendy".</figcaption>

![Lucky Egg Tokopedia ](https://miro.medium.com/max/236/0*zQ_wexzQ_A8vJ7tq)
<figcaption>Tokopedia's Lucky Egg</figcaption>

## Bare Minimum

```kotlin
private var widgetDX: Float = 0F
private var widgetDY: Float = 0F

fun setupStickyDraggable(){
    iv_sticky_draggable.setOnTouchListener { v, event ->
        when(event.actionMasked){
            MotionEvent.ACTION_DOWN -> {
                widgetDX = v.x - event.rawX
                widgetDY = v.y - event.rawY
            }
            MotionEvent.ACTION_MOVE -> {
                v.x = event.rawX + widgetDX
                v.y = event.rawY + widgetDY
            }
            else -> {
                return@setOnTouchListener false
            }
        }
        true
    }
}
```

First, the *bare minimum*; the goal here is simply to make the view draggable. We set ***setOnTouchListener*** on a view, `iv_sticky_draggable` (an ImageView in this case). ***setOnTouchListener*** gives us two parameters: `v` (the view itself) and the event. The events we need are `ACTION_DOWN` & `ACTION_MOVE`. On lines 8–9 we store the view's coordinates minus the absolute coordinates of the touch point into **widgetDX** & **widgetDY** (on `ACTION_DOWN`).

> **v.x** & **v.y** are the view's position relative to its initial coordinates.
> **event.rawX** & **event.rawY** are the absolute coordinates of the touch point on the screen.

On `ACTION_MOVE` (lines 12–13), we update the view's coordinates with the values stored earlier plus the movement of the touch point on the screen, which makes the view slide along with our finger. That's it.

![It’s moving! Tapi tembus-tembus.](https://miro.medium.com/max/264/1*UjQKDZKNDKVIkTIzkCL78g.gif)
<figcaption>It's moving! But it slips off the screen.</figcaption>

## Screen Border Collision

```kotlin
private var widgetDX: Float = 0F
private var widgetDY: Float = 0F

fun setupStickyDraggable(){
    iv_sticky_draggable.setOnTouchListener { v, event ->
        val viewParent:View = (v.parent as View)
        val PARENT_HEIGHT = viewParent.height
        val PARENT_WIDTH = viewParent.width

        when(event.actionMasked){
            MotionEvent.ACTION_DOWN -> {
                widgetDX = v.x - event.rawX
                widgetDY = v.y - event.rawY
            }
            MotionEvent.ACTION_MOVE -> {
                // Screen border Collision
                var newX = event.rawX + this.widgetDX
                newX = Math.max(0F, newX)
                newX = Math.min((PARENT_WIDTH - v.width).toFloat(), newX)
                v.x = newX

                var newY = event.rawY + this.widgetDY
                newY = Math.max(0F, newY)
                newY = Math.min((PARENT_HEIGHT - v.height).toFloat(), newY)
                v.y = newY
            }
            else -> {
                return@setOnTouchListener false
            }
        }
        true
    }
}
```

OK, the view moves, but the problem is that it can disappear from the screen if we drag it to the edges. The solution is to **clamp** the X and Y values to the range from *0* up to the *Width* and *Height* of the view's parent. So first we grab the *Width* and *Height* (lines 6–8). We have to *cast* the view's parent to `View` to access its *Width* & *Height* properties. Here I clamp with the help of `Math.max()` & `Math.min()`. Lines 18 & 23 mean a negative value is forced to 0, while lines 19 & 24 mean a value larger than `PARENT_WIDTH`/`PARENT_HEIGHT` minus the view's size is forced down to that limit.

Only after those two checks (on each axis) are the values set on the view's X & Y.

![Collide with screen border](https://miro.medium.com/max/264/1*SUlSlwVH7ADO1xmIIPWNgQ.gif)
<figcaption>Collide with screen border</figcaption>

## Sticky to Screen Border

```kotlin
private var widgetDX: Float = 0F
private var widgetDY: Float = 0F

fun setupStickyDraggable(){
    iv_sticky_draggable.setOnTouchListener { v, event ->
        val viewParent:View = (v.parent as View)
        val PARENT_HEIGHT = viewParent.height
        val PARENT_WIDTH = viewParent.width

        when(event.actionMasked){
            MotionEvent.ACTION_DOWN -> {
                widgetDX = v.x - event.rawX
                widgetDY = v.y - event.rawY
            }
            MotionEvent.ACTION_MOVE -> {
                // Screen border Collision
                var newX = event.rawX + this.widgetDX
                newX = Math.max(0F, newX)
                newX = Math.min((PARENT_WIDTH - v.width).toFloat(), newX)
                v.x = newX

                var newY = event.rawY + this.widgetDY
                newY = Math.max(0F, newY)
                newY = Math.min((PARENT_HEIGHT - v.height).toFloat(), newY)
                v.y = newY
            }
            MotionEvent.ACTION_UP -> {
                // Stick to Left or Right screen
                if(event.rawX >= PARENT_WIDTH / 2)
                    v.x = (PARENT_WIDTH) - (v.width).toFloat()
                else
                    v.x = 0F

                // Stick to Top or Bottom screen
                if(event.rawY >= PARENT_HEIGHT / 2)
                    v.y = (PARENT_HEIGHT) - (v.height).toFloat()
                else
                    v.y = 0F

                // IF BOTH X & Y set to stick, the view will only stick to corner
            }
            else -> {
                return@setOnTouchListener false
            }
        }
        true
    }
}
```

So the DraggableView doesn't get in the user's way, we can add a **sticky** behavior: after being dragged, the view snaps to the edge of the screen (or parent view), on the X axis, the Y axis, or both. For the implementation we use `MotionEvent.ACTION_UP`. For example, to stick on the X axis (lines 29–32): if the touch point is in the right half, the view is set to the right screen border (**PARENT_WIDTH - v.width**); if it's in the left half, the view is set to the left screen border (**0F**). Likewise for the Y axis (lines 35–38).

![Sticky X ](https://miro.medium.com/max/264/1*WISEprzRpr4rajmycPm7Qw.gif)
<figcaption>Sticky X</figcaption>

![Sticky Y](https://miro.medium.com/max/264/1*Ib-rvxj83cwT6c4vnw1z-g.gif)
<figcaption>Sticky Y</figcaption>

![Sticky XY](https://miro.medium.com/max/264/1*BYker4gDuvpAaFTLRqiWyQ.gif)
<figcaption>Sticky XY</figcaption>

## Sticky to First Position

```kotlin
private var widgetDX: Float = 0F
private var widgetDY: Float = 0F
// Add these
private var widgetXOrigin : Float = 0F
private var widgetYOrigin : Float = 0F

fun setupStickyDraggable(){
    iv_sticky_draggable.setOnTouchListener { v, event ->
        val viewParent:View = (v.parent as View)
        val PARENT_HEIGHT = viewParent.height
        val PARENT_WIDTH = viewParent.width

        when(event.actionMasked){
            MotionEvent.ACTION_DOWN -> {
                widgetDX = v.x - event.rawX
                widgetDY = v.y - event.rawY
                // save widget origin coordinate
                widgetXOrigin = v.x
                widgetYOrigin = v.y
            }
            MotionEvent.ACTION_MOVE -> {
                // Screen border Collision
                var newX = event.rawX + this.widgetDX
                newX = Math.max(0F, newX)
                newX = Math.min((PARENT_WIDTH - v.width).toFloat(), newX)
                v.x = newX

                var newY = event.rawY + this.widgetDY
                newY = Math.max(0F, newY)
                newY = Math.min((PARENT_HEIGHT - v.height).toFloat(), newY)
                v.y = newY
            }
            MotionEvent.ACTION_UP -> {
                // Back to original position
                v.x = widgetXOrigin
                v.y = widgetYOrigin
            }
            else -> {
                return@setOnTouchListener false
            }
        }
        true
    }
}
```

Another behavior we can implement is **sticky to first position**. Here we need to store the view's initial coordinates (lines 18–19) in variables, e.g. **widgetXOrigin** & **widgetYOrigin** (lines 4–5), on `ACTION_DOWN`; then on `ACTION_UP` we simply set the view's coordinates back to those initial values (lines 35–36).

## Animation

Going further, to make the sticky behavior's movement *smooth*, we can use `.animate()` on the view.

So instead of assigning directly with `=`, we use it like this:

```kotlin
// TO ANIMATE USE animate()
v.animate().x(0F).setDuration(250).start()
// INSTEAD OF
v.x = 0F
```

Here is the result:

![Smooth sticky](https://miro.medium.com/max/268/1*dMzIJlT12hmSTkVkzNnxEQ.gif)
<figcaption>Smooth sticky</figcaption>

## Ready to use Library

To make all of the above easier to implement, I made an open-source library that covers it; check it out here:

{% github hyuwah/DraggableView no-readme %}

That's it for this story about implementing a draggable view in Android. I hope it helps, and happy experimenting :)

Originally [posted on Medium](https://medium.com/@hyuwah/implementasi-draggable-view-di-android-eb84e50fbba9)

Cover Photo by [Marcel Walter](https://unsplash.com/@marcelwalter) on Unsplash
hyuwah
343,405
Combining virtualenvwrapper with pyenv
virtualenvwrapper and pyenv. virtualenvwrapper is a plugin that provides helpers and shortcu...
0
2020-05-25T12:28:32
http://gabubellon.me//blog/virtualenvwarpper-pyenv
python, pyenv
---
title: Combining virtualenvwrapper with pyenv
published: true
date: 2020-01-10 00:00:00 UTC
tags: #python #pyenv
canonical_url: http://gabubellon.me//blog/virtualenvwarpper-pyenv
---

# virtualenvwrapper and pyenv

virtualenvwrapper is a plugin that provides helpers and shortcuts for working with Python virtual environments. More details in the official documentation: [https://virtualenvwrapper.readthedocs.io/en/latest/command_ref.html](https://virtualenvwrapper.readthedocs.io/en/latest/command_ref.html).

Combining it with pyenv gives us _pyenv-virtualenvwrapper_, a pyenv plugin that lets you create virtual environments quickly and conveniently using any of the Python versions managed by pyenv.

> **Read first:** [Installing and using pyenv](/blog/pyenv)

## Installing pyenv-virtualenvwrapper

Installation follows the recommendations of the official repository [https://github.com/pyenv/pyenv-virtualenvwrapper](https://github.com/pyenv/pyenv-virtualenvwrapper)

The pyenv `global` Python versions need `virtualenvwrapper` installed:

```
# pip and pip3 in case you have python2.7 and python3 side by side.
pip install setuptools
pip install virtualenvwrapper
pip3 install setuptools
pip3 install virtualenvwrapper
```

Installing _pyenv-virtualenvwrapper_:

```
# $(pyenv root) is the pyenv variable that points to where pyenv is installed.
git clone https://github.com/pyenv/pyenv-virtualenvwrapper.git $(pyenv root)/plugins/pyenv-virtualenvwrapper
```

You need to modify your shell's configuration file (.bashrc for bash or .zshrc for zsh/ohmyzsh) and add a few lines before and after the existing pyenv settings, which look like this:

```
export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
```

Add before:

```
# virtualenvwrapper settings
export WORKON_HOME=$HOME/.virtualenvs #virtualenvs folder
source $HOME/.local/bin/virtualenvwrapper.sh #virtualenvwrapper script location
```

Add after:

```
# tell pyenv explicitly to use virtualenvwrapper
export PYENV_VIRTUALENVWRAPPER_PREFER_PYVENV="true"
```

In the end you will have something like:

```
# virtualenvwrapper settings
export WORKON_HOME=$HOME/.virtualenvs #virtualenvs folder
source $HOME/.local/bin/virtualenvwrapper.sh #virtualenvwrapper script location

# pyenv block already present in the file
export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

# tell pyenv explicitly to use virtualenvwrapper
export PYENV_VIRTUALENVWRAPPER_PREFER_PYVENV="true"
```

### pyenv-virtualenvwrapper usage example

In the following example, we set the global Python version to `miniconda3-latest`, then create a virtualenv with virtualenvwrapper (`test_conda`). We switch the global Python back to `system and 3.7.4`, activate the virtualenv we created (`test_conda`), and verify that it was built using the Python from the `miniconda3-latest` installation.

A new virtualenv (`test_2.7`) is then created and activated, confirming that it used the Python version configured as the default (`system`).

```
$ pyenv global
> system
> 3.7.4
>
$ pyenv global miniconda3-latest
$ (miniconda3-latest)
$ python
> Python 3.7.4 (default, Aug 13 2019, 15:17:50)
> [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> exit()
>
$ (miniconda3-latest)
$ mkvirtualenv test_conda
> WARNING: the pyenv script is deprecated in favour of `python3.7 -m venv`
> (miniconda3-latest)
>
$ pyenv global system 3.7.4
$ workon test_conda
> (test_conda)
$ python
> Python 3.7.4 (default, Aug 13 2019, 15:17:50)
> [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> exit()
> (test_conda)
$ deactivate
$ python
> WARNING: Python 2.7 is not recommended.
> This version is included in macOS for compatibility with legacy software.
> Future versions of macOS will not include Python 2.7.
> Instead, it is recommended that you transition to using 'python3' from within Terminal.
>
> Python 2.7.16 (default, Dec 13 2019, 18:00:32)
> [GCC 4.2.1 Compatible Apple LLVM 11.0.0 (clang-1100.0.32.4) (-macos10.15-objc-s on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> exit()
>
$ mkvirtualenv test_2.7
> New python executable in /Users/gabriel.bellon/.virtualenvs/test_2.7/bin/python
> Installing setuptools, pip, wheel...
> done.
> virtualenvwrapper.user_scripts creating /Users/gabriel.bellon/.virtualenvs/test_2.7/bin/predeactivate
> virtualenvwrapper.user_scripts creating /Users/gabriel.bellon/.virtualenvs/test_2.7/bin/postdeactivate
> virtualenvwrapper.user_scripts creating /Users/gabriel.bellon/.virtualenvs/test_2.7/bin/preactivate
> virtualenvwrapper.user_scripts creating /Users/gabriel.bellon/.virtualenvs/test_2.7/bin/postactivate
> virtualenvwrapper.user_scripts creating /Users/gabriel.bellon/.virtualenvs/test_2.7/bin/get_env_details
> (test_2.7)
$ python
> WARNING: Python 2.7 is not recommended.
> This version is included in macOS for compatibility with legacy software.
> Future versions of macOS will not include Python 2.7.
> Instead, it is recommended that you transition to using 'python3' from within Terminal.
>
> Python 2.7.16 (default, Dec 13 2019, 18:00:32)
> [GCC 4.2.1 Compatible Apple LLVM 11.0.0 (clang-1100.0.32.4) (-macos10.15-objc-s on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> exit()
> (test_2.7)
$ deactivate
$ pyenv global
> system
> 3.7.4
```

Once the configuration is in place and tested, managing multiple Python projects with different versions becomes practical. With _pyenv_ and _pyenv-virtualenvwrapper_ you can create any combination of installations and default virtual environments for any project situation.
gabubellon
343,861
Data Scientist With Python
One of the best courses I have taken on my career-building Python skills to succeed as a dat...
6,934
2020-05-26T06:43:38
https://www.datacamp.com/tracks/data-scientist-with-python?tap_a=5644-dce66f&tap_s=841152-474aa4
datascience, computerscience, machinelearning, python
## One of the best courses I have taken for building the Python skills needed to succeed as a data scientist.

With no prior coding experience in Python, I learned how easily this language lets you import, clean, manipulate, and visualize data, all integral skills for any aspiring data professional or researcher.

### <q>I would recommend that all my LinkedIn connections with similar interests, looking to boost their Python skills and begin their journey to becoming a confident data scientist, try out this awesome course by DataCamp.</q>

### You can follow me on my social media profiles for the projects I have done as a data scientist.

<b><i>Github:</i></b> [abhiwalia15](https://github.com/abhiwalia15) <b><i>LinkedIn:</i></b> [mrinalwalia](https://www.linkedin.com/in/mrinal-walia-b0981b158/)

<i>#python #datascience #machinelearning #deeplearning #datacamp #ai #bigdata #covid19 #computervision #artificialintelligence #Datacamp</i>

https://www.datacamp.com/tracks/data-scientist-with-python?tap_a=5644-dce66f&tap_s=841152-474aa4
abhiwalia15
343,411
Sorted CSS Colors – Tool I created to see similar CSS colors together
So, I've been working on a tool to arrange the named CSS colors in a way that I see the similar color...
0
2020-05-25T12:36:15
https://dev.to/scriptype/sorted-css-colors-2fpj
codepen, css, javascript
So, I've been working on a tool that arranges the named CSS colors so that similar colors appear together. The result was more impressive than I expected!

I developed it using CodePen: [https://codepen.io/pavlovsk/pen/zYvbGKe](https://codepen.io/pavlovsk/pen/zYvbGKe)

And then exported it to: [https://enes.in/sorted-colors](https://enes.in/sorted-colors)

It's fully keyboard accessible, too! I haven't tested it myself yet, but it should work in screen readers as well.

![Screenshot](https://dev-to-uploads.s3.amazonaws.com/i/mf3oz8c479lxoijta6cx.png)
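One common way to put similar colors next to each other is to sort them by hue. Here is a minimal sketch of that idea; the `rgbToHue` helper and the sample palette are illustrative only, not the tool's actual code:

```javascript
// Convert an [r, g, b] triple (0-255) to a hue angle in degrees (0-360).
function rgbToHue([r, g, b]) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b);
  if (max === min) return 0; // achromatic (grays have no hue)
  const d = max - min;
  let h;
  switch (max) {
    case r: h = (g - b) / d + (g < b ? 6 : 0); break;
    case g: h = (b - r) / d + 2; break;
    default: h = (r - g) / d + 4;
  }
  return h * 60;
}

// A tiny hypothetical palette of named colors.
const palette = [
  { name: "red", rgb: [255, 0, 0] },
  { name: "blue", rgb: [0, 0, 255] },
  { name: "orange", rgb: [255, 165, 0] },
  { name: "lime", rgb: [0, 255, 0] },
];

// Sorting by hue groups warm colors before cool ones.
const sorted = [...palette].sort((a, b) => rgbToHue(a.rgb) - rgbToHue(b.rgb));
console.log(sorted.map(c => c.name)); // [ 'red', 'orange', 'lime', 'blue' ]
```

A real implementation would also need a tiebreaker for grays and could sort secondarily by lightness or saturation, which is likely where most of the fine-tuning in a tool like this happens.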
scriptype
343,415
Machine Learning - Over-fitting & Under-fitting
In my last post on "BIAS and VARIANCE" we heard about two words - Under-Fit and Over-Fit. In this pos...
0
2020-05-25T12:40:31
https://dev.to/seluccaajay/machine-learning-over-fitting-under-fitting-o91
machinelearning, datascience
In my last post on "Bias and Variance" we came across two terms: under-fit and over-fit. In this post, I am going to tell you precisely what over-fitted and under-fitted models are.

UNDER-FITTING: It occurs when the model is too simple, i.e. when there is low variance and high bias. When the model's accuracy falls well below our expectation, the model we have built is said to be under-fit. Below is the graphical representation of an under-fit model. (The red dots in the graph are the data points; most of them lie away from the line.)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/m0tn98yqigpfsk1kxr0v.PNG)

OVER-FITTING: It occurs when the model is too complex, i.e. when there is low bias and high variance. (The machine learning model we build should not fit the training data 100% accurately; when it does, that generally signals an over-fitted model.) Below is the graphical representation of an over-fit model. (The line is drawn through the red dots, i.e. the data points.)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/lj4dn4q7cjgnxfxkgbwn.PNG)

Bias and variance both contribute to errors in a model (ideally there is a right-fit point where the two are balanced), but it's the prediction error that you want to minimize, not the bias or variance specifically. Below is the graphical representation of the right-fit point, where the model has good accuracy without being over-fit or under-fit.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/99f48botrxm0w814agb8.PNG)

Ideally we want low variance and low bias. In reality, though, there's usually a trade-off. A suitable fit should acknowledge significant trends in the data and play down or even omit minor variations. This might mean re-randomizing our training and test data, using cross-validation, adding new data to better detect underlying patterns, or even switching algorithms.
Specifically, this might entail switching from linear regression to non-linear regression to reduce bias by increasing variance.
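The trade-off above can be sketched in plain Python (hypothetical toy data, no ML libraries): an under-fit model that ignores the input entirely, a reasonable linear fit, and an over-fit model that memorizes the training points.

```python
import random
import statistics

random.seed(0)

# Toy data: y = 2x + gaussian noise. All names here are illustrative.
x_train = list(range(10))
y_train = [2 * x + random.gauss(0, 0.5) for x in x_train]
x_test = [x + 0.5 for x in range(9)]
y_test = [2 * x + random.gauss(0, 0.5) for x in x_test]

def mse(preds, actual):
    return statistics.fmean((p - a) ** 2 for p, a in zip(preds, actual))

# Under-fit (high bias): ignore x and always predict the mean of y_train.
mean_y = statistics.fmean(y_train)
under_train = mse([mean_y] * len(y_train), y_train)
under_test = mse([mean_y] * len(y_test), y_test)

# Right fit: ordinary least-squares line (closed-form slope/intercept).
mean_x = statistics.fmean(x_train)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(x_train, y_train)) \
    / sum((x - mean_x) ** 2 for x in x_train)
intercept = mean_y - slope * mean_x
line_train = mse([slope * x + intercept for x in x_train], y_train)
line_test = mse([slope * x + intercept for x in x_test], y_test)

# Over-fit (high variance): memorize the training set (1-nearest-neighbour).
# Training error is exactly zero, but test error is worse than the line's.
def nn(x):
    return min(zip(x_train, y_train), key=lambda p: abs(p[0] - x))[1]

nn_train = mse([nn(x) for x in x_train], y_train)
nn_test = mse([nn(x) for x in x_test], y_test)

print(f"under-fit  train={under_train:6.2f}  test={under_test:6.2f}")
print(f"right fit  train={line_train:6.2f}  test={line_test:6.2f}")
print(f"over-fit   train={nn_train:6.2f}  test={nn_test:6.2f}")
```

Running it shows the pattern the graphs describe: the under-fit model has large errors everywhere, and the memorizing model has zero training error but a higher test error than the linear fit, which is exactly the prediction error we actually care about.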
seluccaajay
343,456
Answer: create list from pandas dataframe column values
answer re: get list from pandas dataf...
0
2020-05-25T14:26:42
https://dev.to/nilotpalc/answer-create-list-from-pandas-dataframe-column-values-4n76
pandas, list
{% stackoverflow 22341390 %}
nilotpalc
343,484
React vs Vue: Compare and Contrast
Neither ReactJS or VueJS are overly novel anymore. With lots of time to establish identities, can we...
6,947
2020-05-25T15:19:42
https://dev.to/ben/react-vs-vue-compare-and-contrast-13jp
healthydebate, javascript, vue, react
Neither ReactJS nor VueJS is overly novel anymore. With lots of time to establish identities, can we have a discussion about what fundamentally differentiates these popular JavaScript approaches? Feel free to debate, but keep it respectful. 😇
ben
343,792
<header> vs. <head> vs. <h1> through <h6> Elements
Hi i know u may find it boring but i added something interesting some things beginners like me get co...
6,932
2020-05-26T03:41:20
https://dev.to/saifyusuph/header-vs-head-vs-h1-through-h6-elements-j02
html, css, javascript, beginners
Hi! I know you may find this boring, but it covers something interesting that beginners like me often get confused about.

It is easy to confuse the `<header>` element with the `<head>` element or the heading elements, `<h1>` through `<h6>`. They all have different semantic meanings and should be used accordingly. For reference…

The `<header>` element is a structural element that outlines the heading of a segment of a page. It falls within the `<body>` element.

The `<head>` element is not displayed on a page and is used to outline metadata, including the document title and links to external files. It falls directly within the `<html>` element.

Heading elements, `<h1>` through `<h6>`, are used to designate multiple levels of text headings throughout a page.

Thanks for reading! If you spot any errors, please comment.

#webdev #dev.to #coder #javascript
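A minimal page skeleton (illustrative only) shows all three in their proper places:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- <head>: metadata, never displayed on the page -->
    <title>Document title</title>
    <link rel="stylesheet" href="styles.css">
  </head>
  <body>
    <header>
      <!-- <header>: the heading of a segment of the page -->
      <h1>Top-level heading</h1>
    </header>
    <main>
      <h2>A second-level heading</h2>
      <p>Content…</p>
    </main>
  </body>
</html>
```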
saifyusuph
343,499
How to gracefully multi-select DOM elements in a mouse drag area?
Select elements in the drag area using the mouse or touch.
0
2020-05-25T15:59:39
https://dev.to/ihavecoke/how-to-graceful-multiple-select-dom-with-mouse-move-area-2i5f
javascript, vue, react
---
title: How to gracefully multi-select DOM elements in a mouse drag area?
published: true
description: Select elements in the drag area using the mouse or touch.
tags: Javascript, Vue.js, React
---

I found an awesome repo that can select elements in a drag area using the mouse or touch. In my case, I just use it to select multiple photos in a gallery and then edit, move, or duplicate them, like the iOS Photos app. It's so cool, just enjoy it:

[selecto](https://github.com/daybrush/selecto)
ihavecoke