Dataset schema (column name: type, value/length range):

- id: int64 (5 to 1.93M)
- title: string (lengths 0 to 128)
- description: string (lengths 0 to 25.5k)
- collection_id: int64 (0 to 28.1k)
- published_timestamp: timestamp[s]
- canonical_url: string (lengths 14 to 581)
- tag_list: string (lengths 0 to 120)
- body_markdown: string (lengths 0 to 716k)
- user_username: string (lengths 2 to 30)
201,732
js typewriter
A post by weptim
0
2019-11-07T10:00:55
https://dev.to/weptim/js-typewriter-3a4f
100daysofcode
{% codepen https://codepen.io/weptim/pen/dyyeVQx %}
weptim
201,823
Convert PDF to Editable DOCX with Python
While working on a document conversion feature, you may come across a requirement to convert PDF to DOCX....
0
2019-11-07T12:56:18
https://blog.groupdocs.cloud/2019/11/06/convert-pdf-to-editable-word-document-with-python-sdk/
python, pdftodocx, documentconversion, restapi
While working on a document conversion feature, you may come across a requirement to convert PDF to DOCX. I would like to introduce GroupDocs.Conversion Cloud SDK for Python for this purpose. It can also convert all popular industry-standard documents from one format to another without depending on any third-party tool or software. To convert PDF to DOCX in Python, follow these steps:

* Before we begin coding, sign up with [groupdocs.cloud](https://docs.groupdocs.cloud/display/gdtotalcloud/Creating+and+Managing+Account) to get your APP SID and APP Key.
* Install the groupdocs-conversion-cloud package from [pypi](https://pypi.org/project/groupdocs-conversion-cloud/) with the following command: `pip install groupdocs-conversion-cloud`
* Open your favorite editor and copy-paste the following code into a script file. It will:
  1. Import the GroupDocs.Conversion Cloud Python package
  2. Initialize the API
  3. Upload the source PDF document to GroupDocs default storage
  4. Convert the PDF document to editable DOCX

```python
# Import module
import groupdocs_conversion_cloud

# Get your app_sid and app_key at https://dashboard.groupdocs.cloud (free registration is required).
app_sid = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
app_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Create instances of the API
convert_api = groupdocs_conversion_cloud.ConvertApi.from_keys(app_sid, app_key)
file_api = groupdocs_conversion_cloud.FileApi.from_keys(app_sid, app_key)

try:
    # Upload source file to storage
    filename = 'Sample.pdf'
    remote_name = 'Sample.pdf'
    output_name = 'sample.docx'
    strformat = 'docx'

    request_upload = groupdocs_conversion_cloud.UploadFileRequest(remote_name, filename)
    response_upload = file_api.upload_file(request_upload)

    # Convert PDF to Word document
    settings = groupdocs_conversion_cloud.ConvertSettings()
    settings.file_path = remote_name
    settings.format = strformat
    settings.output_path = output_name

    loadOptions = groupdocs_conversion_cloud.PdfLoadOptions()
    loadOptions.hide_pdf_annotations = True
    loadOptions.remove_embedded_files = False
    loadOptions.flatten_all_fields = True
    settings.load_options = loadOptions

    convertOptions = groupdocs_conversion_cloud.DocxConvertOptions()
    convertOptions.from_page = 1
    convertOptions.pages_count = 1
    settings.convert_options = convertOptions

    request = groupdocs_conversion_cloud.ConvertDocumentRequest(settings)
    response = convert_api.convert_document(request)
    print("Document converted successfully: " + str(response))
except groupdocs_conversion_cloud.ApiException as e:
    print("Exception when calling convert_document: {0}".format(e.message))
```

* And that's it. The PDF document is converted to DOCX, and the API response includes the URL of the resultant document. [Read more](https://blog.groupdocs.cloud/2019/11/06/convert-pdf-to-editable-word-document-with-python-sdk/).

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/8flv5h8uv7a2sbc0dvy3.PNG)
tilalahmad
201,881
Day 5 of⚡️ #30DaysOfWebPerf ⚡️: Your laptop is a filthy liar
Sia Karamalegos @thegree...
3,017
2019-11-07T14:32:24
https://sia.codes/posts/30-days-web-perf-5/
webperf, devtools, webpagetest, webdev
{% twitter 1192448463717437440 %} {% twitter 1192448470419943429 %} {% twitter 1192448472202518531 %} {% twitter 1192448473196613633 %} {% twitter 1192448474379423744 %}
thegreengreek
201,922
Don't throw away your old MBP, upgrade it!
I have a Mid-2010 MBP that my kids use for homework, youtube and "stuff". Back then Apple allowed yo...
0
2019-11-07T16:00:39
https://dev.to/hminaya/don-t-throw-away-your-old-mbp-upgrade-it-1ld0
hardware, mac, beginners, upgrade
I have a Mid-2010 MBP that my kids use for homework, youtube and "stuff". Back then Apple allowed you to swap out some of the internal components (HDD, RAM, Battery) and perform upgrades. If you still have one of these MBPs, it's still worth it to do some small upgrades and get some more life out of it. So far I've done the following 👇

🧠 Upgraded RAM from 4GB to 16GB!
💾 Swapped out the 250GB HDD for a 512GB Samsung EVO SSD
🔋 Replaced the original battery

The difference in performance is huge; an SSD and 16GB of RAM are worth it! Best of all, it's very simple to do: just open up the bottom cover and swap the parts, just like old times!

If you need some guides to walk you through it, head over to [iFixIt](https://www.ifixit.com/Device/MacBook_Pro); they cover pretty much everything. For parts I usually compare between [MacSales](https://www.macsales.com/) and Amazon. Getting quality parts through MacSales is usually better and not that expensive.
hminaya
884,640
How I Manage My Knowledge
This post is originally from my blog. This is an overview of the tools and software I use to...
0
2021-11-01T21:16:18
https://dev.to/uzayg/how-i-manage-my-knowledge-1cgp
automation, python, productivity, git
This post is originally from my [blog](https://www.uzpg.me/general/2021/07/20/my-knowledge-process.html). This is an overview of the tools and software I use to maintain an index of my knowledge and life in an efficient and shareable way.

# Tech

The main knowledge base program I use to save and gather information is [Archivy](https://archivy.github.io), an open source project I created that supports hierarchical and bidirectional notes, local bookmarking (it downloads the webpages you want) and is highly extensible.

I run a local instance of this program on my computer, through which I edit / manage my knowledge, simply opening a new browser tab or vim when I have content to write. All my data is stored as markdown files in a local git repository that I push to a private GitHub repo for backup / access on my phone.

I wrote a [plugin](https://github.com/archivy/archivy-static-site-generator) to turn Archivy knowledge bases into static HTML websites so that users can share their knowledge bases online. Mine is hosted at [knowledge.uzpg.me](https://knowledge.uzpg.me). My git setup makes it so that whenever I push my changes and mark new files as publicly visible, this website is updated.

This setup allows for a few advantages:

- **extensibility** - the software I use is very flexible, so it's very easy for me to script / set up infrastructure around my knowledge base. This is possible using tools like Archivy [plugins](https://github.com/archivy/awesome-archivy) and [APIs](https://archivy.github.io/reference/architecture/).
- **ease of sharing** - this extensibility allows many ways for me to share my knowledge base, and also to distribute the extensions / scripts I use to organize it. My knowledge base can then also have value for other people, and I can directly send my notes, on top of having it act as a personal knowledge repository.
- **ease of access** - I can access my knowledge whenever I want, on any device, through GitHub / my public static site.
# Content

The actual content I save into my knowledge base can be divided into multiple types.

## Notes

Notes are a very important part of my knowledge base, as they are useful for retention and helpful for going back over things I've learned. My knowledge base is constantly compounding with new information that I can link together.

Whenever I read a non-fiction book I always try to highlight and annotate it. Then, once I'm done, I come back to it and try to synthesize its main ideas. Although this process is long, it helps me really make sure I understand all the core ideas of the work and have a way to review its essential message. Examples: [The Selfish Gene](https://knowledge.uzpg.me/dataobj/1214/) or [The Theoretical Minimum](https://knowledge.uzpg.me/dataobj/1913/).

I also keep notes on courses or talks that I have attended, often related to STEM. For these I use embedded LaTeX and screenshots of course material. [Example](https://knowledge.uzpg.me/dataobj/1923/)

## Miscellaneous lists

This part of my knowledge base acts as somewhat of a personal record, or journal, of the things I've done and the content I've consumed. Indeed, I find it useful to compile collections of miscellaneous content I appreciated. It helps me keep track of what I've done and when I did it. This is a form of **bookmarking** generalized beyond just links. For example, the lists of [the books I've read](https://knowledge.uzpg.me/dirs/books/), [words I like](https://knowledge.uzpg.me/dataobj/153/), [quotes](https://knowledge.uzpg.me/dataobj/1611/), [poetry](https://knowledge.uzpg.me/dataobj/1925/) or [articles I saved](https://knowledge.uzpg.me/dataobj/1849/). I gradually add to these whenever I find something relevant. I also jot down lists of events or activities I participated in and would like to keep a digital reference of, in the form of a journal.
I'm very fond of cyber-security, for example, and when I do a cyber-security competition (CTF) I enjoy saving my opinion of the event and [its challenges](https://knowledge.uzpg.me/dataobj/318).

## Web Content

All of these different types of content benefit from links to articles as references. One of the core features of Archivy is its ability to download / store web content locally. To this effect, I often use Archivy's functionality to download relevant articles or webpages that I then link inside my notes. This also ensures the content survives [link rot](https://en.wikipedia.org/wiki/Link_rot). I can also use existing scripts to quickly download content from my online accounts, like [my Pocket account](https://github.com/archivy/archivy-pocket) or [my Hacker News posts](https://github.com/archivy/archivy-hn).

## General Documents

I also keep many standalone documents that I'd like to have backed up in my knowledge base: for example, school material that I can then share with classmates, or the [poetry I write infrequently](https://knowledge.uzpg.me/dirs/poetry/).

# Conclusion

This process allows me to quickly search and navigate all the things I've learned / done / appreciated. I can explore and handle this content manually or programmatically through scripting. This way of interfacing helps me access my knowledge base efficiently and share it without any hassle.

I'm very satisfied with my setup but plan on adding more features to the software I use, including a graph view of links similar to Obsidian's, and better tagging. I also think there are many interesting plugin ideas I could develop to script with my content, like a script that uses Natural Language Processing to generate spaced repetition quizzes on your knowledge base. I plan on implementing ML integrations to have automated suggestions of tags and links between notes, something I'm really excited about too!
uzayg
202,140
What is cloud computing, benefits and services?
Cloud computing provides a simple way to access servers, storage, databases and a broad set of applic...
0
2019-11-08T03:28:36
https://dev.to/siddharthr0318/what-is-cloud-computing-benefits-and-services-22ao
aws, linux, cloudcomputing
Cloud computing provides a simple way to access servers, storage, databases and a broad set of application services over the Internet.

Benefits of cloud computing:

- Flexibility
- Pay per service
- Security
- Environmentally friendly
- Disaster recovery

Read more: https://realprogrammer.in/what-is-cloud-computing-benefits-and-services/
siddharthr0318
202,166
What is Big-O Notation? Understand Time and Space Complexity in JavaScript.
As we know, there may be more than one solution to any problem. But it is hard to define what is the...
0
2020-01-10T08:43:05
https://dev.to/chandra/what-is-big-o-notation-understand-time-and-space-complexity-in-javascript-4684
javascript, productivity, bigonotation, algorithms
As we know, there may be more than one solution to any problem, but it is hard to define what the best approach to solving a programming problem is. Writing an algorithm that solves a definite problem gets more difficult when we need to handle a large amount of data. How we write each and every piece of syntax in our code matters.

There are two main complexities that can help us choose the best way of writing an efficient algorithm:

### 1. Time Complexity - The time taken to run the algorithm
### 2. Space Complexity - The total space or memory taken by the system

When we write an algorithm, we give our machine instructions to perform some tasks, and completing every task takes some time. Yes, it is very little, but it is still time. So here the question arises: does time really matter? Let's take an example: suppose you try to find something on Google and it takes about 2 minutes to get the result. Generally that never happens, but if it did, what do you think would be happening in the back-end? Developers at Google understand time complexity, and they try to write smart algorithms so that execution takes the least time possible and results come back as fast as they can. So here is the challenge: how can we define time complexity?

## What is Time Complexity?

It quantifies the amount of time taken by an algorithm. We can understand the difference in time complexity with an example.

*Suppose you need to create a function that takes a number and returns the sum of all numbers from 1 up to that number. E.g. addUpTo(10) should return the sum of the numbers 1 to 10, i.e. 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10.*

We can write it this way:

```javascript
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

addUpTo(5); // it will take less time
addUpTo(1000); // it will take more time
```

Now you can understand why the same function takes different amounts of time for different inputs.
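As a side note (this constant-time version is our sketch, not from the original article), the same sum can also be computed without any loop, using the arithmetic-series formula n * (n + 1) / 2:

```javascript
// O(1): no loop at all. The closed-form formula n * (n + 1) / 2
// gives the same answer with a fixed amount of work,
// no matter how large n is.
function addUpToFast(n) {
  return (n * (n + 1)) / 2;
}

addUpToFast(5); // 15, same as the loop version
addUpToFast(1000); // 500500, still just one multiplication and one division
```

The loop-based version is still the one worth analyzing here, since its running time visibly depends on the input size.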
This happens because the loop inside the function runs according to the size of the input. If the parameter passed in is 5, the loop runs five times, but if the input is 1,000 or 10,000 the loop runs that many times. This makes some sense now. But there is a problem: different machines record different timings, as the processor in my machine is different from yours, and the same goes for multiple users.

## So, how can we measure this time complexity?

Here, Big O Notation helps us solve this problem. According to Wikipedia,

*Big O Notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. The letter O is used because the growth rate of a function is also referred to as the order of the function.*

According to Big O Notation, we can express time complexities like this:

1. If the complexity grows linearly with the input, it's O(n). _'n' here is the number of operations that the algorithm has to perform._
2. If the complexity stays constant regardless of the input, the Big O Notation is O(1).
3. If the complexity grows quadratically with the input, the Big O Notation is O(n^2). _You can pronounce it as O of n squared._
4. If the complexity grows with the input as the inverse of exponentiation, the Big O Notation is O(log n).

![log algorithm](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rmumlwqacl6jdputjnk.png)

We can simplify these expressions as shown below. Basically, while calculating the Big O Notation we ignore the lower terms and constant factors and focus on the highest factor that increases the running time. So:

1. instead of O(2n), prefer O(n);
2. instead of O(5n^2), prefer O(n^2);
3. instead of O(55 log n), prefer O(log n);
4. instead of O(12n log n), prefer O(n log n).

![Log Image](https://thepracticaldev.s3.amazonaws.com/i/a3e1m9j7v9ipd8dmxzim.png)

For better understanding, have a look at some algorithms we use daily that have O(n), O(n^2), and O(log n) complexities.
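A classic example of an O(log n) algorithm is binary search, which halves the remaining range on every comparison. Here is a minimal sketch of ours (not from the original article):

```javascript
// O(log n): each comparison discards half of the remaining range,
// so an array of n elements needs at most about log2(n) + 1 comparisons.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid; // found: return its index
    if (sorted[mid] < target) low = mid + 1; // target is in the right half
    else high = mid - 1; // target is in the left half
  }
  return -1; // not found
}

binarySearch([1, 3, 5, 7, 9], 7); // 3 (index of 7)
binarySearch([1, 3, 5, 7, 9], 4); // -1 (absent)
```

Doubling the array length adds only one extra comparison in the worst case, which is exactly the O(log n) growth described above.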
On Quora, Mark Gitters said:

> O(n): buying items from a grocery list by proceeding down the list one item at a time, where "n" is the length of the list
> O(n): buying items from a grocery list by walking down every aisle (now "n" is the length of the store), if we assume list-checking time is trivial compared to walking time
> O(n): adding two numbers in decimal representation, where n is the number of digits in the number
> O(n^2): trying to find two puzzle pieces that fit together by trying all pairs of pieces exhaustively
> O(n^2): shaking hands with everybody in the room; but this is parallelized, so each person only does O(n) work
> O(n^2): multiplying two numbers using the grade-school multiplication algorithm, where n is the number of digits
> O(log n): work done by each participant in a phone tree that reaches n people. Total work is obviously O(n), though.
> O(log n): finding where you left off in a book that your bookmark fell out of, by successively narrowing down the range

and Arav said:

> If you meant algorithms that we use in our day to day lives when we aren't programming:
> O(log n): Looking for a page in a book/word in a dictionary.
> O(n): Looking for and deleting the spam emails (newsletters, promos) in unread emails.
> O(n^2): Arranging icons on the desktop in an order of preference (insertion or selection sort depending on the person).

I hope you are now familiar with the complexities. I am not covering the whole topic in this article; I will write another in the future. If you have any questions or suggestions, please write a comment or feel free to contact me. Thanks for giving your valuable time to reading this article.
chandra
202,205
Symfony on a lambda: first deployment
Easily deploy a Symfony application on an AWS Lambda
3,140
2019-11-08T07:27:54
https://julesmatsounga.com/en/article/symfony-lambda-chapter-1
php, symfony, lambda, serverless
---
title: "Symfony on a lambda: first deployment"
published: true
description: Easily deploy a Symfony application on an AWS Lambda
tags: PHP, Symfony, Lambda, Serverless
series: Symfony on a lambda
canonical_url: https://julesmatsounga.com/en/article/symfony-lambda-chapter-1
---

# Symfony on a lambda: first deployment

***English isn't my first language, help me improve my posts by pointing out my mistakes***

We can run serverless code on many platforms, and the same goes for cloud functions. As Bref only supports AWS, we will focus on this platform. As mentioned, we will use [Bref](https://bref.sh/). It helps run PHP on an AWS Lambda. We will use it with Symfony, a popular PHP framework.

If you are lost, you can find the project on Github: [link](https://github.com/hyoa/symfony-on-lambda). Each branch will match a chapter.

### Requirements

* Have the requirements to run Symfony on your computer (PHP and some extensions)
* Have an AWS account (you can create one here: [registration](https://portal.aws.amazon.com/billing/signup?nc2=h_ct&src=default&redirect_url=https%3A%2F%2Faws.amazon.com%2Fregistration-confirmation#/start)). We will stay below the free tier limit, so no worries for your wallet.
* Create an access key:
    * Create a new user [here](https://console.aws.amazon.com/iam/home?#/users$new?step=details)
    * Add a user name
    * Enable `Programmatic access`
    * Click **Attach existing policies directly**, search for **AdministratorAccess** and select it. Warning: it is recommended to only grant the rights that you really need, but to keep this presentation simple, we will use a full access key.
    * Finish creating the user
    * Take note of the keys generated, we will need them later
* Install serverless: `npm install -g serverless`
* Create the serverless configuration: `serverless config credentials --provider aws --key <key> --secret <secret>` where key and secret are from the keys generated earlier

### Symfony

We now have everything we need to start our development. Let's go!
#### Installation of Symfony

In your terminal, type the following command at the root of your projects folder: `composer create-project symfony/website-skeleton [my_project_name]` (remember to replace [my_project_name] with yours).

Once the installation is done, go into the folder of your Symfony application. We can now install Bref, which is required to deploy PHP on a lambda. To do so, once in the project, type `composer require bref/bref`.

### Creation of the serverless.yml

This file will define the architecture that we will deploy on AWS, called a **CloudFormation** stack. It's in this file that we will define the services and resources we want on AWS, and their configurations. To create this file, Bref gives us a command that helps init a project. Type `vendor/bin/bref init` and select `HTTP application`. The command creates our **serverless.yml** file and an **index.php** file at the root of our project. You can delete **index.php**, we won't use it. The **serverless.yml** should look like this:

```yaml
service: app # Name of your application

provider:
    name: aws # Provider used by Serverless
    region: us-east-1 # Region where you will deploy your CloudFormation
    runtime: provided

plugins:
    - ./vendor/bref/bref

functions:
    api: # Name of the function
        handler: index.php
        description: ''
        timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
        layers:
            - ${bref:layer.php-73-fpm}
        events:
            - http: 'ANY /'
            - http: 'ANY /{proxy+}'
```

The most interesting part is **functions**, where we will be able to define the functions that we need (they can be commands, APIs, crons, etc...).

* handler: the PHP file used by the lambda
* description: well, a description?
* timeout: maximum execution time of the function
* layers: layers are the environments the lambda runs in; you can have multiple layers to add other extensions, dependencies, etc.
Here, we use the layer of Bref, which runs PHP, alongside the layer of Node ([more on layers](https://docs.aws.amazon.com/fr_fr/lambda/latest/dg/configuration-layers.html))

* events: events that will trigger the function. On this function, the HTTP event will trigger the lambda, but there are a lot of other events triggered by AWS that we can listen to. We will see another one later.

Now that we have a better understanding of our file, let's change it to run Symfony.

```yaml
service: cloud-project

provider:
    name: aws
    region: eu-west-2
    runtime: provided
    stage: dev
    environment:
        APP_ENV: prod

plugins:
    - ./vendor/bref/bref

functions:
    website:
        handler: public/index.php
        timeout: 28
        layers:
            - ${bref:layer.php-73-fpm}
        events:
            - http: 'ANY /'
            - http: 'ANY /{proxy+}'
    console:
        handler: bin/console
        timeout: 120
        layers:
            - ${bref:layer.php-73}
            - ${bref:layer.console}
```

Not too many changes. I changed the name of the application to `cloud-project` (you can put whatever you want). I also added a stage that defines the environment to publish. In **environment** we have the environment variables used by our functions. I also created 2 functions:

* website, which is the same as the api function we had before. We only changed the handler, which now targets the **index.php** of Symfony.
* console, used to run Symfony commands

#### Symfony configuration

We will have to change some files in Symfony to make it work on a lambda. The filesystem is **readonly** except for **/tmp**, so we have to change where cache and logs are stored. In **src/Kernel.php**, we need to add 2 methods:

```php
// src/Kernel.php
...
public function getLogDir(): string
{
    if (getenv('LAMBDA_TASK_ROOT') !== false) {
        return '/tmp/log/';
    }

    return parent::getLogDir();
}

public function getCacheDir(): string
{
    if (getenv('LAMBDA_TASK_ROOT') !== false) {
        return '/tmp/cache/'.$this->environment;
    }

    return parent::getCacheDir();
}
```

We also need to change **index.php**.
Once deployed, the lambda that uses API Gateway gets a generated domain that ends with the deployed stage (e.g. https://lambda/dev). This can create some issues with PHP frameworks. You can easily solve this by creating a custom domain that gets rid of this suffix ([more information](https://bref.sh/docs/environment/custom-domains.html)). But to keep this presentation accessible, we won't do it. We have to change some server variables so the Symfony routing won't break.

```php
// public/index.php
$_SERVER['SCRIPT_NAME'] = '/dev/index.php';

if (strpos($_SERVER['REQUEST_URI'], '/dev') === false) {
    $_SERVER['REQUEST_URI'] = '/dev'.$_SERVER['REQUEST_URI'];
}
```

You need to put these lines before `new Kernel(...)`. `/dev` is the stage defined in **serverless.yml**. If you create your own application, you should really create your own domain. It can take a bit of time (DNS propagation) but otherwise it's quite simple.

We don't need anything else. Let's code a bit.

#### Building a homepage

We will create a simple landing page so we have something to deploy. But we will keep it simple:

Create a file **HomeController** in **src/Controller**:

```php
<?php

namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Annotation\Route;

class HomeController extends AbstractController
{
    /**
     * @Route("/", name="home")
     */
    public function homeAction(): Response
    {
        return $this->render('home/index.html.twig');
    }
}
```

Create a file **index.html.twig** in **templates/home**:

```twig
{% extends 'base.html.twig' %}

{% block body %}
    <h1>Symfony and lambdas</h1>
{% endblock %}
```

Yes, it's really simple, but we don't really need anything else. To see if it's working, type `php bin/console server:run` and go to the URL displayed. You should see your page. Now, let's deploy it!

#### Deploy in the cloud

It might disappoint you, but you just need to run `serverless deploy`.
The command will package your project and create the required resources on AWS. After a few minutes, the endpoints of your functions will be displayed. Use the first one, and you should see your site deployed! You can see your **CloudFormation** stack on AWS [here](https://eu-west-2.console.aws.amazon.com/cloudformation/home?region=eu-west-2#/stacks?filteringText=&filteringStatus=active&viewNested=true&hideStacks=false). You can also see the resources created by selecting **Model** then **Display in Designer**.

***

We now have a Symfony application running on AWS. Of course we are only at the beginning, but we will discover more in the next chapters. In the next chapter we will talk about assets. Because without assets, there is no CSS or JS!
hyoa
202,291
What's the most inefficient thing you do?
We all have weird things we do, long ways around short problems. We should probably get around to...
0
2019-11-08T11:03:54
https://dev.to/moopet/what-s-the-most-inefficient-thing-you-do-4bmh
discuss, workflow, watercooler
---
title: What's the most inefficient thing you do?
published: true
tags: discuss, workflow, watercooler
cover_image: https://thepracticaldev.s3.amazonaws.com/i/j1egpfb1l5pamnsfseec.jpg
---

We all have weird things we do, long ways around short problems. We should probably get around to sorting them out, but we've gotten used to That Way Of Doing Them. Sometimes we've created monster workflows that Rube Goldberg or Heath Robinson would eye suspiciously before backing away from. Sometimes we hold onto them because they're comfortable, sometimes because fixing them seems too difficult, sometimes just because we don't know any better until it's way too late to make any difference.

What do you do that fits this description? Well, what are you vaguely embarrassed to admit to, anyway?

--

Cover image by [Valentin Petkov](https://unsplash.com/@thefreak1337) on Unsplash.
moopet
202,342
Setting Up a Python Remote Interpreter Using Docker
Why a Remote Interpreter instead of a Virtual Environment? A well-known pattern in Python...
0
2019-11-08T13:28:19
https://dev.to/alvarocavalcanti/setting-up-a-python-remote-interpreter-using-docker-1i24
python, pycharm, vscode, tdd
# Why a Remote Interpreter instead of a Virtual Environment?

A well-known pattern in Python (and many other languages) is to rely on virtual environment tools (`virtualenv`, `pyenv`, etc.) to avoid the [SnowflakeServer](https://martinfowler.com/bliki/SnowflakeServer.html) anti-pattern. These tools create an isolated environment to install all dependencies for any given project.

But as of today there's an improvement to that pattern, which is to use Docker containers instead. Such containers provide much more flexibility than virtual environments, because they are not limited to a single platform/language; instead they offer a fully-fledged isolated system. Not to mention the `docker-compose` tool, where one can have several containers interacting with each other.

This article will guide the reader on how to set up the two most used Python IDEs to use Docker containers as remote interpreters.

## Pre-requisites

A running Docker container with:

- A volume mounted to your source code (henceforth, `/code`)
- SSH set up
- SSH enabled for the `root:password` creds and the root user allowed to log in

Refer to [this gist](https://gist.github.com/alvarocavalcanti/24a6f1470d1db724a398ea6204384f00) for the necessary Docker files.

## PyCharm Professional Edition

1. Preferences (CMD + ,) > Project Settings > Project Interpreter
1. Click on the gear icon next to the "Project Interpreter" dropdown > Add
1. Select "SSH Interpreter" > Host: localhost, Port: 9922, Username: root > Password: password > Interpreter: /usr/local/bin/python, Sync folders: Project Root -> /code, Disable "Automatically upload..."
1. Confirm the changes and wait for PyCharm to update the indexes

## Visual Studio Code

1. Install the [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python) extension
1. Install the [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension
1. Open the Command Palette and type `Remote-Containers`, then select `Attach to Running Container...` and select the running Docker container
1. VS Code will restart and reload
1. On the `Explorer` sidebar, click the `open a folder` button and then enter `/code` (this will be loaded from the remote container)
1. On the `Extensions` sidebar, select the `Python` extension and install it on the container
1. When prompted for which interpreter to use, select `/usr/local/bin/python`
1. Open the Command Palette and type `Python: Configure Tests`, then select the `unittest` framework

## Expected Results

1. Code completion works
1. Code navigation works
1. Organize imports works
1. Import suggestions/discovery works
1. (VS Code) Tests (either classes or methods) will have a new line above their definitions, containing two actions: `Run Test | Debug Test`, and will be executed upon clicking on them
1. (PyCharm) Tests (either classes or methods) can be executed by placing the cursor on them and then using `Ctrl+Shift+R`

## Bonus: TDD Enablement

One of the key aspects of Test-Driven Development is getting short feedback on each iteration (write a failing test, fix the test, refactor). A lot of times a project's tooling works against this principle: it's fairly common for a project to have a way of executing its test suite, but it is also common that this task runs the entire suite, not just a single test. But if your IDE of choice is able to execute a single test in a matter of seconds, you will feel way more comfortable giving TDD a try.
alvarocavalcanti
202,369
How to build applications with Vue’s composition API
Written by Raphael Ugwu✏️ Vue’s flexible and lightweight nature makes it really awesome for...
0
2019-11-10T16:53:29
https://blog.logrocket.com/how-to-build-applications-with-vues-composition-api/
vue, javascript, tutorial, webdev
---
title: How to build applications with Vue's composition API
published: true
date: 2019-11-08 14:00:31 UTC
tags: vue,javascript,tutorial,webdev
canonical_url: https://blog.logrocket.com/how-to-build-applications-with-vues-composition-api/
cover_image: https://thepracticaldev.s3.amazonaws.com/i/qjup3j93vxj4foj548zh.png
---

**Written by [Raphael Ugwu](https://blog.logrocket.com/author/raphaelugwu/)**✏️

Vue's flexible and lightweight nature makes it really awesome for developers who quickly want to scaffold small and medium scale applications. However, Vue's current API has certain limitations when it comes to maintaining growing applications. This is because the API organizes code by [component options](https://012.vuejs.org/api/options.html) (Vue's got a lot of them) instead of logical concerns. As more component options are added and the codebase gets larger, developers can find themselves interacting with components created by other team members, and that's where things start to get really confusing; it then becomes an issue for teams to improve or change components.

Fortunately, Vue addressed this in its latest release by rolling out the [Composition API](https://vue-composition-api-rfc.netlify.com/#summary). From what I understand, it's a function-based API that is meant to facilitate the composition of components and their maintenance as they get larger. In this blog post, we'll take a look at how the composition API improves the way we write code and how we can use it to build highly performant web apps.

[![LogRocket Free Trial Banner](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/f760c-1gpjapknnuyhu8esa3z0jga.png?resize=1200%2C280&ssl=1)](https://logrocket.com/signup/)

## Improving code maintainability and component reuse patterns

[Vue 2](https://vuejs.org/v2/guide/) had two major drawbacks.
The first was **difficulty maintaining large components.** Let’s say we have a component called `App.vue` in an application whose job is to handle payment for a variety of products called from an API. Our initial steps would be to list the appropriate data and functions to handle our component: ```jsx // App.vue <script > import PayButton from "./components/PayButton.vue"; const productKey = "778899"; const API = `https://awesomeproductresources.com/?productkey=${productKey}`; // not real ;) export default { name: "app", components: { PayButton }, mounted() { fetch(API) .then(response => { this.productResponse = response.data.listings; }) .catch(error => { console.log(error); }); }, data: function() { return { discount: discount, productResponse: [], email: "ugwuraphael@gmail.com", custom: { title: "Retail Shop", logo: "We are an awesome store!" } }; }, computed: { paymentReference() { let text = ""; let possible = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; for (let i = 0; i < 10; i++) text += possible.charAt(Math.floor(Math.random() * possible.length)); return text; } } }; </script> ``` All `App.vue` does is retrieve data from an API and pass it into the `data` property while handling an imported component `payButton`. It doesn’t seem like much and we’ve used at least three component options – `component`, `computed` and `data` and the [`mounted()`](https://vuejs.org/v2/api/#mounted) lifecycle Hook. In the future, we’ll probably want to add more features to this component. For example, some functionality that tells us if payment for a product was successful or not. To do that we’ll have to use the `method` component option. Adding the `method` component option only makes the component get larger, more verbose, and less maintainable. Imagine that we had several components of an app written this way. It is definitely not the ideal kind of framework a developer would want to use. 
Vue 3’s fix for this is a `setup()` method that enables us to use the composition syntax. Every piece of logic is defined as a composition function outside this method. Using the composition syntax, we would employ a separation of concerns approach and first isolate the logic that calls data from our API: ```jsx // productApi.js <script> import { reactive, watch } from '@vue/composition-api'; const productKey = "778899"; export const useProductApi = () => { const state = reactive({ productResponse: [], email: "ugwuraphael@gmail.com", custom: { title: "Retail Shop", logo: "We are an awesome store!" } }); watch(() => { const API = `https://awesomeproductresources.com/?productkey=${productKey}`; fetch(API) .then(response => response.json()) .then(jsonResponse => { state.productResponse = jsonResponse.data.listings; }) .catch(error => { console.log(error); }); }); return state; }; </script> ``` Then when we need to call the API in `App.vue`, we’ll import `useProductApi` and define the rest of the component like this: ```jsx // App.vue <script> import { useProductApi } from './ProductApi'; import PayButton from "./components/PayButton.vue"; export default { name: 'app', components: { PayButton }, setup() { const state = useProductApi(); return { state } } } function paymentReference() { let text = ""; let possible = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; for (let i = 0; i < 10; i++) text += possible.charAt(Math.floor(Math.random() * possible.length)); return text; } </script> ``` It’s important to note that this doesn’t mean our app will have fewer components, we’re still going to have the same number of components – just that they’ll use fewer component options and be a bit more organized. [Vue 2](https://vuejs.org/v2/guide/)‘s second drawback was an inefficient component reuse pattern. 
The way to reuse functionality or logic in a Vue component is to put it in a [mixin](https://vuejs.org/v2/guide/mixins.html) or [scoped slot](https://vuejs.org/v2/guide/components-slots.html). Let’s say we still have to feed our app certain data that would be reused, to do that let’s create a mixin and insert this data: ```jsx <script> const storeOwnerMixin = { data() { return { name: 'RC Ugwu', subscription: 'Premium' } } } export default { mixins: [storeOwnerMixin] } </script> ``` This is great for small scale applications. But like the first drawback, the entire project begins to get larger and we need to create more mixins to handle other kinds of data. We could run into a couple of issues such as name conflicts and implicit property additions. The composition API aims to solve all of this by letting us define whatever function we need in a separate JavaScript file: ```jsx // storeOwner.js export default function storeOwner(name, subscription) { var object = { name: name, subscription: subscription }; return object; } ``` and then import it wherever we need it to be used like this: ```jsx <script> import storeOwner from './storeOwner.js' export default { name: 'app', setup() { const storeOwnerData = storeOwner('RC Ugwu', 'Premium'); return { storeOwnerData } } } </script> ``` Clearly, we can see the edge this has over mixins. Aside from using less code, it also lets you express yourself more in plain JavaScript and your codebase is much more flexible as functions can be reused more efficiently. ## Vue Composition API compared to React Hooks Though Vue’s Composition API and React Hooks are both sets of functions used to handle state and reuse logic in components – they work in different ways. Vue’s `setup` function runs only once while creating a component while React Hooks can run multiple times during render. 
Also for handling state, React provides just one Hook – `useState`: ```jsx import React, { useState } from "react"; const [name, setName] = useState("Mary"); const [subscription, setSubscription] = useState("Premium"); console.log(`Hi ${name}, you are currently on our ${subscription} plan.`); ``` The composition API is quite different, it provides two functions for handling state – `ref` and `reactive` . `ref` returns an object whose inner value can be accessed by its `value` property: ```jsx const name = ref('RC Ugwu'); const subscription = ref('Premium'); watch(() => { console.log(`Hi ${name}, you are currently on our ${subscription} plan.`); }); ``` `reactive` is a bit different, it takes an object as its input and returns a reactive proxy of it: ```jsx const state = reactive({ name: 'RC Ugwu', subscription: 'Premium', }); watch(() => { console.log(`Hi ${state.name}, you are currently on our ${state.subscription} plan.`); }); ``` Vue’s Composition API is similar to React Hooks in a lot of ways although the latter obviously has more popularity and support in the community for now, it will be interesting to see if composition functions can catch up with Hooks. You may want to check out this [detailed post](https://dev.to/voluntadpear/comparing-react-hooks-with-vue-composition-api-4b32) by Guillermo Peralta Scura to find out more about how they both compare to each other. ## Building applications with the Composition API To see how the composition API can further be used, let’s create an image gallery out of pure composition functions. For data, we’ll use [Unsplash’s API](https://unsplash.com/developers). You will want to sign up and get an API key to follow along with this. 
Our first step is to create a project folder using Vue’s CLI: ```bash # install Vue's CLI npm install -g @vue/cli # create a project folder vue create vue-image-app # navigate to the newly created project folder cd vue-image-app # install axios for the purpose of handling the API call npm install axios # run the app in a development environment npm run serve ``` When our installation is complete, we should have a project folder similar to the one below: ![vue project files](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2019/11/vueimage.png?resize=332%2C448&ssl=1) Vue’s CLI still uses Vue 2; to use the composition API, we have to install it differently. In your terminal, navigate to your project folder’s directory and install Vue’s composition plugin: ```bash npm install @vue/composition-api ``` After installation, we’ll import it in our `main.js` file: ```jsx import Vue from 'vue' import App from './App.vue' import VueCompositionApi from '@vue/composition-api'; Vue.use(VueCompositionApi); Vue.config.productionTip = false new Vue({ render: h => h(App), }).$mount('#app') ``` It’s important to note that for now, the composition API is just a different option for writing components and not an overhaul. We can still write our components using component options, mixins, and scoped slots just as we’ve always done. ## Building our components For this app, we’ll have three components: - `App.vue` : The parent component — it handles and collects data from both children components - `Photo.vue` and `PhotoApi.js` - `PhotoApi.js`: A functional component created solely for handling the API call - `Photo.vue` : The child component, it handles each photo retrieved from the API call First, let’s get data from the Unsplash API.
In your project’s `src` folder, create a folder `functions` and in it, create a `PhotoApi.js` file: ```jsx import { reactive } from "@vue/composition-api"; import axios from "axios"; export const usePhotoApi = () => { const state = reactive({ info: null, loading: true, errored: false }); const PHOTO_API_URL = "https://api.unsplash.com/photos/?client_id=d0ebc52e406b1ac89f78ab30e1f6112338d663ef349501d65fb2f380e4987e9e"; axios .get(PHOTO_API_URL) .then(response => { state.info = response.data; }) .catch(error => { console.log(error); state.errored = true; }) .finally(() => (state.loading = false)); return state; }; ``` In the code sample above, a new function was introduced from Vue’s composition API – `reactive`. `reactive` is the long term replacement of `Vue.observable()` , it wraps an object and returns the directly accessible properties of that object. Let’s go ahead and create the component that displays each photo. In your `src/components` folder, create a file and name it `Photo.vue`. In this file, input the code sample below: ```jsx <template> <div class="photo"> <h2>{{ photo.user.name }}</h2> <div> <img width="200" :alt="altText" :src="photo.urls.regular" /> </div> <p>{{ photo.user.bio }}</p> </div> </template> <script> import { computed } from '@vue/composition-api'; export default { name: "Photo", props: ['photo'], setup({ photo }) { const altText = computed(() => `Hi, my name is ${photo.user.name}`); return { altText }; } }; </script> <style scoped> p { color:#EDF2F4; } </style> ``` In the code sample above, the `Photo` component gets the photo of a user to be displayed and displays it alongside their bio. For our `alt` field, we use the `setup()` and `computed` functions to wrap and return the variable `photo.user.name`. Finally, let’s create our `App.vue` component to handle both children components. 
In your project’s folder, navigate to `App.vue` and replace the code there with this: ```jsx <template> <div class="app"> <div class="photos"> <Photo v-for="photo in state.info" :photo="photo" :key="photo[0]" /> </div> </div> </template> <script> import Photo from './components/Photo.vue'; import { usePhotoApi } from './functions/photo-api'; export default { name: 'app', components: { Photo }, setup() { const state = usePhotoApi(); return { state }; } } </script> ``` There, all `App.vue` does is use the `Photo` component to display each photo and set the state of the app to the state defined in `PhotoApi.js`. ## Conclusion It’s going to be interesting to see how the Composition API is received. One of its key advantages I’ve observed so far is its ability to separate concerns for each component – every component has just one function to carry out. This makes stuff very organized. Here are some of the functions we used in the article demo: - `setup` – this controls the logic of the component. It receives `props` and context as arguments - `ref` – it returns a reactive variable and triggers the re-render of the template on change. Its value can be changed by altering the `value` property - `reactive` – this returns a reactive object. It re-renders the template on reactive variable change. Unlike `ref`, its value can be changed without changing the `value` property Have you found out other amazing ways to implement the Composition API? Do share them in the comments section below. You can check out the full implementation of the demo on [CodeSandbox](https://codesandbox.io/s/vue-template-x9bqm?fontsize=14). * * * **Editor's note:** Seeing something wrong with this post? You can find the correct version [here](https://blog.logrocket.com/how-to-build-applications-with-vues-composition-api/). 
## Plug: [LogRocket](https://logrocket.com/signup/), a DVR for web apps   ![LogRocket Dashboard Free Trial Banner](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/1d0cd-1s_rmyo6nbrasp-xtvbaxfg.png?resize=1200%2C677&ssl=1)   [LogRocket](https://logrocket.com/signup/) is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong. It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store.   In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single-page apps.   [Try it for free](https://logrocket.com/signup/). * * * The post [How to build applications with Vue’s composition API](https://blog.logrocket.com/how-to-build-applications-with-vues-composition-api/) appeared first on [LogRocket Blog](https://blog.logrocket.com).
bnevilleoneill
202,381
9 Ways to Increase Your Financial Flow
The simplest way of calculating your cash flow is to deduct your expenses from your income. The formu...
0
2019-11-08T14:38:09
https://dev.to/anna_j_stinson/9-ways-to-increase-your-financial-flow-3age
The simplest way of calculating your cash flow is to deduct your expenses from your income. The formula itself is rather simple and tells you that you can increase your disposable cash by either increasing your income or reducing your expenses. Both are valid ways, but often the best results are achieved by combining both approaches, which is often the only way to avoid living paycheck to paycheck or, even worse, going into debt. The same rules apply in business as well. ##Lease, Don’t Buy Long-term, leasing is more expensive than buying. So why do all experts advise startups to lease, then? Because of the financial flow. By leasing the equipment you need, you are actually buying it with incremental payments. This essentially creates a payment plan, allowing you to skip big purchases right off the bat when you need the cash the most. Of course, if you are flush with money, you can afford to buy everything you need and pay the full retail price at once. Then again, if you are in that situation, you wouldn’t be reading articles about increasing cash flow. ##Offer Discounts Quick payments are a great way of bolstering your cash flow, and one way of getting them is to offer discounted prices, especially on larger orders. There will be some of your customers who simply can’t afford to shell out that much cash in advance, but some of them will jump at the opportunity to save some money. If you can reduce the payment time from the usual 30 days to just 10 by offering a 2% discount for early payment, you should do it. Even at the larger discount, the exchange is still favorable, especially if you are strapped for cash. ##Check Your Expenses The least popular way of increasing your cash flow is by cutting expenses, either operational or by firing people. Unfortunately, sometimes it is the only way of moving forward. If you are forced to do it, think carefully and create a plan before committing to it. These changes will shake your company to the core and shouldn’t be taken lightly.
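The early-payment trade-off above is easy to sanity-check with a little arithmetic. Here is a minimal sketch in plain JavaScript; the 2% / 10-day / 30-day figures from the text are used as illustrative inputs, and the 365-day year is my own assumption. It annualizes the cost of the discount so it can be compared with other financing costs, such as a loan's interest rate:

```javascript
// Annualized cost of offering an early-payment discount.
// Giving up `discountRate` of the invoice gets you the cash
// (normalDays - discountDays) sooner; scaling that cost to a
// full year makes it comparable with other sources of financing.
function annualizedDiscountCost(discountRate, discountDays, normalDays) {
  const costPerPeriod = discountRate / (1 - discountRate);
  const periodsPerYear = 365 / (normalDays - discountDays);
  return costPerPeriod * periodsPerYear;
}

// 2% discount for paying in 10 days instead of 30:
const cost = annualizedDiscountCost(0.02, 10, 30);
console.log(`${(cost * 100).toFixed(1)}% per year`); // prints "37.2% per year"
```

Roughly 37% annualized is steep, which fits the framing here: this kind of discount is a tool for when you are strapped for cash, not a default policy.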
##Maintain Your Equipment Equipment maintenance isn’t something startups usually bother with, because they are usually getting brand new stuff that needs little or no intervention. As time goes by and wear and tear kicks in, all of a sudden everything starts breaking down, causing massive downtime and delays. The best way to avoid this is by using preventive maintenance. Eventually, a part or a whole machine will need replacing, but with proper maintenance, this can be delayed significantly. ##Consider Buying Used Equipment Not a very popular suggestion, but often you can find used equipment in excellent condition for a fraction of the price of a new one. This can be demanding on your time, as you have to sort through piles of junk to find one decent piece, but you can save thousands of dollars on just one purchase. Start with your area, looking for local auctions and advertisements. Every once in a while, somebody will go out of business and this is a perfect opportunity to get your hands on professional equipment for little money. Make sure to bring someone who knows how to assess the condition, if you are unsure what to look for. ##Expand Your Market In the short term, expanding will set you back significantly. You will have to expand production or hire more people to provide services, invest in a marketing campaign and a gazillion other smaller things that will need money. In the long term, this is perhaps the best way to increase your cash flow. Not only will this increase your income, but it will also open up new possibilities. ##Invest A small percentage of your monthly income should be devoted to investments. This will ensure your long-term cash flow by providing you with dividends and interest. To be a successful investor, you need to educate yourself first, and getting to know all [the important trading terminology](https://www.asktraders.com/learn-to-trade/trading-terminology/) is just the first step.
Learn the difference between stocks, bonds, ETFs, mutual funds and index funds, and see which one of these can benefit you the most. Over time, your portfolio will yield different returns and you should be ready to swap investments as the opportunity arises. ##Offer New Product/Service Hold a brainstorming session with your team to come up with a product or a service that you can offer to your customers. It may sound farfetched, but in reality, it is surprising how often good ideas can spring up when you least expect them. It doesn’t even have to be in line with your core business. Perhaps there is some space you aren’t currently using that can be rented out? ##Consider a Loan Finally, if you are left with no other option, [taking out a loan](https://www.fundera.com/blog/advantages-of-sba-loans) until you can recover may be the only solution, however unpopular it is. If you have reasonable expectations that your income will improve in the future and that you are just missing one vital part, like a piece of crucial machinery, taking out a commercial loan doesn’t have to be the end of the world. After all, take a look at the United States government, in debt for over $20 trillion. A few thousand dollars are almost insignificant in comparison. Of course, it is very important to have a clear idea of why you need the loan and the discipline to spend it on that exact purpose. People often make the mistake of diverting some newly-found funds to other things and neglecting the purpose of the loan. Not only will this get you in trouble with your lender, but it will also jeopardize the whole idea. Cash is king, and whoever says otherwise is either lying or delusional. It is no wonder companies like Apple keep hundreds of billions stashed in offshore accounts, just waiting for an opportunity to present itself. However, getting to that level demands a lot of work and the first step is creating a positive financial flow.
Only then can you think about creating an emergency fund for unexpected opportunities.
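The long-term payoff behind the investing advice above comes down to compounding. The sketch below, in plain JavaScript, shows the standard future-value formula for regular contributions; the $200/month, 6% return, and 10-year horizon are purely illustrative assumptions, since, as noted, real returns vary:

```javascript
// Future value of a fixed monthly contribution compounding at a
// constant rate: FV = c * ((1 + r)^n - 1) / r, where r is the
// monthly rate and n the number of contributions.
function futureValue(monthlyContribution, annualRate, years) {
  const r = annualRate / 12;
  const n = years * 12;
  return monthlyContribution * ((Math.pow(1 + r, n) - 1) / r);
}

// $200/month at a 6% nominal annual return for 10 years:
const nestEgg = futureValue(200, 0.06, 10);
console.log(nestEgg.toFixed(0)); // $24,000 of contributions grows to roughly $32,800
```

The gap between what you put in and what you end up with is the dividend-and-interest cash flow the section describes.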
anna_j_stinson
202,408
Dual-booting Linux Mint
Why Linux I’ve always heard that Linux is the way to go but I never tried it. I had...
0
2019-11-21T17:53:10
https://dev.to/stephencavender/dual-booting-linux-mint-3cgk
linux
--- title: Dual-booting Linux Mint published: true date: 2019-11-08 05:00:00 UTC tags: Linux canonical_url: --- ## Why Linux I’ve always heard that Linux is the way to go but I never tried it. I had Windows and it worked fine for me. I took some training at work that required Linux so I started using it inside a virtual machine. I got comfortable with it and decided it would be fun to try at home. ## Why Mint Based on this [Dev.to](https://dev.to/pluralsight/which-distribution-of-linux-should-i-use-51g7) article, it sounded like the place where a Linux newbie like myself should start. I tried a couple of versions inside VirtualBox before committing. I used [OSBoxes](https://www.osboxes.org/) to quickly get them up and running. ## Why Dual-Boot I chose to dual-boot because I didn’t want to risk losing Windows if I messed up the Mint install. Also because the Mint install made it really easy. ## How I did it ### Disclaimer! I’ll recount the steps I took and the references I used but can’t guarantee any of it for anyone else. It’s also a good idea to follow along with [Mint’s install docs](https://www.linuxmint.com/documentation.php). ### 1. Back Up Data I backed up my data because there’s always a chance it could get wiped from existence. ### 2. Download Linux Mint I grabbed the 64-bit Cinnamon version from [here](https://linuxmint.com/download.php). ### 3. Create a Bootable USB I used [Etcher](https://www.balena.io/etcher/) to flash the image onto my USB drive but any flashing software should do the trick. ![Etcher](https://www.cavender.dev/assets/images/linux-dual-boot/etcher.png) ### 4. Create Disk Space My first attempt didn’t take because I didn’t have any room. I ended up freeing up some space from my Windows partitions. ![Disk Management](https://www.cavender.dev/assets/images/linux-dual-boot/disk.png) ### 5. Update Boot Configuration I had to disable secure boot and change the boot order in the BIOS. ### 6. Install Mint I followed the on-screen instructions at this point.
Here are the important bits: - Dual booting with Windows - Create partitions - Root (I used 20GB) - Swap (I used 8GB) - Home (I used the rest of my free space) A few more on-screen instructions and I was ready to go! ### 7. Use Mint Mint is installed and ready to go. I’m on a Razer Blade Stealth and everything works out of the box except for closing the lid. I’m sure there are other things that don’t quite work that I haven’t encountered yet. When I close the lid, Mint is supposed to suspend, but when I open the lid back up I have to do a hard shutdown before my laptop will wake up and respond. Other than that I’m very happy with Mint and hope that this article helps you!
stephencavender
202,441
Spartan Breakpoints!
Just wanted to get some opinions from other UI Enthusiasts about the breakpoints they are using for their UIs
0
2019-11-08T22:57:39
https://dev.to/srsheldon/spartan-breakpoints-59a1
responsive, css, breakpoints, ux
--- title: Spartan Breakpoints! published: true description: Just wanted to get some opinions from other UI Enthusiasts about the breakpoints they are using for their UIs tags: responsive, css, breakpoints, ux --- ![res](https://thepracticaldev.s3.amazonaws.com/i/kgzm8zqnlfi3k1gy5dri.jpg) So I know this topic has probably been talked about more than enough; there is even a [really awesome article about it](https://dev.to/rstacruz/what-media-query-breakpoints-should-i-use-292c) on Dev.to, but I wanted to get some feedback on a slightly new set of breakpoints. I was hoping to make them even more generic and get some feedback and thoughts from the incredible developer community here on Dev.to. I was going to call this new set of breakpoints "the spartan breakpoint system" because the media queries are approximately every 300 pixels. I was planning on using it in a component library I am building for fun to teach myself some of the various custom element APIs and enhance my web accessibility skills.
Here's a table comparing a few different CSS Framework breakpoints: | Size | Devices | Spartan Breakpoints | [Bootstrap](https://getbootstrap.com/docs/4.3/layout/overview/#responsive-breakpoints) | [Bulma](https://bulma.io/documentation/overview/responsiveness/#breakpoints) | [Tailwind](https://tailwindcss.com/docs/breakpoints/) | [Foundation](https://foundation.zurb.com/sites/docs/v/5.5.3/media-queries.html) | [Semantic UI](https://github.com/Semantic-Org/Semantic-UI/blob/383871090cda527df916e1751279b3de79b07480/src/themes/default/globals/site.variables#L208-L216) | | ---- | ---- | --------- | -------- | --- | ---- | ---- | ---| | Extra Small (xs) | small phone | 0 - 300px | 0 - 575px | 0 - 768px| 0 - 639px | 0 - 640px| 320 - 767px | | Small (sm) | phone | 301 - 600px | 576 - 767px | 769 - 1023px | 640 - 767px | 641 - 1,024px | 768 - 991px | | Medium (md) | large phone/small tablet| 601 - 900px | 768 - 991px| 1024 - 1,215px | 768 - 1,023px| 1,025 - 1,440px | 992 - 1,199x | | Large (lg) | tablet | 901 - 1,200px | 992 - 1,200px| 1,216 - 1,407px | 1,024- 1,279px | 1,441 - 1,920px | 1,200 - 1,919px | | Extra Large (xl) | desktop/large tablet | 1,201 - 1,500px | > 1,200px | > 1,408px | > 1,280px | > 1,921px | > 1,920px | Thanks in advance everyone for your feedback!
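To make the ~300px steps concrete, here is a small sketch in plain JavaScript that maps a viewport width to its Spartan size label. The names and ranges come straight from the Spartan column of the table above; treating anything above 1,500px as still "xl" is my own assumption, since the table stops there:

```javascript
// Spartan breakpoints: one label roughly every 300 pixels.
const spartanBreakpoints = [
  { name: "xs", max: 300 },
  { name: "sm", max: 600 },
  { name: "md", max: 900 },
  { name: "lg", max: 1200 },
  { name: "xl", max: 1500 },
];

// Return the size label for a given viewport width in pixels.
function sizeFor(width) {
  const match = spartanBreakpoints.find(bp => width <= bp.max);
  return match ? match.name : "xl"; // assumption: widths past 1500px stay "xl"
}

console.log(sizeFor(375));  // "sm" — a typical phone
console.log(sizeFor(1024)); // "lg" — a typical tablet
```

In CSS these would just be `min-width` media queries at 301px, 601px, 901px, 1201px, and (optionally) 1501px.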
srsheldon
202,494
problem rendering images in react app
Hello I am fairly new to react & node. I have an app which displays image buttons amongst other t...
0
2019-11-08T18:39:36
https://dev.to/rrn518/problem-rendering-images-in-react-app-5d64
Hello, I am fairly new to React & Node. I have an app which displays image buttons amongst other things. When starting from Visual Studio Code they render perfectly, unless my associated Node listener is started first on the same port, in which case they don't render at all. 404 is returned for all. Why is that, and what's the fix, please? Cheers
rrn518
886,077
A new life transition
Its true that with any transition in life there’s upheaval, fear, and frustration, that’s the point...
0
2021-11-03T02:46:01
https://dev.to/mikeketterling/a-new-life-transition-5kd
beginners, career
It’s true that with any transition in life there’s upheaval, fear, and frustration; that’s the point of a transition. I think that sharing any life transition ultimately helps inspire at least someone out there, and therefore does something good for someone. That’s all I hope for with this and any other posts I’ll be publishing. With this being my first post, I thought it only fitting to write about this transition. As I first said, this has not been an easy experience, and if I can offer my story and maybe some advice for any future transitions, then great. Everyone’s experiences will be different in a transition into the tech space – it’s important to recognize this, and I hope that my experience resonates with someone. I come from a background in Human Resources and healthcare. Most of my work experience had to do with all the non-technical aspects of most office settings. So compared to what I had been used to for some years, the jump to learning multiple technical languages was going to be significant - but not out of reach. In knowing that I was going to be learning many new skills, I had a few strategies in mind. ##My Three Strategies *A healthy routine* I knew from working the past year or so at home that for me to complete a technical program I would have to have a solid routine to help me through the rough days and keep me on track when I’m ahead in the coursework. I also knew that a routine would help with my mental exhaustion as well, and to be honest this is what I was most concerned about. My mental state had gone through a lot in that past year with COVID-19, and my job. My mental fortitude was stretched so thin to the point that I almost completely broke. It was a rough time for all of us in the world, I do believe, and I do not want to take that away from anyone - but my own experience was difficult. If I had to say what kept me going, it was my wife and dogs.
My wife, so that we can keep driving towards our combined aspirations, and my dogs were the only ones around me when I was at my lowest, to bring me back to rational thinking when I needed it the most. Taking all this into consideration I started to put together a routine that seemed realistic and that I could follow. I wanted to keep my body sharp as well as my mind, a regular morning workout when I wake up, followed up with strong and healthy meals through the day. Setting time aside every night for homework and much needed time for family and friends. *Prep-work* Most bootcamps can offer you insight into what languages or tech stacks you will be using during the bootcamp. Some bootcamps will even provide you with pre-work modules, prepping you for your time at the bootcamp of your choosing. For me I unfortunately entered the program at the almost last possible moment, so I couldn’t spend a significant amount of time with the provided pre-work modules, but I would change that if I could go back. Another option is purchasing or finding free resources to review along with your prep coursework. This is a strategy I use often when learning something new. I like to see and hear plenty of different examples of how to complete similar tasks, and if you’re like me, and have adequate time, I think it’s a great strategy to immerse you in content. *Find your motivation* The last strategy I’ll mention is finding your motivation. Motivation might be the most important piece to anyone’s success. It helps drive you towards a finish line especially when the road bumps come, and the path gets tough. You’ve got to be able to lean on your motivation when things get rough. I think your motivation needs to be important to you regarding a bootcamp. The obstacles and trouble you will face during these learning opportunities can be intense, and the stronger the motivation you have, the more likely you will overcome adversity. 
For me, my motivation is my family, and giving them as many opportunities as possible. I know that with a career in tech being something I’m more passionate about, not only will I be happier, but my family will be as well. I strongly believe that anyone can learn anything. Sometimes the understanding will come quickly, and sometimes it won’t, but if you put in the work and continue your path, you can be successful too. I reflect on this thought almost daily, and especially when the understanding is coming slower than I want. But I’ll keep digging in and I hope you do too.
mikeketterling
202,507
Machine Learning Applications in Tabletop Gaming
Digital Dungeon Diving Those who know me are well aware of my passion for gaming (To those who don...
0
2019-11-08T19:11:35
https://dev.to/geoffreyianward/machine-learning-applications-in-tabletop-gaming-dng
<h3>Digital Dungeon Diving</h3> ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/zo2orx6fcg9rs7euu1ck.jpg) <p>Those who know me are well aware of my passion for gaming (To those who don't know me - hello! I love gaming). While I love all forms of gaming, I have a special place in my heart for tabletop games. I love the ability to gather with friends and explore game systems and mechanics together in a shared social environment. Tabletop roleplaying games are even more precious to me, as they challenge us to think creatively to solve problems as a team.</p> <p>However, the actual mechanics for playing tabletop role-playing games remain firmly locked in the technology of the 1970s. Some relics, like the tiny miniatures we use to represent player characters and monsters alike, are immensely customizable and allow players to find figures that represent their ideal heroes and villains. These minis, while technically unnecessary, are a valuable part of the tabletop gaming experience. When we look to how these minis interact with the game world, however, we often find that the rest of our game world doesn't hold up when compared to these detailed miniatures.</p> <p>Often, sprawling cities and labyrinthine dungeons are reduced to a simple hand-drawn map, blocked out on sheets with dry-erase markers. While very reusable, and instantly adaptable to changing game conditions, these 'battle maps' are barely interactive and typically crude, not to mention their effect on shattering immersion. Some intrepid dungeon masters have also acknowledged these shortcomings, and the tabletop community has been hard at work trying to engineer solutions. These approaches break down in two ways, typically.</p> ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/yskzc5qyoc19tv8dsh9y.jpg) <p>The first solution (and currently, the more popular one) is to build out the game maps using modular 'dungeon kits' that represent walls, floors, traps, and doors. 
These can be combined to great effect, given ample time, space, and money, allowing gamers to build giant three-dimensional playgrounds for their miniatures to explore. The effect of a dungeon finished in this way is absolutely impressive.</p> <p>However, these maps present immediate problems. First, they are slow to build. This means that maps cannot be constructed on the fly, severely limiting the scope of exploration available to the players. Second, there is an issue with 'fog of war'. In this instance, players can see the entire map laid out before them, and can use that knowledge to plan strategies based on information that their characters should not have. These are not game-breaking, but they do limit the arena in which a session can be allowed to breathe.</p> ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/vzy8jf0e4110n67t0ei8.jpg) <p>A second solution has been growing in popularity, although it remains a niche. Some resourceful gamers have been utilizing projectors and televisions to digitally render the maps directly onto the tabletop. This approach is expensive but opens up an enormous variety of options when it comes to gaming. While the setup using a TV is nice, it requires a dedicated gaming table with a television literally built into it, which is not an option for many people. Mounting a projector on the ceiling is a much less obtrusive approach that allows for a lot of the same mechanics. Either way, a digital map allows for all sorts of new interactions. We can simulate fog of war, we can change the map as easily as changing a spreadsheet, we can even add in immersion-building elements such as flickering torches and running water. </p> ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/wjaa8ywf1huanzq54c56.jpg) <p>So it's with this context in mind that I approached Sony's Future Lab Concept 'T'. Concept 'T' is a projector, with sensors built into it, mounted above a table, much like the projectors being used for gaming.
The difference here is those sensors, which Sony pairs with machine learning algorithms to 'see' objects that are placed onto the table's surface. So far, Sony has shown us how this technology can be used to interact with a book, bringing characters to life straight off the page, or with objects, recognizing them and projecting their information adjacent to the object being sensed. </p> ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/hojyciquh8vzp7nn1tot.jpg) <p>These machine learning systems could be adapted to great effect to incorporate tabletop gaming. Let's imagine for a second, placing a miniature upon a table. Instantly, the projector recognizes the miniatures and projects the characters' names and stats onto the table next to the figures. If a character is carrying a torch, the projector could illuminate the area around the figure, and track that light as the character moves around the map. Cards played could be read using OCR technology, and effects could be animated in response to those cards being played. Fog of war could be rendered based on the actual lines of sight of the miniatures, based on where they are placed on the map. </p> <p>The applications for this technology in gaming go on and on. What's important here is that using these systems, we might finally have a way to bring tabletop roleplaying into the twenty-first century. Sony has yet to announce plans to bring the Concept T model into mass production, so, for now, the technology will have to be cobbled together from other pieces. I'm thinking that Microsoft's new Kinect DK (which uses the same sensors as the HoloLens 2) would be able to accomplish a lot of the same functionality, as far as the computer vision is concerned. </p> https://youtu.be/0TThak7sF94
geoffreyianward
202,539
Default a View in NavigationView with SwiftUI
A guide to default a View in NavigationView with SwiftUI
3,158
2019-11-08T21:14:42
https://medium.com/@maeganwilson_/default-a-view-in-navigationview-with-swiftui-b6e64a17fb20
swift, swiftui
--- title: Default a View in NavigationView with SwiftUI published: true description: A guide to default a View in NavigationView with SwiftUI tags: swift, swiftui canonical_url: https://medium.com/@maeganwilson_/default-a-view-in-navigationview-with-swiftui-b6e64a17fb20 series: SwiftUI Examples --- # Default a View in NavigationView with SwiftUI I'm going to walk through the steps to create a default view in a `NavigationView` in a brand new project. The finished GitHub project can be found here. {% github maeganjwilson/swiftui-examples %} # 1. Create a Single View App Create a new Xcode project using SwiftUI. # 2. In ContentView.swift, add a NavigationView A fresh `ContentView.swift` looks like this: ```swift import SwiftUI struct ContentView: View { var body: some View { Text("Hello, World!") } } struct ContentView_Previews: PreviewProvider { static var previews: some View { ContentView() } } ``` To add a NavigationView that looks like a list, we first need to embed the `Text` in a `List`. Embedding in a `List` can be done by `CMD + Click` on `Text` and choosing Embed in List from the menu. ![GIF Showing the `CMD + Click`](https://github.com/maeganjwilson/swiftui-examples/blob/master/NavigationExample/post-resources/embed-list.gif?raw=true) You should then get the code sample below. ```swift struct ContentView: View { var body: some View { List(0 ..< 5) { item in Text("Hello, World!") } } } ``` Now, put the list inside the `NavigationView`. `ContentView` should now have the following code: ```swift import SwiftUI struct ContentView: View { var body: some View { NavigationView{ List(0 ..< 5) { item in Text("Hello, World!") } } } } struct ContentView_Previews: PreviewProvider { static var previews: some View { ContentView() } } ``` If using the Live Preview in Xcode, then the preview should look like the picture below.
![NavigationView of Hello World!](https://github.com/maeganjwilson/swiftui-examples/blob/master/NavigationExample/post-resources/NavigationView-1.png?raw=true) Let's also make the list different on each row. Change the string in the text to say "Navigation Link \(item)" and make the list range 1 to 5 instead of 0 to 5. This is what the code should look like. ```swift List(1 ..< 5) { item in Text("Navigation Link \(item)") } ``` Here is what the preview will look like: ![NavigationView changed with the above changes](https://github.com/maeganjwilson/swiftui-examples/blob/master/NavigationExample/post-resources/NavigationView-2.png?raw=true) # 3. Add a NavigationLink The `Text` needs to be inside a `NavigationLink` in order to navigate to a different view. We will use `NavigationLink(destination: Destination, tag: Hashable, selection: Binding<Hashable?>, label: () -> Label)`. Let's break this down a bit before implementing it. - `destination`: the `View` to present when the link is selected - `tag`: a value of type `Hashable` used to distinguish which link is selected - To read more about Hashable [click here](https://developer.apple.com/documentation/swift/hashable). The link will take you to Apple's documentation about Hashable. - `selection`: a variable of optional `Hashable` type whose value will change to match the selected link's tag - `label`: a closure that returns a `View`, which is what the user will see and be able to click on. Now that all the parts are explained, let's implement the `NavigationLink`. ```swift List(1 ..< 5) { item in NavigationLink(destination: Text("Destination \(item)"), tag: item, selection: self.$selectedView) { Text("Navigation Link \(item)") } } ``` Once it's implemented, you should get an error that says `Use of unresolved identifier '$selectedView'`. This error is expected since we do not have a Binding variable called `selectedView` in our code. Let's add it to the `ContentView` struct. Place `@State private var selectedView: Int?
= 0` before declaring `body`. The error should go away now. When declaring `selectedView`, the type needs to be optional since `NavigationLink` wants an optional Hashable type. If you run the app right now, it will look like no default view is given. This is because there is no `NavigationLink` with a tag of 0. If `selectedView` is assigned a tag that doesn't exist, then the view will be the list of NavigationLinks. ![no default](https://github.com/maeganjwilson/swiftui-examples/blob/master/NavigationExample/post-resources/Simulator-1.png?raw=true) If you change the initial value of `selectedView` to 1, then it will open to the destination of the `NavigationLink` that has a tag of 1. ![GIF of opening a default view](https://github.com/maeganjwilson/swiftui-examples/blob/master/NavigationExample/post-resources/Simulator-2.gif?raw=true) # Basics are done! That finishes the basic tutorial of how to achieve this. In the next section I'm going to continue with how to improve the UX, because on iOS this is not great behavior, but on iPadOS in landscape, this behavior is excellent! # Bettering the UX On iPhones, you don't usually want the total view to be taken over. You usually want the user to decide where to navigate. On iPads in landscape, the screen is so big that having a view selected is okay since the navigation links are always shown. This can be achieved by using `onAppear()` and figuring out which device is being used. First, we need to add `onAppear()` to the `List`. Then, we need to get the device type. ```swift NavigationView{ List(1 ..< 5) { item in NavigationLink(destination: Text("Destination \(item)"), tag: item, selection: self.$selectedView) { Text("Navigation Link \(item)") } } // this is the part to add .onAppear{ let device = UIDevice.current } } ``` Now, we need to do something based on each device. We can get the device type by using `.model`.
We can then use a simple if statement to determine if it's an iPhone or an iPad and set the selection based on that. We also need to check the orientation of the iPad. ```swift .onAppear{ let device = UIDevice.current if device.model == "iPad" && device.orientation.isLandscape{ self.selectedView = 1 } else { self.selectedView = 0 } } ``` That would be it! The view will now change based on device and orientation. Here's a gif of the iPad: ![GIF of iPad](https://github.com/maeganjwilson/swiftui-examples/blob/master/NavigationExample/post-resources/ipad-finished.gif?raw=true) Here's a gif of the iPhone's implementation: ![GIF of iPhone](https://github.com/maeganjwilson/swiftui-examples/blob/master/NavigationExample/post-resources/iphone-finished.gif?raw=true) --- If you enjoy my posts, please consider sharing them or Buying me a Coffee! <a href="https://www.buymeacoffee.com/appsbymw" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/arial-blue.png" alt="Buy Me A Coffee" style="height: 51px !important;width: 217px !important;" ></a>
maeganwilson_
202,591
Python: Compile standalone executable with nuitka
So nuitka compile python code into an executable. I have always impress with Go ability to generate s...
0
2019-11-09T02:07:47
https://dev.to/k4ml/python-compile-standalone-executable-with-nuitka-1ml1
python
So Nuitka compiles Python code into an executable. I have always been impressed with Go's ability to generate a single binary and even cross-compile for different OSes. I wish we could have the same thing with Python. As always, let's start with just a simple script first:- ``` import requests resp = requests.get("https://httpbin.org/get") print(resp.content) ``` Actually the first thing I tried was just a simple `print("hello world")`, but that was just too simple. I want to see if Nuitka can handle more substantial code, like the requests library above. To compile the code:- ``` python3 -mnuitka --follow-imports hello.py ``` It will generate a file called `hello.bin` in the same directory where you run the command above. Execute the file (`./hello.bin`) and it works! But if you copy the file to a different system (I compiled it on Ubuntu 18.04 and tried to run it on my laptop running Manjaro), you get this error:- ``` ./hello.bin: error while loading shared libraries: libpython3.6m.so.1.0: cannot open shared object file: No such file or directory ``` So it still needs the same Python libraries that it was compiled with. And since my Manjaro system uses Python 3.7, hence the error. Fortunately there's a second option:- ``` python3 -mnuitka --follow-imports --standalone hello.py ``` This time it will generate a folder named `hello.dist` instead. Inside the folder you'll see various `.so` files and a file named `hello`. This time, when I copied the `hello.dist` folder from Ubuntu 18.04 to my Manjaro laptop, the command worked! Unfortunately the reverse didn't work though. When compiling on Manjaro and trying to run it on Ubuntu 18.04:- ``` ./hello: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/kamal/hello.dist/libpython3.7m.so.1.0) ./hello: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /home/kamal/hello.dist/libpython3.7m.so.1.0) ``` This is because Manjaro uses a more recent glibc than Ubuntu 18.04.
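The failure is one-directional: glibc is backward compatible, so a binary built against an older glibc loads on newer systems, but not the other way around. Here is a rough sketch of that version logic in Python (a hypothetical helper purely for illustration; the real check is done by the dynamic linker at load time):

```python
def parse_version(v):
    """Turn a glibc version string like '2.27' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def binary_runs_on(built_against, target_glibc):
    """A binary linked against `built_against` only loads if the target
    system's glibc is at least that new (backward, not forward, compatible)."""
    return parse_version(target_glibc) >= parse_version(built_against)

# Built on Ubuntu 18.04 (glibc 2.27), run on Manjaro (glibc 2.30): fine.
print(binary_runs_on("2.27", "2.30"))  # True
# Built on Manjaro, run on Ubuntu 18.04: the GLIBC_2.29 error above.
print(binary_runs_on("2.29", "2.27"))  # False
```

This is also why the workaround below is "compile on the oldest system you want to support": that pins `built_against` to the lowest version in your fleet.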
We can check this with ldd:- ``` ldd --version ldd (GNU libc) 2.30 ``` And the only workaround for this is to compile on the oldest system you want to support. I tried this with Ubuntu 14.04; it worked for a simple script but then failed with missing SSL certs when compiling my [tgcli] script. So a standalone distribution of your Python program kind of works. For my tgcli, the installation looks like:- ``` tar xzf tgcli.dist-glibc-2.27.tar.gz sudo mv tgcli.dist /usr/local/ sudo ln -s /usr/local/tgcli.dist/tgcli /usr/local/bin/tgcli ``` For a more fun story on getting a standalone Python program, you can also read this post from [scylladb]. [tgcli]:https://github.com/web2gram/tgcli [scylladb]:https://www.scylladb.com/2019/02/14/the-complex-path-for-a-simple-portable-python-interpreter-or-snakes-on-a-data-plane/
k4ml
202,732
Noob Exercises
Just started with a place, where I am trying to give some of people, I have been teaching some assign...
0
2019-11-09T14:48:38
https://dev.to/th3n00bc0d3r/noob-excercises-19kj
html, tutorial, css, javascript
I just started a place where I am giving some of the people I have been teaching assignments, and thought that if more people are interested in it, it would be a good share. We are starting with vanilla JS, CSS, and HTML. I will personally look at all submissions and will do my best to comment on them to the best of my knowledge. {% github th3n00bc0d3r/Noob-Exercises %}
th3n00bc0d3r
202,769
Reading through the Python standard library
A couple of years ago I decided to read the entire Python standard library. A few months back, I fin...
0
2019-11-27T18:05:43
https://www.mattlayman.com/blog/2016/readthrough-python-standard-library/
python
A couple of years ago I decided to read the entire Python standard library. A few months back, I finished. What I learned is this: **while there is some interesting "hidden" stuff in there, you don't need to do this to become proficient.** Did you know that nearly all the HTTP status codes are in the standard library? Judging by all the Python packages that defined their own status codes, I assumed that the codes weren't in there. I was [wrong](https://docs.python.org/2/library/httplib.html). And they got [better in 3.5](https://docs.python.org/3/library/http.html#http-status-codes). Reading through the library revealed many "easter eggs" like that. Even though learning about those hidden corners of the standard library is fun, there's a really large problem with reading through the whole library. ### It's a lack of context. One challenge of reading through everything is that you may be unaware of what parts are excellent (like `os.path`) and what parts are not (I won't name names here). Occasionally, the reader is given a warning about the dragons ahead. You might find a tip to use [requests](https://docs.python.org/3/library/urllib.request.html). I think those kinds of suggestions are rare. Maybe you're asking: ### "*So if the library doesn't provide much direction, what do I do?*" Part of getting good with the standard library is experiential. The best way to gain that experience is through practice and exposure. This is where I point you to the community. Your local Python user group (hi, [Python Frederick](https://www.meetup.com/python-frederick/)!), online Python communities like on IRC, and open source Python projects are excellent places to get exposure. You'll encounter people who can provide pointers and read code from those who are a bit farther on this journey than you are. If it's tough for you to get into those social groups, maybe [The Hitchhiker's Guide to Python](http://docs.python-guide.org/en/latest/) will work for you.
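As a concrete taste of the "easter eggs" mentioned earlier, the HTTP status codes are a one-import affair in Python 3.5+:

```python
# The status codes that many packages reimplement are already in the
# standard library, as an IntEnum, since Python 3.5.
from http import HTTPStatus

print(HTTPStatus.NOT_FOUND.value)   # 404
print(HTTPStatus.NOT_FOUND.phrase)  # Not Found
# Because HTTPStatus is an IntEnum, members compare equal to plain ints:
print(HTTPStatus.OK == 200)         # True
```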
For folks that have a programming background, I can also recommend [Dive Into Python 3](http://www.diveintopython3.net/). I got my start with the Python 2 version and can attest that it's a good resource. If you're tempted to read through the standard library like I was, **cool! Best of luck!** Please don't forget that there are people around you that can help provide that missing context. *Photo credit to [Loughborough University](https://www.flickr.com/photos/loughboroughuniversitylibrary/6333984637).* This article first appeared on [mattlayman.com](https://www.mattlayman.com/blog/2016/readthrough-python-standard-library/).
mblayman
202,779
HELLO Docker Desktop Windows Subsystem for Linux 2 Tech Preview
A couple of days ago, during my Thursday Twitch stream, I made the amazing discovery that Docker has...
0
2019-11-09T18:50:54
https://dev.to/talk2megooseman/hello-docker-desktop-windows-subsytem-for-linux-2-tech-preview-4kp2
ubuntu, docker, tutorial
A couple of days ago, during my Thursday Twitch stream, I made the amazing discovery that [Docker has a `Docker Desktop WSL 2 Tech Preview`.](https://docs.docker.com/docker-for-windows/wsl-tech-preview/) ### [Why is this amazing?](#why) Now applications that are running inside of the `Linux Subsystem` can directly run Docker!! You no longer need the workarounds that force you to keep the project on the Windows side because Docker Desktop couldn't handle file paths pointing inside of the `Linux Subsystem` (like I had to do). It also comes with added performance: > Docker Desktop also leverages the dynamic memory allocation feature in WSL 2 to greatly improve the resource consumption. This means, Docker Desktop only uses the required amount of CPU and memory resources, enabling CPU and memory-intensive tasks such as building a container to run much faster. Also, in general, it's amazing to do web development in the `Linux Subsystem` since you will encounter libraries, packages, and instructions that are Linux-only. ### [Ok already, just what do I need to do to use it?](#prerequisites) There are a couple of prerequisites you need in order to take advantage of the new Docker goodness: > 1) Install Windows 10 Insider Preview build 18932 or later. > 2) Enable WSL 2 feature on Windows. For detailed instructions, refer to the Microsoft documentation. > 3) Install a default distribution based on Ubuntu 18.04. You can check this with wsl lsb_release -a. You can download Ubuntu 18.04 from the Microsoft store. ### [How do I install WSL?](#install) WSL 2 is only **available in Windows 10 builds 18917 or higher**. So that means you will need to **opt into the Windows Insider Program** to get beta builds of Windows, so be aware of this fact. [Join the Windows Insider Program](https://insider.windows.com/en-us/) and select the 'Fast' ring. You can check your Windows version by opening Command Prompt and running the `ver` command.
Now you need to install a WSL distro. Microsoft has a great and simple guide for this part of things. **For Docker, you will need to install `Ubuntu`**: https://docs.microsoft.com/en-us/windows/wsl/install-win10 ### [Enabling WSL2 Distro](#enable) In **PowerShell as Administrator** run the following commands: ```powershell Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform ``` Now we want to tell WSL to use our distro of choice with WSL 2, replacing `Ubuntu` below with the distro you're using. ```powershell wsl --set-version Ubuntu 2 wsl --set-default-version 2 ``` Not sure what distro you're on? Use this command to check. ```powershell wsl -l ``` **Now we've got all the POWER of the Linux kernel** ![Fire](https://i.kym-cdn.com/entries/icons/original/000/022/134/elmo.jpg) ##### Pro Tip Add all working projects directly into the `Linux Subsystem` instead of running files living on the Windows system. There is a major performance boost since the Linux Subsystem doesn't have to go and communicate over the virtual network to the files sitting on the Windows side. ### [I already have WSL2 installed on my machine](#upgrade) Nice, but you need to **make sure you are running `Ubuntu 18.04`**. In my situation, I was running `Ubuntu` but did not have the latest version. If you don't know which version you're on, you can check using the following command in PowerShell. ```powershell wsl lsb_release -a ``` If you need to upgrade to `Ubuntu 18.04`, you can do so by running these commands as I did inside of WSL Ubuntu.
```bash sudo apt dist-upgrade sudo apt update sudo do-release-upgrade ``` If you come across a `yarnpkg` error while trying to run the updates, I found the solution to the issue here: {% github https://github.com/yarnpkg/yarn/issues/4453#issuecomment-329463752 %} ### [Installing Docker Desktop WSL 2](#installing_docker) [You can download the special build of Docker Desktop here](https://docs.docker.com/docker-for-windows/wsl-tech-preview/#download) After installing, you might need to update your system to get things working with WSL 2. **Do not update the Docker Desktop version, though. For me, when I did, the WSL 2 Tech Preview option was gone.** After it's installed, you can enable WSL 2 integration by going to the Docker menu and clicking the button shown here. ![Docker menu](https://docs.docker.com/docker-for-windows/images/wsl2-ui.png) It should open up a dialog where you can just click `start` and begin the fun. ### [Running Docker in WSL2](#running) Now running Docker in any of your projects inside of Ubuntu is no different from running it in PowerShell. But now you get all of that power, with the knowledge that Docker is running inside the Linux system. So clone your projects or copy them over into the `Linux Subsystem` and run: ```bash docker # or docker-compose up # in my case ``` Now you're suited up and ready for battling projects in WSL 2 using Docker. ![Ready for Battle](https://i.imgur.com/35hRzgm.jpg) Go and have fun!
talk2megooseman
202,799
Branch Previews with Google App Engine and GitHub Actions
Leveraging GitHub Actions for easy-to-use, automated branch preview deployments
0
2019-11-09T19:20:00
https://dev.to/bobheadxi/branch-previews-with-google-app-engine-and-github-actions-3pco
automation, tutorial, devops, deployment
--- title: Branch Previews with Google App Engine and GitHub Actions published: true description: Leveraging GitHub Actions for easy-to-use, automated branch preview deployments tags: automation, tutorial, devops, deployment cover_image: https://bobheadxi.dev/assets/images/posts/appengine/branch-staged.png --- > **⚠️ WARNING:** Please make sure you pin usages of `bobheadxi/deployments` to a specific version! The usage examples in this article may not be up to date - please see the Action usage docs for more details: [GitHub Deployments Action](https://github.com/marketplace/actions/github-deployments) Shortly after I returned to school, early in October 2019 I started [working part-time remotely for Sumus](https://bobheadxi.dev/sumus), a property management company based out of Lethbridge, Alberta. My role was primarily as a software developer on an investor portal they wanted to build. I wasn't starting from scratch - there was already a sizeable codebase going, and a simple deployment set up on [Google App Engine](https://cloud.google.com/appengine/). Right off the bat I had a number of tech-debt-related issues I wanted to address before I started developing new features, one of which was automating this deployment process. App Engine does not seem to have a great way of doing this outside of using their source control, so I decided to do this myself. As I progressed on the automation of the App Engine deployment, I realized branch previews were not that much more of a hassle to set up, so I got those up and running as well - [Heroku has a nice article about why staging environments are nice to have](https://dev.to/heroku/staging-environments-are-overlooked-here-s-why-they-matter-3ghd). This blog post will cover some of the work I did on this front, and hopefully give a good idea about how you can go about creating a similar setup for your own projects if you want.
Our project consists of a React frontend serviced by a Node.js backend, so my post will lean a bit towards that particular setup, but should apply to a variety of different stacks. Here's a sneak peek of the end result: <p align="center"> <img src="https://bobheadxi.dev/assets/images/posts/appengine/environments-deployed.png"> </p> * [The Problem](#the-problem) * [Solution](#solution) * [Staging and Release](#staging-and-release) * [Versioning Frontends and Backends](#versioning-frontends-and-backends) * [Automation](#automation) * [GitHub Actions + App Engine](#github-actions--app-engine) * [GitHub Deployments](#github-deployments) * [Wrapup](#wrapup) <br /> ## The Problem First off, a quick intro to App Engine. This was my first encounter with App Engine, so this won't be the best rundown, but in a nutshell App Engine seems like a reasonably priced way to deploy your application in a serverless fashion with the flexibility to scale to your needs. It also offers nice out-of-the-box integration with Google's other monitoring offerings, which is a nice plus. Most of the official documentation seems to indicate that deployment happens primarily through: * defining an application specification, the [`app.yaml`](https://cloud.google.com/appengine/docs/standard/python/config/appref) * using the [`gcloud` CLI](https://cloud.google.com/sdk/gcloud/) to push a deployment out from your copy of the codebase The old process for deploying our application involved making sure I had all my credentials and stuff set up, and running: ```sh gcloud app deploy app.yaml ``` There was no real way of making sure I wasn't axing someone else's deployment, or of notifying everyone of what is currently active, short of shooting a Slack announcement and hoping it's seen and handled appropriately.
I felt like the entire process would be more comfortable if it was automated and tied to source control, so that: * permissions are easier to manage as the team grows * it's easier to tell what is deployed, and where * there's less work to continuously update and manage deployments ## Solution I didn't actually start off with leveraging GitHub Actions for automating this process - my first iteration used [CircleCI](https://circleci.com/), where we run our tests, style checks, and so on. This had the advantage of allowing me to stage deployments based on whether or not previous checks pass: <p align="center"> <img src="https://bobheadxi.dev/assets/images/posts/appengine/pipeline.png"> </p> Unfortunately this was eating up a huge chunk of our pipeline minutes - as you can see in the image above, the `appengine_stage` job takes more than 97% of each build when a branch is configured to stage. This brought us uncomfortably close to hitting the [CircleCI free tier](https://circleci.com/pricing/), so I ended up moving it to GitHub Actions to split up our workloads. ### Staging and Release I first ran into the concept of branch previews working on the [UBC Launch Pad website](/ubclaunchpad-site), where we leveraged [Netlify's](https://www.netlify.com/) great branch preview feature. It was a fantastic way to do some live testing and get feedback quickly, so I leveraged branch previews again during my time with [nwPlus working on the nwHacks 2019 website](/nwhacks2019), where I used a tool I worked on, [`ubclaunchpad/inertia`](https://github.com/ubclaunchpad/inertia), to quickly stage previews for the nwPlus design team to provide feedback on. Now that I'm back to working on websites, I figured branch previews would come in useful here again (and they have so far!). To accommodate this, I introduced some extra steps to our deployment flow: * *Staging* deployments are primarily for previewing branches.
By default, the only staged branch would be `master`, but additional branches can be staged by adding the desired branch to the GitHub Action configuration. These deployments are named based on their branch, i.e. `stage-master` or `stage-my-branch`. * *Release* deployments are for deploying tags. These deployments are [promoted](https://cloud.google.com/sdk/gcloud/reference/app/deploy#--promote) (unlike the staging deployments) such that all traffic to the application routes to the most recent release deployment by default. These are named based on their tag, i.e. `release-v0-3-0`. ### Versioning Frontends and Backends A bit of a conundrum when deploying multiple versions of a multi-component service is making sure that they talk to the correct instances - for example, a branch preview deployment probably does not want to have its frontend talk to the backend of a different deployment if you are trying to demonstrate a new feature. For service-to-service deployments, this is fairly straightforward - App Engine provides a variety of default environment variables you can use to interpret the appropriate backend to talk to. We can take advantage of how App Engine addresses un-promoted deployments: ```sh https://${version}-dot-${project}.appspot.com # unpromoted https://${project}.appspot.com # promoted ``` In a [multi-service setup](https://cloud.google.com/appengine/docs/standard/nodejs/configuration-files), you also get an additional `${component}` piece attached to the address: ```sh https://${version}-dot-${component}-dot-${project}.appspot.com # unpromoted https://${component}-dot-${project}.appspot.com # promoted ``` Then, by using a "versioning" scheme of `version=branch_name`, we can easily determine where the desired service should be located, and point our requests to the correct address. The only hurdle to this is for frontends.
We have to know the version at buildtime, which sadly App Engine's default build feature does not provide, so you'll have to either: * generate a `.env` file to be consumed at build time and upload it with your build * build in CI with the appropriate variables The latter is probably best practice anyway, since you want to optimize your App Engine setup for fast instance start times, but in case you are running your builds in App Engine (our deployments were previously) this is a minor hurdle to be aware of. ### Automation #### GitHub Actions + App Engine [GitHub Actions](https://github.com/features/actions) is a pretty new product, which I guess is probably GitHub's answer to [GitLab's CI/CD features](https://about.gitlab.com/product/continuous-integration/). I've come to like it a lot more for anything outside of running your tests and whatnot, since it has a lot of interesting hooks and triggers based on normal GitHub activity that you can leverage, but for this example I won't be using many of those. If you're following along you might want to consult the official [workflow syntax documentation](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions). Anyway, to get started I set up a *staging* workflow: ```yml # .github/workflows/appengine-stage.yml name: appengine-stage on: push: branches: - master # insert branches to stage here ``` All I really want to do here is declare what branches I want to stage, and make staging additional branches just a matter of adding it to the configuration in your PR (and removing it when you're done). I'm thinking of using PR labels for this, but haven't figured out a good way to do it yet. The first step is to actually grab your branch name. 
GitHub only provides you with the [commit's reference](https://git-scm.com/book/en/v2/Git-Internals-Git-References), which takes the form of:

```sh
refs/heads/${branch_name}
```

So we'll want to extract it with a script:

```yml
steps:
- name: Extract branch name
  id: get_branch
  shell: bash
  run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/} | tr / -)"
```

There's a couple of things going on here:

* `[set-output name=branch;]` uses GitHub Actions' ability to [set an output for a step](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/contexts-and-expression-syntax-for-github-actions#steps-context) to allow other steps to access the extracted branch
* `${GITHUB_REF#refs/heads/}` trims off the leading `refs/heads/` bit of a reference
* `echo ${...} | tr / -` pipes the branch name to `tr` which then replaces all slashes with dashes (App Engine does not allow slashes in version names, and I have a habit of using them)

Then, in other steps you can access the branch name like so:

```yml
stage-${{ steps.get_branch.outputs.branch }}
```

All that's really left to do is run the deployment.

```yml
- uses: actions-hub/gcloud@268.0.0
  env:
    APPLICATION_CREDENTIALS: ${{ secrets.GCLOUD_SERVICE_KEY }}
  with:
    args: app deploy client/app.yaml server/app.yaml --no-promote --quiet --version stage-${{ steps.get_branch.outputs.branch }}
```

The *release* workflow is very similar, except it runs on releases and generates version names based on the tagged version:

```yml
# .github/workflows/appengine-release.yml
name: appengine-release
on:
  release:
    types: [ published ]

jobs:
  release:
    # ...
    steps:
    - name: Extract tag name
      id: get_tag
      shell: bash
      run: echo "##[set-output name=tag;]$(echo ${GITHUB_REF#refs/tags/} | tr . -)"
```

I also have a separate workflow for pruning previews.
Since previews are typically set up for pull requests, the prune job runs when pull requests close (ideally it should be on branch deletion, but there doesn't seem to be a simple trigger for that at the moment):

```yml
# .github/workflows/appengine-prune.yml
name: appengine-prune
on:
  pull_request:
    types: [ closed ]

jobs:
  prune:
    # ...
    steps:
    # ...
    - uses: actions-hub/gcloud@268.0.0
      env:
        APPLICATION_CREDENTIALS: ${{ secrets.GCLOUD_SERVICE_KEY }}
      with:
        args: app versions delete stage-${{ steps.get_branch.outputs.branch }} --quiet
```

#### GitHub Deployments

As a bit of a stretch goal, I wanted to be able to see the deployments within the GitHub UI, just like with the Netlify branch previews. For example:

<p align="center">
  <img src="https://bobheadxi.dev/assets/images/posts/appengine/branch-staged.png">
</p>

There's another example of this at the top of this article. It's mostly a small quality of life thing, but the more I thought about it the more I wanted it so... Anyway, this feature is called ["GitHub Deployments"](https://developer.github.com/v3/repos/deployments/). I tried a bunch of [available Actions from the marketplace](https://github.com/marketplace?utf8=%E2%9C%93&type=actions&query=github+deployment) for working with this, but for some reason I couldn't really get any of them to work the way I wanted, which is to:

* create a new deployment
* set a status for it
* change that deployment's status
* replace the previous deployment's status

I was probably holding them all wrong, but after a few hours I just went ahead and [wrote my own Action, `bobheadxi/deployments`](https://github.com/bobheadxi/deployments), for doing exactly what I wanted.
Then all I had to do was add a step before and after each of my workflows:

```yml
jobs:
  deploy:
    steps:
    - uses: bobheadxi/deployments@master
      id: deployment
      with:
        step: start
        token: ${{ secrets.GITHUB_TOKEN }}
        env: release-${{ steps.get_tag.outputs.tag }}
        transient: true
        desc: Setting up staging deployment for ${{ steps.get_tag.outputs.tag }}

    # ... as before

    - name: Update deployment status
      uses: bobheadxi/deployments@master
      if: always()
      with:
        step: finish
        token: ${{ secrets.GITHUB_TOKEN }}
        status: ${{ job.status }}
        env: ${{ steps.deployment.outputs.env }}
        env_url: https://release-${{ steps.get_tag.outputs.tag }}-dot-project.appspot.com
        deployment_id: ${{ steps.deployment.outputs.deployment_id }}
```

For pruning, I needed to be able to go and deactivate all deployments associated with the preview environment. Since I owned the action I just added the feature.

```yml
- uses: bobheadxi/deployments@master
  with:
    step: deactivate-env
    token: ${{ secrets.GITHUB_TOKEN }}
    env: stage-${{ steps.get_branch.outputs.branch }}
    desc: Deployment was pruned
```

And that was it! As a bonus, notifications for these deployments show up in [Slack via the GitHub integration](https://slack.github.com/):

<p align="center">
  <img src="https://bobheadxi.dev/assets/images/posts/appengine/slack-deploy.png">
</p>

## Wrapup

There's definitely a bunch of caveats in this approach, and if the resources are available to you it might be easier to use a platform like [Heroku](https://www.heroku.com/) to do all this hard work for you. That said, this was a fun hack, and it has made staging previews for the team to assess, and rolling out releases, feel a lot safer and less of a hassle.
bobheadxi
202,877
Comparing Services for Cheap Cloud Hosting and Storage (Cloud / AWS / S3 / Amazon Cloudfront / ... ???)
Hi Dev.to! 👋👋👋 Some questions for all you performance aficionados and AWS / Cloud experts out there...
0
2019-11-09T22:54:03
https://dev.to/kp/comparing-services-for-cheap-cloud-hosting-and-storage-cloud-aws-s3-amazon-cloudfront-3i1g
explainlikeimfive, help, aws
Hi Dev.to! 👋👋👋 Some questions for all you performance aficionados and AWS / Cloud experts out there. I'm looking for a cheap (as close to free as possible) service for:

#### 1. Hosting AND serving images.

These images will be used on a website, in emails, etc. I want to plan for:

* 100GB of added storage / month
* 100M image views (GET requests) / month
* 100K new image uploads (PUT / POST requests) / month

#### 2. CDN / Edge caching - so as to serve requests as fast as possible.

Here I am looking to reduce the response times and website load times that end-users will experience. AWS both has an amazing suite of products and at the same time is very difficult to get started with. [AWS S3's pricing model](https://aws.amazon.com/s3/pricing/) is confusing. I did also play a bit with their [calculator](https://calculator.s3.amazonaws.com/index.html), but it's hard to say if I'm entering the numbers correctly.

-----

Q1: In the AWS ecosystem:

* For S3: What is "Storage pricing" vs "Request Pricing"?
* What is S3 Select and how is it different from S3?
* What is S3 Intelligent-Tiering?
* What is S3 Glacier?
* And what about Amazon CloudFront?

-----

Q2: Is AWS the best (and cheapest) available option? What about services like:

* Cloudflare
* Cloudinary
* Photon by Jetpack etc?
* Versus using my Linode server itself for hosting and serving images?
* Versus the 1000+ other options out there?

Thoughts on what service I should be using? Looking for advice from folks that are knowledgeable on the matter. 🙏🙏🙏
kp
202,911
Recursion
Recursion
0
2019-11-10T02:54:53
https://dev.to/nickytonline/recursion-5fbf
jokes
[Recursion](https://dev.to/nickytonline/recursion-5fbf) {% instagram B4qzQvlpN1g %}
nickytonline
202,954
React app global state management with hooks
React app global state management made easy with hooks and Context API: https://link.medium.com/bZ...
0
2019-11-10T08:41:01
https://dev.to/spinalorenzo/react-app-global-state-management-with-hooks-5b20
react, javascript
![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/1gtpumifya77ru7yylqy.png)

React app global state management made easy with hooks and the Context API: https://link.medium.com/bZs5cKG6r1

How to achieve this:

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/s34zy9h1bfwc3t06yo5p.png)

You can read how it works on Medium, or jump directly to the code: https://github.com/Spyna/react-context-hook
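Independent of any library, the heart of such a global store boils down to a reducer plus a subscription mechanism. Here is a minimal framework-free sketch of that idea - in a React app you would hand the state and `dispatch` to components via `createContext`/`useReducer`. The names below are illustrative assumptions, not the `react-context-hook` API:

```typescript
type Action =
  | { type: "set"; key: string; value: unknown }
  | { type: "delete"; key: string };
type State = Record<string, unknown>;

// Pure reducer: every state transition goes through here.
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "set":
      return { ...state, [action.key]: action.value };
    case "delete": {
      const { [action.key]: _removed, ...rest } = state;
      return rest;
    }
  }
}

// Tiny store wrapper: holds state and notifies subscribers on dispatch.
function createStore(initial: State) {
  let state = initial;
  const listeners: Array<(s: State) => void> = [];
  return {
    getState: () => state,
    dispatch: (action: Action) => {
      state = reducer(state, action);
      listeners.forEach(fn => fn(state));
    },
    subscribe: (fn: (s: State) => void) => listeners.push(fn),
  };
}
```

Libraries like the one linked above essentially wrap this pattern in hooks so components re-render when the slice of state they read changes.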
spinalorenzo
202,981
Building URL Shortener with MongoDB, Express Framework And TypeScript
This post was first published on my blog. Hi, in the last post I published, I talked about Express...
0
2019-11-10T09:32:50
https://dev.to/itachiuchiha/building-url-shortener-with-mongodb-express-framework-and-typescript-4a71
typescript, mongodb, javascript, express
*This post was first [published on my blog](https://aligoren.com/building-url-shortener-with-mongodb-express-framework-and-typescript/).*

Hi, in the last post I published, I [talked](https://dev.to/aligoren/developing-an-express-application-using-typescript-3b1 "talked") about Express Framework and TypeScript. In this post, I'll use that structure, so I won't go over it again.

## Before Starting

We'll use MongoDB for this project, and to read environment variable values we'll use the dotenv package.

**nodemon**: [Nick Taylor](https://dev.to/nickytonline) suggested it to me. With nodemon, you don't need to stop and restart your application; it does that for you.

**mongoose**: A driver to connect to MongoDB.

**dotenv**: A package to get environment variable values.

### Install Packages

```bash
npm i typescript nodemon express mongoose pug ts-node dotenv @types/node @types/mongoose @types/express
```

Let's edit the **scripts** section in the **package.json** file.

```json
"scripts": {
    "dev": "nodemon src/server.ts",
    "start": "ts-node dist/server.js",
    "build": "tsc -p ."
}
```

**tsconfig.json**

```json
{
    "compilerOptions": {
        "sourceMap": true,
        "target": "es6",
        "module": "commonjs",
        "outDir": "./dist",
        "baseUrl": "./src"
    },
    "include": [
        "src/**/*.ts"
    ],
    "exclude": [
        "node_modules"
    ]
}
```

Let's create a project structure.

#### public

##### css

In this folder, we will have two CSS files named **bootstrap.css** and **app.css**. The bootstrap.css file contains Bootstrap 4.x, and the app.css file will be used for custom styles.

**app.css**

```css
.right {
    float: inline-end;
}
```

##### js

In this folder, we will have a file named app.js. Client-side operations will be here.
**app.js**

```js
const btnShort = document.getElementById('btn-short')
const url = document.getElementById('url')
const urlAlert = document.getElementById('url-alert')
const urlAlertText = document.getElementById('url-alert-text')

const validURL = (str) => {
  const pattern = new RegExp('^(https?:\\/\\/)?'+
    '((([a-z\\d]([a-z\\d-]*[a-z\\d])*)\\.)+[a-z]{2,}|'+
    '((\\d{1,3}\\.){3}\\d{1,3}))'+
    '(\\:\\d+)?(\\/[-a-z\\d%_.~+]*)*'+
    '(\\?[;&a-z\\d%_.~+=-]*)?'+
    '(\\#[-a-z\\d_]*)?$','i');
  return !!pattern.test(str);
}

function saveClipBoard(data) {
  var dummy = document.createElement('input');
  var text = data;

  document.body.appendChild(dummy);
  dummy.value = text;
  dummy.select();

  var success = document.execCommand('copy');

  document.body.removeChild(dummy);

  return success;
}

const shortenerResponse = (isValidUrl, serverMessage) => {
  let message = ''

  if (isValidUrl) {
    urlAlert.classList.remove('alert-danger')
    urlAlert.classList.add('alert-success')
    urlAlert.classList.remove('invisible')

    message = `
      <strong>Your URL:</strong>
      <a id="shorted-url" href="${serverMessage}" target="_blank">${serverMessage}</a>
      <button class="btn btn-sm btn-primary right" id="btn-copy-link">Copy</button>
      <span class="mr-2 right d-none" id="copied">Copied</span>
    `
  } else {
    urlAlert.classList.remove('alert-success')
    urlAlert.classList.add('alert-danger')
    urlAlert.classList.remove('invisible')

    message = `<strong>Warning:</strong> ${serverMessage}`
  }

  urlAlertText.innerHTML = message
}

url.addEventListener('keypress', (e) => {
  if (e.which == 13 || e.keyCode == 13 || e.key == 'Enter') {
    btnShort.click()
  }
})

btnShort.addEventListener('click', async () => {
  const longUrl = url.value
  const isValidUrl = validURL(longUrl)

  if (isValidUrl) {
    const response = await fetch('/create', {
      method: 'POST',
      body: JSON.stringify({ url: longUrl }),
      headers: {
        'Content-Type': 'application/json'
      }
    }).then(resp => resp.json())

    let success = response.success
    let message = ''

    if (success) {
      const { url } = response
      message =
`${window.location.origin}/${url}`
    } else {
      message = `URL couldn't be shortened`
    }

    shortenerResponse(success, message)
  } else {
    shortenerResponse(isValidUrl, 'Please enter a correct URL')
  }
})

document.addEventListener('click', (e) => {
  if (e.target && e.target.id == 'btn-copy-link') {
    const shortedUrl = document.getElementById("shorted-url")
    const isCopied = saveClipBoard(shortedUrl.href)

    if (isCopied) {
      document.getElementById('copied').classList.remove('d-none')
    }
  }
})
```

### src

#### controllers

In this folder, we'll have controllers and their model and interface files.

##### controllers/shortener.controller.ts

In this controller, we will insert a long URL into MongoDB. By the way, we don't have a MongoDB connection yet.

**generateRandomUrl**: A private method to generate random characters. It expects the desired length as a number.

**index**: An async method to render the index page.

**get**: An async method to get short URL information. It expects a shortcode as a parameter, like `http://example.com/abc12`.

**create**: An async method to shorten a long URL. Firstly, it looks up the long URL. If it already exists, it returns the shortcode stored in MongoDB. Using **shortenerModel**, we can save documents to MongoDB and query them.
```typescript
import * as express from 'express'
import { Request, Response } from 'express'
import IControllerBase from 'interfaces/IControllerBase.interface'
import shortenerModel from './shortener.model'
import IShortener from './shortener.interface';

class ShortenerController implements IControllerBase {
    public path = '/'
    public router = express.Router()

    constructor() {
        this.initRoutes()
    }

    public initRoutes() {
        this.router.get('/', this.index)
        this.router.get('/:shortcode', this.get)
        this.router.post('/create', this.create)
    }

    private generateRandomUrl(length: number) {
        const possibleChars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        let urlChars = "";

        for (var i = 0; i < length; i++) {
            urlChars += possibleChars.charAt(Math.floor(Math.random() * possibleChars.length));
        }

        return urlChars;
    }

    index = async (req: Request, res: Response) => {
        res.render('home/index')
    }

    get = async (req: Request, res: Response) => {
        const { shortcode } = req.params

        const data: IShortener = {
            shortUrl: shortcode
        }

        const urlInfo = await shortenerModel.findOne(data)

        if (urlInfo != null) {
            res.redirect(302, urlInfo.longUrl)
        } else {
            res.render('home/not-found')
        }
    }

    create = async (req: express.Request, res: express.Response) => {
        const { url } = req.body

        const data: IShortener = {
            longUrl: url
        }

        let urlInfo = await shortenerModel.findOne(data)

        if (urlInfo == null) {
            const shortCode = this.generateRandomUrl(5)

            const shortData: IShortener = {
                longUrl: url,
                shortUrl: shortCode
            }

            const shortenerData = new shortenerModel(shortData)

            urlInfo = await shortenerData.save()
        }

        res.json({
            success: true,
            message: 'URL Shortened',
            url: urlInfo.shortUrl
        })
    }
}

export default ShortenerController
```

##### controllers/shortener.interface.ts

Here we define an interface named IShortener. It has two optional properties.
```typescript
interface IShortener {
    longUrl?: string,
    shortUrl?: string
}

export default IShortener
```

##### controllers/shortener.model.ts

In this file, we're building a mongoose schema. It has the same two optional properties as **shortener.interface.ts**, and the model is typed with IShortener.

```typescript
import * as mongoose from 'mongoose'
import IShortener from './shortener.interface'

const shortenerSchema = new mongoose.Schema({
    longUrl: String,
    shortUrl: String
})

const shortenerModel = mongoose.model<IShortener & mongoose.Document>('Shortener', shortenerSchema);

export default shortenerModel;
```

#### interfaces

In this folder, we'll only have one interface file. That will be **IControllerBase**.

##### interfaces/IControllerBase.interface.ts

```typescript
interface IControllerBase {
    initRoutes(): any
}

export default IControllerBase
```

#### middleware

There is nothing here; we created this folder in case you need middleware.

#### src/app.ts

In this file, we'll connect to MongoDB. We're also using **dotenv** to get environment variables.

**initDatabase**: We're connecting to MongoDB here.
```typescript
import * as express from 'express'
import { Application } from 'express'
import * as mongoose from 'mongoose';
import 'dotenv/config';

class App {
    public app: Application
    public port: number

    constructor(appInit: { port: number; middleWares: any; controllers: any; }) {
        this.app = express()
        this.port = appInit.port

        this.initDatabase()
        this.middlewares(appInit.middleWares)
        this.routes(appInit.controllers)
        this.assets()
        this.template()
    }

    private middlewares(middleWares: { forEach: (arg0: (middleWare: any) => void) => void; }) {
        middleWares.forEach(middleWare => {
            this.app.use(middleWare)
        })
    }

    private routes(controllers: { forEach: (arg0: (controller: any) => void) => void; }) {
        controllers.forEach(controller => {
            this.app.use('/', controller.router)
        })
    }

    private initDatabase() {
        const { MONGO_USER, MONGO_PASSWORD, MONGO_PATH } = process.env

        mongoose.connect(`mongodb+srv://${MONGO_USER}:${MONGO_PASSWORD}${MONGO_PATH}`, {
            useCreateIndex: true,
            useNewUrlParser: true,
            useFindAndModify: false,
            useUnifiedTopology: true
        })
    }

    private assets() {
        this.app.use(express.static('public'))
        this.app.use(express.static('views'))
    }

    private template() {
        this.app.set('view engine', 'pug')
    }

    public listen() {
        this.app.listen(this.port, () => {
            console.log(`App listening on the http://localhost:${this.port}`)
        })
    }
}

export default App
```

#### src/server.ts

This file starts the application:

```typescript
import App from './app'
import * as bodyParser from 'body-parser'
import ShortenerController from './controllers/shortener/shortener.controller'

const app = new App({
    port: 5000,
    controllers: [
        new ShortenerController()
    ],
    middleWares: [
        bodyParser.json(),
        bodyParser.urlencoded({ extended: true }),
    ]
})

app.listen()
```

### views

In this folder, we'll have view files.
#### views/home/index.pug

```pug
<!DOCTYPE html>
html(lang="en")
    head
        meta(charset="UTF-8")
        meta(name="viewport", content="width=device-width, initial-scale=1.0")
        meta(http-equiv="X-UA-Compatible", content="ie=edge")
        link(rel="stylesheet", href="css/bootstrap.css")
        link(rel="stylesheet", href="css/app.css")
        title TypeScript URL Shortener!
    body
        main(class="container")
            div(class="jumbotron")
                div(class="row")
                    div(class="col-md-12 align-self-center")
                        h1(class="text-center") URL Shortener
                        label(for="url") URL
                        div(class="input-group")
                            input.form-control(type="text", id="url", role="url", aria-label="Short URL")
                            div(class="input-group-append")
                                button(class="btn btn-md btn-danger", id="btn-short", role="button", aria-label="Short URL Button") Short URL
                div(class="row")
                    div(class="col-md-12")
                        div(class="alert alert-danger invisible mt-3", id="url-alert" role="alert")
                            span(id="url-alert-text") URL shortened
        footer(class="footer")
            div(class="container")
                span(class="text-muted") TypeScript URL Shortener!
        script(src="js/app.js")
```

### MongoDB

To connect to MongoDB, we need a MongoDB server. Instead of installing a new one, we'll use [MongoDB Cloud](https://cloud.mongodb.com/). There is a Free Tier; you don't need to pay for it. After you've created an account, your cluster will be prepared. There are a couple of things you have to do. The first one: you need to create a database user.

![MongoDB Admin](https://thepracticaldev.s3.amazonaws.com/i/1ju4933j3s9q5vajqsez.png)

The last thing you have to do is whitelist your IP address in the MongoDB Cloud.

![MongoDB Network](https://thepracticaldev.s3.amazonaws.com/i/uueq1dixtz1ppnntmddd.png)

### .env

In this file, we'll have the MongoDB information:

```env
MONGO_USER=YOUR MONGO USERNAME
MONGO_PASSWORD=YOUR MONGO PASSWORD
MONGO_PATH=YOUR MONGO DATABASE URL
```

That's all.
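With those variables in place, it can also help to fail fast when one of them is missing, instead of getting a cryptic Mongo connection error later. A small guard along these lines could run at the top of server.ts - the helper is my own addition (not part of the tutorial's code), but the variable names match the .env file above:

```typescript
// Sketch: return the names of required variables that are missing or empty.
// In server.ts you could throw if the returned array is non-empty.
function missingVars(env: Record<string, string | undefined>, required: string[]): string[] {
  return required.filter(name => !env[name]);
}

// The variables our app.ts expects, per the .env file above.
const requiredVars = ["MONGO_USER", "MONGO_PASSWORD", "MONGO_PATH"];
```

Calling `missingVars(process.env, requiredVars)` before constructing `App` turns a silent misconfiguration into an immediate, readable error.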
Let's run the application :)

```bash
npm run dev
```

### Screenshot

![URL Shortener Screenshot](https://thepracticaldev.s3.amazonaws.com/i/82ow0pqjd5ir2omsk96k.png)

### Conclusion

This was an excellent experience for me. I really loved TypeScript and Express with MongoDB.

**GitHub**: [https://github.com/aligoren/ts-url-shortener](https://github.com/aligoren/ts-url-shortener)
itachiuchiha
203,086
Monadic parser combinators in C#
The concept of parsing has always seemed very complicated to me. I thought that to work in this area...
0
2019-11-10T15:55:35
https://tyrrrz.me/blog/monadic-parser-combinators
csharp, parser, monads, functional
The concept of parsing has always seemed very complicated to me. I thought that to work in this area you had to have access to some secret knowledge brought by an alien race or something. Some time ago, I had to implement proper markdown parsing in [DiscordChatExporter](https://github.com/Tyrrrz/DiscordChatExporter) so that I could replace the ineffective regular expressions I had been using. I had no idea how to approach this problem, so I spent days researching into this, eventually learning about parser combinators. This concept introduced me to a whole new paradigm of writing parsers that actually makes it a fun and enjoyable experience.

In this article I will try to give a brief high-level overview of what a parser is and what constitutes a formal language, then scope into parser combinators to show how easy it is to build parsers with them. We will also write a working JSON processor as an exercise. But first, let's start with a simple question...

## What is a parser

I'm sure for most people the word "parser" isn't new. We are "parsing" things all the time after all, either directly through the likes of `int.Parse` and `XElement.Parse`, or indirectly when deserializing HTTP responses, reading application settings, etc. But what is a parser in a general sense of the word?

As humans, we are gifted with a lot of innate abilities, one of which is the ability to subconsciously deconstruct text into logical components. This is quite an important skill because it lets us detect patterns, analyze semantics, and compare different snippets of text with each other. For instance, do you see some sort of logical structure when you look at `123 456.97`? You can easily tell that it's a number made out of several components:

- Digits (`123`)
- Thousands separator (space)
- Digits (`456`)
- Decimal separator (`.`)
- Digits (`97`)

For obvious reasons, a computer can't inherently detect patterns like that.
After all, it only sees a seemingly random sequence of bytes: `31 32 33 20 34 35 36 2E 39 37`. As we're dealing with text, we need some way to analyze it. To do that, we essentially need to produce the same set of syntactic components that we were able to see naturally:

```csharp
new SyntacticComponents[]
{
    new NumericLiteralComponent(123),
    new ThousandsSeparatorComponent(" "),
    new NumericLiteralComponent(456),
    new DecimalSeparatorComponent("."),
    new NumericLiteralComponent(97)
}
```

This is what parsers do. They take an input, usually in the form of text, and formalize it using domain objects. In case of an invalid input, a parser rejects it with an informative error message.

```plaintext
          [Input]
          -------
          <Parser>
          /      \
       ✓ /        \ X
        /          \
[Domain objects]  [Error message]
```

Of course, this is a fairly basic example; there are much more complicated languages and inputs out there. But generally speaking, we can say that a parser is a piece of code that can help build the syntactic structure of input text, effectively helping the computer "understand" it. Whether an input is considered valid or not is decided by a set of grammar rules that effectively define the structure of the language.

## Formal grammar

Parsing numbers isn't rocket science and you wouldn't be reading this article if that was what you were after. Everyone can write a quick regular expression to split text like that into syntactic components. Speaking of regular expressions, do you know why it is that they are called *regular*?

There's an area in computer science called *formal language theory* that specifically deals with languages. Essentially, it's a set of abstractions that help us understand languages from a more formal standpoint. A formal language itself builds mainly upon the concept of grammar, which is a set of rules that dictate how to produce valid symbols in a given language. When we talk about valid and invalid inputs, we refer to grammar.
Based on the complexity of these rules, grammars are separated into different types according to the [Chomsky hierarchy](https://en.wikipedia.org/wiki/Chomsky_hierarchy). At the lowest level you will find the two most common grammar types, the *regular* and *context-free* grammars.

```plaintext
+---------------------------------+
|                                 |
|      CONTEXT-FREE GRAMMARS      |
|                                 |
|      +--------------------+     |
|      |                    |     |
|      |  REGULAR GRAMMARS  |     |
|      |                    |     |
|      +--------------------+     |
+---------------------------------+
```

The main difference between the two is that rules in regular grammar, unlike context-free, can't be recursive. A recursive grammar rule is one that produces a symbol that can be further evaluated by the same rule. HTML is a good example of a context-free language, because an element in HTML can contain other elements, which in turn can contain other elements, and so on. This is also why it inherently [can't be parsed using regular expressions](https://stackoverflow.com/a/1732454/2205454).

As a result, while an input that adheres to a regular grammar can be represented using a sequence of syntactic components, context-free grammar is represented using a higher-level structure -- a syntax tree:

```plaintext
   [ HTML document ]
       |        \
       |         \
    <body>      <head>
    /    \          \
<main>  <footer>   <title>
  / \
<div> <p>
```

So if we can't use regular expressions to build these syntax trees, what should we do?

## Parser combinators

There are many approaches for writing parsers for context-free languages. Most language tools you know are built with either manual loop-stack parsers, parser generator frameworks, or parser combinators. The concept of parser combinators revolves around representing each parser as a modular function that takes on some input and produces either a successful result or an error:

```plaintext
f(input) -> (result, inputRemainder) | (error)
```

These parsers can be transformed or combined to form more complex parsers by wrapping the function in another function.
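The article builds everything in C# with Sprache, but the shape described above is language-agnostic. As an illustration only (these names are mine, not Sprache's), the same idea might be sketched in TypeScript like this:

```typescript
// The result of running a parser: either a value plus the unconsumed input,
// or an error message.
type ParseResult<T> =
  | { ok: true; value: T; remainder: string }
  | { ok: false; error: string };

// A parser is just a function from input text to a parse result.
type Parser<T> = (input: string) => ParseResult<T>;

// A primitive parser: match an exact literal at the start of the input.
const literal = (expected: string): Parser<string> => input =>
  input.startsWith(expected)
    ? { ok: true, value: expected, remainder: input.slice(expected.length) }
    : { ok: false, error: `expected "${expected}"` };

// A combinator: a function that takes parsers and returns a new parser.
// Here: try the first parser, and fall back to the second on failure.
const or = <T>(a: Parser<T>, b: Parser<T>): Parser<T> => input => {
  const result = a(input);
  return result.ok ? result : b(input);
};
```

Something like `or(literal("true"), literal("false"))` mirrors the `Or` combinator the article uses for `JsonBoolean`: each building block is an ordinary function value you can pass around and compose.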
Generally speaking, combinators are just another class of functions that take other parser functions and produce more intricate ones.

```plaintext
F(f(input)) -> g(input)
```

The idea is to start by writing parsers for the simplest grammar rules in your language and then gradually move up the hierarchy using different combinators. By going up level by level, you should eventually reach the top-most node that represents the so-called start symbol. That might be too abstract to understand, so how about we look at a practical example?

## JSON processor using parser combinators

To better understand this approach, let's write a functional JSON parser using C# and a library called [Sprache](https://github.com/sprache/Sprache). This library provides a set of base low-level parsers and methods to combine them, which are essentially building blocks that we can use to make our own complex parsers.

To start off, I created a project and defined classes that represent different entities in JSON grammar, 6 of them in total:

- `JsonObject`
- `JsonArray`
- `JsonString`
- `JsonNumber`
- `JsonBoolean`
- `JsonNull`

Here is the corresponding code for them, condensed into one snippet for brevity:

```csharp
// Abstract entity that acts as a base class for all data types in JSON
public abstract class JsonEntity
{
    public virtual JsonEntity this[string name] => throw new InvalidOperationException(
        $"{GetType().Name} doesn't support this operation.");

    public virtual JsonEntity this[int index] => throw new InvalidOperationException(
        $"{GetType().Name} doesn't support this operation.");

    public virtual T GetValue<T>() => throw new InvalidOperationException(
        $"{GetType().Name} doesn't support this operation.");

    public static JsonEntity Parse(string json) =>
        throw new NotImplementedException("Not implemented yet!");
}

// { "property": "value" }
public class JsonObject : JsonEntity
{
    public IReadOnlyDictionary<string, JsonEntity> Properties { get; }

    public JsonObject(IReadOnlyDictionary<string,
JsonEntity> properties)
    {
        Properties = properties;
    }

    public override JsonEntity this[string name] =>
        Properties.TryGetValue(name, out var result) ? result : null;
}

// [ 1, 2, 3 ]
public class JsonArray : JsonEntity
{
    public IReadOnlyList<JsonEntity> Children { get; }

    public JsonArray(IReadOnlyList<JsonEntity> children)
    {
        Children = children;
    }

    public override JsonEntity this[int index] => Children.ElementAtOrDefault(index);
}

// Abstract literal
public abstract class JsonLiteral<TValue> : JsonEntity
{
    public TValue Value { get; }

    protected JsonLiteral(TValue value)
    {
        Value = value;
    }

    public override T GetValue<T>() => (T) Convert.ChangeType(Value, typeof(T));
}

// "foo bar"
public class JsonString : JsonLiteral<string>
{
    public JsonString(string value) : base(value)
    {
    }
}

// 12345
// 123.45
public class JsonNumber : JsonLiteral<double>
{
    public JsonNumber(double value) : base(value)
    {
    }
}

// true
// false
public class JsonBoolean : JsonLiteral<bool>
{
    public JsonBoolean(bool value) : base(value)
    {
    }
}

// null
public class JsonNull : JsonLiteral<object>
{
    public JsonNull() : base(null)
    {
    }
}
```

You can see that all of our JSON types inherit from the `JsonEntity` class, which defines a few virtual methods. These methods throw an exception by default, but they are overridden with a proper implementation on types that support them. Using `JsonEntity.Parse` you are able to convert a piece of JSON text into our domain objects and traverse the whole hierarchy using indexers:

```csharp
var price = JsonEntity.Parse(json)["order"]["items"][0]["price"].GetValue<double>();
```

Now, of course, that won't work just yet because our `Parse` method isn't implemented. Let's fix that. Start by downloading the Sprache library from NuGet, then create a new internal static class named `JsonGrammar`.
This is where we will define the grammar for our language:

```csharp
internal static class JsonGrammar
{
}
```

As I've explained above, this approach is all about building simple independent parsers first and slowly working your way up the hierarchy. For that reason it makes sense to start with the simplest entity there is, `JsonNull`, which can only have one value:

```csharp
internal static class JsonGrammar
{
    private static readonly Parser<JsonNull> JsonNull =
        Parse.String("null").Return(new JsonNull());
}
```

Let's quickly look into what we've just written here. On the right-hand side of the equals sign, we are calling `Parse.String` to create a basic parser that will look for a sequence of characters that make up the string "null". This method produces a delegate of type `Parser<IEnumerable<char>>`, but since we're not particularly interested in the sequence of characters itself, we chain it with the `Return` extension method that lets us specify a concrete object to return instead. Doing this also changes the delegate type to `Parser<JsonNull>`.

It's worth noting that as we write this, no parsing actually happens just yet. We are only building a delegate that can be later invoked to parse a particular input. If we call `JsonNull.Parse("null")` it will return an object of type `JsonNull`. If we try to call it on any other input, it will throw an exception with a detailed error.

That's pretty cool, although not particularly useful yet. Let's move on to `JsonBoolean`. This type, unlike `JsonNull`, actually has two potential states, `true` and `false`. We can handle them with two separate parsers:

```csharp
internal static class JsonGrammar
{
    // ...
    private static readonly Parser<JsonBoolean> TrueJsonBoolean =
        Parse.String("true").Return(new JsonBoolean(true));

    private static readonly Parser<JsonBoolean> FalseJsonBoolean =
        Parse.String("false").Return(new JsonBoolean(false));
}
```

This works very similarly to the previous parser we wrote, except now we have two different parsers for one entity. As you've probably guessed, that's where combinators come into play. We can merge these two parsers into one using an `Or` combinator like this:

```csharp
internal static class JsonGrammar
{
    // ...

    private static readonly Parser<JsonBoolean> TrueJsonBoolean =
        Parse.String("true").Return(new JsonBoolean(true));

    private static readonly Parser<JsonBoolean> FalseJsonBoolean =
        Parse.String("false").Return(new JsonBoolean(false));

    private static readonly Parser<JsonBoolean> JsonBoolean =
        TrueJsonBoolean.Or(FalseJsonBoolean);
}
```

The `Or` combinator is an extension method that takes two parsers of the same type and produces a new parser that succeeds if either one of them succeeds. That means if we try to call `JsonBoolean.Parse("true")` we will get a `JsonBoolean` which has `Value` equal to `true`. Similarly, if we call `JsonBoolean.Parse("false")` we will get a `JsonBoolean` whose `Value` is `false`. And, of course, any unexpected input will result in an error.

One of the coolest things about using parser combinators is how expressive your code is. It can be read quite literally, in fact:

```plaintext
JsonBoolean is either TrueJsonBoolean or FalseJsonBoolean.

TrueJsonBoolean is a string "true" which produces a `JsonBoolean` whose value is `true`.

FalseJsonBoolean is a string "false" which produces a `JsonBoolean` whose value is `false`.
```

Reading code like this makes it really easy to infer the structure of the text we're trying to parse. Let's handle our next data type, `JsonNumber`:

```csharp
internal static class JsonGrammar
{
    // ...
private static readonly Parser<JsonNumber> JsonNumber = Parse.DecimalInvariant .Select(s => double.Parse(s, CultureInfo.InvariantCulture)) .Select(v => new JsonNumber(v)); } ``` As you can see, Sprache already provides `Parse.DecimalInvariant` out of the box, which we can use to match a number. Since that returns `Parser<string>` as it matches the text that represents the number, we need to transform it to `double` first and then to our `JsonNumber` object. The `Select` method here works quite similarly to LINQ's `Select` -- it lazily transforms the underlying value of the container into a different shape. This lets us map raw character sequences into more complex higher-level domain objects. By the way, types that have a `Select` operation (more colloquially known as a "map" operation) are called "functors". As you can see, they are not limited to collections (i.e. `IEnumerable<T>`) but can also be containers with a single value, just like our `Parser<T>` here. With that out of the way, let's proceed to `JsonString`: ```csharp internal static class JsonGrammar { // ... private static readonly Parser<JsonString> JsonString = from open in Parse.Char('"') from value in Parse.CharExcept('"').Many().Text() from close in Parse.Char('"') select new JsonString(value); } ``` Here you can see how we combined three consecutive parsers into one with the use of LINQ comprehension syntax. You are probably familiar with this syntax from working with collections, but it's a bit different here. Each line beginning with `from` represents a separate parser that produces a value. We specify the name for the value on the left and define the actual parser on the right. To reduce these intermediate values to a single result, we terminate with a `select` statement that constructs the object we want. This works because chaining `from` statements internally calls the `SelectMany` extension method, which the author of this library defined to work with `Parser<T>`. 
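The `from`-chaining idea isn't specific to C# or Sprache. As a loose illustration, here is a hypothetical toy parser in Python (the names `char`, `mapped` and `flat_mapped` are made up for this sketch, not any library's API), where a parser is just a function from input text to a `(value, remaining_input)` pair:

```python
# Minimal, hypothetical parser-combinator sketch showing how map/flat-map
# let small parsers be chained into bigger ones.

def char(expected):
    """Parser that matches a single expected character."""
    def parse(text):
        if text and text[0] == expected:
            return expected, text[1:]   # (value, remaining input)
        raise ValueError(f"expected {expected!r}")
    return parse

def mapped(parser, fn):
    """Functor-style map: transform the parsed value, keep the rest."""
    def parse(text):
        value, rest = parser(text)
        return fn(value), rest
    return parse

def flat_mapped(parser, fn):
    """Monadic bind: run a parser, then a parser built from its value."""
    def parse(text):
        value, rest = parser(text)
        return fn(value)(rest)
    return parse

# "a" followed by "b", combined into the string "ab":
ab = flat_mapped(char("a"), lambda a: mapped(char("b"), lambda b: a + b))
print(ab("abc"))  # -> ('ab', 'c')
```

Here `mapped` plays the role of `Select` and `flat_mapped` the role of `SelectMany`: each lets you build a bigger parser out of the value produced by a smaller one.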
Oh, and the types that let you do that with `SelectMany` (also known as "flat map") are what we call "monads". The parser we just wrote will try to match a double quote, followed by a (possibly empty) sequence of characters that doesn't contain a double quote, terminated by another double quote, ultimately returning a `JsonString` object with the text inside. Moving on to our first non-primitive type, `JsonArray`: ```csharp internal static class JsonGrammar { // ... private static readonly Parser<JsonArray> JsonArray = from open in Parse.Char('[') from children in JsonEntity.Token().DelimitedBy(Parse.Char(',')) from close in Parse.Char(']') select new JsonArray(children.ToArray()); } ``` Structurally, a JSON array is just a sequence of entities separated by commas, contained within a pair of square brackets. We can define that using the `DelimitedBy` combinator which tries to match the first parser repeatedly separated by the second one. Notice how this combinator takes `Parse.Char(',')` instead of simply `','`. We could actually have used a more complicated parser in its place, one that doesn't even return a `char` or `string`. This is the power of parser combinators -- as we're gradually moving up the structure of our data, we're working with parsers of increasingly higher order. If you've followed the steps here closely, you probably noticed that the code above doesn't actually compile. That's because we're referencing `JsonEntity` which is a parser that we haven't defined yet. This is because this grammar rule is recursive -- an array can contain any entity, which can be, among other things, an array as well, which can contain any entity, which can be an array, which... you get the point. As a temporary solution, we can define a dummy in place of `JsonEntity`, just to make it compile: ```csharp internal static class JsonGrammar { // ... 
private static readonly Parser<JsonArray> JsonArray = from open in Parse.Char('[') from children in JsonEntity.Token().DelimitedBy(Parse.Char(',')) from close in Parse.Char(']') select new JsonArray(children.ToArray()); public static readonly Parser<JsonEntity> JsonEntity = null; } ``` Also, notice the `Token()` extension method? This wraps our parser in a higher-order parser that consumes all whitespace immediately around our input. As we know, JSON ignores whitespace unless it's within double quotes, so we need to account for that. If we don't do that, our parser will return an error when it encounters whitespace. Parsing `JsonObject` is very similar, except it contains properties instead of raw entities. So we'll have to start with a parser for that first: ```csharp internal static class JsonGrammar { // ... private static readonly Parser<KeyValuePair<string, JsonEntity>> JsonProperty = from name in JsonString.Select(s => s.Value) from colon in Parse.Char(':').Token() from value in JsonEntity select new KeyValuePair<string, JsonEntity>(name, value); private static readonly Parser<JsonObject> JsonObject = from open in Parse.Char('{') from properties in JsonProperty.Token().DelimitedBy(Parse.Char(',')) from close in Parse.Char('}') select new JsonObject(properties.ToDictionary(p => p.Key, p => p.Value)); // ... } ``` Since our model implements `JsonObject` using a dictionary, an individual property is expressed using `KeyValuePair<string, JsonEntity>`, that is the name of the property (`string`) and its value (`JsonEntity`). As you can see, we used LINQ comprehension syntax again to combine sequential parsers. A `JsonProperty` is made out of a `JsonString` for the name, a colon, and a `JsonEntity` which denotes its value. We use `Select()` on `JsonString` to lazily extract only the raw `string` value, as we're not interested in the object itself. 
For the `JsonObject` parser, we pretty much wrote the same code as we did for `JsonArray`, replacing square brackets with curly braces and `JsonEntity` with `JsonProperty`. Finally, having finished with each individual entity type, we can properly define `JsonEntity` by combining the parsers we wrote earlier: ```csharp internal static class JsonGrammar { // ... public static readonly Parser<JsonEntity> JsonEntity = JsonObject .Or<JsonEntity>(JsonArray) .Or(JsonString) .Or(JsonNumber) .Or(JsonBoolean) .Or(JsonNull); } ``` And update the static method we have on `JsonEntity` class itself so that it calls the corresponding parser: ```csharp public abstract class JsonEntity { // ... public static JsonEntity Parse(string json) => JsonGrammar.JsonEntity.Parse(json); } ``` That's it, we have a working JSON processor! We can now call `JsonEntity.Parse` on any valid JSON text and transform it into our domain, i.e. a tree of `JsonEntity` objects. ## Wrapping up Parsing doesn't have to be a daunting and unapproachable task. Functional programming is notorious for its simplicity and elegance and there's no task that can't be solved by throwing monads at it -- parsing being no exception. And luckily, we can do it in C# as well! If you're still thirsty for knowledge and want to see a slightly more complex example, check out [LtGt](https://github.com/Tyrrrz/LtGt), an HTML processor (with CSS selectors!) that I've written using Sprache. Should you wish to learn more about parsing in general, I recommend reading ["Parsing in C#"](https://tomassetti.me/parsing-in-csharp), an article by Gabriele Tomassetti. There are also other monadic parser combinator libraries in .NET that you can check out, most notably [Superpower](https://github.com/datalust/superpower), [Pidgin](https://github.com/benjamin-hodgson/Pidgin) and [FParsec (F#)](https://github.com/stephan-tolksdorf/fparsec). This article is largely based on my talk from .NET Fest 2019, "Monadic parser combinators in C#". 
You can find the original presentation and full source code for the JSON parser [here](https://github.com/Tyrrrz/DotNetFest2019).
tyrrrz
203,126
Code is for co-workers, not compilers
An argument for making code more readable for humans
6,831
2019-11-10T18:38:57
https://dev.to/fluffynuts/code-is-for-co-workers-not-compilers-580g
programming, style, refactoring
--- title: Code is for co-workers, not compilers published: true description: An argument for making code more readable for humans tags: programming,style,refactoring cover_image: https://thepracticaldev.s3.amazonaws.com/i/qu102h74vs6ssfq8sv1i.jpg series: pragmatic-programmer --- Whilst the primary intent of code is to convey the desired logic to a computer, we have to remember that we are likely to need other people to work with that code too. Co-workers, consumers of our apis, and, given enough time, us! Haven't you ever gone back to some code you haven't been near for a long time and almost wondered who wrote it? The style may seem alien -- and if you're dedicated to the path of continual learning, I would argue that there _must_ be a timeframe after which code you wrote _should_ feel clumsy and a little alien. Because if you're continually learning and getting better, that means that, at some point in the past, you weren't at the level you're at now -- and it can become painfully obvious! I have some code which I wrote probably a decade or more ago, which I still use today. Looking at the code, I can recognise that I've come a long way. I don't need to rewrite it or change it -- it does what it does sufficiently well. But it's interesting to look back at code you wrote a while ago and see how you have grown. You can also use this process as a measure of your learning path and growth: I would argue that when we are evolving quickly, code we wrote even a month ago can evoke the desire to refactor or even rewrite! Resist the urge for the latter though, unless there's a good reason to rewrite: if it works sufficiently well, refactor if you like, but move on -- create new things with your learnings. I would also caution that if you can look at code you wrote more than 6 months ago and find no fault, no better way to do things -- you may be stagnating. Perhaps not -- but it's still something I do to measure my "growth velocity" for lack of a better term. 
## So what makes good code for co-workers? That's a loaded question, and I'm sure we all have our own preferences and ideas. I'll share mine here, and I'd appreciate dialogue in the comments. If you have a protip that can help me to write code which others can work with more effectively, I want to know about it! Without further ado, here are my guiding principles. ## Good code reads like a story I've always found that the code I understand more naturally has been written in a style which lays out what it does at a specific level with well-named methods that convey the logic at that level in plain English. I'm a native-speaker of English, so I would _naturally_ lean towards the language, but there are two thoughts I can give on that front anyway: 1. Adopt your native language for method and variable names. People you work with are more likely to understand and you are more likely to express yourself clearly. 2. Write in English if you'd like the widest audience and to open up the code for open-source work, since, whilst English may not be the primary language of the world, it's at least a secondary language for an overwhelming majority of the world. We can fight for the sovereignty of our native tongues -- and we wouldn't be wrong to do so -- or we can accept that there is a language which is most likely to be understood by the most people, and run with that. Who knows? In a century, it may be Mandarin! 
When I say that good code reads like a story, consider the following excerpt from [TempDbMySqlBase](https://github.com/fluffynuts/PeanutButter/blob/master/source/TempDb/PeanutButter.TempDb.MySql.Base/TempDbMySqlBase.cs), a class which is the base for the two flavors of TempDbMySql: [TempDb.MySql.Data](https://www.nuget.org/packages/PeanutButter.TempDb.MySql.Data/) and [TempDb.MySql.Connector](https://www.nuget.org/packages/PeanutButter.TempDb.MySql.Connector/) which only differ in the library they expose and use for connectors (Oracle's [MySql.Data](https://www.nuget.org/packages/MySql.Data/) and the opensource [MySqlConnector](https://www.nuget.org/packages/MySqlConnector/)): ```csharp protected override void CreateDatabase() { Port = FindOpenRandomPort(); var tempDefaultsPath = CreateTemporaryDefaultsFile(); EnsureIsRemoved(DatabasePath); InitializeWith(MySqld, tempDefaultsPath); DumpDefaultsFileAt(DatabasePath); StartServer(MySqld, Port); CreateInitialSchema(); } ``` This block may even read a bit like pseudo-code. The code makes it quite obvious the steps which must be performed to bring up a temporary mysql database server: 1. Find an open random TCP port to listen on 2. Create the contents of defaults file (`my.cnf`) for the server to work against, based on sane defaults and user overrides 3. Ensure that there's no data at the path that has been determined (or provided) for storing the data of this instance 4. Dump out the defaults file out to the data folder 5. Start the server on the high port which was found to be open earlier 6. Run in any constructor-provided scripts which are generally used to generate the initial schema of the database Each of these steps requires considerable work, and any one which seems interesting is a keypress (F12 in Visual Studio or Rider with the VS scheme) away if it's a place the reader would like to learn more about. 
Part of my attempt to make code read like a story is method names that end in words like `At` or `With`, so that the entire line reads a little smoother. ## Concepts are separated with whitespace Where we have lines of logic which perform specific functionality, it's nice to group those lines into a block which is separated from other code by whitespace, at least as a first pass. Consider this simple example:

```csharp
using (var connection = OpenConnection())
using (var command = connection.CreateCommand())
{
    command.CommandText = $"create schema if not exists `{schema}`";
    command.ExecuteNonQuery();
    command.CommandText = $"use `{schema}`";
    command.ExecuteNonQuery();
    SchemaName = schema;
}
```

It may not be immediately obvious that there are two commands being run here, and the current schema name is being stored for usage elsewhere. A little whitespace can help to clarify:

```csharp
using (var connection = OpenConnection())
using (var command = connection.CreateCommand())
{
    command.CommandText = $"create schema if not exists `{schema}`";
    command.ExecuteNonQuery();

    command.CommandText = $"use `{schema}`";
    command.ExecuteNonQuery();

    SchemaName = schema;
}
```

It's a small change which immediately draws the eye to three distinct operations within the block. Once you have two, three or four well-defined blocks within a method, consider pulling them out into well-named methods. 
Another form of whitespace is indentation:

```html
<html><head><title>Hello, World!</title><style>h2 {text-decoration: underline;font-weight: bolder;} p {font-style: italic;} </style><script>window.onload=function(){alert('Hello, there!');};console.log("loaded");</script></head><body> <h2>This is my first html page!</h2><p>I hope you like it!</p></body></html>
```

vs

```html
<html>
  <head>
    <title>Hello, World!</title>
    <script>
      window.onload = function() {
        alert('Hello, there!');
      }
      console.log("loaded");
    </script>
    <style>
      h2 {
        text-decoration: underline;
        font-weight: bolder;
      }
      p {
        font-style: italic;
      }
    </style>
  </head>
  <body>
    <h2>This is my first html page!</h2>
    <p>I hope you like it!</p>
  </body>
</html>
```

## Methods are kept short

I don't like to impose a hard-and-fast rule for methods, but when we start getting up at around 10-15 lines, I'm looking for a way to simplify the text on screen. Sometimes I can't find one, but often I can. A method which requires the reader to scroll to see more of it is _definitely_ too long: there is no way for the reader to see the entire logic chain in one glance, which makes it difficult to fully understand the flow of the method.

## Names tell you _what_ a thing is or does

Ever seen code like this?

```csharp
public async Task<bool> Get(string u, string p)
{
    var x = WebRequest.Create(u);
    using (var y = await x.GetResponseAsync())
    using (var s = y.GetResponseStream())
    using (var w = new FileStream(p, FileMode.CreateNew))
    {
        var t = long.Parse(y.Headers["Content-Length"]);
        var i = 0;
        var a = 8192;
        var d = new byte[a];
        while (i < t)
        {
            var c = t - i;
            if (c > a)
                c = a;
            var b = await s.ReadAsync(d, 0, (int)c);
            await w.WriteAsync(d, 0, b);
            w.Flush();
            i += b;
        }
    }
    return true;
}
```

What do `x`, `y`, `c`, `a` and all of these variables do? What does the method mean when it says it will `Get`? 
It just downloads data from a URL to a file on disk, but to actually know what this code does, you'd have to read the entire thing and hold that information in your head. There are a few moving parts here, but this code would suck a lot less if it just had some readable names:

```csharp
private const int DEFAULT_CHUNK_SIZE = 8192; // 8kb

public async Task<bool> Download(string linkUrl, string outputPath)
{
    using (var disposer = new AutoDisposer())
    {
        var req = WebRequest.Create(linkUrl);
        var response = disposer.Add(await req.GetResponseAsync());
        var readStream = disposer.Add(response.GetResponseStream());
        var writeStream = disposer.Add(new FileStream(outputPath, FileMode.CreateNew));

        var expectedLength = long.Parse(response.Headers["Content-Length"]);
        var haveRead = 0;
        var thisChunk = new byte[DEFAULT_CHUNK_SIZE];

        while (haveRead < expectedLength)
        {
            var toRead = expectedLength - haveRead;
            if (toRead > DEFAULT_CHUNK_SIZE)
                toRead = DEFAULT_CHUNK_SIZE;
            var readBytes = await readStream.ReadAsync(thisChunk, 0, (int)toRead);
            await writeStream.WriteAsync(thisChunk, 0, readBytes);
            writeStream.Flush();
            haveRead += readBytes;
        }
    }
    return true;
}
```

(In addition, I added a class called `AutoDisposer` that disposes things when it is disposed, in the reverse order in which it was asked to track them. So I can cut down on the number of `using` statements, but get the same functionality.)

It's arguably _not_ perfect code -- but it's better than the first attempt. Whenever I'm struggling to find the name for a property, variable or POCO for holding data, I ask "What information does this hold? What _is_ it?". Whenever I'm struggling to find the name for a method or class, I ask "What does it _do_?". Asking the last question also helps me to be honest when I'm starting to make a class or method which does too much, in other words, when I should be considering more than one class or method. 
If there's an `And` in the name (eg `FindRecordAndSoftDelete`), then it's a clue that perhaps I need two methods. ## Comments I like to keep comments sparse. Comments tend to rot, that is, to go out of sync with the code that they are near. Like code I just saw recently at work: ```csharp switch (retries) { case 1: return Retry.Immediately(); // retry in 5 minutes case 2: return Retry.In(TimeSpan.FromMinutes(2)); // and so on -- all the comments after this lied // but had told the truth at some point in time. } ``` In addition, comments which tell you the obvious just waste the reader's time: ```csharp // initialize the database context dbContext.Init(); // fetch the first user record return dbContext.Users.First(); ``` There are, however, _valuable_ comments: 1. "WHY" comments: when the reason(s) as to _why_ the code does what it does are not obvious. I went trawling through my open-source work and the only comments I could find along this line are within empty `catch` blocks, explaining that I'm intentionally suppressing errors. However, in a business use-case, there are often little idiosyncrasies that have to be taken into account, like perhaps certain operations can't ever happen on a Sunday because a third-party integration is always down for maintenance. 2. API documentation. This is where you want to use the documentation that fits your ecosystem, eg xmldoc for .net or jsdoc for JavaScript. API documentation is usually surfaced by editing tools as consumers are writing code against your libraries and really help to make the developer experience slick, when they are consistently helpful. Remember though that the onus is on you to keep these comments up-to-date every time you're working in that area of code. The only comment worse than a missing, helpful one is a misleading one. Also, if you're tempted to write a short, one-line comment above a block of code to describe what it does, why not just extract a method? You'll end up: 1. 
documenting what the code is doing 2. abstracting out that little block to make it easier to read the overall logic of your current method 3. helping to prevent comment-rot: people are (in my experience) more likely to rename a method than to update comments ## Wrapping it up A friend of mine used to say: "write your code like the next person who is going to work on it is a psychopathic killer, and he knows where you live!". Whilst perhaps he was taking it a bit far, the point remains: _Code is for co-workers, not compilers_ Machines are not that fussy about the instructions they receive -- they just execute them. And compilers don't care if your methods have short, indecipherable names, or longer, more descriptive names. Even in the case of languages like JavaScript where the source is delivered to the user to be executed on her machine, we have code minifiers to optimise that experience -- so rather write code you'd be glad to come back to and maintain than code which loses its meaning the moment you close the editor. I also highly recommend watching this talk by Kevlin Henney: {% youtube ZsHMHukIlJY %}
fluffynuts
203,132
Running laravel queue worker on two different applications that share the same database
Prerequisites PHP LARAVEL NOTE: If you already understand laravel queues, and you're...
0
2019-11-10T21:06:19
https://dev.to/ajimoti/running-laravel-queue-worker-on-two-different-applications-that-share-the-same-database-ff3
laravel, queue, php
--- title: Running laravel queue worker on two different applications that share the same database published: true description: tags: laravel, queue, PHP cover_image: https://cdn-images-1.medium.com/max/1600/0*JuDOoqYySHEoMX-x.jpg --- # Prerequisites 1. PHP 2. LARAVEL **NOTE:** If you already understand laravel queues, and you're not interested in stories, go straight to the second heading. "Queues are used to delay time-consuming tasks until a later time" - [larashout.com](https://larashout.com) # Why use Laravel Queues? Queues are set up for a better user experience: they make time-consuming tasks run in the background while the user moves on to something else, instead of having to wait for those tasks to finish. For example, say you are building an application that allows a store owner to send a promotional email to all their users, and it takes approximately two (2) seconds to send said email to one user; imagine how long the store owner would have to wait to send the email to a thousand users. **Quick maths:** The total estimated time would be the number of users (say N), multiplied by two seconds (N x 2s), so we have 1000 x 2s = 2000s. Meaning the store owner would have to wait for 2000 seconds (approximately 33.3 minutes) to reach a thousand users before getting a response, which is not great. Now think of how long it'd take to reach five thousand or ten thousand users. This is where queues come in. Laravel queues pick up all the actions that need to be carried out, store them on a driver with a predefined delay time set by the developer, return a response to the user, then proceed to dispatch the queued jobs on the server without interfering with the user's activities.
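The quick-maths estimate above is easy to sanity-check with a throwaway calculation (a Python sketch, not part of any Laravel app):

```python
# Sanity-checking the estimate: ~2 seconds per email, sent synchronously.
SECONDS_PER_EMAIL = 2
users = 1000

total_seconds = users * SECONDS_PER_EMAIL
total_minutes = total_seconds / 60

print(total_seconds)            # 2000
print(round(total_minutes, 1))  # 33.3
```

At five or ten thousand users, the same synchronous approach would take roughly 2.8 and 5.6 hours respectively -- exactly the kind of wait a queue is meant to hide.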
To restate all of that in terms of our store example: the Laravel queue picks up the customers' email addresses selected by the store owner, stores them on a driver as a job with a predefined delay time set by the developer, returns a response to the store owner, then proceeds to dispatch the emails in the background at the predefined time. The dispatch is done on the server without interfering with other activities the store owner might be doing on the application. Below is a code sample that enables the store owner to send promotional emails using a queue. ``` $emailJob = (new SendPromotionalEmail())->delay(Carbon::now()->addMinutes(5)); dispatch($emailJob); ``` Before the above code can work, there has to be a queue worker running. Here is what a queue worker does: a queue worker checks the driver for any pending jobs and executes them. In this case, SendPromotionalEmail() is a job that would be stored on the driver waiting to be executed by the queue worker. To run a queue worker, run the following command on your terminal. ``` php artisan queue:work ``` If you have read to this point and still don't understand what queues are or what they are meant to do, you should read the queues section of the Laravel documentation for a better understanding of them and how to set them up. # Diving straight to the Point You might be wondering why you would want to run a queue worker on two different applications that share the same database. I recently ran into a situation like this, and spent hours trying to debug the issue before realizing the problem. **PS:** In the example below, I used `database` as the queue driver. Let's use the ride-hailing company Uber as an example. Say we built an application like Uber using laravel, but this time we built two different applications, the riders' version and the drivers' version, and made both share the same database instead of using an API. (This example is a bad idea, but it's the best fit for this topic.) 
Building applications this big will require running queues in the background to carry out the time-consuming tasks, so we would have a queue worker running on both applications using the command: ``` php artisan queue:work ``` The above command will constantly check the jobs table in the database and execute any pending job it finds. Now here is the problem: since we have this command (the queue worker) running on both applications, there are times the riders' queue worker will try to execute jobs that belong to the drivers' application, and vice versa, depending on which queue worker hits the database first. And here's what happens whenever a queue worker tries to run a job that doesn't belong to it: the job is executed but fails instantly, because its codebase differs from the job payload. To solve this, we have to customize our queue workers to only process particular queues on each of our applications. To do this, the jobs should be dispatched with a name using something like `->onQueue('name_here');`, and the queue worker should run on both apps with their respective queue names. 
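To make the effect of queue names concrete, here is a toy simulation in Python (a hypothetical sketch, not Laravel's actual worker loop): two workers share one "jobs table", and each only picks up jobs tagged with its own queue name.

```python
# Toy illustration: two workers polling a shared jobs list. Each worker only
# takes jobs tagged with its own queue name, so the riders worker never tries
# to execute a drivers job, and vice versa.

jobs = [
    {"queue": "UberRiders", "payload": "send rider receipt"},
    {"queue": "UberDrivers", "payload": "send driver payout"},
    {"queue": "UberRiders", "payload": "send promo email"},
]

def work(queue_name, jobs):
    """Remove and return only the jobs belonging to the given queue."""
    mine = [job for job in jobs if job["queue"] == queue_name]
    jobs[:] = [job for job in jobs if job["queue"] != queue_name]
    return [job["payload"] for job in mine]

print(work("UberRiders", jobs))   # ['send rider receipt', 'send promo email']
print(work("UberDrivers", jobs))  # ['send driver payout']
```

Without the queue-name filter, whichever worker hit the table first would grab jobs from the other application -- which is exactly the failure described above.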
Going by the explanation above, here's what the code would look like on the riders app: ``` $emailJob = (new EmailNotification())->delay(Carbon::now()->addMinutes(5)); dispatch($emailJob)->onQueue('UberRiders'); ``` Now that we have categorized the rider jobs, the queue worker should run with a flag indicating that it only runs the UberRiders jobs, so we have something like this: ``` php artisan queue:work database --queue=UberRiders ``` instead of using the default command below: ``` php artisan queue:work ``` The same thing would be done on the drivers' laravel app, so we have something like this: ``` $emailJob = (new EmailNotification())->delay(Carbon::now()->addMinutes(5)); dispatch($emailJob)->onQueue('UberDrivers'); ``` And the queue worker looks like this: ``` php artisan queue:work database --queue=UberDrivers ``` For practical examples about laravel queues, you can read about queues and how they work [here](https://www.larashout.com/laravel-queues-step-by-step-guide). # Conclusion The best way to go about this is to build a central API and have both applications connect to it; that way you wouldn't have to bother about running different queue workers on the same database. But if you do find yourself in this type of situation, this should help you fix it.
ajimoti
203,206
Stockpile of Resources
There is an endless supply of articles, tutorials, frameworks, etc out there on the internet. If noth...
3,656
2019-11-10T23:06:48
https://dev.to/jaredharbison/the-little-things-3egk
There is an endless supply of articles, tutorials, frameworks, etc out there on the internet. If nothing else, this is a catch-all for me to keep track of the resources I've come across that I imagine will help me out in the future. I'm currently thinking I'll refresh this single post often with my favorites, rather than turning it into a series. I would rather it evolve over time than expand. ********************************************************* <h1><center> Let's get started! </center></h1> ********************************************************* [Figma](https://www.figma.com) was approachable and perfect for copying SVG for the logo animation on my portfolio site. It's not as intense as [Gimp](https://www.gimp.org/), but I only scratched the surface of the cool features on Figma and I expect to be using it quite a bit. [Zurb Motion UI](https://zurb.com/playground/motion-ui) is a SASS library for creating flexible CSS transitions and animations. It's really straightforward and curated. I'm drawn to it for the assistance it provides in series animations. [GitPitch](https://gitpitch.com) sidetracked me for a few hours this week while researching slideshow options for my blog. I ultimately went with [Speaker Deck](https://speakerdeck.com/) to take advantage of the Dev.to liquid tags but I hope to put GitPitch to use ASAP. [Git Magic](http://www-cs-students.stanford.edu/~blynn/gitmagic/index.html) is something I just stumbled upon while looking into how to solve a Git mistake I made months ago. This may not have been updated in a couple years so just double-check anything you end up using! [Markdown Guide](https://www.markdownguide.org/) may seem like a no-brainer, but for me the best tutorial or guide is often just the official docs. I also have no shame in my lack of memorized markdown, and intend to revisit this often. [Shields.io](https://shields.io/category/activity) is another cool GitHub-adjacent tool I found this week. 
These shields will help spruce up your README.md files! Speaking of README.md files, I was really into this generator I found over the last couple weeks. I even suggested a LinkedIn prompt and the team has already implemented it! {% link https://dev.to/kefranabg/generate-beautiful-readme-in-10-seconds-38i2 %} [CSS Animation for Beginners](https://thoughtbot.com/blog/css-animation-for-beginners) has been a go-to article for me over the past couple weeks. I noticed it was published in 2014 so I almost skipped over it until I realized it has been updated in 2019! Once I used most of the tips, the following article made so much more sense and helped me a ton on a couple projects this week. {% medium https://medium.com/engineerbabu/a-detailed-guide-to-css-animations-and-transitions-b544502c089c %} ********************************************************* <h1><center> That's all folks! </center></h1> *********************************************************
jaredharbison
203,277
What's New in Angular 9?
Angular 9 is the smaller, faster, and easier to use and it will be making Angular developers life eas...
0
2019-11-11T05:23:52
https://dev.to/anilsingh/what-s-new-in-angular-9-56j9
angular, typescript, angular9
Angular 9 is smaller, faster, and easier to use, and it will make Angular developers' lives easier. The full version of Angular 9 will be released in October/November 2019. Newly added Angular 9 features: 1. Added an undecorated-classes migration schematic in the core. 2. `formControlName` now also accepts a number in the form. 3. Selector-less directives are now allowed as base classes in View Engine in the compiler. 4. Added support for selector-less directives as base classes in Ivy, and made the Ivy compiler the default for `ngc`. 5. Converted all ngtsc diagnostics to `ts.Diagnostics`. Explore in detail: https://www.code-sample.com/2019/08/whats-new-in-angular-9-angular-9-new.html
anilsingh
203,305
Qt Installer Framework: TypeError cannot read property name
Qt Installer Framework: TypeError can...
0
2019-11-11T07:30:48
https://dev.to/matthijs990/qt-installer-framework-typeerror-cannot-read-property-name-3p8b
{% stackoverflow 58791830 %}
matthijs990
203,352
WEB COMPONENTS VS. FRAMEWORKS: A Podcast
Happy Friday, Hackers! Today we've got some food-for-thought content for you! Davy and Danny discuss...
0
2019-11-11T11:01:59
https://dev.to/hackflix_dev/web-component-vs-frameworks-a-podcast-732
javascript, webdev, typescript, todayilearned
Happy Friday, Hackers! Today we've got some food-for-thought content for you! Davy and Danny discuss when to use PWAs vs. Native App or if you need an app at all and just a really responsive website. Continue to watch the FULL VIDEO on a discussion about web components vs. frameworks and much more! WATCH THE FULL VIDEO ON YOUTUBE: https://youtu.be/hfekHpHrFPM Click below for Danny's article that went VIRAL: https://itnext.io/using-the-dom-like-a-pro-163a6c552eba Danny's GitHub: https://github.com/DannyMoerkerke/custom-element Related Articles: https://dev.to/ionic/apple-just-shipped-web-components-to-production-and-you-probably-missed-it-57pf 🔥 PREVIOUS VIDEO • PODCAST ON HACKING A GROWTH MINDSET AND REACH YOUR GOALS → https://www.youtube.com/watch?v=YzNSTGCJqJI&t=233s 🔥 ➡️Share this Video: https://youtu.be/hfekHpHrFPM 📺Subscribe To Our Channel and Get More Great Tips http://bit.ly/Subscribe2Hackflix ➡️ Ever wanted to learn about the tech industry and software engineering? Hackflix is your HACK to learning the tips and tricks, motivational stories and educational podcasts to guide you for your future in the industry. We aim to learn, inspire and share! So what are you waiting for? Check out all of our latest videos on https://hackflix.dev/ ✅Make sure to Like, Favourite and Share this video and Subscribe if you haven't done so already at: https://bit.ly/Hackflix 🎬Watch our BRAND NEW series! Teach Me Anything in Less than 10 minutes!: ➡️Event loop in JavaScript & Web Workers https://www.youtube.com/watch?v=zf6N5OcfxnU&t=11s ➡️Scaling Systems And Organising Teams https://www.youtube.com/watch?v=HDMnUeX7pvU&t=18s // Other Great Resources: https://hackflix.dev/ ⬇️Tweet us a Question! https://twitter.com/hackflix_dev 👤Connect with us: https://www.facebook.com/hackflix.dev https://www.instagram.com/hackflix.dev https://twitter.com/hackflix_dev https://www.linkedin.com/company/hackflix & For our developers out there: https://dev.to/hackflix_dev
hackflix_dev
203,387
Mongodb replace external id after $lookup
Hi, im going crazy... I want to replace the userId inside the comments with the real user after the...
0
2019-11-11T10:43:52
https://dev.to/d0xzen/mongodb-replace-external-id-after-lookup-je7
help
Hi, I'm going crazy... I want to replace the userId inside the comments with the real user after the $lookup. I tried many ways; I tried to group but I can't really reach what I want. This is my field inside the page collection: <pre> "comments" : [ { "user_Id" : ObjectId("aaa"), "content" : "aaaa", "rep" : [ { "user_Id" : ObjectId("bbb"), "comment" : "bbbb", }, { "user_Id" : ObjectId("ccc"), "comment" : "cccc", } ] }, { "user_Id" : ObjectId("ddd"), "content" : "ddd", "rep" : [ ] } ] </pre> Users collection: <pre> "users" : [ { "_id" : ObjectId("aaa"), "name" : "user1", "email" : "test1@test.com", }, { "_id" : ObjectId("bbb"), "username" : "user2", "email" : "test2@test.com", } ] </pre> What result I was looking for: <pre> "comments" : [ { "user" : { "_id" : ObjectId("aaa"), "name" : "user1", "email" : "test1@test.com", } "content" : "aaaa", "rep" : [ { "userId" : { "_id" : ObjectId("bbb"), "username" : "user2", "email" : "test2@test.com", }, "comment" : "bbbb", }, { "user" : { "_id" : ObjectId("aaa"), "name" : "user1", "email" : "test1@test.com", }, "comment" : "cccc", } ] }, { "user" : { "_id" : ObjectId("bbb"), "username" : "user2", "email" : "test2@test.com", }, "content" : "ddd", "rep" : [ ] } ] </pre> What I did so far: <pre> db.pages.aggregate([ { $match: { _id: ObjectId('abcbc') } }, { $project: { comments: 1, } }, { $lookup: { from: 'users', localField: 'comments.user_Id', foreignField: '_id', as: 'users' } } ]).pretty() </pre> Right now it gives me the correct users, but it gives me comments with all my comments and users with all matched users. How can I replace the userId with the real user object inside rep too? If I change the $lookup `as` to 'comments.user' it'll replace everything.
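For reference, one possible direction (an untested sketch, not from the original post) is to keep the `$lookup` above and merge the flat `users` array back into `comments` on the application side. The `embedUsers` helper below is a made-up name; note that the lookup only fetches users referenced by top-level comments, so reply authors missing from `users` come back as `null`:

```javascript
// Merge the looked-up `users` array back into `comments` client-side.
// Assumes `page` is one document produced by the $lookup pipeline above.
// Ids are compared as strings here; with real ObjectIds, prefer id.equals().
function embedUsers(page) {
  // Index the looked-up users by id for O(1) access.
  const byId = new Map(page.users.map(u => [String(u._id), u]));

  // Swap a `user_Id` field for the full `user` document (or null if absent).
  const withUser = ({ user_Id, ...rest }) => ({
    user: byId.get(String(user_Id)) || null,
    ...rest,
  });

  return page.comments.map(c => ({
    ...withUser(c),
    rep: (c.rep || []).map(withUser),
  }));
}
```

This sidesteps the tricky nested `$map`/`$arrayElemAt` aggregation stages at the cost of doing the join in application code.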
d0xzen
203,394
The Git Supremacy
After addressing the elephant in the room (Git vs GitHub). Let's look at how to start working with bo...
0
2019-11-11T12:55:46
https://medium.com/@mayank.uiet7/the-git-supremacy-fd469fb06777
github, git, beginners, tutorial
After addressing the elephant in the room (Git vs GitHub), let's look at how to start working with both. ### Scenario 1: **You are going to work on a repository that already exists.** + **FORK** 1. Forking is copying the contents of the main repository and pasting it in a new repository on your _GitHub_ account. 2. So basically **copying happens on the GitHub account (only)** and you communicate with this newly created repository on *your GitHub account* from the locally running git. 3. Any change you make henceforth will update this new repository created on your GitHub account, and the main repository (from which you forked) will remain unaffected. + **CLONE** 1. Cloning is copying the contents of the main repository onto your local machine. 2. So basically **copying happens between GitHub and your local machine over the network (internet)** i.e. you communicate with the original remote branch from the git in your local machine. 3. Any change you make henceforth will update the main repository from which you took a clone. --- >Cloning via https vs ssh >We'll cover this in another post, but in short >Clone using https: >1. You can clone from any repository directly by doing >`git clone <link of repository>` >2. You will have to input your username and password every time you want to communicate with GitHub (remote). >Clone using ssh: >1. You will have to put your public ssh key in your GitHub account and then clone by `git clone <link of repository>` >2. You don't have to input your username and password to communicate with remote (GitHub). --- ### Scenario 2: **You are starting your own project, working on your local machine, and want to create a GitHub repository of the same.** + **Step 1**: `git init <name of folder>` _Go to your desired folder and type this command_ What this will do is create a folder as specified and also create a __*.git*__ sub-directory. Basically it *initializes your git repository locally*. _*.git* directory contains all the information i.e. 
history, remote address, etc. about your repository on your local machine_ >*git clone < >* **=** *git init < >* **+** copy files from remote repository > Now you work on your files and save them as you do. + **Step 2**: `git add .` or `git add <name of files>` What this command does is add all your files to the staging area. **work/.git/index** Basically from here you can preview your commit i.e. add all the files you've modified to the commit or segregate the work into more accurate commits by adding file-wise. > What a staging area is will be covered in another post, but in short, imagine it is an area where you can put only those files which you need in your commit, out of all the files you've worked on. + **Step 3**: `git commit -m <commit message>` What this command does is add all your staged files into your repository as a commit (which has a unique hash to access) with a specific message for better understanding by anyone who looks at it **work/.git/objects** Basically here all your changes will be added to the local git repository along with all the other commits you've done before. > Again, we will go into detail in another post and will cover blobs and trees etc.; this is a beginner-level post. + **Step 4**: `git push origin <branch name>` What this command does is push/copy your local repository onto the remote repository i.e. a network command. Therefore all your files will be uploaded and stored *to the remote server (GitHub, GitLab, etc.)* **from your local machine** with the help of the version control tool called *Git*. >Once you've pushed your changes and your friend or collaborator wants to update his/her local repository they can do one of the following: + `git fetch --all` This command downloads all the data from _remote repository_ into your **local repository** i.e. 
a network command + `git pull origin <branch name>` This command downloads all the data from _remote repository_ and tries to merge or overwrite the data in your **working directory** i.e. a network command ><figcaption>Here is a pictorial summary of everything I've said above</figcaption> >![git lifecycle](https://nceas.github.io/sasap-training/materials/reproducible_research_in_r_fairbanks/images/git-flowchart.png)
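Put together, the local half of Scenario 2 looks like this in a terminal. The folder name, file, and commit message are placeholders; `git push origin <branch name>` is left out because it needs a real remote:

```shell
# Scenario 2, local steps only: init, stage, commit.
git init my-project
cd my-project

# Identity is required before the first commit on a fresh machine.
git config user.name "Your Name"
git config user.email "you@example.com"

echo "# My Project" > README.md

git add .                       # stage everything for the next commit
git commit -m "Initial commit"  # record the staged snapshot locally
git log --oneline               # verify: one commit in the history
```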
mayankarya
203,434
Data Privacy Impact Assessment Module
Helical is a unified cybersecurity management solution. This solution provides Automated assessment t...
0
2019-11-11T12:15:45
https://dev.to/helicalinc/data-privacy-impact-assessment-module-4005
Helical is a unified cybersecurity management solution. This solution provides automated assessments to evaluate the readiness and maturity of privacy controls per relevant regulatory requirements such as GDPR and CCPA. Read more: https://helical-inc.com/hlcproducts/data-privacy-regulatory-assessment/
helicalinc
203,507
Some useful TypeScript snippets
Pick If you need to construct a new type based on some properties of an interface you alwa...
0
2019-11-11T15:53:25
https://dev.to/mbrtn/some-useful-typescript-snippets-2278
typescript, productivity, codenewbie, tutorial
## Pick If you need to construct a new type based on some properties of an interface, you can always reach for the `Pick` utility type. ```typescript interface MyInterface { id: number; name: string; properties: string[]; } type MyShortType = Pick<MyInterface, 'name' | 'id'>; ``` Now `MyShortType` only has the `name` and `id` properties extracted from `MyInterface`. ## Omit Another useful transformer type like `Pick`, but `Omit` works in the other direction and excludes properties from the given interface. Let's have a look at the example: ```typescript interface MyInterface { id: number; name: string; properties: string[]; } type MyShortType = Omit<MyInterface, 'name' | 'id'>; ``` This time `MyShortType` only has the `properties` property, because the others are omitted from `MyInterface`. ## keyof This is my favorite. Very useful for writing getters: ```typescript interface MyInterface { id: number; name: string; properties: string[]; } const myObject: MyInterface = { id: 1, name: 'foo', properties: ['a', 'b', 'c'] }; function getValue(value: keyof MyInterface) { return myObject[value]; } getValue('id'); // 1 getValue('count') // Throws compilation error: Argument of type '"count"' is not assignable to parameter of type '"id" | "name" | "properties"'. ``` ## Record If you have tried to add types to a plain object like this, you know how messy it looks: ```typescript const myTypedObject: {[key: string]: MyInterface} = { first: {...}, second: {...}, ... } ``` Better to achieve this with the shiny `Record` type: ```typescript const myTypedObject: Record<string, MyInterface> = { first: {...}, second: {...}, ... }; ``` It's a bit neater to use `Record` rather than the ugly `{[key: string]: ...}` construction, isn't it? ## Bonus: Optional Chaining This is a very useful feature, new in TypeScript as of 3.7. 
Almost every React Component has ugly code like this: ```jsx <React.Fragment> {apiResult && apiResult.data && apiResult.data.params && apiResult.data.params.showOnline && (<div>✅ Online</div>)} </React.Fragment> ``` Now you can do it like that (thanks to all the coding gods!): ```jsx <React.Fragment> {apiResult?.data?.params?.showOnline && (<div>✅ Online</div>)} </React.Fragment> ``` I hope these little snippets will help you a bit 🙃.
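One more bonus: these utilities compose with generics too. As a quick extra (the `pluck` helper below is a made-up name, not a built-in), `keyof` plus a generic constraint gives a getter whose return type follows the property you ask for:

```typescript
interface MyInterface {
  id: number;
  name: string;
  properties: string[];
}

// K is constrained to the keys of T, and the return type T[K]
// tracks exactly which property was requested.
function pluck<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

const myObject: MyInterface = { id: 1, name: 'foo', properties: ['a', 'b', 'c'] };

const userName = pluck(myObject, 'name'); // typed as string
const userId = pluck(myObject, 'id');     // typed as number
// pluck(myObject, 'count'); // compilation error, same as getValue above
console.log(userName, userId);
```

Unlike the `getValue` example above, `pluck` works for any object type, not just `MyInterface`.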
mbrtn
203,541
How many people dream about code?
At least once a week I have intense stress dreams about fixing bugs in my teams code base. Then I wa...
0
2019-11-11T16:24:53
https://dev.to/cloakedstudios/how-many-people-dream-about-code-1ika
At least once a week I have intense stress dreams about fixing bugs in my team's code base. Then I wake up in a panic and realize everything is fine. Anyone else experience this? I don't see myself as being under a lot of stress.
cloakedstudios
203,545
who into NLP? 📚
my ObservableHQ NLP collection
0
2019-11-11T16:34:15
https://dev.to/tarasnovak/who-into-nlp-1hco
discuss, nlp, jsnotebooks, collections
--- title: who into NLP? 📚 published: true description: my ObservableHQ NLP collection tags: #discuss #nlp #jsNotebooks #collections --- For the NLP & Observable HQ [#jsNotebooks 📚](https://twitter.com/search?q=%23jsNotebooks&src=typed_query) fans out there: https://observablehq.com/collection/@randomfractals/nlp ![NLP JS Notebooks](https://thepracticaldev.s3.amazonaws.com/i/fimm7feu8ftbcjjrlm40.png) P.S: yeah, we killz in dataViz, NLP, d3, you name it! :) https://observablehq.com/@randomfractals/nlp-tag-tree?collection=@randomfractals/nlp # Do you have NLP || jsNotebooks 📚 to share? ...
tarasnovak
203,581
Why successful blockchains should be built on the BEAM.
Learn how Arc Block (founding sponsors of the Erlang Ecosystem Foundation) use the Erlang VM to build a blockchain framework and why the BEAM is perfect for building dApps.
0
2019-11-11T17:22:53
https://dev.to/erlang_solutions/why-successful-blockchains-should-be-built-on-the-beam-4na7
dapps, blockchain, erlang, beamvm
--- title: Why successful blockchains should be built on the BEAM. published: true description: Learn how Arc Block (founding sponsors of the Erlang Ecosystem Foundation) use the Erlang VM to build a blockchain framework and why the BEAM is perfect for building dApps. tags: dApps, blockchain, Erlang, BEAM VM, cover_image: https://i.imgur.com/ugFvVzB.png --- ### Who are ArcBlock, and why do they love the BEAM? ArcBlock is on a mission to take the complexity out of blockchain and fast track its adoption into everyday life. To do this, they've developed an all-inclusive blockchain development platform that gives developers everything they need to build, run and deploy decentralized applications (dApps) easily. At the heart of their platform is the BEAM VM. They're such big believers in and supporters of the Erlang Ecosystem that they joined the Erlang Ecosystem Foundation as a founding sponsor. In this guest blog Tyr Chen, VP of Engineering at ArcBlock, will discuss why they love the BEAM VM and the benefits of using it as a cornerstone for anyone wanting to build dApps. ### An introduction to the BEAM and blockchain Erlang is one of the best programming languages for building highly available, fault tolerant, and scalable soft real-time systems. The BEAM is the virtual machine - and the unsung hero from our viewpoint. The benefits of the BEAM apply to other languages run on the VM, including Elixir. No matter what high-level programming language people are using, it all comes down to the BEAM. It is this essential piece of technology that helps achieve the all-important nine-nines of availability. Today, the BEAM powers more than half of the world's internet routers and we don't think you have to look much further than that for validation. Below are some of the benefits of the BEAM that make it perfect for building blockchains. ### Network Consensus Our decision to leverage the BEAM as a critical component for building decentralized applications (dApps) was an easy one. 
To start, blockchain and decentralized applications need to achieve a consistent state across all the nodes in the network. We accomplish this by using a state replica engine (also known as a consensus engine). Consensus is important as this mechanism ensures that the information added to a blockchain ledger is valid. To achieve consensus, the nodes on the network need to agree on the information, and once consensus happens, the data can be added to the ledger. There are multiple engines available, including our current platform choice, Tendermint, to support the state replica engine. ### The BEAM + dApps Apart from the consensus engine, the BEAM is the perfect solution to satisfy several other critical requirements for decentralized applications. For decentralized applications to work in our development framework, we need to have an embedded database to store the application state and an index database for the blockchain data. While this is happening, we also need the blockchain node(s) to have the ability to listen to peers on the network and "vote" for the next block of data. For these requirements, the system needs to be continuously responsive and available. Now, it's also important to note that in addition to being continually responsive, we also need to account for CPU-intensive tasks. In particular, our blockchain platform and services cannot stop working when the system encounters CPU-intensive tasks. If the system becomes unresponsive, a potentially catastrophic error could occur. ### Hot Code Reloading Besides BEAM's Scheduler, another feature we love is hot code reloading. It lets you do virtually anything on the fly without ever needing to take the BEAM down. For example, our blockchain application platform ships with lots of different smart contracts that developers can use to make their decentralized applications feature-rich. However, with blockchain, you have a distributed network and need to ensure that every node behaves the same. 
In most cases, developers have to update and reboot their nodes to get the latest software enabled, which causes potential issues and unnecessary downtime. With ArcBlock, we utilize the hot code reloading feature of BEAM to let the nodes enable/disable smart contracts on the fly across the entire network. This is simply done by sending a transaction that tells the system it should upgrade the software at a specific time. When that happens, ArcBlock will tell the BEAM to install the new code, and then every node in the network will have the latest features ready to go. ### Speed is Relative The BEAM uses the "actor model" to simulate the real world, and everything is immutable. Because of this, there is no need to lock state to prevent race conditions. Of course, everything comes with a cost. The simplicity and beauty of immutability in the BEAM could lead things to run slower. To mitigate potential slowness, ArcBlock leverages Rust to help the CPU on intensive tasks such as the Merkle-Patricia tree for states. And once again, the BEAM demonstrates its value by providing an easy way to communicate with the outside world using Rust to boost the performance to another level. ### Garbage Collecting Don't let the name fool you. Garbage collecting is critical. Erlang uses dynamic memory with tracing garbage collection. Each process has its own stack and heap, which is allocated in the same memory block and can grow towards each other. When the stack and the heap meet, the garbage collector is triggered, and memory will be reclaimed. While this explanation is a bit technical, the process for garbage collecting in BEAM is done at a process level, ensuring that there will never be a "stop-the-world-and-let-me-clean-up-my-trash" type of garbage collection. Instead, it ensures that processes will continue without any type of interruption. 
### OTP Last but not least, Erlang provides a development suite called OTP that gives developers an easy way to use well-established best practices in the BEAM world. For any enterprise or blockchain application platform, building around industry standards is a must, and OTP makes it easy to write code that utilizes all the goodness available to developers in BEAM. ### Fault Tolerance There is a reason we saved this for last. This is by far the feature ArcBlock relies upon most in the BEAM; it is what elevates the BEAM over many competitor technologies when it comes to blockchain. Although tens of thousands of transactions are happening simultaneously, any error that occurs in certain parts of the system won't impact the entire node. The errors will be self-healing, enabling the node to resist bad behaviour or specific attacks. For anyone who is delivering a service to users, or supporting a production application, this is a critical feature. By including fault tolerance by default, we can ensure that anyone running on the ArcBlock platform can remain online and available. We believe that the BEAM, while designed many years ago, was intended for blockchain. It gives developers and blockchain platforms like ArcBlock all the necessary features and capabilities to run a highly concurrent, fault tolerant system that makes developers' lives easier. Keep calm and BEAM on. ### Learn more Tyr Chen, VP of Engineering at [ArcBlock is the guest host of our webinar on Wednesday, November 27](https://www.erlang-solutions.com/resources/webinars.html). Register to take part, and even if you can’t make it on the day, you’ll be the first to get a recording of the webinar once it is completed.
erlangsolutions
203,593
Jobless is better than in the wrong job. Insecurity is better than secure-but-with-caveats
Friday I took a decision, I did something I have never done before in my life: I put values ahead of...
0
2019-11-11T17:41:29
https://dev.to/samuelemattiuzzo/jobless-is-better-than-in-the-wrong-job-insecurity-is-better-than-secure-but-with-caveats-175f
life, choices, values
Friday I made a decision; I did something I have never done before in my life: I put values ahead of money, fear and security. I decided that being jobless was better than being in the wrong job, furthering a really bad cause. I had accepted a new job in the gambling sector. For a month, that did my head in. It's not something I can endorse, nor can I ignore it. It's not a sector I believe should even exist. After a month of internal struggles, fighting with the always-in-my-head-disappointed-father thought, I spent a day thinking. And I turned around. I'd rather be 2 months jobless than draw pay that comes from that. What happened after this? I had spent the week bedridden due to illness. It all cleared up within an hour after voiding the contract I had signed. I am now looking forward to December and January without a job. I have never taken more than 5 days off work over a year in my entire life, so this is now scary and exciting! Be true to yourselves, be yourselves, be a good human. Also this is my first ever post on DEV.TO!
samuelemattiuzzo
203,727
Three layers of productivity and my recommends
I think there are three layers in the productivity. The first one is personal productivity, the seco...
0
2019-11-11T21:15:58
https://dev.to/yuno_miyako/three-layers-of-productivity-and-my-recommends-19d7
I think there are three layers to productivity. The first is personal productivity, the second is team productivity, and the last layer is architecture (or organization) productivity. # Personal Productivity My recommendations for boosting **personal productivity** in coding are TDD, knowing data structures and algorithms, and design patterns. TDD stands for Test Driven Development, and it improves my code and my speed. Knowing data structures, especially hash maps, reduces my time for problem resolution. And design patterns are applicable across any platform or framework. The listener pattern can be used in mobile development, web development, and simple applications anywhere. # Team Productivity Agile or Scrum is for boosting team productivity. A Kanban board is super useful for task management. Retrospectives make us look back at our process and improve it. # Architecture Productivity Microservices are the best architecture for big products. Many teams can work autonomously with freedom and responsibility. Each team can use any technology and any programming language for the specific problem they want to solve. They can innovate small. It also reduces communication overhead.
yuno_miyako
203,759
Create and Host a Svelte App in 5 minutes or less
Before you start Yes! You can have a Svelte based app up and running in as fast as 90 seco...
0
2019-11-12T17:13:12
https://triptych.writeas.com/create-and-host-a-svelte-app-in-5-minutes-or-less?pk_campaign=rss-feed
svelte, github, hosting, javascript
--- title: Create and Host a Svelte App in 5 minutes or less published: true date: 2019-11-11 22:44:10 UTC tags: svelte, github, hosting, javascript canonical_url: https://triptych.writeas.com/create-and-host-a-svelte-app-in-5-minutes-or-less?pk_campaign=rss-feed --- ## Before you start Yes! You can have a [Svelte](https://svelte.dev/) based app up and running in as fast as 90 seconds! Before we start the timer, let's get a few things set up (in case you don't already have them ready to go). * Get a [Github](https://github.com) account * Get a [Netlify](https://netlify.com) account * Install [Visual Studio Code](https://code.visualstudio.com/) * Install [Node.js](https://nodejs.org/en/) on your system ## Decide on a name What will you call your project? This will determine how you fill in the next few steps. ## Let's go! 1) Launch Visual Studio and create a directory with the name you chose. I picked StrawberryIcecream. ![](https://i.snap.as/McW3xyQ.png)2) Open a terminal window. ![](https://i.snap.as/DBWyTNz.png) 3) Type the following in the terminal: `npx degit sveltejs/template StrawberryIcecream` (You could skip the create directory part in step 1, but I do this just to keep everything separate). ![](https://i.snap.as/DWzyUwA.png) 4) Try out your app. Type in the following in the terminal: ``` cd StrawberryIcecream npm install npm run dev ``` You should see something like this in the terminal ![](https://i.snap.as/egSO1B0.png) And this in the browser ![](https://i.snap.as/gtiL5Qf.png) 5) Now go to **[https://github.com/new/](https://github.com/new/)** (We are doing this so Netlify will have a place to find your files, and you can update them any time and Netlify will update your app!) ![](https://i.snap.as/2ebVuSH.png) 6) Put in your project name. ![](https://i.snap.as/PKpCA05.png) 7) Hit **Create repository** (Make sure the repo is “public”) 8) Now you need to get your files from your desktop to your repo. 
You should see something like this: ![](https://i.snap.as/JB4cKIz.png) 9) Click **uploading an existing file** and you'll see something like this. ![](https://i.snap.as/9qxtDkg.png) 10) Now go find your **StrawberryIcecream** folder on your system and drag and drop the files to your repo. Be sure to **NOT** include the `node_modules` folder. ![](https://i.snap.as/weurmW7.png) 11) Hit **Commit changes** to push them to your new repo. This completes the process and basically “stamps” your files with a time stamp so you can make changes to them later. ![](https://i.snap.as/aKFnb27.png) We are almost done! 12) Sign in to Netlify. 13) Choose **New site from Git** 14) Choose **GitHub** under _Continuous deployment_ 15) It will ask you to authenticate with GitHub. You say yes. 16) Now pick a repo. I'm picking my `StrawberryIcecream` one. ![](https://i.snap.as/nKeV88o.png) 17) Now here's a potentially tricky part. You need to tell Netlify how to build your app, and what directory to deploy from. You will see **Basic build settings**: in the **build** field type `npm run build`, and in the **publish** directory field type `/public` 18) Hit **Deploy site**. You should see something like this: ![](https://i.snap.as/dINE6ZX.png) 19) After a few seconds you should see something like this: ![](https://i.snap.as/1WN3E7L.png) 20) Your site is deployed! Click the url and you will see your Svelte App! If you want to change the name of the site go to **Site settings** and hit **Change site name** I changed mine to [https://strawberryicecream.netlify.com/](https://strawberryicecream.netlify.com/) Now, here's the cool thing. You can make changes to your local files and push them up to your GitHub repo (you can even drag and drop them again if you don't want to use Git commands!) and after you commit the changes, the site will automatically update! 
1) Just make a change in Visual Studio (go to `src/main.js`): ![](https://i.snap.as/O5ua5gP.png) 2) Go to your GitHub repo (mine is [https://github.com/triptych/StrawberryIcecream/tree/master/src](https://github.com/triptych/StrawberryIcecream/tree/master/src)) 3) Hit **upload files** (assuming you don't want to set up Git to do it that way) 4) Pick your `main.js` file you changed. Drag and drop it up to the site and **commit** the change. 5) Wait about 60 seconds. 6) Your site is updated!! ![](https://i.snap.as/ANW82tq.png) So, with barely any setup or special commands, you can get a Svelte site up and running in minutes! I hope you got something out of this article. Please share and stuff!
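Bonus: if you'd rather not click through the UI for step 17 every time, the same build settings can live in a `netlify.toml` file committed at the root of your repo (optional; Netlify reads it automatically when present):

```toml
# netlify.toml — file-based equivalent of the "Basic build settings" above
[build]
  command = "npm run build"
  publish = "public"
```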
triptych
203,858
Day 13 - Digging Deeper into CSS - Grid Layout
Well, I got busy the past 3 days and got sick but I am back. For some reason, this was a much easier...
3,222
2019-11-12T04:05:12
https://dev.to/jojonwanderlust/day-13-digging-deeper-into-css-grid-layout-3im7
html, css, codenewbie
Well, I got busy the past 3 days and got sick but I am back. For some reason, this was a much easier concept to learn. What is the Grid layout? On [MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout/Basic_Concepts_of_Grid_Layout), it is defined as "a two-dimensional grid system to CSS". What that means is that with this system you have access to the column and the row, unlike Flexbox, where you have access to the column or the row. This is not an all-inclusive list of all the properties, as the goal of this is to deepen my knowledge of concepts I didn't understand enough the first time. For a full list, you can check out [CSS Tricks](https://css-tricks.com/snippets/css/complete-guide-grid/). As with Flexbox, there is the idea of the "Grid Container" and the "Grid Items", meaning certain properties are applied to the container and others to the items. As you are creating the grid, you may set the value for the column or the row only; let's say that you have 8 <code>div</code>. You set the columns to <code>grid-template-columns: 200px 200px 200px;</code>. What you will get is a grid with 3 columns and however many rows are necessary to fit the remaining <code>div</code>. This is what is called "Implicit Track" and "Explicit Track". Implicit track - How the grid organizes when the column or row is not set. Explicit track - How the grid organizes when the column or row is set. **The FR Unit** This is a unit for the Grid Layout that creates flexible units without calculating percentages. This is amazing for responsiveness. It represents a fraction of the available space in the Grid Container. **Minmax() Function** Used to limit the size of items when the grid container changes size. This is especially useful when you use <code>grid-auto-rows</code> and <code>grid-auto-columns</code>. **Auto Placement** When you create your <code>grid-template</code>, the default way for the items to be laid out is in a row. 
For example, if you have 6 <code>div</code> in 3 columns, it will be laid out as such. <code>1 2 3 4 5 6</code> You can change the layout with <code>grid-auto-flow</code> without changing the source code. If you give it a value of column, the items will be laid out like this now. <code> 1 3 5 2 4 6</code> We know that we can use <code>grid-auto-columns</code> and <code>grid-auto-rows</code> to set the size of implicit grid items. However, if you want to set how they are placed, you can use the <code>grid-auto-flow</code> property. The available values are <code>row</code>, <code>column</code> as seen above, and <code>dense</code>. **Auto-Fill & Auto-Fit** <code>auto-fill</code>: This allows you to automatically insert as many rows or columns of your desired size as possible depending on the size of the container. You can create flexible layouts when combining <code>auto-fill</code> with <code>minmax()</code>. <code>auto-fit</code>: works almost identically to <code>auto-fill</code>. The only difference is that when the container's size exceeds the size of all the items combined, <code>auto-fill</code> keeps inserting empty rows or columns and pushes your items to the side, while <code>auto-fit</code> collapses those empty rows or columns and stretches your items to fit the size of the container. **Alignment** <code>justify-content</code> moves the entire grid (all the grid items) within the grid container. This property aligns the grid along the row axis. <code>align-content</code> moves the entire grid (all the grid items) within the grid container. This property aligns the grid along the column axis. <code>justify-items</code> aligns grid items along the inline (row) axis. <code>align-items</code> aligns grid items along the block (column) axis. **Anonymous Items** This is when you have text that isn't in a <code>div</code> or another element within a Grid Container. The text will react as a grid item as if it was wrapped in a container. 
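As a quick sketch tying several of these together (the class name is made up), a responsive card grid using <code>repeat()</code>, <code>auto-fit</code>, <code>minmax()</code>, and the <code>fr</code> unit might look like:

```css
/* As many columns as fit, each at least 200px wide, sharing
   leftover space equally; empty tracks collapse (auto-fit). */
.card-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
  gap: 1rem;              /* space between rows and columns */
  justify-items: stretch; /* items fill their column (row axis) */
  align-items: start;     /* items align to the top of their row */
}
```

No media queries needed: the column count adjusts automatically as the container resizes.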
So far I have used both Flexbox and the Grid Layout in a project, and I found that I used the Grid Layout for the page layout and Flexbox for the rest of the page. I'm due for another project tomorrow and I am going to be more mindful of the choices I make.

The streak continues...
jojonwanderlust
203,880
Docker Containers Explained by Renting Office Space
If you have ever visited a coworking space, then you can understand Docker via this visual tutorial.
0
2019-11-12T04:53:17
https://dev.to/kbk0125/docker-containers-explained-by-renting-office-space-p0o
webdev, beginners
---
title: Docker Containers Explained by Renting Office Space
published: true
description: If you have ever visited a coworking space, then you can understand Docker via this visual tutorial.
tags: webdev, explainlikeimfive, beginners
cover_image: https://i1.wp.com/blog.codeanalogies.com/wp-content/uploads/2019/11/drew-beamer-dbKwY7Ijsvw-unsplash.jpg?w=2190&ssl=1
---

In the last 15 years, it has become exponentially easier to deploy and manage web applications.

First, services like [Amazon Web Services](https://aws.amazon.com/) allowed you to easily rent a portion of a server via virtual machines. Rather than, you know, buying the whole server and running it from your closet.

And now, you can use services like [Docker](https://www.docker.com/), which use containers to make it even cheaper and easier to manage dynamic web apps.

But, new web developers often get the two underlying technologies (virtual machines and containers) mixed up. Although they both make it much easier to deploy web applications, they have some significant differences.

In fact, it's kind of like the difference between renting your own office space and renting coworking space (using a company like WeWork). So, this tutorial will use the office example to show how they are different.

In order to understand this tutorial, you must first understand the concept of AWS and virtual machines. [Check out my separate guide to AWS](https://blog.codeanalogies.com/2018/07/31/amazon-web-services-aws-explained-by-operating-a-brewery/) if you need to review that first.

Now, imagine that you are the owner of a small software company (5 employees) and you are looking for office space.

### The History of Office Space and Servers

Let's rewind 20 years to the year 2000. If you were searching for office space as a small software startup, your options were likely very limited and cost-prohibitive. You could:

1. Buy a small office in a strip mall (or something like that)
2. Sign a multiyear lease with a significant upfront payment

You would face similar difficulty with getting your product online. You would need to buy an entire server, store it somewhere, and make sure it stayed online 24/7. Very inefficient.

But, over the last 20 years, your choices for renting office space have changed! As tech startups became more popular, you could rent a portion of an office on a one-year lease, or even a month-to-month lease.

And, as coworking spaces became trendy in the 2010s, you could just rent a desk or a closed room within a space shared by many companies at once!

![](https://i1.wp.com/blog.codeanalogies.com/wp-content/uploads/2019/11/OfficeSpaceDrawing.png?fit=730%2C163&ssl=1)

At the same time, this was also creating a different relationship between the office building itself and the team of people required to run the building.

When you purchased the entire office at once, you needed to also manage all the services needed to run the space: snacks, cleaning, furniture, etc.

If the physical space is like **hardware**, the office services are like **software**. And the office manager is the **operating system (OS)**, since they determine how the office works.

![](https://i0.wp.com/blog.codeanalogies.com/wp-content/uploads/2019/11/hardware-v-software.png?fit=730%2C714&ssl=1)

But, as we have made office space more accessible, we have also greatly increased the complexity of office services. In other words, there are a greater number of **operating systems** that must work together.

For example, let's say your company begins to rent a floor of a larger office building. Now, there must be a general office coordinator to manage all the floors, and you must have your own office coordinator to manage the services for your floor!

Okay, let's be honest, it's a small company, so that means the CEO is probably the office coordinator for their floor.
![](https://i2.wp.com/blog.codeanalogies.com/wp-content/uploads/2019/11/officeversionofVMs.png?fit=730%2C713&ssl=1)

Now, each CEO needs to figure out the operating system for their office: which snacks, when it will get cleaned, and any furniture that is needed. It's a heck of a lot better than needing to buy real estate, at least.

### The Difference Between Virtual Machines and Containers

Let's tie this back to virtual machines and containers. Much like our second scenario, virtual machines add their own operating system on top of the existing operating system on the server. And, they must use a layer of middleware called a hypervisor to allow each virtual machine to share the hardware capacity.

It kind of looks like this:

![](https://i2.wp.com/miro.medium.com/max/2971/1*VtMtEmgKUBXMMLLCk-vt7w.png?w=730&ssl=1)*[Image Credit](https://medium.com/flow-ci/introduction-to-containers-concept-pros-and-cons-orchestration-docker-and-other-alternatives-9a2f1b61132c)*

So, there are three levels of software that must work together alongside the files in your application:

1. Host OS
2. Hypervisor
3. Guest OS

There are certainly some advantages here. For example, each virtual machine can run its own operating system, which adds more flexibility. But, it also adds a series of resource-intensive software layers.

Let's return to the office space example to learn about using containers. Imagine the year is now 2015, and coworking spaces like WeWork have become popular.

At these coworking spaces, you simply need to start paying a month-to-month lease per desk. The property managers take care of snacks, cleaning, furniture and everything else. In other words, you are able to benefit from the space's **existing operating system**!

Here's what that looks like:

![](https://i2.wp.com/blog.codeanalogies.com/wp-content/uploads/2019/11/officeascontainers.png?fit=730%2C651&ssl=1)

Great! Now each CEO can focus on just running their company.
In fact, this is the key distinction between containers and virtual machines. As you can see in the diagram above, containers share the host operating system. That means they do not need to run their own OS, or work with a hypervisor to distribute hardware resources across multiple operating systems.

This means that containers tend to be much more scalable than virtual machines: you can easily deploy new containers in a standardized environment with fewer points of failure. It's also more cost-effective, since you do not need to pay to run all the extra software.

### Advantages/Disadvantages of Containers

Keep in mind, there are hundreds of servers on AWS that are running the exact same OS all around the world. That means that you can easily manage containers with your web app all around the world, with very little overhead or custom setup.

There is one major disadvantage to containers: security vulnerabilities. Each container has root access to the server, so problems that start with one container on a server can then affect others as well.

Let's return to the original point of this article: Docker allows developers to create and manage containers. So, in our analogy, the Docker service isn't the building itself: that's hardware (servers) and Amazon Web Services. And it isn't an individual company that rents space: those are like containers.

In this case, Docker is like the coworking space management company: they make it possible for you to rent office space (or server space) in a new way!

Interested in other similar tutorials? Check out the [CodeAnalogies blog](https://blog.codeanalogies.com) for explanations of other common webdev topics.
kbk0125
203,939
disable view on laravel nova attach
to disable view on attach item add this to the resource that want to be disable to view public func...
0
2019-11-12T07:44:31
https://dev.to/anditsung/disable-view-on-laravel-nova-attach-oll
laravel, nova
To disable viewing an attached item, add this to the resource whose view you want to disable:

```php
public function authorizedToView(Request $request)
{
    if ($request->query->get("viaResource")) {
        return false;
    }

    return true;
}
```

`$request->query->get("viaResource")` checks whether the resource is being accessed via another resource.
anditsung
203,979
5 Mistakes to Avoid When Building Your First Product
Mistake #1 - Solving a problem that no one has (or that no one is willing to pay for) Paul...
0
2019-11-12T08:30:32
https://www.zakmiller.com/posts/mistakes-to-avoid-when-building-your-first-product/
startup, beginners
## Mistake #1 - Solving a problem that no one has (or that no one is willing to pay for)

[Paul Graham wrote](http://www.paulgraham.com/startupideas.html) about how trying to come up with a startup idea is a pretty bad way to come up with a good idea. You generate a bunch of plausible-sounding ideas (that superficially resemble products you're familiar with) that are doomed to fail.

To paraphrase Mark Twain, the difference between a plausible-sounding idea and a good idea is the difference between the lightning bug and the lightning.

## Mistake #2 - Writing off an idea because someone else has already done it

Facebook wasn't the first social network, Google wasn't the first search engine, and the iPhone wasn't the first smartphone. More importantly, many markets are not monopolies and can easily support multiple products.

The fact that someone built something similar to what you're thinking about is a good thing. They proved it's an economically viable idea, and you can learn from their successes and failures. The alternative (building something no one has made into a successful business before) is much riskier.

## Mistake #3 - Not having a marketing plan

If you build it, they won't come. Go ahead, spend the next six months building something and put it out on the internet. No one will know it's there unless you do something about it.

For a product idea to be a good idea, there has to be an economically viable way to gain new customers. Maybe that's inbound traffic through blog posts (content marketing), maybe that's Google Ads, or maybe that's outbound cold calls. Regardless, without a viable channel, you don't have a product idea - you have a fun project. Check out [Traction](https://www.amazon.com/Traction-Startup-Achieve-Explosive-Customer/dp/1591848369) to learn more.

## Mistake #4 - Not setting your sights low enough

With large markets comes lots of money.
Thus, lots of competition, a large (required) marketing budget, and a relatively small number of viable business ideas (it's hard to make something that everyone wants).

It's much easier to build something that's perfect for a few thousand people. Better yet, once you have a target niche in mind, it's much easier to find ideas that will delight your customers.

## Mistake #5 - Not being remarkable

People are busy and won't care about your little app. Cut through the noise and give them an experience they'll remark about. [Seth Godin has written](https://seths.blog/2007/01/how_to_be_remar/) about this.

## Conclusion

A good way to find out if your product has legs is to build an MVP as quickly as possible and try to sell it (this is the premise of [The Lean Startup](http://theleanstartup.com/), which you should read). The longer you put that off, the more likely you are to be wasting your time. Even better, know that people will pay for it before you start to build it (get them to fund the development!).
zakmiller
204,002
Challenge yourself with a JS Coding Challenge
As a mentor via CodingCoach.io I'm currently working my way through the 21 Days of Code coding challe...
0
2019-11-12T09:11:24
https://dev.to/jquinten/challenge-yourself-with-a-js-coding-challenge-20ma
javascript, codenewbie, learning, codingchallenge
As a mentor via [CodingCoach.io](https://codingcoach.io/) I'm currently working my way through the *[21 Days of Code](https://coding-challenge.lighthouselabs.ca/) coding challenge by Lighthouse Labs* with my mentees. I thought I'd share our findings and resolutions here, as well as offer some perspective on these events.

Firstly, I kind of like this iteration (we previously did a similar coding challenge side by side). We're following a storyline involving the Mayor of Codeville and face different kinds of challenges. It's nice to have some context and not simply solve problems. I think this is a really engaging way of keeping interest and motivation.

The challenges ramp up nicely: they start out with simple problems, but they get more complex each day. Fortunately, in the online code editor (which can be a real nightmare) they've done a good job of emulating a development environment. You can output `console` statements and repeatedly test your code, where error handling is helpful.

What I also like is the encouragement of *Test Driven Development*. When you run your code, you also get the output of the unit test assertions. This helps beginners in pinpointing where the code might be faulty and in the practice of coding against tests.

So how do we work together? Basically, everybody tries to solve the challenges on their own, with the help of the entire internet (Google, Stack Overflow, the discussion forum). I share my resolutions to the daily challenges and try to explain which steps I took to come to this resolution (this helps me in writing more readable code as well). If someone is stuck, I try to help break down the problem, or we work together in a Codepen on the broken code and I explain what went wrong.

I am offering off-the-shelf resolutions for the daily challenges in a [github repository](https://github.com/joranquinten/lighthouse-21-days-of-code) I've created.
The point is having fun, so if you feel like you're stuck, you can just copy & paste a resolution. There's not really anything to win or earn, except internet points, so you can only fool yourself by copy & pasting all the resolutions.

We're over the halfway point of the challenge now, so if you're interested and haven't joined, there's still time! And if you want to catch up, feel free to take a look at the resolutions.

BTW, I'm in no way affiliated with [Lighthouse Labs](https://www.lighthouselabs.ca/). My mentee happened to stumble upon this and we thought it was an excellent practice.
jquinten
204,024
Implementing Simple PCA using NumPy
I am open to job offers, feel free to contact me for any vacancies abroad. In this article, I will i...
0
2019-11-12T20:57:58
https://dev.to/akaame/implementing-simple-pca-using-numpy-3k0a
python, numpy, datascience, machinelearning
_I am open to job offers, feel free to contact me for any vacancies abroad._

In this article, I will implement the PCA algorithm from scratch using Python's NumPy. To test my results, I used the PCA implementation of scikit-learn.

```python
from sklearn.decomposition import PCA
import numpy as np

k = 1  # target dimension(s)
pca = PCA(k)  # Create a new PCA instance

data = np.array([[0.5, 1], [0, 0]])  # 2x2 data matrix
print("Data: ", data)
print("Reduced: ", pca.fit_transform(data))  # fit and transform
```

This results in:

```
[[-0.55901699]
 [ 0.55901699]]
```

### Centering Data Points

Make the origin the centroid of our data.

```python
data = data - data.mean(axis=0)  # Center data points
print("Centered Matrix: ", data)
```

### Get Covariance Matrix

Get the covariance matrix of our features.

```python
cov = np.cov(data.T) / data.shape[0]  # Get covariance matrix
print("Covariance matrix: ", cov)
```

### Perform Eigendecomposition on Covariance Matrix

Eigendecomposition extracts the eigenvalues and corresponding eigenvectors of a matrix.

```python
v, w = np.linalg.eig(cov)
```

### Sort Eigenvectors According to Eigenvalues

Most numerical libraries offer eigenvectors pre-sorted; however, this is not the case for NumPy. Therefore, we need to argsort the eigenvalue vector to get the sorting indices and perform sorting on the columns of the eigenvector matrix.

```python
idx = v.argsort()[::-1]  # Sort descending and get sorted indices
v = v[idx]  # Use indices on eigenvalue vector
w = w[:, idx]  # Use indices on eigenvector matrix columns

print("Eigenvalues: ", v)
print("Eigenvectors: ", w)
```

### Get First _K_ Eigenvectors

Our aim in PCA is to **construct a new feature space**. Eigenvectors are the **axes** of this new feature space and eigenvalues denote the **magnitude of variance** along that axis. In other words, a higher eigenvalue means more variance on the corresponding principal axis. Therefore, the set of axes with the highest variances are the most important features in this new feature space, since they hold most of the information.
By getting the first K columns of the eigenvector matrix, which have the K biggest eigenvalues, we form what is called a projection matrix. The dot product of our data matrix and the projection matrix, _which sounds pretty cool but it is actually pretty straightforward_, is the reduced feature space, the result of PCA.

```python
# Get the dot product of the data with the first K columns of the
# eigenvector matrix (a.k.a. the projection matrix)
print("Result: ", data.dot(w[:, :k]))
```

Which also results in:

```
[[-0.55901699]
 [ 0.55901699]]
```

This is it. Any corrections or suggestions are welcome.

### Extras

[Square vs. Non-Square: Eigendecomposition and SVD](https://math.stackexchange.com/questions/583938/do-non-square-matrices-have-eigenvalues)

Original Turkish article at

{% medium https://medium.com/@sddkal/cpp-eigen-k%C3%BCt%C3%BCphanesi-ile-temel-bile%C5%9Fen-analizi-pca-7cdaf37155ef %}

My go-to [article on PCA](https://sebastianraschka.com/Articles/2015_pca_in_3_steps.html)
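For reference, here is a sketch of the whole procedure combined into a single function (the helper name `pca` is mine; it mirrors the steps above):

```python
import numpy as np

def pca(data, k):
    """Reduce data (n_samples x n_features) to k dimensions."""
    data = data - data.mean(axis=0)        # center the data points
    cov = np.cov(data.T) / data.shape[0]   # covariance matrix of the features
    v, w = np.linalg.eig(cov)              # eigendecomposition
    idx = v.argsort()[::-1]                # indices sorting eigenvalues, descending
    w = w[:, idx]                          # sort eigenvector columns accordingly
    return data.dot(w[:, :k])              # project onto the first k principal axes

print(pca(np.array([[0.5, 1], [0, 0]]), 1))
```

Note that the sign of each output column may differ from scikit-learn's, since eigenvector signs are arbitrary; the magnitudes match.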
akaame
204,196
Explain Open Source like I'm five
Explaining the concept of open source in an easy-to-understand way.
0
2019-11-12T15:44:50
https://dev.to/peter/explain-open-source-like-i-m-five-264e
explainlikeimfive
---
title: Explain Open Source like I'm five
published: true
description: Explaining the concept of open source in an easy-to-understand way.
tags: explainlikeimfive
---

How would you explain the concept of open source to a five-year-old?
peter
205,203
The Accessibility Tree
You can also read this post on my blog. Disabled users can and do use your page with a variety of as...
0
2019-11-14T04:04:14
https://blog.benmyers.dev/accessibility-tree/
webdev, a11y
*You can also read this post [on my blog](https://blog.benmyers.dev/accessibility-tree).*

Disabled users can and do use your page with a variety of assistive technologies. They use screenreaders, magnifiers, eye tracking, voice commands, and more. All of these assistive technologies share a common need: they all need to be able to access your page's contents.

The flow of page contents from browser to assistive technology isn't often talked about, but it's a vital aspect of enabling many disabled users' access to the internet. It's taken a lot of experimentation and innovation to get to where we are now: the *accessibility tree*. This tree shapes how disabled users understand and interact with your page, and it can mean the difference between access and exclusion. As web developers, it's our job to be aware of how the code we write shapes the tree.

Let's take a journey through browser internals, operating systems, and assistive technologies. Our first stop: a crucial lesson learned from earlier screenreaders about information flow.

## The Ghost of Screenreaders Past

The earliest screenreaders were built for text-only DOS operating systems, and they were pretty straightforward. The text was all there in the device's screen buffer, so screenreaders just needed to send the buffer's contents to speech synthesis hardware and call it a day.<sup>1</sup>

Graphical user interfaces proved trickier for screenreaders, however, since GUIs don't have any intrinsic text representations. Instead, screenreaders like Berkeley Systems' outSPOKEN had to resort to intercepting low-level graphics instructions sent to the device's graphics engine.<sup>2</sup> Screenreaders then attempted to interpret these instructions. This rectangle with some text inside is probably a button. That text over there is highlighted, so it's probably selected. These assumptions about what was on the screen were then stored in the screenreader's own database, called an *off-screen model*.
![outSPOKEN menu](https://thepracticaldev.s3.amazonaws.com/i/dfr4hvqqfhpzl375km4p.jpg)

Off-screen models posed many problems. Accounting for the alignment and placement of UI elements was tricky, and errors in calculations could snowball into bigger errors. The heuristics that off-screen models relied on could be flimsy — assuming they've even been implemented for the UI elements you want in the first place!<sup>3</sup>

Guessing at what graphics instructions mean is clearly messy, but could something like an off-screen model work for webpages? Could screenreaders scrape HTML or traverse the DOM, and insert the page contents into the model?

Screenreaders such as JAWS tried this approach, but it, too, had its problems. Screenreaders and other assistive technologies usually strive to be general-purpose and work no matter which application the user is running, but that's hampered by including a lot of web-parsing logic. Also, it left users high and dry whenever new HTML elements were introduced. For instance, when sites started using HTML5's new tags such as `<header>` and `<footer>`, JAWS omitted key page contents until an (expensive) update could be pushed out.<sup>4</sup>

What did we learn from off-screen models? Assistive technologies that build their own off-screen models of webpages or applications can be error-prone and susceptible to new, unfamiliar elements and controls. These issues are symptoms of a bigger problem with the approach: **when we try to reverse engineer meaning, we end up swimming upstream against the flow of information.**

Let's go back to the drawing board. Instead of having assistive technologies make guesses about screen contents, let's have applications tell assistive technologies exactly what they're trying to convey.

## Accessibility APIs and Building Blocks

If you want applications such as browsers to be able to expose information to assistive technologies, you'll need them to speak the same language.
Since no developer wants to have to support exposing their application's contents to each screenreader and speech recognition software and eye tracker and every other assistive technology individually, we'll need assistive technologies to share a common language. That way, those who are developing browsers or other applications need only expose their contents once, and any assistive technology can use it.

This *lingua franca* is provided by the user's operating system. Specifically, operating systems have interfaces—*accessibility APIs*—that help translate between programs and assistive technologies. These accessibility APIs have exciting names such as Microsoft Active Accessibility, IAccessible2, and macOS Accessibility Protocol.

How do these accessibility APIs help? They give programs the building blocks they need to describe their contents, and they serve as a convenient middleman between a program and an assistive technology.

### Building Blocks

Accessibility APIs provide the building blocks for applications to describe their contents. These building blocks are data structures called *accessible objects*. They're bundles of properties that represent the functionality of a UI element, without any of the presentational or aesthetic information.

One of these building blocks could be a `Checkbox` object, for instance.

![An orange LEGO brick is labeled with properties of a Checkbox object. The name is "Show tips on startup", checked is true, focusable is true, and focused is false](https://thepracticaldev.s3.amazonaws.com/i/brdf1qdj6un39s57q0ps.png)

You could also have a `Button` object:

![A green LEGO brick is labeled with properties of a Button object. The name is "Submit", pressed is false, focusable is true, and focused is true](https://thepracticaldev.s3.amazonaws.com/i/swc387l1zdjodgrqkdoe.png)

These building blocks enable all applications to describe themselves in a similar way.
As a result, a checkbox is a checkbox, as far as assistive technology is concerned, regardless of whether it appears in a Microsoft Word dialog box or on a web form.

![A diagram shows a pop-up with an unchecked "Show tips at startup" checkbox and an OK button. It also shows a web form with a checked "Unsubscribe" button and a Submit button. Arrows connect the two checkboxes to an orange LEGO brick and the two buttons to a green LEGO brick.](https://thepracticaldev.s3.amazonaws.com/i/705wxmauv04iu5vboms4.png)

These building blocks, by the way, contain three kinds of information about a UI element:

* **Role:** What kind of element is this? Is it text, a button, a checkbox, or something else? This information matters because it lays out expectations for what this element is doing here, how to interact with this element, and what will happen if you do interact with it.
* **Name:** A label or identifier, called an *accessible name*, for this element. Buttons will generally use their text contents to determine their name, so `<button>Submit</button>` will have the name "Submit." HTML form fields often get their name from associated `<label>` elements. Names are used by screenreaders to announce an element, and speech recognition users can use names in their voice commands to target specific elements.
* **State and other properties:** Other functional aspects of an element that would be relevant for a user or an assistive technology to be aware of. Is this checkbox checked or unchecked? Is this expandable section currently hidden? Will clicking this button open a dropdown menu? These properties tend to be much more subject to change than an element's role or name.
You can see all three of these in just about any screenreader announcement:

![VoiceOver announcement, which reads "checked, Unsubscribe, checkbox"](https://thepracticaldev.s3.amazonaws.com/i/rufnevflhoszchrranus.png)

### Accessibility APIs As a Middleman

An application assembles these building blocks into an assistive technology-friendly representation of all of its contents. This representation is the *accessibility tree*. The application then sends this new tree to the operating system's accessibility APIs.

Assistive technologies poll the accessibility APIs regularly. They get information such as the active window, programs' contents, and the currently focused element. They can use this information in different ways. Screenreaders use this information to decide what to announce, or to enable shortcuts that allow the user to jump between different elements of the same type. Speech recognition software uses this information to determine which elements the user can target with their voice commands and how. Screen magnifiers use this information to judge where the user's cursor is, in case they need to focus elsewhere.

This middleman relationship works both ways. Accessibility APIs enable assistive technologies to interact with programs, giving their users more flexibility. For instance, eye-tracking technology can interpret a user's gaze dwelling on an element as a click. The eye tracker can then send that event back through the accessibility API so that the browser treats it like a mouse click.

Putting all of these pieces together, the flow of information from application to assistive technology goes:

1. The operating system provides accessible objects for each kind of UI element.
2. The application uses these objects as building blocks to assemble an accessibility tree.
3. The application sends this tree to the operating system's accessibility API.
4. Assistive technologies poll the accessibility API for updates, and receive the application's contents.
5. The assistive technology exposes this information to the user.
6. The assistive technology receives commands from the user, such as special hotkeys, voice commands, switch flips, or the user's gaze dwelling on an element.
7. The assistive technology sends those commands through the accessibility API, where they're translated into interactions with the application.
8. As the application changes, it provides a new accessibility tree to the accessibility API, and the cycle begins anew.

Or, for a much more TL;DR version:

![A diagram detailing the flow of the accessibility tree from application, through the accessibility API, to the assistive technology, and the flow of events from assistive technology, through the accessibility API, to the application.](https://thepracticaldev.s3.amazonaws.com/i/kiwswwi18wq2p86ss6nh.png)

## From the DOM to the Accessibility Tree

We've taken a pretty sharp detour into operating system internals. Let's bring this back to the web. At this point, we can figure that your browser is, behind the scenes, converting your page's HTML elements into an accessibility tree.<sup>5</sup> Whenever the page updates, so, too, does its accessibility tree.

How do browsers know how to convert HTML elements into an accessibility tree? As with everything for the web, there's a standard for that. To that end, the World Wide Web Consortium's Web Accessibility Initiative publishes the [*Core Accessibility API Mappings*](https://www.w3.org/TR/core-aam-1.1/), or *Core-AAM* for short. Core-AAM provides guidance for choosing which building blocks the browser should use when. Additionally, it advises on how to calculate those blocks' properties, such as their name, as well as how to manage state changes or keyboard navigation.

The relationship between DOM nodes and accessibility tree nodes isn't quite one-to-one. Some nodes might be flattened, such as `<div>`s or `<span>`s that are only being used for styling.
Other elements, such as `<video>` elements, might be expanded into several nodes of the accessibility tree. This is because video players are complex, and need to expose several controls like the *Play/Pause* button, the progress bar, and the *Full Screen* button.<sup>6</sup>

Some browsers let you view the accessibility tree in their developer tools. Try it now! If you're using Chrome, right-click on a page element and click *Inspect*. In the pane that opened up with tabs such as *Styles* and *Computed*, click the *Accessibility* tab. This might be hidden. Congrats! You can now see that element in the accessibility tree! If you're using other browsers, you can instead follow [Firefox's Accessibility Inspector instructions](https://developer.mozilla.org/en-US/docs/Tools/Accessibility_inspector) or [Microsoft Edge's instructions.](https://docs.microsoft.com/en-us/microsoft-edge/devtools-guide/elements/accessibility) Poke around on different sites and see what kinds of nodes you can find and which properties they have.

![Facebook's homepage's accessibility tree, as viewed in the Chrome Developer Tools](https://thepracticaldev.s3.amazonaws.com/i/wt3oy2dbkr1mrcs6gilk.png)

## But Why Do We Care?

Why should web developers care about the accessibility tree? Is it any more than just some interesting trivia about browser internals?

Understanding the flow of a webpage's contents from browser to assistive technology changed the way I view the web apps I work on. I think there are three key ways that this flow impacts web developers:

1. It explains discrepancies between different assistive technologies on different platforms.
2. Browsers can use accessibility trees to optimize how pages are exposed to assistive technologies.
3. Web developers have a responsibility to be good stewards of the accessibility tree.
### Explaining Discrepancies

We know that there are three key players in the flow of web contents to assistive technologies: the browser, the operating system accessibility API, and the assistive technology itself. This gives us three possible places to introduce discrepancies:

- Operating system accessibility APIs could provide different building blocks.
- Browsers could assemble their accessibility trees differently.
- Assistive technologies could interpret those building blocks in different ways.

These differences are, honestly, minute most of the time. However, bugs that affect certain combinations of browsers and assistive technologies are prevalent enough that you should be testing your sites on many different combinations.

### Browser Optimizations

When constructing accessibility trees, many browsers employ heuristics to improve the user experience. For instance, many developers use the CSS rules `display: none;` or `visibility: hidden;` to remove content from the page. However, since the content is still in the HTML, those using assistive technologies would still be able to get to it, which could have undesirable consequences. Browsers instead use these CSS rules as flags that they should remove those elements from the accessibility tree, too. This is why we have to resort to [other tricks to create screenreader-only text.](https://cloudfour.com/thinks/see-no-evil-hidden-content-and-accessibility/#showing-additional-content-for-screen-readers)

Additionally, browsers use tricks to protect users from developers' bad habits. For instance, to counter the [problems that can be caused by layout tables](https://webaim.org/techniques/tables/), both Google Chrome<sup>7</sup> and Mozilla Firefox<sup>8</sup> will guess at whether a `<table>` element is being used for layout or for tabular data and adjust the accessibility tree accordingly.
### Tree Stewardship Being aware of the accessibility tree and how it impacts your users' experience should make one thing clear: to build quality web applications, we must be responsible stewards of our applications' accessibility trees. After all, it's the only way many assistive technology users will be able to navigate and interface with our page. If our tree is rotten, there's not really anything these users can do to make our page usable. Fortunately, we have two tools for tree stewardship at our disposal: semantic markup and ARIA. When we use semantic markup, we make it much, much easier for browsers to determine the most appropriate building blocks. When we write `<input type="checkbox" />`, for instance, the browser knows it can put a `Checkbox` object in the tree with all of the properties that that entails. The browser can trust that that's an accurate representation of the UI element. The same goes for buttons and any other kind of UI element you might want on your page. Semantic markup will work for the majority of our needs, but there are times when we need to make tweaks here and there to our application's accessibility tree. This is what ARIA is for! In my next post, I'll explore how ARIA's whole purpose is to modify elements' representation in the accessibility tree. ## Conclusion Decades of trial and error in building screenreaders and a wide variety of other assistive technologies have taught us one big lesson: assistive technology will work much more reliably when information flows directly to it rather than be reverse engineered. Browsers do a lot of heavy lifting to make sure our pages play nicely with assistive technologies. However, they can't do their job well if we don't do our job well. ## Footnotes 1. Please forgive the oversimplification. 2. Rich Schwerdtfeger, *BYTE*, [Making the GUI Talk](https://developer.paciellogroup.com/blog/2015/01/making-the-gui-talk-1991-by-rich-schwerdtfeger/) 3. 
Léonie Watson & Chaals McCathie Nevile, *Smashing Magazine*, [Accessibility APIs: A Key To Web Accessibility](https://www.smashingmagazine.com/2015/03/web-accessibility-with-accessibility-api/) 4. Marco Zehe, [Why accessibility APIs matter](https://marcozehe.wordpress.com/2013/09/07/why-accessibility-apis-matter/) 5. It probably comes as no surprise that the accessibility tree is built in parallel to the DOM. One of the things I realized as I was writing this post is that creating structured representations of a page that enable programmatic interfacing with the page is really browsers' bread and butter. Your browser does exactly this to manage page contents (via the DOM) and element styles (via the CSS Object Model), so why not throw in accessibility tree creation while you're at it? 6. Steve Faulkner, The Paciello Group, [The Browser Accessibility Tree](https://developer.paciellogroup.com/blog/2015/01/the-browser-accessibility-tree/) 7. [Chromium source code](https://chromium.googlesource.com/chromium/blink/+/master/Source/modules/accessibility/AXTable.cpp) 8. [Firefox source code](https://dxr.mozilla.org/mozilla-central/source/accessible/generic/TableAccessible.cpp)
bendmyers
204,248
Business Logic of an Application - My Experience as Newbie Programmer
This post appears as an entry on my personal blog also: https://mydev-journey.blogspot.com/2019/11/bu...
0
2019-11-12T17:11:22
https://dev.to/zoltanhalasz/business-logic-of-an-application-my-experience-as-newbie-programmer-1oh0
csharp, dotnet, sql, beginners
This post appears as an entry on my personal blog also: https://mydev-journey.blogspot.com/2019/11/business-logic-of-application-my.html

If you read my other blog posts, especially the first ones, I explain there that my background is 15 years of corporate controlling (management accounting), and 2019 was a transition year for me into writing accounting/business software. I accumulated some SQL/C# skills over the last few years, and using my business knowledge and logic, I am building some real tools for my previous workplace's accounting department. I am a self-employed freelance contractor for them.

I can state up front that having an accounting/controlling background is very useful for understanding the logic of my apps. Their goal is to automate the work of accountants and to minimize the need for Excel files and the many emails used for data collection, approval, etc. So my business background proves to be helpful!

Afterwards comes the design and programming part. First I design the database, using an abstract model of the data that will be used in the app.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/4ca5x65sdtuk39mv90tc.jpg)

Understanding the table structures, relationships, and workflow from Excel, I can build the SQL tables and relationships, keys, indexes, queries, and reports that will be integrated into the application. I do this without formal training in building business applications, just following simple logic and some database theory: SQL knowledge, normalization, etc.

Then I design the code and start writing it.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/1o3bvpp7tal3qz0xpv0r.jpg)

First, the app's models represent the entities shown above in the database section (SQL tables and relationships) and are the foundation of the application. Examples: Business Units, Employees, Expenses, etc. The models are written as POCO classes. I use Dapper, not EF, for my projects.
I don't use code first but rather database first, and my models are more or less equivalent to the data tables.

Then comes the application layer, with MVVM: a viewmodel that takes care of data coming from and going to the database, manipulating user inputs, checking, validating, iterating, etc. This I write in C#.

Then there is the View part of the application, written in WPF for Windows, which is a fairly simple interface containing user screens with tabs, tables (gridview), inputs, and reports, plus some export/import/reporting of data. I try to make the user interface simple and easy to navigate, with minimal risk of doing stupid things. The users are happy, because they can use a centralized database app, and thus we can eliminate a lot of emails and Excel files, which are very prone to mistakes.

As I progress with my application, I sometimes go back to redesign the database (add fields, indexes, tables) and the MVVM code above. This is an iterative process for me, like continuous improvement.

The last step is to populate the DB with data and check it. This is the most time-consuming part, and it can feed back into database design and coding.

What do you think? Would you do something differently? Is an accounting background a disadvantage or an advantage? What should I focus on when writing such applications? Feel free to give me feedback on the above topics.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/v4xuv95kzk3lqydsc6xa.jpg)
zoltanhalasz
204,610
Designing For Data Protection - Episode 106
The practice of data management is one that requires technical acumen, but there ...
0
2019-11-13T02:59:29
https://www.dataengineeringpodcast.com/data-protection-regulations-episode-106/
<p>The practice of data management is one that requires technical acumen, but there are also many policy and regulatory issues that inform and influence the design of our systems. With the introduction of legal frameworks such as the EU GDPR and California's CCPA it is necessary to consider how to implement data protection and data privacy principles in the technical and policy controls that govern our data platforms. In this episode Karen Heaton and Mark Sherwood-Edwards share their experience and expertise in helping organizations achieve compliance. Even if you aren't subject to specific rules regarding data protection it is definitely worth listening to get an overview of what you should be thinking about while building and running data pipelines.</p><p><a href='https://www.dataengineeringpodcast.com/data-protection-regulations-episode-106/'>Listen Now!</a></p>
blarghmatey
204,676
Lwing : Send Stylish Messages on Whatsapp, Messenger and More
ShowDEV Lwing!
0
2019-11-13T07:03:48
https://dev.to/bauripalash/lwing-send-stylish-messages-on-whatsapp-messenger-ans-more-550g
showdev, javascript, unicode
---
title: "Lwing : Send Stylish Messages on Whatsapp, Messenger and More"
published: true
description: ShowDEV Lwing!
cover_image: https://thepracticaldev.s3.amazonaws.com/i/varz9iv8zaoufw8w9jix.png
---

Let's Make It Quick! 🚀

## 🔥 What is Lwing?

lwing (pronounced "el-wing") is basically a Unicode text styler. It takes your plain English input and converts it to mathematical Unicode characters to make it look stylish and different.

## 🤔 Why The Name "Lwing"?

I don't know. At first I named it lacewing, but I mistakenly put *lwing* in the website title. Now it's named Lwing!

## 📱 Where Can I use Lwing generated text?

You can use Lwing-generated text almost everywhere, including Whatsapp, Facebook, Messenger, SMS, and even in print!

## 🌐 I want to use it Now!

### Just head over to <https://lwing.ml> 😊

## 💻 What Tech Stack does Lwing use?

* Svelte
* HTML
* CSS
* JS / NodeJs
* Netlify

## 🔥 🤔 How fast is Lwing?

### 😎 Fast Like Sonic Boom‼️

Here's the Lighthouse report:

![](https://thepracticaldev.s3.amazonaws.com/i/h8399vt2txhq13jryaz6.jpg)

## 😊 Want to Contribute?

Here's the GitHub repository:

{% github bauripalash/lwing %}

Feel free to Fork, Make Improvements and Send Pull Requests 😊😊

---

You can also Upvote Lwing on Product Hunt at <https://www.producthunt.com/posts/lwing> here

[![](https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=174320&theme=light)](https://www.producthunt.com/posts/lwing?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-lwing)

---

If you want to support my work, you can donate via Paypal at <https://paypal.me/bauripalash> or via Paytm (if you're an Indian citizen) at <https://p-y.tm/9V-oX9y> 😊😻!! Thank You ❤️
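For the curious, the general trick behind Unicode text stylers like this one is not fonts at all: plain ASCII letters are remapped to look-alike code points in Unicode's Mathematical Alphanumeric Symbols block. Here's a minimal sketch of the idea (not Lwing's actual source; `toBold` and its offsets just illustrate one style, mathematical bold):

```javascript
// Remap A–Z and a–z to the Mathematical Bold letters (U+1D400–U+1D433).
// Any other character is passed through unchanged.
const toBold = (text) =>
  [...text]
    .map((ch) => {
      const cp = ch.codePointAt(0);
      if (cp >= 0x41 && cp <= 0x5a) return String.fromCodePoint(0x1d400 + cp - 0x41); // A–Z
      if (cp >= 0x61 && cp <= 0x7a) return String.fromCodePoint(0x1d41a + cp - 0x61); // a–z
      return ch;
    })
    .join("");

console.log(toBold("Hello Dev!")); // "𝐇𝐞𝐥𝐥𝐨 𝐃𝐞𝐯!"
```

Because the result is made of ordinary Unicode characters rather than markup, it survives copy-paste into WhatsApp, Messenger, SMS, and so on, which is exactly why this kind of styled text works "everywhere".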
bauripalash
204,682
TIL: docker commit
When I need to create a new custom Docker image, I usually start with a base imag...
0
2019-11-13T10:40:08
https://flaviabastos.ca/2019/11/13/til-docker-commit/
commandline, todayilearned, docker
---
title: "TIL: docker commit"
published: true
date: 2019-11-13 05:00:00 UTC
tags: command line,til,docker
canonical_url: https://flaviabastos.ca/2019/11/13/til-docker-commit/
---

When I need to create a new custom Docker image, I usually start with a base image (alpine, debian, python, etc., depending on the project), run it in interactive mode, and install the tools and dependencies I will need. Once I get my container the way I want, I create a Dockerfile with all the commands I ran inside my container. It works, but I just learned that this might be unnecessary extra work. All you need is [docker commit](https://docs.docker.com/engine/reference/commandline/commit/)

The process starts the same way: running a base image with interactive access and installing tools and dependencies. THEN, you run:

`docker commit container_ID image_name:tag`

The _container ID_ can be found by running `docker ps` in a separate tab/window. The _image name_ and _tag_ are whatever name and tag you want to give the image.

Now, having a `Dockerfile` has its advantages, such as better version control, documentation, and maintainability, but for prototyping or really small projects, `docker commit` seems to be very useful.

> _The post_ [TIL: docker commit](https://wp.me/pa0b0y-5P) _was originally published at_ [flaviabastos.ca](https://flaviabastos.ca/)
flaviabastos
204,815
CORS .NET Core API, NGINX
Hello, After 2 weeks, still having a CORS issue when my vuejs frontend (running on gnix) try to reac...
0
2019-11-13T12:19:01
https://dev.to/yveralonen/cors-net-core-api-nginx-lm7
vue, csharp, nginx, cors
Hello, after 2 weeks I'm still having a CORS issue when my Vue.js frontend (running on nginx) tries to reach my .NET Core API (running on Kestrel behind an nginx reverse proxy). However, I've followed all the examples and advice I've been given. Where am I going wrong? :(
yveralonen
204,854
What is Fragmentation?
Now that you've seen the extent of fragmentation in devices, platforms, browsers, and screen resolu...
2,993
2019-11-13T13:28:50
https://www.browserstack.com/blog/what-is-fragmentation/
devicefragmentation, fragmentationinos, testing, softwaretesting
<a href="https://bit.ly/2X9HKHL" target="_blank"><img src="https://www.browserstack.com/blog/content/images/2019/11/Infographic_Fragmentation-01--2-.png" class="kg-image"></a> Now that you've seen the extent of fragmentation in devices, platforms, browsers, and screen resolutions, learn how to take them into account while creating your own cross-device test strategy. <p><strong>Recommended reading</strong>:</p> <p><a href="https://browserstack.com/blog/cross-browser-testing-for-compatibility/?utm_source=referral&utm_medium=dev.to&utm_campaign=fragmentation"><img src="https://www.browserstack.com/blog/content/images/2020/02/Blog-Banner@2x.png"></a></p>
arnav1712
204,952
Visualizing Hacktoberfest 2018
I know this post is about the last year, I would like if you want a 'Visualizing Hacktoberfest 2019'...
0
2019-11-13T16:33:38
https://app.scope.ink/
hacktoberfest, productivity, github, opensource
I know this post is about last year; let me know if you'd like a 'Visualizing Hacktoberfest 2019' :)

The objective of this event is to increase contributions to Open Source projects. If you make four pull requests to an open repository on GitHub, you win a free t-shirt and stickers. GitHub hosted more than 96M projects last year, including over 200M pull requests. As someone who creates visualizations at Scope.ink about productivity and the real impact of tasks in a repository, this is very interesting to me. I want to show you some interesting visualizations of this event and share some useful data 😃

Are collaborations in October increasing?

What I have learned making visualizations is that in Open Source repositories most contributors don't contribute more than three times. I think the reason is that after a user fixes their problem, they abandon the project entirely. I believe Hacktoberfest motivates people to collaborate.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/ruocy0qpg9uknh6qfu4q.png)

Contributions in Open Source projects, sorted by authors.

So I decided to check how many pull requests were being opened throughout a year in some popular repositories:

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/uqr96cmgsoi0zfa1ab6k.png)

Number of pull requests through 2018 in four different repositories.

We can see that October is the top month for pull request contributions. However, while making these visualizations, I found a case that shocked me:

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/hf1dbrpktj90wa69lotc.png)

Number of pull requests in the freeCodeCamp repository.

The freeCodeCamp repository had more than 14,000 pull requests!

But… are these contributions good? From the previous visualizations we cannot tell whether those contributions were useful or not.
First, we should look at how many were merged or accepted, understanding that these are the pull requests which have passed a review:

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/9keuj07i98coqvvexlux.png)

Unmerged and merged pull requests in the 2018 Hacktoberfest, for some repos.

But freeCodeCamp remained the exception. Many pull requests remained unmerged or were marked as invalid. There were useless pull requests such as:

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/5c6bmuev16dd8uz7okru.png)

A real contribution in the freeCodeCamp repository.

My theory is that freeCodeCamp published an article calling on developers to earn a free t-shirt.

Not every pull request was useless

I think freeCodeCamp was a curious case. In other repositories, the number of useless pull requests was not as excessive. In this visualization, we can see the modified files in 100 pull requests from the Ghost repository. There is a great variety, which indicates that there were contributions of all types.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/3tr1e573bmisx3nygo8b.png)

Files changed in 100 pull requests.
maluzzz
205,017
Model View Controller: The 3 dimensions of programming
It has been about a year since I decided that I want to switch over to computer science, and since th...
0
2019-11-13T18:38:43
https://dev.to/jihoshin28/model-view-controller-nmp
beginners
It has been about a year since I decided I wanted to switch over to computer science, and since then it has been an exciting journey learning about how computers work. All my life I have been a fairly casual computer user, and computers have always been something like magic to me. I would click a button and it would take me to a website, or I would press a key and it would make my character jump on the screen. So when I participated in a coding bootcamp at UC Berkeley Extension and was told that computers are primarily nothing more than a large sequence of 1's and 0's, it was like being told about the quantum universe when I hadn't even understood basic physics yet. Eventually I learned basic concepts of web development and started feeling my way through this mysterious world of computers using a lot of console.log. One of the topics which has since stuck with me, and which really made sense of what a webpage is, was the idea of Model-View-Controller, which I want to explore more in depth today.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/insqszgotlo67cjsjodt.png)

First let's explain this diagram to understand the MVC model, which is the basic logic behind most webpages. We'll start at the top with the model. The model is the fundamental basis for everything in computers, which is just data. Everything on a web application, from checking your online status (boolean), to all your important tweets (string), to how many likes you got for those tweets (integer), is an instance of data. These are the building blocks, the actual matter constituting the web application. The way I like to look at it is that it is the physical matter of computers. One important note to make about the model is that because it is the actual data, it is not dependent on the view or controller.

Second we have the view. Anytime we look at a computer screen we are looking at some representation of data.
Whether it is through a simple CLI program that involves just text or a complicated web page that uses a graphical interface, we have the 'view' of our data. This part of the MVC provides what can be seen as a visual filter for our data and is responsible for choosing which parts of our data are represented and how. The view can be independent of the model and controller (e.g. a webpage displaying basic text saved in a database), or it can be a controller itself and therefore dependent on the model (like a table or calendar that users can directly manipulate).

Last but not least we have our controller. The controller is, as the diagram shows, what the user interacts with in order to work with or change the data in our model. It would be meaningless for a computer application to just display a static piece of data. Instead, the point of an application is for users to be able to manipulate that data and interact with it in some way meaningful to them. A controller can allow our user to interact with the model/data in many different ways, a typical example being the button that a user can click to communicate with our model. What actually represents a controller in an application can be tricky to grasp, because technically our changes to the data must be represented in our view and model, and the controller is dependent on them. I like to see it as the tool responsible for the 'movement of data'. One useful convention to define what a controller can do is CRUD, which stands for create, read, update, and delete (all movement of data). You can read more about that [here](https://stackify.com/what-are-crud-operations/).

At face value, MVC is a very basic concept, but I think that is what makes it such a useful overarching model to describe programming in general. In my current bootcamp at Flatiron, our cohort was given this file to work on.
![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/5k6sgchh6lwyi0m4jxvy.png)

The whole class and I just stared at the folder structure for about 15 minutes, not even sure where to start. But as with anything simple, we just needed to break it down into its parts. After reviewing the MVC model, I referred back and noticed that a whole section of the file structure at the top was exactly that structure in our app folder, which is the actual web application structure we'd expect. All of the logic of that complicated folder structure can be summed up in that table at the top. So now we understand what the application part is doing. And as I looked back at the objective of our lab, which was to use ActiveRecord to make data models and associations, I could make sense of our tasks and organize them into categories.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/brghne6s4faj2uhm1io7.png)

First we have the ActiveRecord migrate functions, which actually make our data tables with SQL. This part of the project functions like our model, since we are determining how the data will be organized and what categories of information there will be.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/rplvxhbptzida5fbxlab.png)

Next we have our models, where we can determine how each data model will relate to other data models based on things like matching keys and what belongs to what. There are also other functions we can construct to do special things with our data.

I believe that just as the world is structured in 3 dimensions, the MVC model provides a useful way to understand how data is structured. A computer program is not just data, but the interaction of data through controllers and views. Conversely (and this is how I used to perceive computer applications), it is not just a magical GUI interface, but is based on data defined in our system. These things all ultimately work together to compose a computer application.
Understanding this structure of different roles, and how they work together to form one application as laid out by MVC, helps us understand and map out our sense of what programming is.
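The CRUD convention mentioned above maps naturally onto a controller's methods. As a rough sketch (an invented in-memory example, not from the Flatiron lab), a controller mediating between users and a model might look like:

```javascript
// Minimal in-memory "model" (a plain array of objects) plus a
// controller exposing the four CRUD operations on it.
class ItemsController {
  constructor() {
    this.items = []; // the "model": plain data
    this.nextId = 1;
  }
  create(attrs) {
    const item = { id: this.nextId++, ...attrs };
    this.items.push(item);
    return item;
  }
  read(id) {
    return this.items.find((i) => i.id === id);
  }
  update(id, attrs) {
    const item = this.read(id);
    if (item) Object.assign(item, attrs);
    return item;
  }
  delete(id) {
    this.items = this.items.filter((i) => i.id !== id);
  }
}

const controller = new ItemsController();
const tweet = controller.create({ title: "Tweet", likes: 0 });
controller.update(tweet.id, { likes: 5 });
```

Each method is one kind of "movement of data": the view would then render whatever `read` returns.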
jihoshin28
205,070
Criando layouts responsivos e adaptativos com React e Styled-Components
Fala Techs! Sabemos que nos dias atuais, para criar sites e web apps precisamos sempre est...
0
2019-11-26T03:30:31
https://dev.to/carloscne/criando-paginas-responsivas-e-adaptativas-com-react-e-styled-components-1gje
react, javascript, css, design
### Hey Techs!

We know that nowadays, to build sites and web apps, we always need to care about the many different devices and screen sizes. We often have a UI designer who produces that gorgeous layout for a 1440px-wide screen, and then, as we build the layout using fixed pixel measurements, it looks a bit odd (not to say very odd) on screens with different resolutions. And the fonts? Don't even get me started.

I'll describe here an approach I've been using to solve this problem. There are many ways to do it, and this is just one more of them.

### We just need to align some background knowledge and concepts:

* I'm using _create-react-app_ to scaffold a basic React structure without worrying about Webpack configuration and the like.
* I assume you know what _ReactJS_ and _Styled-Components_ are. If you don't, a quick search will get you all the concepts. That said, the ideas can be applied with plain _CSS_ too.

**Responsive layouts** adjust to the size of the user's screen. They don't change the position of things; they simply adjust.

**Adaptive layouts** also adjust to the user's screen, but often by swapping the position of elements, and they usually rely on _media queries_ to adapt to the screen size.

#### Let's get started!

We'll start by creating our project with _create-react-app_. After creating the project, enter the project directory and install _styled-components_ as a dependency. If you'd rather configure things manually, without _CRA_, feel free.

<a href="http://imgbox.com/dm02i7ue" target="_blank"><img src="https://images2.imgbox.com/6d/2b/dm02i7ue_o.png" alt="create-react-app folder structure"/></a>

In the _src_ folder we'll keep only the App.js and index.js files. Delete the other files, and remember to remove references to the deleted files from the project.
<a href="http://imgbox.com/J5sRh5rd" target="_blank"><img src="https://images2.imgbox.com/77/c3/J5sRh5rd_o.png" alt="structure without unnecessary files"/></a>

Just so we have a reference, let's use this image as the layout:

<a href="http://imgbox.com/Ft3xpXOY" target="_blank"><img src="https://images2.imgbox.com/1b/db/Ft3xpXOY_o.jpg" alt="layout"/></a>

Let's also suppose the UI designer decided we'll have different font sizes: 24px for mobile, 18px for tablet, and 16px for web.

With that information in hand, let's continue with our project.

#### Reset CSS and global settings.

In the _src_ folder, let's create another folder called styles, and inside it a file called global.js. (This is the organization I usually use in personal projects. If you want to organize it differently, no problem!)

<a href="http://imgbox.com/7TlRmEY2" target="_blank"><img src="https://images2.imgbox.com/7b/07/7TlRmEY2_o.png" alt="image host"/></a>

We'll use styled-components here to create a global style. Here's the code:

``` javascript
import { createGlobalStyle } from "styled-components";
import px2vw from "../utils/px2vw";

export const Global = createGlobalStyle`
  * {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
  }

  :root {
    font-size: ${px2vw(24)};

    @media (min-width: 768px) {
      font-size: ${px2vw(18)};
    }

    @media (min-width: 1024px) {
      font-size: ${px2vw(16)};
    }
  }
`;

export default Global;
```

What we did here was reset some properties and define the HTML _root_ with the font sizes we'll use. Note that I imported a function I called px2vw. This function converts pixels to **viewport width.** Since our layout will be responsive, I need it to adapt to every screen size, which is why I use the viewport size. I could have considered percentages, but the problem is that if you set a percentage inside another element smaller than the viewport, it will use that element's size, which in this case wouldn't solve the problem.
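To make the conversion concrete: the question the helper answers is "what percentage of the reference design width does this pixel measurement occupy?", emitted in `vw` units. A quick sketch of the math (`toVw` is just an illustrative name here; the 1440px default mirrors the layout above):

```javascript
// Express a pixel measurement as a fraction of a reference design
// width (1440px here), in viewport-width (vw) units.
const toVw = (px, refWidth = 1440) => `${(px / refWidth) * 100}vw`;

console.log(toVw(720));      // "50vw" (half of the 1440px design)
console.log(toVw(320, 320)); // "100vw" (full width of a 320px design)
```

So an element sized this way always occupies the same share of the screen, whatever the physical screen width.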
I'm also going to use this function for the font sizes, for the same reason: if the elements adjust to the screen, the fonts do too. I chose not to work with viewport height as well, because we usually work with the width of the screen rather than its height, and also **because I ran into another problem developing for Smart TVs**. _I'll tell that story some other time_.

#### The px2vw function.

Let's create our function. In the project's _src_ folder, create a _utils_ folder, and inside it create the px2vw.js file. Here's its code:

``` javascript
const px2vw = (size, width = 1440) => `${(size / width) * 100}vw`;

export default px2vw;
```

For this function I set a default width of 1440px, but you can use any other value, always receive it as a function parameter, or make it even more generic.

#### Creating the project page.

Now let's create a page to display our layout. Inside the _src_ folder, create a folder called _pages_, and inside it another folder called _Home_. Inside this _Home_ folder we'll create two files; I like to separate style components from logic components. So let's create the files _Home.js_ and _HomeStyles.js_.
HomeStyles.js:

``` javascript
import styled from "styled-components";
import px2vw from "../../utils/px2vw";

export const Container = styled.div`
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
  margin: ${px2vw(32)};
  max-width: 100%;

  @media (min-width: 1024px) {
    flex-wrap: nowrap;
  }
`;

export const Box = styled.div`
  display: flex;
  width: ${px2vw(320, 320)};
  min-height: ${px2vw(200, 320)};
  flex-direction: column;
  padding: ${px2vw(20)};
  margin: ${px2vw(20)};
  background-color: ${props => props.bgColor};
  height: 100%;

  @media (min-width: 768px) {
    width: ${px2vw(320, 768)};
    min-height: ${px2vw(200, 768)};
    height: 100%;
  }

  @media (min-width: 1024px) {
    width: ${px2vw(500)};
    min-height: ${px2vw(300)};
    height: 100%;
  }
`;

export const BoxTitle = styled.h3`
  color: #333;
  font-size: 2rem;
  text-align: center;

  @media (min-width: 1024px) {
    font-size: 1.5rem;
  }
`;

export const BoxText = styled.p`
  margin-top: ${px2vw(20)};
  color: #666;
  font-size: 1.5rem;

  @media (min-width: 1024px) {
    font-size: 1rem;
  }
`;
```

Here we built our component's styling. I added text styles so we can see how everything behaves when the font size changes.

When I call the px2vw function for a different screen size, I pass that size as a parameter:

`min-height: ${px2vw(200, 320)};`

I also used _media queries_ to make our layout not only _responsive_ but also **adaptive**; that is, depending on the screen size, the "boxes" rearrange themselves to match the example layout.

I also passed a bgColor prop to each *Box* to control the color of each one.
Now on to our _Home.js_:

``` javascript
import React from "react";
import { Container, Box, BoxTitle, BoxText } from "./HomeStyles";

export default function Home({ boxData }) {
  return (
    <Container>
      {boxData.map(box => (
        <Box key={box.id} bgColor={box.bgColor}>
          <BoxTitle>{box.title}</BoxTitle>
          <BoxText>{box.text}</BoxText>
        </Box>
      ))}
    </Container>
  );
}
```

And now we just adjust our App.js component to import our layout:

``` javascript
import React from "react";
import Global from "./styles/global";
import Home from "./pages/Home/Home";

const lorem =
  "Lorem, ipsum dolor sit amet consectetur adipisicing elit. Laboriosam, sed iure blanditiis voluptatum nulla quidem minus quam tempora obcaecati necessitatibus inventore! Vitae totam quam pariatur facilis fugit maxime adipisci eaque.";

const data = [
  { id: Math.random(), title: "Box titulo 1", text: lorem, bgColor: "#D5CAFA" },
  { id: Math.random(), title: "Box titulo 2", text: lorem, bgColor: "#EDA9A9" },
  { id: Math.random(), title: "Box titulo 3", text: lorem, bgColor: "#F2EE8D" },
  { id: Math.random(), title: "Box titulo 4", text: lorem, bgColor: "#9FEACD" }
];

function App() {
  return (
    <>
      <Global />
      <Home boxData={data} />
    </>
  );
}

export default App;
```

Done! Now just run _npm run start_ or _yarn start_ and watch the result as you resize the window. Take a look:

<a href="http://imgbox.com/SAlZxNRE" target="_blank"><img src="https://images2.imgbox.com/2a/a0/SAlZxNRE_o.gif" alt="image host"/></a>

This is just one of the ways you can build your _fluid_ layouts with responsiveness and adaptability.

If you liked this text, or even have a critique or suggestion, leave a comment. It's very important for me to keep growing and learning.

The code is available on GitHub; just click [here](https://github.com/carlos-cne/layoutsArticle). Also add me on [linkedin](https://www.linkedin.com/in/carlos-queiroz-dev/) and let's chat!
**[English version is here](https://dev.to/carloscne/creating-responsive-and-adaptive-layouts-with-react-and-styled-components-1ghi)**
carloscne
205,112
Code snippets to recreate those cool effects seen on famous(ish) sites
Today we will be looking at some snippets you can use to recreate those awesome effects you see on ot...
0
2019-11-13T22:55:27
https://dev.to/saijogeorge/code-snippets-to-recreate-those-cool-effects-seen-on-famous-ish-sites-4o16
html, css, design
Today we will be looking at some snippets you can use to [recreate those awesome effects](https://codemyui.com/tag/deconstruction/) you see on other sites.

#Reebok Promo Image Transition - [Codepen](http://codepen.io/flacu/pen/BoLRPw/)
[![Reebok Promo Image Transition](https://codemyui.com/wp-content/uploads/2015/09/reebok-ink-image-transition-effect-in-css.gif)](https://codemyui.com/reebok-ink-image-transition-effect-in-css/)

#The gradient pull quote as seen on Polygon - [Codepen](http://codepen.io/mpopv/pen/pbYwvQ/)
[![The gradient pull quote as seen on Polygon](https://codemyui.com/wp-content/uploads/2017/05/gradient-pull-quote-as-seen-on-polygon-com_.gif)](https://codemyui.com/gradient-pull-quote-seen-polygon-com/)

#Facebook's scroll down to show sticky video popup - [Codepen](https://codepen.io/creme/pen/jOOZgEO)
[![Facebook's scroll down to show sticky video popup](https://codemyui.com/wp-content/uploads/2019/11/Vanilla-Javascript-and-CSS-Sticky-Floating-Video-on-Page-Scroll.gif)](https://codemyui.com/vanilla-javascript-and-css-sticky-floating-video-on-page-scroll/)

#Soldout Window sign from Electronic Object - [Codepen](https://codepen.io/SaijoGeorge/pen/YVGKBp/)
[![Soldout Window sign from Electronic Object](https://codemyui.com/wp-content/uploads/2017/04/sold-out-sign-for-ecommerce-stores.gif)](https://codemyui.com/sold-sign-ecommerce-stores/)

#Stripe's navigation header - [Codepen](http://codepen.io/devy_pl/pen/qaPjKd/)
[![Stripe's navigation header](https://codemyui.com/wp-content/uploads/2016/11/stripe-com-header-navigation-code.gif)](https://codemyui.com/stripe-com-header-navigation-code/)

#The squiggle link effect as seen on TheOutline - [Codepen](https://codepen.io/geoffgraham/pen/bxEVEN/)
[![The squiggle link effect as seen on TheOutline](https://codemyui.com/wp-content/uploads/2018/09/TheOutline-squiggle-link-hover-effect.gif)](https://codemyui.com/theoutline-squiggle-link-hover-effect/)

#MomentsApp Video Loading -
[Codrops](http://tympanus.net/Tutorials/VideoOpeningAnimation/) [![MomentsApp Video Loading](https://codemyui.com/wp-content/uploads/2015/09/momentsapp-video-loading-animation.gif)](https://codemyui.com/momentsapp-video-loading-animation/) Want more check out the [deconstruction gallery](https://codemyui.com/tag/deconstruction/) on CodeMyUI.
saijogeorge
205,207
Who Ate Docker's Lunch?
Yesterday, Mirantis acquired Docker Enterprise which includes the registry, the enterprise accounts a...
0
2019-11-14T05:01:35
https://blog.arpitmohan.com/Who-Ate-Dockers-Lunch
docker, kubernetes, devops, news
Yesterday, [Mirantis acquired Docker Enterprise](https://techcrunch.com/2019/11/13/mirantis-acquires-docker-enterprise/) which includes the registry, the enterprise accounts and basically everything of value owned by Docker Inc. The company is now left with a shell of its former business. Even though the sale amount is not public, it is widely understood to not be a large sum. *Docker was once a darling of the tech world. Today we are left wondering - Who ate their lunch?* ## What did Docker do well? **1. Remarkable developer UX** Solomon Hykes & co. took an old not-so-well-known technology "Linux Containers (LXC)" and created a beautiful developer experience around it. It was like old wine (LXC) in a new bottle. It allowed developers to leverage the possibility of creating re-usable, re-deployable binaries and it was incredible. Once a container was built, you could run `docker run` on any Unix system and it would just work. This was the promise of Java jars in the past, just on a more generic and wider scale. **2. Faster REPL cycles** Creating a layered structure for the Docker container (akin to Git), was another masterstroke. A developer could re-use various pre-built layers in other builds. This reduced build time for incremental builds dramatically. In the developer world faster REPL cycles lead to faster adoption; ALWAYS. And it happened. On the downside, this design created bloated Docker images. There were multiple hacks introduced to counter this force. But it remains one of the biggest challenges of the container world. **3. Run Anything, Anywhere** For better or for worse, most developer machines are not replicas of their production environment. For example, while I code on a Macbook, our production environment is a cluster of Debian machines. If you work in an enterprise, you might even be required to use Windows as your primary dev environment. This disparity creates a whole new set of headaches. 
It's hard to develop & debug for a system that you are not very well versed with. Allowing developers to run an OS inside another OS was a huge accomplishment. **4. Rise of "Devops"** The meaning of the word "Devops" is highly contentious. It means different things to different people. Docker was singularly responsible for getting developers to stop throwing code over the wall to sysadmins who then had to run & maintain the code in production. This led to the creation of a hybrid team where the dev & ops folks could work closely with each other. This could only happen by making ops more approachable to the devs (and vice-versa). As a dev, if I could run a Docker container on my local machine and be promised that it would behave in the same manner in production, it gives me a lot more confidence in my abilities to troubleshoot production issues. ## Where did Docker go wrong? **Most developer tool companies (IntelliJ, Terraform, etc) start out with a popular product that keeps them top-of-mind for developers.** But it's hard to monetize & build a long-lasting company based on a single product. As a company, you need to **build 2nd & 3rd tier products that ride on the popularity of the first one. This suite of products then comes together and creates a force to be reckoned with.** Take for instance the successful developer tools company Hashicorp. Their first product that became popular was Terraform, a multi-cloud provisioning system. You could write a simple config file and provision computers across any cloud provider. They capitalized on its popularity and created a suite of products such as Consul, Vault, etc, each with enterprise plans in mind. This allowed enterprise teams to collaborate, cluster & monitor their production systems. **Docker**, on the other hand, **wasn't able to create a successful 2nd tier product.** If you look at their website, product offerings are limited. Docker Hub was required but not enough to sustain the company.
Docker Swarm (which could have been their consolidation play) was an inferior technology as compared to Kubernetes - the big daddy of orchestration today. **While the initial promise of "Build once, run anywhere" is great, managing production environments is a whole different beast to handle.** Running clusters of machines, security management, network partitions, redundancies at all levels is what keeps sysadmins constantly on their toes. **The experience of using Swarm in production is less than ideal. It just doesn't live up to the requirements.** In this sphere, Kubernetes did a much better job (even though their dev UX sucks) at running production workloads with little hassle. **Observability products such as Prometheus, New Relic, etc capitalized on Docker containers being harder to monitor because they were isolated binaries.** Another missed opportunity for Docker Inc. Being able to expose monitoring data out-of-the-box could have been a huge win. It could have also ensured that as a developer, I was tied into the ecosystem. All of these missed opportunities are hard problems to solve. They aren't solved overnight. But Docker had some time to solve this. It was the highly valued darling of the tech world, after all. At its peak, Docker had investors willing to invest in its future and developers dying to work for the company. Docker Inc did introduce consultancy services for enterprises. But the revenue from it was considered service revenue. And service revenue isn't as highly regarded as product revenue because repeatability & scalability factors aren't high in services. **Docker was great at building its technology but the fact remains that it always struggled hard with monetization. There is a lot to learn from Docker's pioneering vision as well as from its market struggles.** I wish to see the technology thrive and I'm optimistic that Mirantis will do justice to Docker Inc.
mohanarpit
205,278
What Is Trackby in Angular?
The “track by” expression to specify unique keys. The trackBy function takes the index and the curre...
0
2019-11-14T09:08:05
https://dev.to/anilsingh/what-is-trackby-in-angular-27j6
angular
The trackBy option lets you give *ngFor a way to identify list items by a unique key. The trackBy function takes the index and the current item as arguments and returns the unique identifier for that item, so Angular can reuse existing DOM nodes instead of re-rendering the whole list when the array is replaced. For an example, see https://www.code-sample.com/2019/11/trackby-in-angular-ngfor-ng-repeat.html
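As a minimal sketch, the kind of trackBy function described above can be written like this (the `Item` interface and the `trackById` name are hypothetical, not taken from the linked article):

```typescript
// Hypothetical item shape used only for illustration.
interface Item { id: number; name: string; }

// Angular calls the trackBy function with the index and the current item,
// and uses the returned unique identifier to decide which DOM nodes can be
// reused when the items array is replaced with fresh object instances.
function trackById(index: number, item: Item): number {
  return item.id;
}
```

In a template it would be wired up as `*ngFor="let item of items; trackBy: trackById"`.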
anilsingh
205,292
Google Cloud Firestore Client Library
FireO A modern and simplest convenient ORM package in Python. FireO is specifical...
0
2019-11-14T09:57:02
https://dev.to/axeemhaider/google-cloud-firestore-client-library-4mg
firestore, python, orm, fireo
# FireO

### A modern and convenient ORM package in Python.

FireO is specifically designed for Google's Firestore; it's more than just an ORM. It implements validation, type checking, relational model logic and much more.

## Installation

```
pip install fireo
```

## Usage Example

```python
from fireo.models import Model
from fireo.fields import TextField


class User(Model):
    name = TextField()


u = User()
u.name = "Azeem Haider"
u.save()

# Get user
user = User.collection.get(u.key)
print(user.name)
```

Appreciate our work by giving [stars](https://github.com/octabytes/fireo)

Read more about [FireO](https://github.com/octabytes/fireo)
axeemhaider
205,435
How to Run Migrations During a Rails Application Deploy on Heroku
It's well known that deploying web applications to Heroku is simple. In many cases it's just a matter of run...
0
2019-11-14T15:53:02
https://otroespacioblog.wordpress.com/2019/11/10/como-correr-migraciones-durante-despliegue-de-aplicacion-rails-en-heroku/
rails, heroku, spanish
It's well known that **deploying web applications to Heroku is simple**. In many cases it's just a matter of running commands in the terminal, and with a configuration tweak here and there, pushing the changes to the repository is enough. And while we have that convenience, there are things Heroku leaves undone, whether for convenience, cost savings, or who knows what. Such is the case with running [migrations](https://otroespacioblog.wordpress.com/2015/06/30/resolviendo-un-problema-tonto-pero-poco-comun-en-rails-y-sus-migraciones/) when deploying [Ruby on Rails](https://otroespacioblog.wordpress.com/2018/10/01/backend-handbook-para-aplicaciones-ruby-on-rails/) applications on this service.

Normally, **during a deploy to Heroku the command to run migrations is not executed**. During the deploy, Heroku runs everything needed for the application to become available at its assigned URL, but anything beyond that is not its concern. That means that if a new change adds two new fields to the database and we run `git push heroku master` in a console, when the process finishes those fields won't exist in the Heroku Postgres database and the application will probably stop working.

There are several ways to deal with this situation. The best and most current one is to use _[Release Phases](https://devcenter.heroku.com/articles/release-phase)_, which are a fairly new feature. Just for the record, some alternatives are/were:

- [Using a _buildpack_](https://github.com/gunpowderlabs/buildpack-ruby-rake-deploy-tasks): the repo itself recommends using release phase.
- [With Bash scripts](https://mentalized.net/journal/2017/04/22/run-rails-migrations-on-heroku-deploy/): they also recommend using release phase.

## What is release phase?

Basically, it's a Heroku feature that lets you run tasks right before a deploy completes.

> The Release Phase jargon is closely tied to how deployment works on Heroku. If you want to dig deeper, [there's the documentation](https://devcenter.heroku.com/articles/release-phase).

With this in mind, the best approach is to define a `Procfile` in the project containing the following:

```
# Procfile
release: bundle exec rails db:migrate
web: bundle exec puma -C config/puma.rb
```

With that first line in place, Heroku will run the migration command on every deploy and the process will work as it should.

Using _release phase_ you can run other necessary tasks as well. In this case, the need to run migrations is covered thanks to this feature.

***

_This article was first published [on my personal blog](https://otroespacioblog.wordpress.com/2019/11/10/como-correr-migraciones-durante-despliegue-de-aplicacion-rails-en-heroku/), [Otro Espacio Blog](https://otroespacioblog.wordpress.com/). There I write about everything I learn while programming and also about topics unrelated to technology._
cescquintero
205,447
How to know if oauth2.0 authentication setup might be an overkill?
So, I'm assigned with a task to create APIs for an Instagram-like application. An...
0
2019-11-14T16:20:23
https://dev.to/chandlerbing016/how-to-know-if-oauth2-0-authentication-setup-might-be-an-overkill-2dd8
laravel, oauth20, oauth
--- title: How to know if oauth2.0 authentication setup might be an overkill? published: true tags: help, laravel, oauth2.0, oauth --- So, I've been assigned the task of creating APIs for an Instagram-like application. And Laravel is the framework that we decided to go with. I'm setting up authentication, and the last time I did it, it was just with long-lived access tokens (JWT). You know, once users authenticate they're issued long-lived access tokens which they provide on every subsequent request. In fact, I'm thinking about doing this again. But I've recently learned that long-lived access tokens are bad. They can be stolen and misused. Short-lived access tokens must be used and should be renewed by a refresh token. So, how do I introduce this "refresh token" into this client-server stateless architecture? Kindly share your experience. Thank you.
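For context, the flow being asked about usually looks like this: the server issues a short-lived access token plus a long-lived refresh token, and a dedicated endpoint exchanges a valid refresh token for a fresh pair. A minimal in-memory sketch of that exchange, assuming single-use (rotated) refresh tokens — the `issueTokens`/`refresh` names are hypothetical, and a real Laravel app would lean on a package such as Passport or Sanctum rather than hand-rolling this:

```typescript
interface TokenPair { accessToken: string; refreshToken: string; expiresAt: number; }

// Server-side store mapping refresh tokens to users (a DB table in practice).
const refreshStore = new Map<string, string>();

function issueTokens(userId: string, now: number): TokenPair {
  const accessToken = `access-${userId}-${now}`;   // short-lived (e.g. 15 min)
  const refreshToken = `refresh-${userId}-${now}`; // long-lived, stored server-side
  refreshStore.set(refreshToken, userId);
  return { accessToken, refreshToken, expiresAt: now + 15 * 60 * 1000 };
}

// The client calls this once the access token expires.
function refresh(refreshToken: string, now: number): TokenPair | null {
  const userId = refreshStore.get(refreshToken);
  if (userId === undefined) return null; // unknown or revoked -> force re-login
  refreshStore.delete(refreshToken);     // rotate: each refresh token is single-use
  return issueTokens(userId, now);
}
```

Rotation keeps the architecture stateless for access tokens (they can stay self-contained JWTs) while the server only has to remember which refresh tokens are still valid.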
swadhwa16
205,510
Serverless Security with Unikernels
How do you secure your serverless infrastructure? How do you secure the rest of your infrastructure?
0
2019-11-14T20:29:31
https://dev.to/eyberg/serverless-security-with-unikernels-eb0
unikernels, serverless, security, javascript
--- title: Serverless Security with Unikernels published: true description: How do you secure your serverless infrastructure? How do you secure the rest of your infrastructure? tags: unikernels, serverless, security, javascript cover_image: https://thepracticaldev.s3.amazonaws.com/i/a1eni5e86v8spllnqqwt.jpg --- Security is one of those topics where on one hand you see a lot of passionate developers that get upset whenever there is a new data breach (and those seem to be happening on the daily), yet on the other hand there is a very large skills gap on understanding how hackers (the bad kind) think, what makes them tick and most importantly - how they operate. I think it's important developers start thinking about security in a more holistic manner. Let me give you an example. I was talking to a vp of eng the other day that said they are rather good on security cause all of their instances are inside a VPC. I agreed that was a good approach versus exposing everything on the internet but then I brought up the [Capital One hack](https://ejj.io/blog/capital-one) and the [Door Dash hack](https://arstechnica.com/information-technology/2019/09/doordash-hack-spills-loads-of-data-for-4-9-million-people/) and many others that occurred just this year. You can bet that if someone was only alerted 4-5 months after an attack such as in the case of DoorDash the miscreants have been all up and down those servers. Now exploiting a SSRF (server side request forgery) vulnerability is one thing but escalating the attack to the point where you have landed a shell inside a vpc is where this thinking falls apart. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/8o40c4mvoohx09666jgh.gif) Why is that? At the end of the day attackers don't care about what exploit or what vulnerability they are using to get onto your server. They only care about getting onto your server to run *their programs*.
For example cryptojacking attacks like the one that afflicted [Tesla](https://arstechnica.com/information-technology/2018/02/tesla-cloud-resources-are-hacked-to-run-cryptocurrency-mining-malware/ ) are very popular nowadays cause unlike ransomware you don't have to wait to get paid - it just starts making money immediately! At the end of the day it doesn't matter that Tesla had exposed kubernetes to the world - the attackers just wanted to mine some monero - they couldn't care less how they broke in. This is the point. I'm going to show you real life attacks on Google Cloud here in a bit but first I want to set some expectations. __The Problem with Multiple Processes (or 'Just Use Threads')__ Most attacks today rely on the capability of running *other* programs on a given server/instance/container/etc. If you can't do that because fork/execve and friends have been seccomp'd out the attack has gotten progressively harder cause now you have to start doing more exotic attacks using things like rop gadgets. However, at the end of the day the end desire remains the same - unfettered access to run whatever program the attacker wants so typically the end goal there is to pop a shell. ![child process](https://thepracticaldev.s3.amazonaws.com/i/b44d5wv83lf5h3ofcut6.png) Not being a day to day js developer I did a quick search on github to see how popular forking a new process might be. This picture shows that it definitely is not unpopular. The recent paper [A fork() in the road](https://www.microsoft.com/en-us/research/uploads/prod/2019/04/fork-hotos19.pdf) argues very well that we should not be using fork - at all in 2020. We have had native threads since ~2000 in Linux (yes, 20 years ago). In the past there wasn't a strong demand to get rid of it because Linux itself was designed to run on real machines - not virtual ones. This is important to point out cause how else would you run other programs on the same physical server?
However, that proposition can now be re-examined at least for cloud computing use cases which are entirely built on virtual machines. For languages such as Java and Go you get threading out of the box so you can have as much performance as you have threads/cores available. For the interpreted language class such as Javascript and Ruby it's been common to stick X application servers behind a load balancer/reverse proxy to scale up. At the end of the day you get the exact same vCPU that you buy regardless. If you've got one vcpu user-land threads, async, and such might help you out some but forking off a half dozen worker processes won't - then you are just [fighting the operating system scheduler](https://www.i3s.unice.fr/~jplozi/wastedcores/files/extended_talk.pdf). __Serverless Security__ Serverless is clearly a desire for many developers today that don't wish to manage and run infrastructure. That makes sense as we keep pumping out tremendous amounts of software and devops salaries, at least in my neck of the woods (SF), are through the roof. Unfortunately, a lot of the status quo serverless offerings are built on top of popular cloud services leading to vendor lockin. Unikernels are a fresh set of eyes of looking at this problem space as they allow one to deploy the same set of code to any number of vendors using tried and true vms as their base artifact, albeit not the types of vms you might be used to. __Running Node the Old Way__ Let's show how you might normally provision this javascript webserver. (and yes I understand that this would be automated but it's the same thing -- work with me here) First we spin up an instance. Ok, nothing abnormal here. Then we ssh in. Wait - hold on. Right off the bat we are explicitly allowing the concept of users to jump into an instance and run arbitrary commands. 
In fact every single configuration management tool out there including terraform, puppet, and chef are explicitly built on this concept which is odious from the start. Ok, let's continue. Once we are on the instance we install node.js: ```bash eyberg@instance-1:~$ sudo apt-get install nodejs Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: libicu57 libuv1 The following NEW packages will be installed: libicu57 libuv1 nodejs 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded. Need to get 11.2 MB of archives. After this operation, 45.2 MB of additional disk space will be used. Do you want to continue? [Y/n] Get:1 http://deb.debian.org/debian stretch/main amd64 libicu57 amd64 57.1-6+deb9u3 [7,705 kB] Get:2 http://deb.debian.org/debian stretch/main amd64 libuv1 amd64 1.9.1-3 [84.4 kB] Get:3 http://deb.debian.org/debian stretch/main amd64 nodejs amd64 4.8.2~dfsg-1 [3,440 kB] Fetched 11.2 MB in 0s (42.9 MB/s) Selecting previously unselected package libicu57:amd64. (Reading database ... 37215 files and directories currently installed.) Preparing to unpack .../libicu57_57.1-6+deb9u3_amd64.deb ... Unpacking libicu57:amd64 (57.1-6+deb9u3) ... Selecting previously unselected package libuv1:amd64. Preparing to unpack .../libuv1_1.9.1-3_amd64.deb ... Unpacking libuv1:amd64 (1.9.1-3) ... Selecting previously unselected package nodejs. Preparing to unpack .../nodejs_4.8.2~dfsg-1_amd64.deb ... Unpacking nodejs (4.8.2~dfsg-1) ... Setting up libuv1:amd64 (1.9.1-3) ... Setting up libicu57:amd64 (57.1-6+deb9u3) ... Processing triggers for libc-bin (2.24-11+deb9u4) ... Processing triggers for man-db (2.7.6.1-2) ... Setting up nodejs (4.8.2~dfsg-1) ... update-alternatives: using /usr/bin/nodejs to provide /usr/bin/js (js) in auto mode ``` Notice something strange? That's right. 
It didn't matter that my user is a non-root user - I could immediately 'sudo' my way to doing whatever I wanted on the instance. That whole concept of 'least privilege' and 'user separation' that security devs like to talk about is by default on many servers not present. Unfortunately, as soon as we do that we realize that Debian 9 (the first instance that Google offered to give us) comes with node version 4. ```bash eyberg@instance-1:~$ nodejs --version v4.8.2 ``` Now our options are to either trash this instance or download a tarball. Let's go for that other option (even knowing that if someone else touches this instance it might cause problems down the road). ```bash eyberg@instance-1:~$ wget https://nodejs.org/dist/v12.13.0/node-v12.13.0-linux-x64.tar.xz --2019-11-14 18:08:59-- https://nodejs.org/dist/v12.13.0/node-v12.13.0-linux-x64.tar.xz Resolving nodejs.org (nodejs.org)... 104.20.23.46, 104.20.22.46, 2606:4700:10::6814:172e, ... Connecting to nodejs.org (nodejs.org)|104.20.23.46|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 14055156 (13M) [application/x-xz] Saving to: ‘node-v12.13.0-linux-x64.tar.xz’ node-v12.13.0-linux-x64.tar.xz 100%[================================================================================================================>] 13.40M --.-KB/s in 0.1s 2019-11-14 18:08:59 (114 MB/s) - ‘node-v12.13.0-linux-x64.tar.xz’ saved [14055156/14055156] ``` ```bash eyberg@instance-1:~$ unxz node-v12.13.0-linux-x64.tar.xz eyberg@instance-1:~$ tar xf node-v12.13.0-linux-x64.tar ``` __Let's jump into the code!__ What this next snippet does is pop up a webserver that offers two urls to list the contents of a directory. One is a lot safer than the other as we'll soon find out. (Again, I'm not a js dev so excuse the ugliness of the code.)
```javascript
var http = require('http');
var fs = require('fs');
var url = require('url');
const { exec } = require('child_process');

var port = 80;

http.createServer(function (req, res) {
  if (req.url == '/safe') {
    var files = fs.readdirSync('/');
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end(files + '\n');
  } else {
    try {
      var cmd = 'ls';
      var resbody = '';
      var query = url.parse(req.url, true).query;

      // this is *unsafe*
      if (query.cmd) {
        cmd = query.cmd;
      }

      exec(cmd, (err, stdout, stderr) => {
        if (err) {
          resbody = err;
          console.error(err);
        } else {
          resbody = stdout;
        }
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end(resbody + '\n');
      });
    } catch (e) {
      console.error(e);
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end(e + '\n');
    }
  }
}).listen(port, "0.0.0.0");

console.log('Server running at http://127.0.0.1:' + port + '/');
```

Now let's run our program:

```bash
eyberg@instance-1:~/node-v12.13.0-linux-x64/bin$ ./node bob.js
Server running at http://127.0.0.1:80/
events.js:187
      throw er; // Unhandled 'error' event
      ^

Error: listen EACCES: permission denied 0.0.0.0:80
    at Server.setupListenHandle [as _listen2] (net.js:1283:19)
    at listenInCluster (net.js:1348:12)
    at doListen (net.js:1487:7)
    at processTicksAndRejections (internal/process/task_queues.js:81:21)
Emitted 'error' event on Server instance at:
    at emitErrorNT (net.js:1327:8)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  code: 'EACCES',
  errno: 'EACCES',
  syscall: 'listen',
  address: '0.0.0.0',
  port: 80
}
```

Oh no! We forgot ports under 1024 are 'privileged'. Well no problem here - cause sudo make me a sandwich right? We have gone from bad to worse. A sane setup would probably have a frontend proxy sitting in front of this that can drop privileges after getting set up and forwarding on the request but now you might need to call in your devops person huh?
```bash eyberg@instance-1:~/node-v12.13.0-linux-x64/bin$ sudo su root@instance-1:/home/eyberg/node-v12.13.0-linux-x64/bin# ./node bob.js Server running at http://127.0.0.1:80/ ``` Ok, let's hit it up: ```bash ➜ ~ curl -XGET http://34.68.46.143/ bob.js node npm npx ``` Well - that works but is it safe? ```bash ➜ ~ curl -XGET http://34.68.46.143/?cmd="touch%20tmp" ``` This first query passes in the command "touch tmp" which creates a new file in that directory - bad news bears. The %20 you might recognize as the url encoding for the space character. ```bash ➜ ~ curl -XGET http://34.68.46.143/ bob.js node npm npx tmp ``` As we can see, we can run arbitrary commands on our end server and worse it's running as root. This is a very often abused software development pattern called 'shelling out'. There is almost never any good reason to do this and if you have code linters or static analysis set up on your ci there's a good chance it'll flag it or whoever is reviewing your PRs should. Now if we refactor the offending command injection into the '/safe' equivalent we might get this instead: ```bash ➜ ~ curl -XGET http://34.68.46.143/safe bin,boot,dev,etc,home,initrd.img,initrd.img.old,lib,lib64,lost+found,media,mnt,opt,proc,root,run,sbin,srv,sys,tmp,usr,var,vmlinuz,vmlinuz.old ``` Leaking out your root filesystem probably isn't the best thing to do but at least you aren't injecting commands anymore. Now, this is just one 41 line program here but this *is* a full blown linux system. Let's see what else is on here before we retire this example. __Attack Surface__ Envision Normandy 1944. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/2w9wj5tct8yxzwb7yexm.jpg) The attack surface when we talk about linux systems is the amount of utter crap that we can attack. ```bash root@instance-1:~# find / -type f | wc -l 76369 ``` 76,000 files! Just to run a 41 line javascript program? I wonder how many shared libraries we have on this system?
```bash root@instance-1:~# find / -type f -regex ".*\.so.*" | wc -l 751 ``` 750?? If we check out node we can see there are only 8 explicitly linked to node - why do we want/need the rest? ```bash root@instance-1:~# ldd /home/eyberg/node-v12.13.0-linux-x64/bin/node linux-vdso.so.1 (0x00007ffeaf7e9000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fef28ab4000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fef28732000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fef2842e000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fef28217000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fef27ffa000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fef27c5b000) /lib64/ld-linux-x86-64.so.2 (0x00007fef28cb8000) ``` What about executables? ```bash root@instance-1:~# find / -type f -executable | wc -l 1339 ``` 1300?!? We can attack 1300 programs on this fresh instance? All we did was install node. Let's try that query again. ```bash root@instance-1:~# find / -type f -executable | xargs file | grep executable | wc -l 754 ``` Well we drilled it down close to halfway but still 750?? A heavily seccomp'd container infrastructure might prevent some of this behavior but then you are missing out on the whole serverless part of the idea and [container security](https://www.techrepublic.com/article/docker-containers-are-filled-with-vulnerabilities-heres-how-the-top-1000-fared/) [does not](https://snyk.io/blog/top-ten-most-popular-docker-images-each-contain-at-least-30-vulnerabilities/) [have a great track record](https://neuvector.com/docker-security/runc-docker-vulnerability/). Also, we haven't even begun to talk about why the linux kernel is +15MLOC - half of it is just drivers for hardware that doesn't exist in a virtual machine, then there's all the support for users, and IPC and scheduling and .... anyways, that's for a different blogpost. 
So we've now shown that merely setting up a node webserver can be a pain even when we aren't doing things like putting it into an init manager or dropping privileges or any other sane activity. Securing it becomes a whole new level of batshittery. __Serverless Unikernels__ Let's start fixing the problem now that we have identified it. Let's take this same node.js webserver and turn it into a unikernel using the [Nanos](https://github.com/nanovms/nanos) kernel and the [OPS](https://github.com/nanovms/ops) unikernel orchestrator. If it's the first time you've done this you might want to check out this [tutorial](https://dev.to/eyberg/stateful-serverless-with-unikernels-4ma7) first. Before we build the image - want to see the entirety of the filesystem first? I didn't show you the filesystem in the previous example cause no one wants to sift through 20+ pages of a tree listing. ```bash ➜ sec-article ops pkg contents node_v12.13.0 File :/node File :/package.manifest Dir :/sysroot Dir :/sysroot/lib Dir :/sysroot/lib/x86_64-linux-gnu File :/sysroot/lib/x86_64-linux-gnu/libc.so.6 File :/sysroot/lib/x86_64-linux-gnu/libdl.so.2 File :/sysroot/lib/x86_64-linux-gnu/libgcc_s.so.1 File :/sysroot/lib/x86_64-linux-gnu/libm.so.6 File :/sysroot/lib/x86_64-linux-gnu/libnss_dns.so.2 File :/sysroot/lib/x86_64-linux-gnu/libpthread.so.0 File :/sysroot/lib/x86_64-linux-gnu/libresolv.so.2 Dir :/sysroot/lib64 File :/sysroot/lib64/ld-linux-x86-64.so.2 Dir :/sysroot/proc File :/sysroot/proc/meminfo Dir :/sysroot/usr Dir :/sysroot/usr/lib Dir :/sysroot/usr/lib/x86_64-linux-gnu File :/sysroot/usr/lib/x86_64-linux-gnu/libstdc++.so.6 ``` Yep - that's all 20 files of it. Actually 6 of those are just directory entries. 
Ok, let's build the image first: ```bash ➜ sec-article cat build.sh #!/bin/sh export GOOGLE_APPLICATION_CREDENTIALS=~/gcloud.json ops image create -c config.json -p node_v12.13.0 -a main.js ``` ```bash ➜ sec-article ./build.sh [node main.js] bucket found: my-bucket Image creation started. Monitoring operation operation-1573756133681-59752a74faae6-6944eb54-9f7ee502. ............ Operation operation-1573756133681-59752a74faae6-6944eb54-9f7ee502 completed successfully. Image creation succeeded node-image. gcp image 'node-image' created... ``` Then we can boot it: ```bash ➜ sec-article ops instance create -z us-west1-b -i node-image ProjectId not provided in config.CloudConfig. Using my-project from default credentials.Instance creation started using image projects/my-project/global/images/node-image. Monitoring operation operation-1573756213461-59752ac110224-644f49fc-4440b475. ..... Operation operation-1573756213461-59752ac110224-644f49fc-4440b475 completed successfully. Instance creation succeeded node-image-1573756213. ``` ```bash ➜ ~ curl -XGET http://35.247.123.61/ Error: spawn ENOSYS ➜ ~ curl -XGET http://35.247.123.61/safe dev,etc,kernel,lib,lib64,main.js,node_v12.13.0,proc,sys,usr ➜ ~ curl -XGET http://35.247.123.61/cmd\="touch%20tmp" Error: spawn ENOSYS ``` So we can see we are safely not allowing any other processes to be spawned on the end machine. It isn't a matter of filtering the calls either - the system itself straight up doesn't have support for it. If you want more programs running just boot up another instance and if you want more performance out of the server look at other languages. There are other reasons why we advocate serverless unikernels like this besides security and in upcoming blogposts we'll start showing other superpowers of this style of infrastructure.
eyberg
205,540
2nd Interview Experience(Python-Dev)
Interview experience for a python developer position
0
2019-11-14T20:05:18
https://dev.to/mujeebishaque/2nd-interview-experience-python-dev-35p0
python, career, beginners
--- title: 2nd Interview Experience(Python-Dev) published: true description: Interview experience for a python developer position tags: #python #career #beginners cover_image: https://thepracticaldev.s3.amazonaws.com/i/q99kjqostk9h0jtlfvsa.gif --- ### Introduction I found the post on a local facebook page that shares IT jobs in my area. They were looking for a python developer with experience in Linux and RESTful APIs. It was an entry-level position. I'd say the whole process took 3 days. I got a call on Monday and got invited for an interview on Wednesday. ### Interview Questions/Answers As always, the interview started with a brief introduction. You need something, tea/coffee/water? I said, No. Well, let's get started. 1 - Where are you working as of now? Answer - a local startup 2 - Why are you leaving them? Answer - Not leaving them, they are closing. The startup is dead. 3 - What was your role there? Answer - Python dev. (stared at the resume for quite some time) 4 - Which distro? Answer - Ubuntu. 5 - Experience in Python? Practical experience. Answer - 1.5 Years 6 - On the scale of 1-10 where would you rate your OOP skills? Answer. Around 9. I have not written much comprehensive code. Mostly, everything that needs to be done can be found online or there is always a library for it. Python makes things easy for you. 7 - What's the difference between abstract class and interface? Answer - I had forgotten this. I recollected things about interface and I answered about it but was not able to remember anything about abstract class. 8 - Sorry, what's an interface? Answer - PyQt5, I use it to create the interfaces. Oh, you mean, The interface? Ah, yeah, it's ........ 8a - Difference between encapsulation and abstraction? Answer - Abstraction hides unwanted details. Encapsulation hides the data and code in one Unit. 8b - What's polymorphism? Answer. Many forms of a function.
It's like, when you need to have a separate definition for the same function as in parent class, you use polymorphism, it's called polymorphism, yeah. The word polymorphism means many forms. 9 - What about Flask? Flask vs Django? Answer. Flask is good. I prefer it because it's minimalistic. 10 - Do you know Flask? Answer - Not much, I've worked with it. But I primarily do Django. 11 - You can develop API in Django? Answer - yes. And I have made one, it's for quotes. * What do you like about Django? Answer - Admin Panel, Model Forms, Builtin-Database. 12 - hmmm, In your resume, it says that you've worked on an IoT product? What were your contributions? Answer - Not much, had to improve the code they already had, schedule a task, check for internet connectivity and run a task on startup. 12a - Which distro for the IoT product? Answer - Stretch. Raspbian. and I tried arch, it wasn't working for some reason and due to time constraints I quit doing experiments and went with raspbian. 12b - How many products you sold? Answer - None. No one bought it. 13 - Which database do you like? Answer - SQLite and MySQL 14 - Most proficient in? Answer - Mysql 15 - Here's a table(he drew it on a paper). remove redundancy or just check for redundancy for names. We need unique. Answer: I'd use stored procedures or maybe not, let's see. SELECT NOT DISTINCT names from Table; No--- maybe not, let's try another way. SELECT COUNT(names) FROM Table having COUNT(names) > 1; 16 - What's the difference between stored procedure and a function? Answer = IDK. (There's a difference though) 17 - RestAPI work with which data format? Answer - Json. 18 - Do you have any questions? Answer - Can you please let me know if I am not selected through email as early as possible. HR Dept. don't send an email of rejection mostly, they just don't respond. yeah, I will talk to HR about this. Have a good one. Me: Thanks, thank you for your time.
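For anyone revisiting question 15 afterwards: a working version of the duplicate check in SQL would be `SELECT names, COUNT(*) FROM Table GROUP BY names HAVING COUNT(*) > 1;` (the answer given in the interview was missing the `GROUP BY`). The same idea, sketched in plain JavaScript for illustration (function and sample names are mine, not from the interview):

```javascript
// Find names that appear more than once — the "redundancy" check
// from question 15, expressed over a plain array instead of SQL.
function findDuplicates(names) {
  const counts = new Map();
  for (const name of names) {
    counts.set(name, (counts.get(name) || 0) + 1);
  }
  // Keep only names whose count exceeds 1, like HAVING COUNT(*) > 1.
  return [...counts.entries()]
    .filter(([, count]) => count > 1)
    .map(([name]) => name);
}

console.log(findDuplicates(['ali', 'sara', 'ali', 'omar', 'sara'])); // → [ 'ali', 'sara' ]
```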
mujeebishaque
205,548
Bdbdbd
Dbbshs
0
2019-11-14T19:49:26
https://dev.to/barkanyid/bdbdbd-51b6
Dbbshs
barkanyid
205,616
Vanilla JavaScript Infinite Scroll using WordPress REST API
This pen is a real example of how to build an Infinite Scroll in Vanilla JavaScript. I've used Fetch API, Intersection Observer API, and WordPress REST API to fetch posts. Feel free to fork, use, and modify this code.
0
2019-11-14T22:42:22
https://dev.to/castroalves/vanilla-javascript-infinite-scroll-using-wordpress-rest-api-2pjl
javascript, wordpress, showdev, tutorial
--- title: Vanilla JavaScript Infinite Scroll using WordPress REST API description: This pen is a real example of how to build an Infinite Scroll in Vanilla JavaScript. I've used Fetch API, Intersection Observer API, and WordPress REST API to fetch posts. Feel free to fork, use, and modify this code. published: true tags: javascript, wordpress, showdev, tutorial cover_image: https://thepracticaldev.s3.amazonaws.com/i/hxnk9wj1p74ts8kw8epm.jpg --- <p>This pen is a real example of how to build an Infinite Scroll in Vanilla JavaScript. I've used Fetch API, Intersection Observer API, and WordPress REST API to fetch posts. Feel free to fork, use, and modify this code.</p> {% codepen https://codepen.io/castroalves/pen/YdGyKY %}
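For readers who want the gist without opening the pen, here is a rough sketch of the fetching half. The `/wp-json/wp/v2/posts` route and the `page`/`per_page` query parameters are standard WordPress REST API; the page size, site URL, and element id below are illustrative and not taken from the pen:

```javascript
// Build the WordPress REST API URL for one page of posts.
// `/wp-json/wp/v2/posts` is the core posts route; paging uses
// the standard `page` and `per_page` query parameters.
function buildPostsUrl(site, page, perPage = 5) {
  return `${site}/wp-json/wp/v2/posts?page=${page}&per_page=${perPage}`;
}

// In the browser, an IntersectionObserver watching a sentinel element
// at the bottom of the list triggers the next fetch:
//
//   let page = 1;
//   const observer = new IntersectionObserver(async (entries) => {
//     if (!entries[0].isIntersecting) return;
//     const res = await fetch(buildPostsUrl('https://example.com', page++));
//     const posts = await res.json();
//     posts.forEach(renderPost); // renderPost appends each post's markup
//   });
//   observer.observe(document.querySelector('#sentinel'));
```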
castroalves
205,756
Heroku vs DigitalOcean: An Experiment in the Making
The Experiment's Table of Contents Part 1: An Experiment in the Making Part 2: Getting Started with D...
3,266
2019-11-15T03:36:24
https://medium.com/@standingdreams/heroku-vs-digital-ocean-an-experiment-in-the-making-ce375e7976d
devops, heroku, digitalocean, mediatemple
The Experiment's Table of Contents Part 1: An Experiment in the Making Part 2: [Getting Started with DigitalOcean](https://dev.to/standingdreams/heroku-vs-digitalocean-getting-started-with-digitalocean-29j0) --- I have been a loyal customer of [MediaTemple](http://bit.ly/2hnP7tF) for years. They made my hosting needs extremely easy. I loved the customer service. I hosted sites for friends and family with ease. I just loved it. However, as I got further into my development career and wanted more, I felt my tier was very lacking. I was paying $20 a month for a GridContainer. While it served my basic needs, it left me with no access to the root where all the fun happens. Being a true tinkerer, I found this unacceptable. At the company I work at, we use Heroku and [DigitalOcean](https://m.do.co/c/0d5110e21375) on a few sites. I loved the features that came with Heroku but I loved the control that came with DigitalOcean. Like any great developer, I turned to Google for some good comparisons between DigitalOcean and Heroku. I found a few blog posts and Quora posts but nothing answered my questions exactly. I wanted someone to spell it out for me. You know…make my life easier. Isn't that what Google is for?? Since I couldn't find the answers, I figured I'd do a little experiment to answer my own question. Plus it gave me a chance to play with new stuff. (TOYS!!!) > Heroku is a PaaS - i.e. Platform as a service, letting you run apps on their platform their way. DigitalOcean is an IaaS, or Infrastructure as a Service, which gives you raw servers to compose as you want to run your app. The differences are mainly around pricing and control vs effort. *An answer on [Quora](https://www.quora.com/What-is-better-Heroku-or-Digital-Ocean) by [Russell Smith](https://www.quora.com/profile/Russell-Smith-1)* For starters, I am not a DevOps guy. I'm a front-end developer with some backend chops. 
I have worked in both DigitalOcean and Heroku before, but it was like someone's grandmother poking around an iPhone: slow and cautious with lots of questions and fear I would burn the internet down with every click. I do know that Heroku is a PaaS (Platform as a Service) and DigitalOcean is an IaaS (Infrastructure as a Service). The comparison between the two only comes up because people rely on both in some capacity for hosting some sort of site, web app or API. However, because they are basically different, there are fewer solid answers as to which you should turn to when you're looking for a new hosting solution. SO here we are! My goal is to take [my portfolio](https://www.standingdreams.com) that runs off of [Perch CMS](https://grabaperch.com/ref/roger6/) and [a legacy WordPress](http://www.standingdreams.com/lifestyle) blog, host them on both services for one month each and record my findings. AGAIN…I am by no means a DevOps guy. I will be depending on the services' documentation, customer service reps, [StackOverflow](https://stackoverflow.com/users/4240495/douglas-rogers) and Google for all of my needs. Since I have credits with DigitalOcean (and because my coworker called heads for DigitalOcean on a coin toss), DigitalOcean will be first up. I'll simply post findings, frustrations, tips and tricks, rants and raves about my experience for the next person that is looking at Heroku and [DigitalOcean](https://m.do.co/c/0d5110e21375) for their hosting needs. Wish me luck. 👍🏾
standingdreams
205,859
The golden age of SaaS
Most people overestimate what they can do in one year and underestimate what they can do in ten...
0
2019-11-15T10:54:24
https://dev.to/happydragos/the-golden-age-of-saas-591i
saas, developer, startup, growth
--- title: The golden age of SaaS published: true description: tags: #saas #developer #startup #growth --- Most people overestimate what they can do in one year and underestimate what they can do in ten years. Most likely, this Bill Gates quote refers to building businesses. It can take a lot of effort and persistence. Building a SaaS business is a 10-15 year journey. I would like to extend this to developers. Most developers overestimate what they can build in 3 months and underestimate what they can do in one year. When I started writing code for Archbee in December last year, I thought if I went full-time and performed my best I could build a decent product (compared to competitors) in 3 months. After 3 months, I had a product made of tangled code, barely working, low on features, and so buggy that not even friends wanted to use it. Almost 6 months in, the first paying customers arrived, and I almost had to beg them not to leave because the product was still buggy and missing features. Good people, but still, the product didn't deserve it. 9 months in, some early signs of traction, the product in ok shape, but 55k lines of still tangled & messy code. But looking way better from the outside. I took the next month rewriting almost everything and not worrying about new features as the product was at 95% parity with my competitors. And some extra ones they didn't have 💪. 11 months in, I saw some companies choosing my product over big competitors. Mature competitors. VC-funded competitors. Bootstrapped competitors. This is a reminder to developers that they can go build businesses for themselves. You don't need connections, venture capital or anything else. The only requirement is a strong mind. It's mental more than anything else. When you do decide you want to get investment, your company will be a beast in their eyes. Because you deserve it. You went through all the adversity necessary to earn the respect of investors and make them believe in you without saying one word. Your MRR and MoM growth tells the whole story. 
If you decide to keep bootstrapping, you're still in very good shape having been in the trenches for so long. The longer you stay bootstrapped, the more valuable you become because you don't need anybody and anything. Do it... there is nothing you'll regret more if you don't. Imagine yourself, 45-50 years old, knowing you've been through the golden age of SaaS and you did fucking nothing. ❤️
happydragos
236,604
A step towards a faster Web: Early flushing in c#.net
A simple demonstration to Early flushing in dotnet. Flushing, Early flushing, head flushin...
0
2020-01-12T08:40:26
https://dev.to/uzumakinarut0/early-flushing-in-c-net-c2
dotnet, csharp, webdev, webperf
## A simple demonstration of early flushing in dotnet. Flushing, early flushing, head flushing or progressive HTML is when the server sends the initial part of the HTML document to the client before the entire response is ready. All major browsers start parsing the partial response. If done correctly, the browser won't sit idle after requesting your page; rather, it can start processing other important things in the meantime, like requesting static assets which will be used later on the site. It can give a significant perceived performance gain. In this example, I have used `Thread.Sleep(200)`. This could be the time where your page does heavy database calls and other computation. ### HomeController.cs ```cs public ActionResult Index() { PartialView("/Views/Shared/_HeadPart.cshtml").ExecuteResult(ControllerContext); Response.Flush(); Thread.Sleep(200); return PartialView("/Views/Home/Index.cshtml"); } ``` ### _HeadPart.cshtml ```html <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <script src="https://code.jquery.com/jquery-3.4.1.min.js"></script> <title>Hey - My ASP.NET Application</title> </head> ``` ### Index.cshtml ```html <body> <div class="row"> Hey there, How u doin'? </div> </body> </html> ``` ### Results With Early Flushing: ![With Early flushing](https://thepracticaldev.s3.amazonaws.com/i/avn03f90frg6acf7e8c3.PNG) ### Results Without Early Flushing ![Without Early flushing](https://thepracticaldev.s3.amazonaws.com/i/9ae3w1samk2zx8g7t707.PNG)
uzumakinarut0
205,905
Tips to Safeguard Your Magento E-Commerce Store
This post showcases the essential security tips that will help protect the Magento eCommerce store.
0
2019-11-15T13:17:35
https://dev.to/apptechblogger/tips-to-safeguard-your-magento-e-commerce-store-173
ecommerce, ecommercedevelopment, magentodevelopment
--- title: Tips to Safeguard Your Magento E-Commerce Store published: true description: This post showcases the essential security tips that will help protect the Magento eCommerce store. tags: ecommerce, ecommerce development, magento development --- A recent study by a leading research group found that in the Alexa Top One Million websites, 12,708 e-commerce websites are underpinned by Magento. Considering that there are a total of about 467 technologies in the world that enable the development of e-commerce stores, the aforementioned translates into a market share of 14.31 percent for Magento! So, it is clear to see that this particular framework is a highly appreciated one in the ecosystem, and those who are even vaguely acquainted with Magento would tell you that its popularity is not surprising at all. And sure enough, it is a terrific resource — empowering businesses across the globe with a world of productive functionalities. But the fact that it is so sought after makes Magento especially susceptible to attacks from hackers. The platform is offered with a lot of security-related features as well; in fact, security is an inherent part of the package. Unfortunately, hackers can often get creative when they target businesses, and there’s only so much the platform can do to protect one’s business against hacking attempts. Thus, the best thing to do is to make sure that you follow industry best practices, for this will help make sure that you can protect your e-commerce platform as well as the wealth of data it holds in the best possible manner. And to make it a little easier for you, we put together a list of some of the essential security-related tips for your Magento e-commerce store. 1. Two-factor authentication: Instead of depending on only one password to pretty much protect your entire business, it is vital that you make use of two-factor authentication (2FA) to enhance the Magento website’s safety. 
The platform itself offers 2FA capabilities that can be used to make sure that only assigned people and those with the proper authority can access the backend of your Magento store. And the best part is that there exist several other such Magento extensions that offer 2FA, which means store owners can rest a little easier about the store’s security. 2. Encrypted SSL connections are essential: Any company operating an e-commerce business must make sure that the data sent by the website is secure, which can be achieved by employing SSL (Secure Sockets Layer) encryption for the connection. You see, an unencrypted connection means data, including login credentials, bank account data, and more, can be easily stolen. 3. Use a strong password: This one may seem obvious, but it is so important that it finds mention on this list. And then there’s also the fact that an unbelievable number of people end up ignoring this aspect. The point is that while you are creating a password for the store’s Admin, make sure that it is at least ten characters long, includes both special characters as well as numbers, and uses alphabets in capital and lowercase. [Magento website development](https://www.rishabhsoft.com/magento-development-services), when done following best practices such as the ones mentioned above, can enable a business to grow at an unprecedented rate and that too without a hitch.
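The rules in tip 3 are easy to encode as a quick check. A minimal sketch (the rules are exactly the ones stated above — at least ten characters, a number, a special character, and both upper- and lowercase letters; the function name and sample passwords are illustrative):

```javascript
// Validate an admin password against the rules from tip 3:
// ≥10 characters, at least one digit, one special character,
// and both uppercase and lowercase letters.
function isStrongPassword(password) {
  return (
    password.length >= 10 &&
    /[0-9]/.test(password) &&        // at least one number
    /[^A-Za-z0-9]/.test(password) && // at least one special character
    /[a-z]/.test(password) &&        // at least one lowercase letter
    /[A-Z]/.test(password)           // at least one uppercase letter
  );
}

console.log(isStrongPassword('magento123'));   // false — no special char, no uppercase
console.log(isStrongPassword('M@gento-2024')); // true
```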
apptechblogger
205,963
Use CSS Subgrid to layout full-width content stripes in an article template
CSS Grid was missing one important piece when it's Level 1 specification was released into the world:...
0
2019-11-15T19:44:54
https://bryanlrobinson.com/blog/use-css-subgrid-laying-out-full-width-article-stripes/
css, cssgrid, webdev, tutorial
--- title: Use CSS Subgrid to layout full-width content stripes in an article template published: true date: 2019-11-15 00:00:00 UTC tags: css, cssgrid, webdev, tutorial canonical_url: https://bryanlrobinson.com/blog/use-css-subgrid-laying-out-full-width-article-stripes/ cover_image: https://bryanlrobinson.com/images/subgrid-topper.png --- CSS Grid was missing one important piece when its Level 1 specification was released into the world: Subgrid. The Level 2 specification is still in "Working Draft" status, but we have our first browser implementation. Mozilla has the Subgrid syntax in Firefox Nightly. Now that we have a live implementation, I've begun playing with the subgrid. Subgrid allows for elements that are grandchildren of the Grid container to take part in the initial grid definition. If you want a primer on all things Grid Level 2, [read Rachel Andrew's excellent article on it](https://www.smashingmagazine.com/2018/07/css-grid-2/). For a fun layout using its power, [read Michelle Barker's CSS{IRL} post](https://css-irl.info/subgrid-is-here/). In this article, we'll be exploring one specific use case: augmenting a Grid-infused article layout. This article layout will allow for certain sections of content to break out into full-width areas. > If you're looking for more information on this design pattern, it's covered in [my CSS Grid course here](https://store.codecontemporary.com/practical-css-grid). ## Setting up our Grid Let's dive in to the code. Our first task will be setting up our article's basic Grid. In this case, we'll take advantage of the power of Grid's "named grid lines" to make our lives easier down the road.
```css .article-body { display: grid; grid-template-columns: [fullWidth-start] 1rem [left-start] 1fr [article-start right-start] minmax(20ch, 80ch) [article-end left-end] 1fr [right-end] 1rem [fullWidth-end]; } ``` This setup gives us `1rem` gutters on the left and right hand side, followed by "[variably squishy](https://blog.logrocket.com/examining-squishiness-in-intrinsic-web-design-1005d30dda0c/)" gutters of `1fr`. Finally, we have a center column with a minimum size of `20ch` and a maximum of `80ch`. The `ch` unit gives us a comfortable reading line-length for the center column. We can use the names in the braces (`[]`) to then place content in any of the areas we've created via `grid-column: fullWidth;`. Struggling to visualize that? I don't blame you. Here's a quick graphic to illustrate: ![Graphic illustrating how the lines are formed and named with a few boxes illustrating where boxes assigned to various lines will show up.](https://bryanlrobinson.com/images/subgrid-visualization.png) ## Placing content on our grid Now that our grid is set up, we need to place content on it properly. Any elements within the `article-body` element will fill grid areas across the horizontal axis. This will look absolutely busted (a colloquialism meaning REALLY broken). Let's fix that by putting any direct child into the `article` grid column. ```css .article-body > * { grid-column: article; } .full-width { grid-column: fullWidth; background-color: lightblue; /* For that full-width feeling! */ } ``` Now our general content will be in a nice constrained column and any element with `class="full-width"` will go in a full-width stripe. This is handy enough without going any deeper. Anything inside that element can now be styled as a full-width item with all the white space it wants. But what if you want to have an element centered the same way inside the stripe? You'd need to create a new grid context and create columns of the proper size.
In our example, we could do that, but in some circumstances dealing with "variable squishiness" may make that impossible. Even though it's possible in our case, the code would have us repeating ourselves in odd ways. To create additional layouts inside our full-width stripe that take part in the initial grid declaration, we need subgrid! ## Enter Subgrid The `subgrid` specification gives us access to the initial grid declaration's columns and/or rows. I remember hearing some debate around the syntax and whether it should inherit the entire grid or let the author control columns or rows. I'm so glad they landed where they did. The syntax feels inspired. In our use case, we just need the columns. ```css .full-width { display: grid; grid-template-columns: subgrid; } ``` That's it! We now get all of the columns that were declared on `.article-body`. Let's use those named lines and create some classes that we can use for various types of content inside a full-width stripe. ![Boxes placed inside the full-width element using the grid-column declarations below](https://bryanlrobinson.com/images/subgrid-visualization-child.png) ```css .fullWidth-center { grid-column: article; } .fullWidth-right { grid-column: right; text-align: right; } .fullWidth-left { grid-column: left; } ``` When we put it all together, we can create some interesting layouts with minimal effort! Here's the [finished Codepen](https://codepen.io/brob/pen/qBBxydZ?editors=0100). You'll need to be running the [Firefox Nightly](https://www.mozilla.org/en-US/firefox/channel/desktop/) build to see the finished product. {% codepen https://codepen.io/brob/pen/qBBxydZ %} Speaking of needing a specific browser ... I can hear a few of you out there! "But Bryan!" you say. "Browsers don't support this yet! Even when the modern browsers are supporting this, old browsers still need support! I guess we can't use this yet. Oh well!" No, no. You don't get off that easy.
## Supporting browsers that don't support subgrid ![A comparison of Firefox Nightly vs Firefox 70 and how this support query looks](https://bryanlrobinson.com/images/subgrid-comparison.png) Just like [supporting browsers that don't support Grid yet](https://bryanlrobinson.com/blog/your-code-should-fall-forward/), we can support browsers that don't support subgrid. Since we know the answer to the question "[Do websites need to look exactly the same in every browser (... dot com...)](http://dowebsitesneedtolookexactlythesameineverybrowser.com/)" is "No," let's talk about what this design pattern can look like in older browsers. What if we started our stripes as full-width with centered content? In most situations that should be enough. It's a clean design pattern and then we can "fall forward" into newer, cooler design patterns. Let's talk about what we need to change. First, let's declare a base style for `.full-width`. We'll use one of my favorite unexpected design patterns: [the self-centering stripe with grid](https://bryanlrobinson.com/blog/use-css-grid-to-create-full-width-background-with-centered-content/). ```css .full-width { grid-column: fullWidth; /* Sets where the element is in the parent grid */ background-color: lightblue; /* Pretty light blue! */ display: grid; /* Sets a new Grid context */ grid-template-columns: minmax(20ch, 80ch); /* 1 column to match the [article] sizing */ justify-content: center; /* Center the content */ padding: 1rem; /* Keeps gutters in shape for mobile */ } ``` Now, we'll use the power of CSS Feature Queries to fall forward into subgrid support. In order to do this, we'll unset a few values from the previous code and put our subgrid code in the CSS. ```css @supports (grid-template-columns: subgrid) { .full-width { grid-template-columns: subgrid; /* changes columns from 1 to inheriting grid lines */ padding: 0; /* Unset padding... 
that's built into our columns */ } /* All the other selectors we need */ .fullWidth-center { grid-column: article; } .fullWidth-right { grid-column: right; text-align: right; /* Don't want to right align the text unless the element is right-aligned as well */ } .fullWidth-left { grid-column: left; } } ``` You now have an interesting layout in browsers that support subgrid and a perfectly lovely layout for browsers that don't. The future of web layout is amazing. What are some other design patterns you might use CSS Subgrid for? [Send me a message on Twitter](https://twitter.com/intent/tweet?text=Here%20is%20what%20I%20think%20about%20your%20subgrid%20article%20https://bryanlrobinson.com/blog/use-css-subgrid-laying-out-full-width-article-stripes/) and let me know your thoughts.
brob
206,006
Less confusing defaults
A few thoughts about less confusing default configuration.
0
2019-11-15T17:08:02
https://dev.to/rumkin/less-confusing-defaults-1h2m
programming, javascript, node, web
--- title: Less confusing defaults published: true description: A few thoughts about less confusing default configuration. tags: programming, javascript, nodejs, web --- The less confusing (and harmful) defaults for code and configuration are different and opposite. Here they are: # By Default 1. Run production code. 2. Use development configuration. Anything else should be specified explicitly. ## Why? Development code can skip some checks or allow users to override permissions. Production code is (well, should be) free of such dangerous behavior. That's why production code should be run by default. At the same time, development configuration usually specifies a test database and API endpoints. Thus such configuration can't spend users' funds or send real messages, and is considered less harmful. ## How ### Debug/Dev mode ❌ Wrong: ```js const DEBUG = process.env.NODE_ENV !== 'production' ``` ✅ Correct: ```js const DEBUG = process.env.NODE_ENV === 'development' ``` ### Config ❌ Wrong: ```js const CFG = process.env.NODE_ENV || 'production' const config = require(`configs/${CFG}.js`) ``` ✅ Correct: ```js const CFG = process.env.NODE_ENV || 'development' const config = require(`configs/${CFG}.js`) ```
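The two rules can be combined in one small helper. A sketch of the idea (the helper name is mine, not from the snippets above):

```javascript
// Resolve debug mode and config name from NODE_ENV, following the
// post's two rules: code defaults to production behaviour, while
// configuration defaults to the development file.
function resolveDefaults(nodeEnv) {
  return {
    debug: nodeEnv === 'development', // run production code by default
    config: nodeEnv || 'development', // use development config by default
  };
}

console.log(resolveDefaults(undefined));    // { debug: false, config: 'development' }
console.log(resolveDefaults('production')); // { debug: false, config: 'production' }
```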
rumkin
206,032
⚡ Announcing Byteconf GraphQL: a free GraphQL conference, streamed online
I'm super excited to announce the return of Byteconf, Bytesized Code's free live-streamed conference...
0
2019-11-15T18:17:34
https://dev.to/bytesizedcode/announcing-byteconf-graphql-a-free-graphql-conference-streamed-online-2a0b
graphql, showdev, webdev
I'm super excited to announce the return of Byteconf, Bytesized Code's free live-streamed conference series. **On January 31st, join us for Byteconf GraphQL, whether you're on your couch, at your office, or wherever you may find yourself at the beginning of the new decade.** I'm thrilled to be announcing the conference today, but it wouldn't be a complete event without a great group of attendees, and of course, an incredible selection of speakers. If you're a GraphQL enthusiast, consider submitting a talk via our newly-opened Call for Papers (see the link at the end of this email) for the event – Byteconf is a great place to hone your speaking skills, and we'd love submissions from speakers of all experience levels and backgrounds. If you're excited to attend Byteconf GraphQL, you can help us put on an amazing conference by doing two things: first, RSVP and get your ticket using the link below – we'll add you to the list for the conference and you'll be first to know when we announce new speakers and updates for the conference. Second, share it with your friends! If you're on Twitter, you can retweet our announcement post (below), or if you want to put your own spin on the event, we'd love to see your tweets and retweet them to our audience. {% twitter 1195356113472040962 %} Interested in speaking? [Check out the Byteconf GraphQL 2020 Call for Papers on Papercall.​](https://www.papercall.io/byteconf-graphql-2020) See you in January! By the way, if you're interested in what Byteconf is all about, check out our past conferences – every talk from every past Byteconf is available to watch on [our YouTube channel](https://www.bytesized.xyz/s/youtube). Rad! 
- [Byteconf JavaScript 2019](https://www.youtube.com/playlist?list=PLH_Crma-Dc9MVB5yfC1ZwNxv8r4vhvZUQ) - [Byteconf React Native 2018](https://www.youtube.com/playlist?list=PLH_Crma-Dc9OLKleEIrzuwOmxyGWuZbRW) - [Byteconf React 2018](https://www.youtube.com/playlist?list=PLH_Crma-Dc9PRM7KxKerImYGUY22wkR3Z) - [Byteconf Reason 2018](https://www.youtube.com/playlist?list=PLH_Crma-Dc9MmJmvjov5Yo8dfuQYn5Jhf)
signalnerve
206,044
Networking with Benefits
I was not born to talk to people. I was super introverted all the way through college. I was okay...
0
2019-11-15T20:25:41
https://dev.to/williamjfermo/networking-with-benefits-gn8
networking, datascience, linkedin
![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/2peknj1lqdoqbkb52445.png) I was not born to talk to people. I was super introverted all the way through college. I was okay with studying, then going home and playing video games all day. It wasn't until I went to graduate school that I started to open up more and took to the challenge of wanting to meet other people. So out of self-preservation I did something I was uncomfortable with and that was to talk to people. I was starting a new graduate program and knew no one. I attended meetups and started to meet people. Before I knew it I had made friends. I am applying that same philosophy to being a data scientist. I know nothing about the field other than what I've been studying, but I want to know more about the industry. One way to do that is through networking. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/tmidervvpndtzqc1kt8g.jpeg) Every person I network with gives me new knowledge that may help me reach my goal. They give me tips and even motivation that what I want is within reach, I just need to keep fighting for it. Sometimes you need that extra pair of eyes to show you something about yourself you didn't know you had. To get myself more entrenched in learning about the industry I have been joining [meetup](https://www.meetup.com/) groups that are in line with what I am interested in, such as data science, machine learning, python, etc... Meetups are not only a great way to network but they also have lectures and lightning talks helping you gain more knowledge. Now that I have gone over why to network, I will give you some tips on how I do it and how you can apply it when you need to network. You have to approach networking like you are trying to get a beautiful woman's/man's number. If you can do that, you can network. Now if you can't, then here are some tips on how I do it and how you can do it too. 
![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/b1i0qqzf76n8dizw2xdi.jpg) This is a picture of me in a gorilla costume I ran a 5k in. It was not comfortable and after the race I took off my mask and still looked like a mess. I talked to a female runner after the race, still in gorilla gear, and got her number. How that happened wasn't because of how I looked; it was because of confidence. Don't go up to anyone and be timid, be confident. I've tried a bunch of different lines before but the best line to use is a simple "Hello". ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/82dm1j3mus4i6e9xqd1f.jpg) Learn to talk to people in groups. It will be rare to see someone just standing by themselves and waiting for you to talk to them. The people you will want to talk to will be the same people others want to talk to. Waiting for them to be alone will often not work, they will always be talking to someone. You have to somehow insert yourself into the conversation. Like when joining different dataframes, you need to find that key. I try to overhear what people may be talking about, then just start talking about the same topic to get in there. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/yz4t3e2p9b960u06rji2.jpg) Now that the hard part is over, what do you talk about? I always tailor my conversation depending on where I am at; if you are networking after a meetup or at a bar you can't approach them the same way. Don't go straight into "give me a job". Make it conversational and ease into it. Questions I sometimes ask: What are highly sought after skills in your industry? How do you like your job? Are you hiring? Questions I get asked: Why data science? Where did you work before? Do you have a kaggle account? What are your plans after? If you think this contact is networkable get their card, linkedin, or even a phone number. Never forget to do this. Always close! 
After connecting on LinkedIn, send a follow-up message saying it was nice meeting them and personalize it with something you may have talked about. Now if you follow all these simple steps and still get a no, just do it again. Rejection is part of the process. The important part is being able to brush it off and to not stop trying. If you follow these simple things you should be able to network professionally.
williamjfermo
206,131
Drag and Drop Tables with React-Beautiful-DND (Part I)
This week, I wanted to experiment learning a new React component and implementing it into my Effectiv...
0
2019-11-16T02:40:51
https://dev.to/milandhar/drag-and-drop-table-with-react-beautiful-dnd-54ad
react, npm, ux, webdev
This week, I wanted to experiment learning a new React component and implementing it into my [EffectiveDonate](http://effectivedonate.herokuapp.com) website. I began to think of what aspects of the site could use a cool new feature to improve its UX, and focused in on the Profile page. Previously, the Profile page allowed users to update their default themes (Health, Education, etc), and also to view the nonprofit projects that they had starred. The list of projects was organized in a [Semantic UI Table](https://react.semantic-ui.com/collections/table/), and enabled users to view key information about the projects, donate to the project, or delete the project from their stars. However, the table was sorted in chronological order, so that the user's most recent starred projects were all the way at the bottom of the table - not the best UX! While I could have easily just sorted the table in reverse chronological order as a quick fix, I wanted to give the user some more control. So I started to brainstorm some solutions in React to make the table more dynamic. I found this [List of Awesome React Components](https://github.com/brillout/awesome-react-components), and read through a list of several drag and drop components. Drag and drop would be a nice, clean way to let the user customize their starred projects! I eventually chose [React Beautiful DnD](https://github.com/atlassian/react-beautiful-dnd) - it had over 17k stars on GitHub, a nice instruction video and many examples. ![Original Profile Page](https://i.imgur.com/FlmF3l5.png) *The original profile page, with starred projects table in chronological order* ##What is React-Beautiful-DnD? React-Beautiful-DnD is a React package with a goal of creating drag and drop functionality for lists that anyone can use, even people who can't see. 
The main [design goal](https://medium.com/@alexandereardon/rethinking-drag-and-drop-d9f5770b4e6b) is physicality - they want users to feel like they are moving objects around by hand. It also has accessibility features, including drag and drop using just the keyboard. It also plays nicely with tables, specifically the Semantic UI React Table component, which sealed the deal for me to use it. ##Implementing React-Beautiful-DnD on my Website In order to make my `StarredProjectsList` component DnD-able, I followed a [video course](https://egghead.io/courses/beautiful-and-accessible-drag-and-drop-with-react-beautiful-dnd) on react-beautiful-dnd, and referenced this [example](https://github.com/Frankcarvajal/rbdnd-suir-table/blob/master/src/components/SemanticUIDnDTable.js) of a Semantic UI table component. Also I made sure to install the package with: `npm install react-beautiful-dnd --save`. While I recommend going through the two resources I listed above to thoroughly understand the process for implementing the component in your project, I'll give a few highlights of key components in the API here: ### DragDropContext This [component](https://github.com/atlassian/react-beautiful-dnd/blob/master/docs/api/drag-drop-context.md) is required to specify which part of your React tree you want to be able to use drag and drop. For me, I wrapped my entire Semantic UI `Table` component with `<DragDropContext />`. A required prop for this component is `onDragEnd`, a function that dictates how the list or table's state should change once the drag operation is complete. The opening tag for my `DragDropContext` is the following: `<DragDropContext onDragEnd={this.onDragEnd}>`. The `onDragEnd` method finds the index of the starred project I dragged, and splices it into the array of my `starredProjects` state. Below is my implementation of the method: ``` javascript onDragEnd = result => { const { destination, source, reason } = result; // Not a thing to do... 
if (!destination || reason === 'CANCEL') { this.setState({ draggingRowId: null, }); return; } if ( destination.droppableId === source.droppableId && destination.index === source.index ) { return; } const starredProjects = Object.assign([], this.state.starredProjects); const project = this.state.starredProjects[source.index]; starredProjects.splice(source.index, 1); starredProjects.splice(destination.index, 0, project); this.setState({ starredProjects }); } ``` ### Droppable A [`<Droppable/>`](https://github.com/atlassian/react-beautiful-dnd/blob/master/docs/api/droppable.md) is a container for `<Draggable />` items. It can be dropped on by `<Draggable />`s. The only required prop for `<Droppable />`s is a string, `droppableId`. I wrapped my `<Table.Body/>` in the `<Droppable />` component, since that is the container of data on which I will be dragging rows. ### Draggable A [`<Draggable />`](https://github.com/atlassian/react-beautiful-dnd/blob/master/docs/api/draggable.md) is the React component that will actually be dragged around onto `<Droppable />`s. It must always be contained by a `<Droppable />`, but it can also be moved onto other `<Droppable />`s. The required props for `<Draggable />`s are: `draggableId` and `index`. Some important notes on these props: 1) the `draggableId` *must* be a string. I initially made mine an integer and was stumped when my table rows couldn't be dragged. But once I added the `.toString()` function to the prop, it was all good. 2) the `index` prop must be a consecutive integer `[1,2,3,etc]`. It also must be unique in each `<Droppable />`. Below is a snippet of my code where I wrap each `<Table.Row>` in a `<Draggable />` after mapping over each of the starred projects in state: ``` javascript {this.state.starredProjects.map((project, idx) => { return ( <Draggable draggableId={project.id.toString()} index={idx} key={project.id} > {(provided, snapshot) => ( <Ref innerRef={provided.innerRef}> <Table.Row ... 
``` ##Children Function Another quirk about the `<Droppable />` and `<Draggable />` components is that their `React` child must be a function that returns a `ReactNode`. If this child function is not created, the component will error out. The function takes two arguments: `provided` and `snapshot`. I recommend reading the documentation for both [`<Draggable />`](https://github.com/atlassian/react-beautiful-dnd/blob/master/docs/api/draggable.md) and [`<Droppable />`](https://github.com/atlassian/react-beautiful-dnd/blob/master/docs/api/droppable.md) to fully understand what these two arguments do and what props they take. Also, the `<Draggable />` and `<Droppable />` components require an `HTMLElement` to be provided to them. This element can be created using the `ref` callback in React or the ['Ref'](https://bit.dev/semantic-org/semantic-ui-react/ref) Semantic UI Component. This [react-beautiful-dnd guide](https://github.com/atlassian/react-beautiful-dnd/blob/master/docs/guides/using-inner-ref.md) does a good job of explaining the purpose of the `ref` callback and how to avoid any errors. For an example of how I used the `provided` and `snapshot` arguments of the child function, as well as the `Ref` Semantic UI Component in my table, here is a snippet of the `<Droppable />` tag: ``` javascript <Droppable droppableId="table"> {(provided, snapshot) => ( <Ref innerRef={provided.innerRef}> <Table.Body {...provided.droppableProps}> ... ``` ![GIF of the working DnD Table](https://media.giphy.com/media/H1vU3P0uSo4isgRKtb/giphy.gif) *The working DnD table* ## Conclusion Overall, it was a fun and informative process to implement my Semantic UI Table with react-beautiful-dnd. I enjoyed learning the component's API and it was interesting to work with concepts that were new to me, like the children functions and `ref` callbacks. 
I definitely recommend viewing the [video course](https://egghead.io/courses/beautiful-and-accessible-drag-and-drop-with-react-beautiful-dnd) on react-beautiful-dnd, and also checking out the example code online. You can also reference my [table component file](https://github.com/milandhar/mod5-project-frontend/blob/master/src/components/StarredProjectsList.js) on GitHub to fully see how I implemented the DnD components. While I am satisfied with the UX that is available on the table component now, the next step is to make it persist on the backend so that when the user refreshes the page, the table re-renders in the new order. This should require a bit of creative manipulation on the backend, which I am excited to tackle next week :) Thank you for reading and let me know if you have any questions or comments!
milandhar
206,377
A bit of clean code with Clojure 🔮
Meaningful names 😏 The name of a symbol or function should "answer all the big q...
0
2019-11-16T16:27:45
https://dev.to/wakeupmh/um-pouco-de-clean-code-com-clojure-4ok0
clojure, todayilearned
## Meaningful names 😏 The name of a symbol or function should "answer all the big questions". It should say why it exists, what it does, and how it is used. If a name requires a comment, it does not reveal its intent. ❌ **Wrong way** ![](https://i.imgur.com/POPdTRE.png) ✅ **Right way** ![](https://i.imgur.com/VET69gn.png) ### Method names 🤔 Methods should have verb or verb-phrase names, such as *post-payment*, *delete-page*, or *save*. Accessors, mutators, and predicates should be named for their value and prefixed with *get* or *set*. ## Functions 🧐 - **First rule**: *functions should be small*; - **Second rule**: *they should be smaller than that.* This implies that the blocks inside `if`, `else`, `while`, etc. statements should be one line long. That line should probably be a function call. Not only does this keep the enclosing function small, it also adds documentary value, because the function called inside the block can have a nicely descriptive name. ## Function arguments 🤠 A function should not have more than three arguments. Keep the number of arguments as low as possible. Now, when I say to reduce the size of a function, you should definitely think about how to shrink `try`-`catch`, since it already makes your code much bigger. My answer is to create a function containing only the `try-catch-finally` statements, and to extract the bodies of the `try`/`catch`/`finally` blocks into separate functions. ![](https://i.imgur.com/dpRNcXV.png) **This makes the logic very clear.** The function names easily describe what we are trying to achieve. Error handling can be ignored. This provides a nice separation that makes the code easier to understand and modify. ### Error handling 👨‍🏭👩‍🏭 A function should do one thing. Error handling is one thing. If a function has the `try` keyword, it should be the first keyword, and there should be nothing after the `catch`/`finally` blocks. 
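The extraction pattern described above is language-agnostic. Here is a minimal JavaScript sketch of it (all names are purely illustrative, not taken from the post's Clojure examples):

```javascript
// Sketch of the rule above: the function that owns try/catch does
// nothing else, and each block body lives in its own named function.
// All names here are illustrative.
const errors = [];

function logError(e) {
  errors.push(e.message);
}

function deletePageAndAllReferences(page) {
  if (!page) throw new Error('missing page');
  // ...delete the page and its references...
}

// This function does exactly one thing: error handling.
function deletePage(page) {
  try {
    deletePageAndAllReferences(page);
  } catch (e) {
    logError(e);
  }
}

deletePage(null);
console.log(errors); // [ 'missing page' ]
```

Read on its own, `deletePage` states the whole story: try to delete the page, and log any error. The details live behind descriptive names.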
🚨**If your code needs to be commented, you are doing something wrong**🚨
wakeupmh
212,875
DIY Volumetric capture
If you are working with VR, AR than you mush have heard about volumetric capture. This is a post how...
0
2019-11-29T11:37:44
https://dev.to/ievastelingyte/diy-volumetric-capture-56ie
volumetricvideo, volumetric, 3dscanning, unity3d
If you are working with VR or AR, then you must have heard about volumetric capture. This is a post on how to make your own DIY volumetric capture rig. You need to choose between a 180 degree and a 360 degree volumetric capture rig. For 180 degree volumetric capture you can use 1 or 2 Azure Kinect, Kinect v2, or Intel RealSense cameras. For full 360 degree volumetric video you will need 4 sensors. Additional hardware: 1 x light stand per sensor, 1 x regular US power extension, 1 x PC per 2 sensors. I recommend a PC with these specs: Windows 10 operating system and an Intel® Core™ i9 CPU at 3.0 GHz or higher. Once you set up your sensors you will get a very noisy point cloud, and the sensors won't be calibrated. Software will fix the noise and sensor calibration instantly. EF EVE ™ volumetric capture (https://ef-eve.com/volumetric-capture/) starts from $39 a month and has all the filters for point cloud cleaning and sensor calibration.
ievastelingyte
213,859
Data-* attributes. Hook your automated tests on proper selectors.
It is often a discussion about the best practice on selecting elements while writing UI automated tes...
0
2019-12-02T09:26:31
https://dev.to/auksainis/data-attributes-hook-your-automated-tests-on-proper-selectors-2nmp
testing, selectors, automation
There is often a discussion about the best practices for selecting elements while writing UI automated tests. Recently I started using ‘data-qa’ attributes instead of IDs. In this article, I share my experience and provide some examples on this topic. ##Why data-* attributes? If you need to update every single test after each code refactoring - it’s time to think about a better way of describing the page elements. From my experience, I used to add IDs, but is that the best way to select an item? Let’s talk about the most common selectors. Class attribute. The purpose of the class attribute is to style elements. This attribute can change very often and you’ll be forced to refactor your tests frequently. You can make your own class naming convention, however, the class purpose is different - you need to use things for what they are meant for. ID attribute. The purpose of an ID is to be a unique reference that is not repeated on a page. So if you have a list on your page - an ID is not such a good idea. Even if you use an array of the same IDs, they are no longer unique. Data-* attribute. According to the definition, “The data-* attribute is used to store custom data private to the page or application.” This “ghost” attribute can simply be used for automated tests and does not affect element selection in CSS or code logic in JavaScript. That’s why I decided to use data-* attributes. In my tests, I name this attribute “data-qa”, but you can choose any other name you like, just don’t forget to add the suffix after “data-” by defining any string as a value. By the way, if you seek efficiency and need those attributes in code without asking permission from the devs, then you need access to the project source. So you get the flexibility of the element not being tied to the content or style, however, there is one ‘but’: when you need to test whether the element name (e.g. button name) has changed, don’t forget to add ‘contains(text)’ to your data-* attribute. 
For example, your data-qa attribute is ‘homepage-btn-continue’ on the homepage and you want to be sure, that this button name was not replaced. ##Convention with examples Convention or in other words, agreement on how to describe various web page elements is very important. I like to say - if anyone can understand the defined element without additional explanation - you did the naming right. My goal was to make the tests readable and understandable directly from the code. It means another person should be able to easily understand which element will be clicked in a certain test by reading a test script. Data-qa attributes convention in our team consists of different cases. Below I will provide some of them: Do not use random names and numbers; If there is one element per page: Add a page name, element type (abbreviations can be used) and element name, for example: ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/dyogau4vf7qamyeguycn.png) If there are more same name elements per page: Add page name, element type (abbreviations can be used), element name or a part of the name and additional element type. For example, two buttons are called “Explore Templates” on the same page. So I’ll add the section name to the end of the row: ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/gnwnx16qkpo9rba7wqzx.png) If the element is in the modal window: Add modal window name, element type (abbreviations can be used) and element name, for example: ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/f59izur5vlgqoojn26ly.png) So that is the fragment of the convention we use. These guidelines help us to have clear and descriptive selectors. ##Conclusion In this article, I have shared one of the possible ways to make a small improvement in your automated tests. Of course, you need to add those data-* attributes by yourself. For me, it is a great way to simplify automated UI tests. You can select elements easier and make your tests easily readable. 
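As a small illustration of how such selectors are consumed in tests, here is a sketch of a tiny helper (the `qa` function is a hypothetical utility, not part of any test framework):

```javascript
// Build a CSS selector from a data-qa name following the convention above.
// The qa() helper is a hypothetical utility, not part of any framework.
const qa = (name) => `[data-qa="${name}"]`;

// In a browser or WebDriver-style test you would then write, e.g.:
// document.querySelector(qa('homepage-btn-continue')).click();

console.log(qa('homepage-btn-continue')); // [data-qa="homepage-btn-continue"]
```

Centralizing the attribute name in one helper means that if the team ever renames `data-qa`, only one line of test code changes.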
What is your experience on this topic?
auksainis
214,111
How to Become an Angular GDE
Today, there are over 100 Angular Google Developer Experts (GDEs) across the globe. These folks are i...
3,571
2019-12-02T17:55:48
https://fluin.io/blog/how-to-become-an-angular-gde
angular, gde, webdev, community
Today, there are over 100 Angular Google Developer Experts (GDEs) [across the globe](https://developers.google.com/community/experts/directory). These folks are impactful for the community, well versed in the world of building great web experiences, and experts in the technologies that make up [Angular](https://angular.io). We receive applications every day with new folks wanting to join the program, and this is one of the best parts of my job. Every week I get to meet new people that are passionate about sharing with, empowering, and engaging with others via technology. I get to meet people that are much smarter than me, and people that are building some of the coolest technology on the planet. One of the toughest parts of my job is that there are literally *thousands* of people doing this at local meetups, regional events, and in local and online communities, but not everyone can be called a GDE. We save that designation for the TOP experts, who are pushing the limits of the technology, who are the best teachers and sharers, and those out there who are having the biggest impact on their communities. To help those that think the Angular GDE designation might be for them, I want to break down the criteria, the process, and give some general tips. ![Angular GDEs visiting the Google office to provide feedback](https://firebasestorage.googleapis.com/v0/b/fluindotio-website-93127.appspot.com/o/posts%2Fhow-to-become-an-angular-gde%2FMVIMG_20190506_164645.jpg?alt=media&token=e664f3b5-dcf5-4cda-ba61-58be435b7d80) ## The Criteria Every GDE is different. Some are heavy content-authors, writing viral content that engages and enlightens millions of developers. Others are speakers, presenting at the biggest tech events. Others are pushing new communities forward, creating events and opportunities for people to come together and collaborate. 
In general, we're looking for people with accurate skills and knowledge about specific technologies like Angular and the web who combine this knowledge with great community impact and reach, leaving their mark on the world every month. We generally look at Angular GDEs based on their activities in 5 categories (not all GDEs are active in all categories). * Speaking * Organizing * Content * Developing * Mentoring We also look at your technology understanding and your ability to communicate about technology. You should be able to have a conversation and convey how you teach and explain Angular concepts, and show you have the technical grounding necessary to represent the technology. ## The Process **Step 1: Be Awesome** Pretty simple, right? Not exactly. It can be hard to be heard above the cacophony of internet noise. It's hard to write blog posts that people want to read, it's hard to create repositories developers want to fork and learn from. It's hard to get accepted at top conferences. All of these things are hard to do, but you need to be doing really well in at least one of these to be considered. **Step 2: Get Referred** The easiest way to submit an application is to be referred by an existing GDE. If the program is currently accepting new candidates, a GDE referral should let you fill out an application. You should look through the criteria and determine that you are a good fit before applying, because we ask for a lot of details and the application process can take a lot of time. **Step 3: Fill out a CV** The CV process has you sharing the metrics for your activities. You'll be asked to include links and evidence of your impact. This will be used throughout the process to measure and understand the awesome things you are doing for the community. **Step 4: Interviews & Checks** Once you have filled out your CV, you might get rejected right away, or you might be asked to do one or several interviews. 
These interviews typically cover a lot of what you put in your CV, as well as how you deal with interactions with others, and your thoughts on current events. You'll be asked questions and for your opinions about technology and Angular. This is your opportunity to explain your impact, and show your expertise. **Step 5: Determination** Once you have made it through all of the checks and interviews, the full story of your activities will be reviewed again, and you'll receive a determination. Typically if you have made it through several of the interviews but are rejected near the end, you'll be invited to try again in the future if your impact and skills look likely to continue to grow. If you are accepted, welcome to the program! **Step 6: Maintain** GDE status is generally awarded yearly, so make sure you are continuing the awesome things that you were doing when you became a GDE. ![Angular GDEs hearing about the upcoming program updates](https://firebasestorage.googleapis.com/v0/b/fluindotio-website-93127.appspot.com/o/posts%2Fhow-to-become-an-angular-gde%2FIMG_20191027_115618_1.jpg?alt=media&token=f3a19e16-283c-4a9e-a70e-39ec70d2a81d) ## Remember The GDE program isn't currently doing enough to represent local communities around the globe. There are millions of developers in places like China, Russia, and India but we only have relatively few GDEs there compared to places where we have lots of GDEs, like in the US. We often have to consider different expectations based on different regions and cultural differences. GDEs aren't only sharing technology with the world, they must be active listeners as well. We value a unique insight about technology or people far more than we value the act of taking existing concepts and explaining them to the world. We frequently ask our GDEs for thoughts and opinions on new APIs, and for feedback about our releases and efforts. 
Finally, if you want to become a GDE because you think it will change your life and your career, you might be doing it for the wrong reason. We like to say that we don't create GDEs, we merely recognize them.
stephenfluin
219,432
9 web development tips and tricks out of the blue
In this article, I gathered 9 tips and tricks for web developers that I used recently.
0
2019-12-16T14:39:00
https://dev.to/armelpingault/9-web-development-tips-and-tricks-out-of-the-blue-119i
javascript, webdev, tutorial, beginners
--- title: 9 web development tips and tricks out of the blue published: true description: In this article, I gathered 9 tips and tricks for web developers that I used recently. tags: javascript, webdev, tutorial, beginners --- In this article, I gathered 9 tips and tricks for web developers that I used recently. Even though the content is more oriented toward beginners, I hope it can still be a good reminder for more advanced developers. # 1. Get the current timestamp This is a little shorthand I found interesting, especially to understand what's going on behind the scenes: ```javascript const ts = + new Date(); // Same as (new Date()).getTime() console.log(ts); // 1576178666126 ``` Demo: https://codepen.io/armelpingault/pen/eYmBzEG According to the [documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/valueOf): > The `valueOf()` method returns the primitive value of the specified object. The shorthand notation is prefixing the variable with a plus sign. And in case you need a reminder about what exactly is a [primitive value](https://developer.mozilla.org/en-US/docs/Glossary/Primitive): > In JavaScript, a primitive (primitive value, primitive data type) is data that is not an object and has no methods. There are 7 primitive data types: string, number, bigint, boolean, null, undefined, and symbol. # 2. Detect a long press I have a button that should trigger 2 different actions depending on the user interaction: one action on a normal click and another action on a long press on the button: ```javascript const btn = document.querySelector('button'); let timer; btn.addEventListener('mousedown', (e) => { timer = + new Date(); }); btn.addEventListener('mouseup', (e) => { if (+ new Date() - timer < 500) { console.log('click'); } else { console.log('long press'); }; }); ``` Demo: https://codepen.io/armelpingault/pen/rNaLpLE # 3. 
Sort an array of objects This one is a classic: we often need to sort an array of objects according to the value of one property of the objects. ```javascript const arr = [ {name: 'Jacques', age: 32}, {name: 'Paul', age: 45}, {name: 'Pierre', age: 20} ]; arr.sort((firstEl, secondEl) => firstEl.age - secondEl.age); // [{name: 'Pierre', age: 20}, {name: 'Jacques', age: 32}, {name: 'Paul', age: 45}] // And if you want to sort in descending order: arr.sort((firstEl, secondEl) => secondEl.age - firstEl.age); // [{name: 'Paul', age: 45}, {name: 'Jacques', age: 32}, {name: 'Pierre', age: 20}] ``` Demo: https://codepen.io/armelpingault/pen/zYxoBPW Also note that using the [sort()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort) function, the array is sorted in place, and no copy is made. # 4. Communicate between components When I need 2 of my JavaScript components to communicate with each other, I sometimes end up implementing a [Custom Event](https://developer.mozilla.org/en-US/docs/Web/API/CustomEvent/CustomEvent): ```javascript const el = document.createElement('div'); const btn = document.querySelector('button'); const event = new CustomEvent('sort', { detail: { by: 'name' } }); btn.addEventListener('click', () => { el.dispatchEvent(event); }); el.addEventListener('sort', (e) => { console.log('Sort by', e.detail.by); }); ``` Demo: https://codepen.io/armelpingault/pen/gObLazb # 5. Deep and shallow copy of an array of objects This topic deserves a whole article by itself, and you will find thousands of articles covering it, but I thought I'd put another reminder here after a discussion I had a couple of days ago with one of my colleagues. When you need to create a copy of an array of objects, it's tempting to do: ```javascript const arr1 = [{a: 1, b: 2}, {c: 3, d: 4}]; const arr2 = arr1; ``` However, this is just going to copy the reference to the original object, and if you modify `arr1`, then `arr2` will be modified too. 
```javascript const arr1 = [{a: 1, b: 2}, {c: 3, d: 4}]; const arr2 = arr1; arr1[1].c = 9; console.log(arr2); // [{a: 1, b: 2}, {c: 9, d: 4}]; ``` Demo: https://codepen.io/armelpingault/pen/OJPbXEy Using the [spread operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) can help you create a shallow copy of your array. This will create a new array, but it will keep a reference to the objects inside the array. ```javascript const arr1 = [{a: 1, b: 2}, {c: 3, d: 4}]; const arr2 = [...arr1]; arr1.push({e: 5, f: 6}); arr1[0].a = 8; console.log(arr2); // [{a: 8, b: 2}, {c: 3, d: 4}]; ``` Demo: https://codepen.io/armelpingault/pen/GRgNqBg As you can see, adding a new element to `arr1` is not changing `arr2`. However, if I modify an object inside `arr1`, then we will also see the changes inside `arr2`. Now if I want to make a deep copy of the array, I can do it like this: ```javascript const arr1 = [{a: 1, b: 2}, {c: 3, d: 4}]; const arr2 = JSON.parse(JSON.stringify(arr1)); arr1[1].d = 6; console.log(arr1); // [{a: 1, b: 2}, {c: 3, d: 6}]; console.log(arr2); // [{a: 1, b: 2}, {c: 3, d: 4}]; ``` Demo: https://codepen.io/armelpingault/pen/BayQzOK In this case, I can see that whatever I change in `arr1`, it won't have any impact on `arr2`. Note that with [JSON.stringify](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify), object instances (including Map, Set, WeakMap, and WeakSet) will have only their enumerable properties serialized. # 6. 
Copy to clipboard This is a little snippet I use to copy the content of a `<textarea>` into the clipboard: ```javascript const textarea = document.querySelector('textarea'); const copy = () => { textarea.select(); document.execCommand('copy'); alert('Content copied to clipboard!'); }; textarea.addEventListener('click', copy); ``` Demo: https://codepen.io/armelpingault/pen/jOEVrQw There is also the [Clipboard API](https://developer.mozilla.org/en-US/docs/Web/API/Clipboard_API), which offers more options to manipulate the clipboard. However, it's still an experimental feature and the [browser support is still pretty low](https://caniuse.com/#feat=mdn-api_clipboard). ![Clipboard API browser support](https://thepracticaldev.s3.amazonaws.com/i/vbfx0r65ddj0092dtnhg.png) # 7. One npm command to rule them all I am a very big fan of projects where the deployment command is just one simple line, whether it's for development or production. While working on a project recently, I realized that we needed two commands to start our development environment, one from our server folder and one from our client folder. Well, we definitely needed to merge those two commands into one command we could run from the root folder. ``` Projects | |-- client | package.json | |-- server | package.json | package.json ``` This is possible thanks to the npm package [concurrently](https://github.com/kimmobrunfeldt/concurrently), which allows us to run several npm commands at the same time. ```javascript // After installing the npm package npm install --save-dev concurrently // You should now see it in your package.json "devDependencies": { "concurrently": "^5.0.0" } // Then add the "master" command to the root package.json "scripts": { "start": "concurrently \"npm start --prefix ./client\" \"npm start --prefix ./server\"", } ``` The `--prefix` option will run any npm command inside the sub-folder of your choice: ```javascript npm run build --prefix ./client ``` # 8. 
Optimize SVG file We work a lot with SVG icons on a daily basis, and those icons are usually provided by web design agencies or by the clients' web design department. A lot of them are actually created and generated with software like Adobe Illustrator. Even though they are great tools to create graphic designs, they unfortunately also create a lot of extra useless markup in your SVG files if you don't pay attention to your export settings. In order to optimize my SVG files, I use [SVG Optimizer](https://github.com/svg/svgo), which is pretty simple to integrate in your project: ```javascript // Install SVGO globally npm install -g svgo // Optimize all SVG file from -f "folder" to -o "folder" svgo -f assets/src/svg -o assets/dist/svg ``` Last time I ran it, I was able to measure an average gain of around ~30% of the size of all the files... not bad! # 9. Inspect WebSocket messages in Chrome For the last tip of this article, I am re-posting [an answer I wrote on StackOverflow](https://stackoverflow.com/questions/37413092/how-to-inspect-websocket-frames-in-chrome-properly/59273632#59273632 ) that I needed while working on the development of a real-time web application. 1. Open **Chrome Developer Tools**. 2. Click on the **Network** tab. 3. Click on the filter **WS** (for WebSockets). 4. Reload the page to make sure you see your connection in the **Name** column. 5. Click on **Messages**. Now you should see all your communications with your WebSockets, with 3 columns: Data, Length and Time. ![Chrome Developer Tools](https://thepracticaldev.s3.amazonaws.com/i/jr979us7dmt35xj8y10j.png) Any other web development tips and tricks you would like to share with the community, feel free to add them in the comment section ;)
armelpingault
214,283
Introducing our December 2019 sponsors
Our wonderful sponsors are vital to the health of dev.to and it is great to work with companies contributing so much to the ecosystem.
0
2019-12-06T15:00:58
https://dev.to/devteam/introducing-our-december-2019-sponsors-4po3
meta
--- title: Introducing our December 2019 sponsors published: true description: Our wonderful sponsors are vital to the health of dev.to and it is great to work with companies contributing so much to the ecosystem. tags: meta cover_image: https://p78.f0.n0.cdn.getcloudapp.com/items/9ZuNe404/Image+2019-12-02+at+7.14.16+PM.png?v=b4ec225246d3797e251426bbdbf9cbdc --- This month, we welcome back DigitalOcean and CloudBees, and welcome Pusher as a Gold Sponsor. Thank you to each of these companies for being a valuable DEV Community partner and supporter. Please take a few minutes to explore their offerings and consider them for your company or next project. ## **DigitalOcean** [DigitalOcean](https://do.co/devto) is a much-loved cloud computing platform. I'm always impressed by how well-built _and well-documented_ the DigitalOcean core products are. It's easier said than done, and they've been able to stay ahead of the curve. DigitalOcean has been our longest-running supporter to date, and we're extremely thankful for their consistent support. Their commitment to the developer ecosystem is crystal clear. As an organization, it's awesome to see the evolution of their service and their remarkable ability to stay at the top of their game for years now. {% organization digitalocean %} ## **CloudBees Rollout** [CloudBees Rollout](https://rollout.io/?utm_source=devto&utm_medium=referral&utm_content=dec_sponsorship&utm_campaign=rollout_trial_devto) is a powerful code release toolset that brings feature flags, analytics, and testing to the process. Reduce risk by decoupling feature deployment from code releases. CloudBees as a whole provides solutions for CI, CD, and application release orchestration (ARO). They are the largest contributor to Jenkins, the very popular open-source automation server. CloudBees is truly a company that lives DevOps and CI/CD to its core with its entire offering. 
We are very grateful to CloudBees who have continued to support us as a Gold sponsor for three months straight. {% organization cloudbees %} ## **Pusher** [Pusher](https://pusher.com/?utm_source=dev-to&utm_medium=sponsorship&utm_campaign=gold-dec-19&utm_content=recognitionthread) empowers developers to create collaboration and communication features in their web and mobile apps. Providing robust APIs that are flexible, scalable, and easy to integrate, Pusher can power realtime experiences in all your apps. We are using Pusher ourselves to build out several realtime features for [dev.to](/). Pusher is a super well-respected engineering organization and we're pumped to build on their infrastructure. {% organization pusher %} *** These organizations are helping us build the best community we can. I urge you to check out their offerings if you are in the market for their services. Happy coding ❤️
ben
214,314
Coroutine Delay
Handler().postDelayed({ doSomething() }, 3000) DispatchQueue.main.asyncAfter(dea...
0
2019-12-03T01:50:39
https://patrickjackson.dev/coroutine-delay/?utm_source=rss&utm_medium=rss&utm_campaign=coroutine-delay
android, ios, kotlinmultiplatform
--- title: Coroutine Delay published: true date: 2019-12-03 00:09:28 UTC tags: android,ios,kotlin-multiplatform canonical_url: https://patrickjackson.dev/coroutine-delay/?utm_source=rss&utm_medium=rss&utm_campaign=coroutine-delay cover_image: https://thepracticaldev.s3.amazonaws.com/i/1tbr43z1s5aca1wl5h8h.jpg --- ``` Handler().postDelayed({ doSomething() }, 3000) DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { doSomething() } ``` A common pattern is to delay execution of some code for a given amount of time. In Android and iOS there are several ways to do this, such as the snippets above. How can we pull this into shared Kotlin Multiplatform code? ``` myCoroutineScope.launch { delay(3000) uiScope.launch { doSomething() } } ``` Here the `uiScope.launch` runs the `doSomething()` function on the main thread. If the main thread is not needed `uiScope` can be left out. There is one trick here for MPP projects. On iOS this will cause a crash at runtime. On iOS you will need to use a custom Dispatcher. Chances are that you are already using one as a stopgap until multithreaded coroutine support lands. Here is a Dispatcher that supports `delay()` on iOS. Full source can be found in the [ReadingList sample app](https://github.com/reduxkotlin/ReadingListSampleApp/blob/master/common/src/iosMain/kotlin/org/reduxkotlin/readinglist/common/UI.kt) from the ReduxKotlin project: ``` import kotlinx.coroutines.* import platform.darwin.* import kotlin.coroutines.CoroutineContext /** * Dispatches everything on the main thread. This is needed until * multithreaded coroutines are supported by kotlin native. 
* Overrides Delay functions to support delay() on native */ @UseExperimental(InternalCoroutinesApi::class) class UI : CoroutineDispatcher(), Delay { override fun dispatch(context: CoroutineContext, block: Runnable) { val queue = dispatch_get_main_queue() dispatch_async(queue) { block.run() } } @InternalCoroutinesApi override fun scheduleResumeAfterDelay(timeMillis: Long, continuation: CancellableContinuation<Unit>) { dispatch_after(dispatch_time(DISPATCH_TIME_NOW, timeMillis * 1_000_000), dispatch_get_main_queue()) { try { with(continuation) { resumeUndispatched(Unit) } } catch (err: Throwable) { logError("UNCAUGHT", err.message ?: "", err) throw err } } } @InternalCoroutinesApi override fun invokeOnTimeout(timeMillis: Long, block: Runnable): DisposableHandle { val handle = object : DisposableHandle { var disposed = false private set override fun dispose() { disposed = true } } dispatch_after(dispatch_time(DISPATCH_TIME_NOW, timeMillis * 1_000_000), dispatch_get_main_queue()) { try { if (!handle.disposed) { block.run() } } catch (err: Throwable) { logError("UNCAUGHT", err.message ?: "", err) throw err } } return handle } } ``` The post [Coroutine Delay](https://patrickjackson.dev/coroutine-delay/) appeared first on [PatrickJackson.dev](https://patrickjackson.dev).
patjackson52
214,400
Agile Git Integration with GitWorkflows
In this article we’ll explore the use of feature branches based off of GitWorkflow to integrate...
3,313
2019-12-03T07:00:43
https://killalldefects.com/2019/12/03/agile-git-integration-with-gitworkflows/
devops, git, productivity, agile
--- title: Agile Git Integration with GitWorkflows published: true date: 2019-12-03 06:48:20 UTC tags: devops, git, productivity, agile series: Software Quality Defense In Depth cover_image: https://i1.wp.com/killalldefects.com/wp-content/uploads/2019/12/GitWorkflows.png?fit=768%2C313&ssl=1 canonical_url: https://killalldefects.com/2019/12/03/agile-git-integration-with-gitworkflows/ --- In this article we’ll explore the use of feature branches based off of [GitWorkflow](https://git-scm.com/docs/gitworkflows) to integrate features and fixes only when they are fully ready to go. While this is a less well-known workflow than others, it offers a significant degree of freedom and flexibility. We won’t be covering all of GitWorkflow in this article, but rather a simplified variant focused on working with a single persistent branch, a single integration branch, and individual feature branches. _Note: To those familiar with [GitFlow](https://datasift.github.io/gitflow/IntroducingGitFlow.html), I should note that although the names are similar, **these are two entirely different workflows**. This article will only cover the Git Core Team’s Workflow. If you’d like to compare and contrast, I recommend [this HackerNoon article](https://hackernoon.com/how-the-creators-of-git-do-branches-e6fcc57270fb)_. ## The Need for a Dynamic Git Strategy Before I describe what this workflow is, let’s look at a few scenarios: - Imagine you’re responsible for an application inside of a larger organization. You add features to it and provide fixes to issues as they come up. Your organization likes to release frequently and you may or may not know which work items will be included until near the final pre-release testing cycle. 
- Alternatively, you’re working on the same sort of application and one of your features has an obscure bug found late in testing and needs to be pulled from the release. - Or imagine a developer implements a new feature which brings a testing environment to a halt and must be removed until remedied so that testing can resume. In any of these scenarios, a certain degree of development agility is needed. Your core need is to be able to _quickly remove features or fixes from a release without introducing major risks_. This is what the workflow is all about – being able to quickly add or remove features to a branch as needed. ## What about just using a Development Branch? So, why is this needed? Can’t we just have all developers code on a persistent development branch and periodically split off branches for major features, then merge them back in? Well, maybe, but if you discover an issue with a two week old feature commit in your development branch just before release, what are your options? - Delay the release to get a fix in - Make a new commit to disable the feature, hoping that you disabled it correctly and that it was the only new issue introduced - Release with the defect In my opinion, none of these are fantastic options. You’re forced to take unnecessary risks or push a date back when you might not need to. ## Branch Structure Overview With the Git Core Team’s workflow, you just custom order a release based on a combination of branches that doesn’t include the one that introduced the issue. Under this workflow you have **feature branches** which represent individual work items being changed. These are your individual fixes and new features. Each feature branch branches off of the **master branch**. This is nothing too unusual except what it represents. _Master represents releasable code._ This is code that is fully tested, reviewed by product management, and does not introduce any regression issues. 
Where the workflow gets interesting is with its concept of a **proposed updates** branch (often referred to as _pu_ for short). Proposed updates is a branch containing finished features or fixes that are ready to be evaluated in concert to form a finished build. _Note: GitWorkflows also advocates for a `next` branch and a `maint` branch. `next` equates to a pre-release environment while `maint` is based off of the last production release. You may or may not need these branches._ ## Testing and Integration So, all testing and product review occurs based off of the proposed updates branch. Once that integration testing branch is certified good, the feature branches for individual changes are merged into master, after first rebasing each feature branch onto master. This does a couple of things for us: - It keeps version history nice and clean - It makes sure that feature branches don’t seep into each other so you’re not dependent on code you don’t mean to be - If the integration branch gets corrupted, we can regenerate it at will. I could explain these more in detail here, but it’s best to show you the life of a single feature. ## Case Study: Adding a Dark Theme Because every application needs a dark theme, let’s use that as an example in our case study. Priya looks at the team’s agile board and decides to take on the “Add a Dark Theme” story with an ID of “FOO-123” in the ticketing system. She looks over the user story and acceptance criteria, decides she has enough information to get started, so she assigns it to herself and creates a branch. She creates this branch by switching to the `master` branch and then running `git checkout -b FOO-123`. This will create a branch named for the work item’s identifier (FOO-123) off of the current branch and then switch to it. Priya then spends her time testing and making changes until she’s satisfied with her feature. 
Once she’s ready, she opens a merge request (AKA a pull request) from `FOO-123` to the `pu` (proposed updates) branch. Jerome does a code review on the branch and is happy with the work and approves the merge request so Priya merges it into `pu`. ![](https://i0.wp.com/killalldefects.com/wp-content/uploads/2019/12/image-1.png?w=770&ssl=1) The continuous integration server sees the change to the `pu` branch which triggers a build. The build succeeds and is later deployed to a test environment. From there, quality assurance reviews the feature and finds no issues. Product management looks at it and is likewise pleased. ## Removing items from Proposed Updates Unfortunately, Landon has been working on a new page that didn’t take the newly-introduced theme into account. The dark theme doesn’t render well on this page and so when Landon’s page is reviewed by product management and quality assurance, an issue is created. Because there’s only one day left in the development cycle and Landon’s new page is critical to the business, the team decides to remove the dark theme from this release and move it into the next one. ![](https://i0.wp.com/killalldefects.com/wp-content/uploads/2019/12/image-4.png?w=770&ssl=1) In order to do this, the team decides to recreate the `pu` branch off of the current `master` branch. They do this either by deleting and then re-branching `pu` off of the latest `master` commit or by hard resetting `pu` to match the latest `master` commit and then force pushing. _Note: Force push is dangerous and destructive and can blow away changes on the branch. In this case, we want to remove changes to the `pu` branch and start with a pristine state. 
If having a few qualified people perform a force push to reset the `pu` branch periodically is a turnoff for you, I recommend looking at deleting the `pu` branch and re-branching instead, but it may be additional overhead to do so._ Once the `pu` branch is available, all feature branches that _should_ be integrated into it are merged back into it and a new build is generated and deployed to the testing environment. I like to think of this as a highly configurable menu where you can choose whether or not each feature is part of a branch. ## Integrating into Master Next sprint, once the release has cleared, Priya makes some additional tweaks to FOO-123 to allow the dark theme to render properly on Landon’s new feature. Once she is ready, she submits another pull request and then merges into `pu` once it is approved. This time everything works well and FOO-123 is approved and cleared into the release. The feature now needs to be merged into the `master` branch. ![](https://i0.wp.com/killalldefects.com/wp-content/uploads/2019/12/image-3.png?w=770&ssl=1) While merging `pu` directly into the `master` branch _can_ work, it has a few downsides: - You need to check the `pu` branch to make sure that every feature integrated with it has also passed testing and is approved for the release. If you make a mistake, you added something untested to a production-ready branch. - Merging an entire integration branch into `master` makes it a lot harder to read and understand the individual commit history because features move from one branch and into another. While it may make sense looking at a single feature, it’s harder to read the actual overall branch history. Because of this, we need to **merge the feature branch into master instead of merging proposed updates into master**. This is the number one issue people have with understanding the Git Core team’s workflow, so it’s important to stress. 
Instead, we first switch to our feature branch and then `git rebase master` to rebase that feature onto the latest version of master. This helps keep our version history pristine and allows us to resolve merge conflicts inside of the branch instead of inside of the branch we’re integrating with. Once that’s done, and the merge is committed, we switch to the `master` branch and then merge the topic branch in via `git merge FOO-123` (the name of the branch). With the merge committed and pushed, it is now a formal part of the next release and the individual feature branch _can_ be deleted if you so choose. ## What if master has a problem? If code gets into the `master` branch that needs to be removed, you now have a problem. This workflow isn’t great at handling this task because `master` is supposed to be persistent. Because of this, I highly advise you to delay merging into `master` if you have any doubts since it’s a lot easier to merge into master later in the process instead of trying to remove something from or patch master. One of the things you can do is institute a rule where once work items are approved by product management, they are closed and merged into `master` and any adjustments will need a _new_ work item instead of reopening the old one. In my experience, this tends to produce good results, but your mileage may vary. If you _do_ need to make adjustments to `master` you have a few options as I see it: - Require a new work item to go through the pipeline and enter the `master` branch with whatever change you desire. - Revert `master` to a known good state, then force push and merge items past that state back in. - Use a revert commit to revert a specific commit to master and reintegrate later. None of these are great options: - You always want `master` to be releasable, so waiting for a new item to come through limits your agility. 
- Force push is something too dangerous to be doing on your `master` branch except in extreme circumstances, so it shouldn’t be part of your normal workflow. - Revert commits muddy up your version history and may not even work if subsequent commits modified the same areas of code. All in all, you’re best off spending the extra time to make sure that features are really good to go before integrating. ## Other Recommendations I strongly recommend that you tag releases on the `master` branch as the existence of a tag allows you to quickly create a `maint` branch for offering hotfixes as needed later on without having to guess or search for which commit hash corresponded to your last production release. * * * I advocate that you should reset your `pu` branch to `master` at the beginning of every sprint. This helps keep branch comparisons accurate when comparing a feature branch to the `pu` branch. This also makes it easy to identify what’s in the `pu` branch and still up in the air. * * * Some version control tools such as GitHub’s web user interface handle merge conflicts by merging the branch you’re trying to integrate with into your feature branch. This is a problem when trying to merge a feature branch into `pu` since you never want other features to enter feature branches without your knowledge. Instead, use command line or an external tool such as [GitKraken](https://www.gitkraken.com/invite/4tRysUoN) to resolve merge conflicts in `pu`. * * * If you don’t like the name `pu`, I recommend calling the branch `qa`. I personally go with `releases` instead of `master` and `qa` instead of `pu`, but the concepts are the same, regardless of which terminology you use. ## What about Maintenance Releases? This model is not well suited for long-running maintenance branches. If you are working in an infrequent release environment, you should consider something like GitFlow or another git methodology. 
In general, under the Git Core Team’s workflow, you’ll find yourself wanting to release more frequently from `master` as opposed to providing hotfixes into production. However, if you _do_ need to do a hotfix, you can do it by switching to the tag you created on the `master` branch for a given release and creating a new branch at that location representing the production patch. Alternatively, you could always have a `maint` branch and update that branch to point to the latest production release commit every release. Once your hotfix branch is created, you can branch off of this branch to create an individual feature branch for your production patch. You’ll then merge your feature branch into the hotfix branch, generate a build, get that build verified by QA and product, and then apply it to production. ![](https://i2.wp.com/killalldefects.com/wp-content/uploads/2019/12/image-5.png?fit=770%2C320&ssl=1) This different workflow exists to keep changes in `master` more recent than the production tag from reaching production. Once a production patch is applied, the feature should also be merged into `pu` and then `master` once tested in `pu`. Failing to do this will result in a regression issue where the applied production patch is no longer present in the next release. * * * While you _can_ handle maintenance issues this way, if you do have a habitual need for production patches and maintenance branches, you should consider releasing _more frequently_ so that defect resolution can be more easily achieved in planned releases and patches are reserved for those truly horrific high severity bugs. You should also take a good look at your development and testing practices if things severe enough to warrant production patches are a regular occurrence for your team. ## Conclusion This workflow isn’t for everyone, but if you want absolute control over what features are in which builds, this is a workflow to strongly consider. This is only one flavor of the GitWorkflow. 
If you’d like to learn more, I recommend reading the following materials: - [How the Creators of Git do Branches](https://hackernoon.com/how-the-creators-of-git-do-branches-e6fcc57270fb) - [The GitWorkflows Documentatio](https://git-scm.com/docs/gitworkflows)[n](https://git-scm.com/docs/gitworkflows) The post [Agile Git Integration with GitWorkflows](https://killalldefects.com/2019/12/03/agile-git-integration-with-gitworkflows/) appeared first on [Kill All Defects](https://killalldefects.com).
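To recap the case study, the whole branch dance can be condensed into a runnable shell sketch. It plays out against a throwaway repository created in a temp directory; the branch names (`master`, `pu`, `FOO-123`) follow the article, while the file names and commit messages are invented for the demo.

```shell
# Condensed GitWorkflows lifecycle, demonstrated in a throwaway repo.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the branch is named 'master'
git config user.email demo@example.com
git config user.name "Demo User"
echo "v1" > app.txt
git add app.txt
git commit -qm "Initial release"

# 1. One feature branch per work item, branched off master
git checkout -q -b FOO-123
echo "dark theme" > theme.txt
git add theme.txt
git commit -qm "FOO-123: add dark theme"

# 2. Recreate pu from master, then merge each approved feature into it
git checkout -q master
git branch -f pu master
git checkout -q pu
git merge -q --no-edit FOO-123

# 3. Once certified in pu: rebase the feature onto master, merge it there
git checkout -q FOO-123
git rebase -q master
git checkout -q master
git merge -q --no-edit FOO-123
```

Note that recreating `pu` locally is just `git branch -f pu master`; the force-push caveats from the article only come into play once the branch is shared with a remote.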
integerman
214,410
How to take a screenshot on Mac | A quick tutorial
Sometimes, while working on the internet we need to save some pictures while browsing to check them o...
0
2019-12-03T07:28:34
https://dev.to/gangapackers/how-to-take-a-screenshot-on-mac-a-quick-tutorial-18c9
takeascreenshotonmacbook, takeascreenshotonmac, howtoscreenshotonmac
Sometimes, while working on the internet, we need to save some pictures while browsing so we can check them offline. Taking a screenshot is the easiest way to save pictures from the internet. Every operating system has shortcuts for taking a screenshot, and the shortcuts differ between operating systems. For example, if you are using the world’s most-used PC OS, i.e. Windows, you press the Windows key + PrtScr button; Windows keyboards have a dedicated PrtScr button for taking a screenshot. Mac, on the other hand, is a PC operating system developed by Apple Inc., and it is a lot different from Windows. If you have used Windows and are switching to a Mac, you might have trouble with macOS at first, because it has its own shortcuts which are quite different from those of Windows, so you have to learn these shortcuts to use a Mac. Say you want to take a screenshot on a MacBook Air: you will not find any Windows or PrtScr button on the keyboard. But don’t worry, there are other shortcuts to take a screenshot on a Mac. Here is a quick tutorial on how to take a screenshot on a Mac. Press Shift + Command + 5 together on the keyboard. Now you will see the capture controls on screen. To capture the full screen, choose the Capture Entire Screen option from the given options. After selecting this option the pointer will immediately change to a camera icon. Now click with the camera icon on any part of the screen and the screenshot will be captured. You will then see a thumbnail of the screenshot in the corner of the screen. Click on the thumbnail to edit/copy/save it. If you don’t click on it, it will be automatically saved to your desktop. If you want to know how to capture a specific area of your Mac desktop, and how to take a screenshot on different versions of macOS, read the full article: How to screenshot on mac. Also, read the article on how to recover a Microsoft account using account.live.com/acsr
gangapackers
214,424
Warped Re-rendering | React performance optimization
Introducing a new approach to optimize your React app by isolating re-renders to the particular required subtree.
0
2019-12-03T10:16:13
https://dev.to/aftabnack/warped-re-rendering-react-performance-optimization-512i
react, hooks, optimization
--- title: Warped Re-rendering | React performance optimization published: true description: Introducing a new approach to optimize your React app by isolating re-renders to the particular required subtree. tags: reactjs, hooks, optimization --- In this post, I will be introducing a new optimization that will significantly improve your React app performance. In one particular case it reduced the amount of _react commits_ from **~200 to just ~2** (You can visualize these in the new React Profiler :fire: :fire:). It's a very specific case, but it proves the utility of the approach and illustrates its benefits. **Most importantly, we shouldn't be [_lifting the state up_](https://reactjs.org/docs/lifting-state-up.html) if we are doing that only to set state from another component**. Let's understand this by looking at a contrived example. ## The problem I have a React app, where I have implemented a top-level `<Loader />` component whose job is to show or hide the loading indicator. It looks something like this. ```jsx import React, { useState } from "react"; const AppContext = React.createContext(); export default function App() { const [isVisible, setShowLoader] = useState(false); return ( <AppContext.Provider value={{ setShowLoader }}> <div> {isVisible && <Loader />} Remainder of my app </div> </AppContext.Provider> ); } ``` In the above code, you can see that I have a Loader component at the top level, and I have passed its setter down using the context. Now `setShowLoader` is used by various parts of my code to display the loader (primarily before an API call) and hide the loader (once the call is settled). By now the problem with this approach is obvious: since we have this state in the top-level component, every time I call `setShowLoader` the entire App will go into reconciliation. And since most of us don't optimize preemptively, this was re-rendering my whole app. 
## Introducing Mitt We have a small utility that we have written in our codebase, which is basically a pub/sub model with which we can pass events & data from anywhere to anywhere. We can use this to dispatch events from any component to any other component. Upon researching online, I found an excellent package that exists for this purpose. ```js import mitt from 'mitt'; const emitter = mitt(); // listen to an event emitter.on('foo', e => console.log('foo', e)) // listen to all events emitter.on('*', (type, e) => console.log(type, e) ) // fire an event emitter.emit('foo', { a: 'b' }) // working with handler references: function onFoo() {} emitter.on('foo', onFoo) // listen emitter.off('foo', onFoo) // unlisten ``` Now with this utility I can communicate between any components in my codebase. ## The solution Now that I know I can communicate from any part of my code to my top-level Loader component, I can move my `isVisible` state into the `<Loader />` component. With this, whenever I change the state, only my Loader component will re-render and a full app re-render is prevented. My final code will look as follows. 
```jsx import React, { useState, useEffect } from "react"; import mitt from 'mitt'; const AppContext = React.createContext(); const events = mitt(); export const showLoader = val => { events.emit("showLoader", val); }; function Loader() { const [isVisible, setShowLoader] = useState(false); useEffect(() => { events.on("showLoader", setShowLoader); return () => { events.off("showLoader", setShowLoader); }; }, []); if (isVisible) { return <div>Loading GIF</div>; } return null; } export default function App() { return ( <AppContext.Provider value={{ showLoader }}> <div> <Loader /> Remainder of my app </div> </AppContext.Provider> ); } ``` ## To summarize * **We can use this whenever we have a situation where the state is used in one component (or its subtree) but is updated from other places in the code** * We shouldn't be [_lifting the state up_](https://reactjs.org/docs/lifting-state-up.html) if we are doing that **only to** set state from another component. * We have depended on a pub/sub model to communicate between components. https://github.com/developit/mitt * By moving the state of the `Loader` to the Loader component itself, we have **avoided re-rendering the entire app**. > Also Note: Since I have exported the `showLoader` function from the App, I can import it into any component and use it, instead of taking it from Context.
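If you are curious what mitt is doing under the hood, the heart of such an emitter fits in a few lines. This is a simplified sketch, not the actual mitt source (it skips the wildcard `'*'` handler, for example), with a usage example mirroring the Loader above:

```javascript
// Minimal pub/sub emitter in the spirit of mitt (simplified sketch).
function createEmitter() {
  const handlers = new Map(); // event type -> array of handlers
  return {
    on(type, handler) {
      const list = handlers.get(type) || [];
      list.push(handler);
      handlers.set(type, list);
    },
    off(type, handler) {
      const list = handlers.get(type) || [];
      const i = list.indexOf(handler);
      if (i > -1) list.splice(i, 1);
    },
    emit(type, payload) {
      // copy the list so handlers can unsubscribe during emit
      (handlers.get(type) || []).slice().forEach(h => h(payload));
    }
  };
}

// Usage mirroring the Loader example: subscribe, fire, unsubscribe.
const events = createEmitter();
let loaderVisible = false;
const setShowLoader = val => { loaderVisible = val; };

events.on("showLoader", setShowLoader);
events.emit("showLoader", true);   // loaderVisible becomes true
events.off("showLoader", setShowLoader);
events.emit("showLoader", false);  // no listener left; state unchanged
```

Because the emitter lives outside the React tree, any module can import it and fire `showLoader` events without ever touching the top-level component's state.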
aftabnack
217,033
How Game Dev and Physics Constants Made Me Think About Religion
I am an atheist and developer, I’ve found it surprising how often these two identities collide. I’m...
0
2019-12-08T16:16:25
https://qvault.io/2019/12/08/how-game-dev-and-physics-constants-made-me-think-about-religion/?utm_source=rss&utm_medium=rss&utm_campaign=how-game-dev-and-physics-constants-made-me-think-about-religion
atheism, gamedev, programming, constants
--- title: How Game Dev and Physics Constants Made Me Think About Religion published: true date: 2019-12-08 16:10:03 UTC tags: atheism,gamedev,Programming,constants canonical_url: https://qvault.io/2019/12/08/how-game-dev-and-physics-constants-made-me-think-about-religion/?utm_source=rss&utm_medium=rss&utm_campaign=how-game-dev-and-physics-constants-made-me-think-about-religion --- ![](https://qvault.io/wp-content/uploads/2019/12/photo-1495954222046-2c427ecb546d-1024x576.jpeg) I am an [atheist and developer,](https://twitter.com/wagslane) I’ve found it surprising how often these two identities collide. I’m fascinated when something that deals with engineering directly influences my views on theism, or at least makes me consider new ideas. When building a game engine, even the most basic one, it becomes apparent that certain constants must be set. Take a look at this GIF of a Halo warthog launching into the air: <iframe title="Halo Reach - The Warthog Ramp" width="1080" height="608" src="https://www.youtube.com/embed/BJjPfr5gCBo?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> Let us assume (incorrectly) that the Halo developers wrote a physics engine that perfectly describes the functions of the natural laws of the real universe. Even with those perfect behaviors programmed, the programmers would need to define certain constants that control the level to which certain behaviors will execute. For example, it seems that Halo’s gravitational constant is much lower than ours. Everything is much more floaty. The gravitational constant in our universe is: ![](https://qvault.io/wp-content/uploads/2019/12/Screen-Shot-2019-12-07-at-2.19.00-PM.png) which is just a number and a unit. If we were the game developers of our own universe, we could just as easily swap that number out for a smaller number to give ourselves superpowers. ## What does this have to do with Theism? 
One of the greatest philosophical and scientific mysteries of the universe is the question, “How were these universal constants set?” Who or what was the entity that decided that 6.67430\*10^(-11) was the right gravitational constant? Why not a bit more? Other [physical universal constants](https://en.wikipedia.org/wiki/Physical_constant) that seem to be arbitrarily set include: ![](https://qvault.io/wp-content/uploads/2019/12/Screen-Shot-2019-12-07-at-2.28.29-PM-1024x711.png) ## So you are saying there is a God? No. It will take more than the question: _“Who decided what the universal constants should be?”_ for me to arrive at the conclusion that Moses really parted the Red Sea, Jesus rose from the dead, or Mohammad received the last and final revelation. I do however acknowledge that it is a very interesting question, and we may never even get to find the answer. Having built games in the past, I find it especially eerie that there is no way to simulate physics without choosing constants, and that there aren’t any “right” constants. By that, I mean whatever constants are chosen will simply result in different experiences for the players. This implies (but by no means proves) that someone is making decisions on the behalf of our universe’s game engine. I’m absolutely still a soft atheist, I reject that we have enough supporting evidence to justify belief in the divine. I think that the universal constants are a fascinating mystery however, one that I hope we someday learn more about. 
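The "constants are just tuning knobs" point is easy to demonstrate in code. The toy sketch below integrates the same jump under two different gravitational constants; the function name and numbers are made up for illustration, not taken from any real engine:

```javascript
// Toy projectile "jump" under a tunable gravitational constant.
// Same physics code, different constant => different gameplay feel.
function maxJumpHeight(g, v0 = 10, dt = 0.001) {
  let y = 0, vy = v0, peak = 0;
  while (vy > 0 || y > 0) {
    vy -= g * dt;             // gravity decelerates the ascent
    y += vy * dt;             // integrate position (semi-implicit Euler)
    if (y > peak) peak = y;
  }
  return peak;
}

const earthHeight = maxJumpHeight(9.81); // roughly v0^2 / (2g) ≈ 5.1 units
const haloHeight = maxJumpHeight(4.0);   // same jump code, floatier world
```

Same physics, one constant swapped out, and the lower-g world is simply floatier. That choice is a design decision, which is exactly what makes the question above so interesting.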
Lane on Twitter: [@wagslane](https://twitter.com/wagslane) Lane on Dev.to: [wagslane](https://dev.to/wagslane) Lane on Medium: [@wagslane](https://medium.com/@wagslane) Download Qvault: [https://qvault.io](https://qvault.io/) Star our Github: [https://github.com/q-vault/qvault](https://github.com/q-vault/qvault) The post [How Game Dev and Physics Constants Made Me Think About Religion](https://qvault.io/2019/12/08/how-game-dev-and-physics-constants-made-me-think-about-religion/) appeared first on [Qvault](https://qvault.io).
wagslane
219,269
How to Write the Best Meta Description
When you are creating a blog, the only thing that matters is the content. You know the saying, conten...
0
2019-12-12T05:53:34
https://dev.to/rahuuzz/how-to-write-the-best-meta-description-2efi
meta, description, seo, blogging
When you are [creating a blog](https://storylens.io), the only thing that matters is the content. You know the saying: content is king. But when you’re dealing with search engines, a few complementary things matter too, like metadata, title length, and keywords. Those who have spent years on website and content creation will already be aware of this, while those who are new to it might not be. In this article, we are going to cover everything about the meta description and learn how to write the best one. ## What is a Meta description? A **meta description** is a short paragraph whose primary purpose is to summarise the content of a page. Meta descriptions are placed in the HTML of a webpage. You must have noticed that whenever you search for a website in Google, a small description appears below the name of the site. That is the meta description. ## Where to add the Meta description? No matter what platform you are using for blogging, you should add the meta description in the `<head>` section of your website's HTML. If you are using WordPress with an [SEO plugin](https://yoast.com/wordpress/plugins/seo/), you can instead place the meta description in the "meta description" field of that plugin. This improves SEO and therefore leads to better results. ## How to write the best meta description? Here are some essential points for writing the best meta description: 1. **Keywords** - Always use a website's most [relevant keywords](https://moz.com/learn/seo/what-are-keywords), so that search engines can tell what type of content the site offers. 2. **Readable description** - The meta description has a particular character limit within which it should be completed. 
In chasing that limit, many people neglect the readability of their meta description. Make sure to write a natural-sounding meta description that simply describes the content. 3. **Word limit** - Keep in mind that the meta description should be no longer than 160 characters and no shorter than 135. If you stray outside that range, search engines may truncate or ignore your meta description. 4. **No duplication** - Every page on your website should have a different meta description. Make sure you are not creating duplicates: Google frowns on duplicate meta descriptions and may penalize your site for them. 5. **Use snippets** - You can also use rich snippets to increase the appeal of your result. ## Why is the meta description essential? The meta description is critical because it helps the searcher decide whether they want to visit your site. ## Summary Here we have come to the end of this article, in which we looked at what a meta description is, how it is used, and why it is so important for every website. Most importantly, we discussed how to write the best meta description. If you want to increase the traffic to your blog, take a look at [The Role of Keywords and how they increase the reader base of your blog](https://storylens.com/@5d37e9bf92f45455638bce6a/the-role-of-keywords-in-your-blog-strategy-2808a).
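The character-limit rule from point 3 can even be checked automatically. Here is a small, hypothetical JavaScript helper (not part of any SEO plugin) that flags descriptions outside the 135-160 character range:

```javascript
// Hypothetical helper: checks a meta description against the
// 135-160 character guideline discussed above.
function checkMetaDescription(text) {
  const length = text.trim().length;
  if (length < 135) return 'too short';
  if (length > 160) return 'too long';
  return 'ok';
}

console.log(checkMetaDescription('Too short to be useful.')); // "too short"
console.log(checkMetaDescription('x'.repeat(150)));           // "ok"
```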
rahuuzz
219,287
How to make HTML form using googlesheet as a Database
I have a google spreadsheet, where some rows append on daily basis and using the google spreadsheet,...
0
2019-12-12T06:37:10
https://dev.to/mike7755/how-to-make-html-form-using-googlesheet-as-a-database-4fci
googleappsscript
I have a Google spreadsheet to which some rows are appended daily, and the customer feedback team follows up using it. Google Spreadsheet data: https://docs.google.com/spreadsheets/d/1V-XZdCUZAQVkfCat9vXVxITjjNMxNMPDin6B5j9uMWY/edit?usp=sharing The above-mentioned Google Spreadsheet always has the following data (highlighted in blue): Ref ID, Company Name, Contact No.1, Contact No.2, Project Name, Agent ID. The rest of the details are captured from the HTML UI based on the user's responses; on clicking 'Submit & Next' or 'Next', the input is stored in the Google sheet. The user first has to enter the 'Agent Id' in the HTML UI, and accordingly the `Ref ID` details are given one by one to that particular 'Agent Id' user. As shown in the attached screenshot, the left-hand side of the information is static, taken from the Google spreadsheet, and the right-hand side is filled in by the user based on the telephone conversation. The following fields will be drop-down or radio options based on user input: Product: Lite, Lite-I, Elite; Ref Code: LIT-1, LIT-2, LIT-3; Status: Accept, Reject, Pending; Comment: Satisfied, Call Back, Pending. The following field will be derived: Days Passed: derived from the current system year minus the year mentioned in the `Date` column. The following fields will be free-text user input: Client Name, Notes, Final_Status. **Note:** Agents will be assigned and shown only those `Ref ID`s where the `Agent ID` is not blank and `Final_Status` is either blank or other than 'Submit & Next' in the Google spreadsheet. We need to add one more column to the spreadsheet, which captures a date-time stamp (per the system date) as soon as `Final_Status` is marked as 'Submit & Next' or 'Next'. The 'Submit & Next' button should only be enabled once all details are captured by the user. The 'Next' button should only be enabled if a `Comment` option is selected. 
![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/1yxseqs5lu1ifdes6mnm.PNG)
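For what it's worth, the "Days Passed" derivation described above (current system year minus the year in the `Date` column, a year difference despite the name) can be sketched in plain JavaScript; the function name is my own, not part of any existing sheet script:

```javascript
// Sketch of the "Days Passed" field as described in the post:
// current system year minus the year in the Date column.
// Function and parameter names are illustrative only.
function daysPassed(dateCell, now = new Date()) {
  const rowYear = new Date(dateCell).getFullYear();
  return now.getFullYear() - rowYear;
}

console.log(daysPassed('2017-05-01', new Date('2019-12-12'))); // 2
```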
mike7755
219,293
Vinafarm AC Generators
According to some opinions, any type of generator that produces an alternating current is called an AC gene...
0
2019-12-12T06:59:49
https://dev.to/thnvinafarm/may-phat-di-n-xoay-chi-u-vinafarm-21bi
Some hold that any generator producing an alternating current can be called an AC generator. According to the theory recorded in textbooks, however, the definition is different: an AC generator must produce electricity through electromagnetic induction. The AC generator is a device that is no longer unfamiliar to Vietnamese consumers. Users refer to it by several different names, for example the asynchronous generator, or simply the generator. You can see more generator models here: https://thnvinafarm.com/ https://thnvinafarm.com/may-phat-dien https://thnvinafarm.com/may-phat-dien-chay-bang-dau https://thnvinafarm.com/may-phat-dien-chay-bang-xang If you still have not found a satisfactory place to buy a generator, you can find more information about Vinafarm through its profile pages: https://buddypress.org/members/thnvinafarm/profile/ https://www.instapaper.com/p/thnvinafarm https://www.wantedly.com/users/125164602
thnvinafarm
219,342
Hidden Tutorial Gems (Easy but Great Ones)
Besides the previous posts' main, impactful tutorials, I came across some nice material in slightly m...
0
2019-12-14T09:13:36
https://dev.to/zoltanhalasz/hidden-tutorial-gems-easy-but-great-ones-1afm
dotnet, angular, javascript, tutorial
Besides the main, impactful tutorials from my previous posts, I came across some nice material in slightly more hidden, lesser-known places. Some authors deserve more attention, because their content is high quality and, even more importantly, teaches the foundations of fresh developments. **A. The .NET stack** *Entity Framework* was something I wanted to learn this year, but all I found were books and tutorials, mostly complicated and obscure, that I kept giving up on. Then I found this simple but excellent material for EF Core, which laid the foundation for me: https://code-maze.com/entity-framework-core-series/ *The Razor Pages* tutorial in ASP.NET Core, a simple yet beautiful one, which made me study the topic further: https://www.learnrazorpages.com/razor-pages/tutorial/bakery **B. The Web Stack (Angular, Firebase and JavaScript)** *Fireship*. I am a little amazed that posts on dev.to rarely mention this site as an excellent resource: https://fireship.io/lessons/ Jeff Delaney, the author, is a Google Developer Expert, and the content is funny, high quality, and covers the latest technology in the Angular, Firebase and JavaScript stack. I did quite a few of those tutorials and purchased his PDF Firebase book. In my opinion, Firebase is a very interesting solution and a must for web developers to dive into. I wish I had more time to study the material on Jeff's site, but right now I am caught up in the .NET world with my current project. *Angular Material* tutorial. This design system seems very modern and powerful, yet the knowledge required is advanced; this was the best overall tutorial that was not too basic: https://code-maze.com/angular-material-series/ *PWA with Angular*. Unfortunately I did not finish this, but I very much liked it, as it combines Angular, Material Design, Firebase and PWA: https://www.pwawithangular.com/
zoltanhalasz
219,458
Forms, Web Components and Redux
I think this was because I was thinking along the lines of But it's just a form, we build forms on the web every other day. By the end of this task, I went to my team with a big smile on my face and said I Reduxed the hell out of that form.
0
2019-12-17T07:14:20
https://medium.com/@gerybbg/forms-web-components-and-redux-59e2ffd2fc79
redux, lithtml, javascript, forms
--- title: Forms, Web Components and Redux description: I think this was because I was thinking along the lines of But it's just a form, we build forms on the web every other day. By the end of this task, I went to my team with a big smile on my face and said I Reduxed the hell out of that form. published: true tags: Redux, lit-html, JavaScript, Forms canonical_url: https://medium.com/@gerybbg/forms-web-components-and-redux-59e2ffd2fc79 --- My team and I have been working on a project using the [PWA Starter Kit](https://pwa-starter-kit.polymer-project.org/) since the beginning of this year. I've learned so much on this project: how to create performant web components, how to embrace Redux (instead of fighting it), and how to secure the whole application using Azure Active Directory, just to name a few. Although we did get stuck a number of times on a few different things (which we plan to write about in the future), nothing stumped me more than building a form with validation. I think this was because I was thinking along the lines of _"But it's just a form, we build forms on the web every other day"_. By the end of this task, I went to my team with a big smile on my face and said _"I Reduxed the hell out of that form"_. In this post I'd like to share with you what I did to get our form to work the way we wanted it to. The end goal is to be able to create or update an object named **Quest** which consists of one or more **Missions**. Our object structure will look something like this: ``` Quest { goal: string } Mission { name: string, description: string } ``` # The first mission The first component will be an HTML form that creates a mission object for us. It will have a property to store errors, and it will not allow you to submit the form if there are errors on the page. 
The code for this looks as follows: ```js import { LitElement, html } from 'lit-element'; export class MissionsForm extends LitElement { constructor() { super(); this.errors = []; } static get properties() { return { errors: Array }; } render() { const hasError = (name) => (this.errors.indexOf(name) >= 0 ? 'error' : ''); return html` <style> .error { border: 1px solid red; } </style> <form @submit="${(e) => this.submit(e)}"> <div> <label>Name: </label> <input class="${hasError('name')}" type="input" name="name"/> </div> <div> <label> Description: </label> <textarea class="${hasError('description')}" name="description"></textarea> </div> <div> <button type="submit">Save</button> </div> </form> `; } submit(e) { e.preventDefault(); let form = e.target; this.errors = this.checkForErrors(form); if (!this.errors.length) { let mission = { name: form.name.value, description: form.description.value }; //save mission here form.reset(); } } checkForErrors(form) { let errors = []; if (!form.name.value) { errors.push('name'); } if (!form.description.value) { errors.push('description'); } return errors; } } customElements.define('missions-form', MissionsForm); ``` This works fine, because every time we trigger the submit, if there are any errors, we update the property which causes the page to re-render and show us those errors. However, it's not great that the only time the errors will disappear is if the user clicks submit again. We want them to know that the error is fixed as soon as they have fixed it. 
In order to do that we must listen to the change event on the form: ```html <form @submit="${(e) => this.submit(e)}" @change="${(e) => this.formValueUpdated(e)}"> <!--...--> </form> ``` We can now remove the errors as soon as they are fixed by implementing the method: ```js formValueUpdated(e) { let errorList = [...this.errors]; if (!e.target.value) { errorList.push(e.target.name); } else { let indexOfError = errorList.indexOf(e.target.name); if (indexOfError >= 0) { errorList.splice(indexOfError, 1); } } this.errors = [...errorList]; } ``` ## Adding more missions What we need to do next is implement the `//save mission here` method. In order to do that we will first make a new component; this new component will have our list of missions and it will also contain our form component. The basic outline will look like this: ```js import { LitElement, html } from 'lit-element'; import './missions-form.component'; export class MissionsList extends LitElement { constructor() { super(); this.missions = []; } static get properties() { return { missions: Array }; } render() { return html` <h2>Missions</h2> <ul> ${this.missions.map( (m) => html` <li><strong>${m.name}:</strong> ${m.description}</li> ` )} </ul> <missions-form></missions-form> `; } } customElements.define('missions-list', MissionsList); ``` We are going to use Redux to update our list of missions whenever save is clicked in the form component. If you are using the PWA Starter Kit, then you already have all of the Redux plumbing set up for you. If you started from scratch, follow [this tutorial](https://vaadin.com/tutorials/lit-element/state-management-with-redux) to help you set it up. 
The following is the first version of our reducer: ```js import { MISSIONS_UPDATED } from "../actions/missions-updated.action"; const INITIAL_STATE = { missions: [] }; export const editor = (state = INITIAL_STATE, action) => { switch (action.type) { case MISSIONS_UPDATED: return { ...state, missions: action.missions } default: return state; } } ``` This reducer imports an action; let's implement it: ```js export const MISSIONS_UPDATED = 'MISSIONS_UPDATED'; export const missionsUpdated = (missions) => { return { type: MISSIONS_UPDATED, missions }; }; ``` Now, whenever save is clicked we will need to dispatch that action. This means that we will need to change our `MissionsList` component to connect it to the Redux store: ```js export class MissionsList extends connect(store)(LitElement) { //... } ``` And our `MissionsForm` component will also need to be connected: ```js export class MissionsForm extends connect(store)(LitElement) { //... } ``` Both of these components need to implement the `stateChanged` method: ```js stateChanged(state) { this.missions = state.missions; } ``` > Here we are accessing the missions directly from the state. In my project we use `reselect`, which is a library for creating memoised selectors. To see some of the things we did to improve our performance and make our code less complex, check out my colleague's article on [Wrangling Redux](https://mikerambl.es/article/wrangling-redux-reducer-size). The last thing left to do is to replace that comment with a call to our action and update the list of missions: ```js store.dispatch(missionsUpdated([...this.missions, mission])); ``` # Quest Our next component will be in charge of gathering information about the quest. In our example the quest only has one property; however, the code is written in such a way that this can be extended. 
Let's create a `QuestEditor` component: ```js import { LitElement, html } from 'lit-element'; import { connect } from 'pwa-helpers'; import { store } from '../store'; import { questUpdated } from '../actions/quest-updated.action'; import { errorsDetected } from '../actions/errors-detected.action'; export class QuestEditor extends connect(store)(LitElement) { constructor() { super(); this.errors = []; } static get properties() { return { quest: Object, errors: Array }; } render() { const hasError = (name) => (this.errors.indexOf(name) >= 0 ? 'error' : ''); return html` <style> .error { border: 1px solid red; } </style> <form @change="${(e) => this.formValueUpdated(e)}" @submit="${(e) => e.preventDefault()}"> <div> <label>Goal:</label> <input class="${hasError('goal')}" name="goal" type="text" /> </div> </form> `; } formValueUpdated(e) { let errorList = [...this.errors]; if (!e.target.value) { errorList.push(e.target.name); } else { let indexOfError = errorList.indexOf(e.target.name); if (indexOfError >= 0) { errorList.splice(indexOfError, 1); } } let quest = { ...this.quest, [e.target.name]: e.target.value }; store.dispatch(errorsDetected(errorList)); store.dispatch(questUpdated(quest)); } stateChanged(state) { this.quest = state.quest; this.errors = state.errors; if (!this.quest) { this.quest = { goal: '' }; } } } customElements.define('quest-editor', QuestEditor); ``` This component is very similar to the one we created for missions; the big difference is that this component does not have a save button. This is because we want to save the quest and missions at the same time (which we will do in another component in a moment). The `QuestEditor` component also has two new actions, `errorsDetected` and `questUpdated`. 
We can implement them as follows: ```js export const ERRORS_DETECTED = 'ERRORS_DETECTED'; export const errorsDetected = (errors) => { return { type: ERRORS_DETECTED, errors }; }; ``` and ```js export const QUEST_UPDATED = 'QUEST_UPDATED'; export const questUpdated = (quest) => { return { type: QUEST_UPDATED, quest }; }; ``` We also need to update our reducer to cater for these two actions; first we change our `INITIAL_STATE` to: ```js const INITIAL_STATE = { quest: {}, missions: [], errors: [] }; ``` Then add two more cases to our switch statement: ```js case QUEST_UPDATED: return { ...state, quest: action.quest } case ERRORS_DETECTED: return { ...state, errors: action.errors } ``` ## Putting it all together We have to combine what we have done in one "_main_" component; this component will be called `Quest` and will look as follows: ```js import { LitElement, html } from 'lit-element'; import { connect } from 'pwa-helpers'; import { store } from '../store'; import { errorsDetected } from '../actions/errors-detected.action'; import './quest-editor.component'; import './missions-list.component'; export class Quest extends connect(store)(LitElement) { render() { return html` <h1>Create Quest</h1> <quest-editor></quest-editor> <missions-list></missions-list> <div> <button type="button" @click="${() => this.saveQuest()}">Save</button> </div> `; } saveQuest() { let errors = this.pageValid(); if (!errors.length) { //save quest and missions here } store.dispatch(errorsDetected(errors)); } pageValid() { let errors = []; if (!this.quest.goal) { errors.push('goal'); } if (!this.missions.length) { errors.push('missions'); } return errors; } stateChanged(state) { this.missions = state.missions; this.quest = state.quest; } } customElements.define('my-quest', Quest); ``` The `Quest` component is in charge of saving the things we have filled in. It needs to know about both the quest and the missions. 
However, you may have noticed that this component does not have any of its own properties; this is because we do not need to re-render it when quest, missions or errors change. We also need to make sure we have filled in all of the details correctly; the `pageValid` method does that for us. Lastly, if there are no errors, we can save everything (`//save quest and missions here`). ## Some cleaning up We are almost done; there are a few more small things we have to handle. Let's start by displaying the `missions` error in the `MissionsList` component. To do that we need to: 1. Add errors as a property: ```js static get properties() { return { missions: Array, errors: Array }; } ``` 2. Initialise it to an empty array in the constructor: ```js constructor() { super(); this.missions = []; this.errors = []; } ``` 3. Set it in the `stateChanged` method: ```js stateChanged(state) { this.missions = state.missions; this.errors = state.errors; } ``` 4. Create a new method to render our error message: ```js hasError() { return this.errors.indexOf('missions') >= 0 ? html` <div class="error">There must be at least one mission in every quest!</div> ` : html``; } ``` 5. Call that method inside our render method: ```js render() { return html` <style> .error { color: red; } </style> <h2>Missions</h2> ${this.hasError()} <ul> ${this.missions.map( (m) => html` <li><strong>${m.name}:</strong> ${m.description}</li> ` )} </ul> <missions-form></missions-form> `; } ``` The last thing we have to do is some cleaning up in our `MissionsForm` component so that it follows the same pattern as the others. To do this we need to change: 1. The `stateChanged` to get the errors from state: ```js stateChanged(state) { this.missions = state.missions; this.errors = state.errors; } ``` 2. 
The `formValueUpdated` method to dispatch an action instead of changing the property directly: ```js formValueUpdated(e) { let errorList = [...this.errors]; if (!e.target.value) { errorList.push(e.target.name); } else { let indexOfError = errorList.indexOf(e.target.name); if (indexOfError >= 0) { errorList.splice(indexOfError, 1); } } store.dispatch(errorsDetected(errorList)); } ``` 3. And the `submit` method to do the same: ```js submit(e) { e.preventDefault(); let form = e.target; let errors = this.checkForErrors(form); if (!errors.length) { let mission = { name: form.name.value, description: form.description.value }; store.dispatch(missionsUpdated([...this.missions, mission])); form.reset(); } store.dispatch(errorsDetected(errors)) } ``` ## Summary That's all we need to get our forms working with LitElement and Redux. From here on it is possible to implement any other CRUD operations. You can take a look at the full example on [my GitHub repo](https://github.com/geryb-bg/lit-forms). The example will be updated with editing and deleting missions as well as editing quest.
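As a sketch of what one of those follow-up operations could look like, deleting a mission follows the same action/reducer pattern (hypothetical names, not yet in the repo):

```javascript
// Hypothetical delete action following the MISSIONS_UPDATED pattern
// used above. All names here are illustrative only.
const MISSION_DELETED = 'MISSION_DELETED';
const missionDeleted = (index) => ({ type: MISSION_DELETED, index });

// A pure helper showing what the reducer's new case would return.
const reduceMissions = (missions, action) => {
  switch (action.type) {
    case MISSION_DELETED:
      // drop the mission at the given index, without mutating state
      return missions.filter((_, i) => i !== action.index);
    default:
      return missions;
  }
};

const missions = [{ name: 'one' }, { name: 'two' }];
console.log(reduceMissions(missions, missionDeleted(0)));
```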
gerybbg
219,476
Lightsaber prototyping with the Nordic Thingy:52
We're building a lightsaber!!! I don't want to give too much away, but a few weeks ago we started with the 3D print… I decided that while I wait for all of the components to arrive, I would get started on a prototype.
0
2019-12-18T13:42:34
https://medium.com/@gerybbg/lightsaber-prototyping-with-the-nordic-thingy-52-890d54493b86
javascript, prototyping, iot, bluetooth
--- title: Lightsaber prototyping with the Nordic Thingy:52 description: We're building a lightsaber!!! I don't want to give too much away, but a few weeks ago we started with the 3D print… I decided that while I wait for all of the components to arrive, I would get started on a prototype. published: true tags: JavaScript, Prototyping, IoT, Bluetooth canonical_url: https://medium.com/@gerybbg/lightsaber-prototyping-with-the-nordic-thingy-52-890d54493b86 --- We're building a lightsaber!!! I don't want to give too much away, but a few weeks ago we started with the 3D print… I decided that while I wait for all of the components to arrive, I would get started on a prototype. Obviously, our lightsaber would have to be wireless, so I thought we could rely on Bluetooth. Since I have a [Nordic Thingy:52](https://www.nordicsemi.com/Software-and-Tools/Prototyping-platforms/Nordic-Thingy-52) that I haven't played with yet, I thought it would be a great place to start for prototyping our lightsaber. It has the four things we need: - Lights - because it's in the name - Sound - to make it cooler - Button - so it turns off when you drop it (also known as the dead Jedi switch) - Accelerometer - so we can detect movement Connecting to the Thingy and using all of its Bluetooth attributes, we can build a pretty good prototype. In order to identify all of these, I used [this repo](https://github.com/NordicPlayground/Nordic-Thingy52-Thingyjs) created by Nordic so that you can easily prototype with the Thingy:52. I decided it would be best to connect directly to the Bluetooth services and characteristics. In this way, when all of the components arrive, and we create our own custom Bluetooth lightsaber peripheral, we can just change the UUIDs and be up and running in no time! 
If you'd like to know a bit more about how Bluetooth and Web Bluetooth work, you should check out the other two posts I wrote about these technologies: - [BLE and GATT and other TLAs](https://dev.to/gerybbg/ble-and-gatt-and-other-tlas-21f5) - [Web Bluetooth by example](https://dev.to/gerybbg/web-bluetooth-by-example-46dh) In order to keep this post short, here are the links to the [HTML](https://github.com/geryb-bg/lightsaber/blob/master/thingy-poc/index.html) and [CSS](https://github.com/geryb-bg/lightsaber/blob/master/thingy-poc/styles.css) we'll be using. We will concentrate more on writing and understanding the JavaScript. We want to accomplish the following: 1. Connect to the lightsaber and check its battery status. 2. When the button is pressed - turn on the led and play the turning on sound. 3. When the button is released - turn off the led and play the turning off sound. 4. Be able to change the colour of the led when the lightsaber is on. 5. Play different sounds when the lightsaber is being moved around using the accelerometer data. Let's get started! ## Connecting and battery The first thing we need to do is initiate a scan for the device and connect to it. We also need to include all of the services that we might need in the optional services. Let's define a few variables so that we can interact with our HTML and so that we have all of the attribute UUIDs that we will need. 
```js //buttons const connectButton = document.getElementById('connectButton'); const disconnectButton = document.getElementById('disconnectButton'); const colourButton = document.getElementById('colourButton'); const colourPicker = document.getElementById('colourPicker'); //colour input (id assumed to match the linked HTML) //divs shown and hidden based on lightsaber status const connect = document.getElementById('connect'); const control = document.getElementById('control'); const off = document.getElementById('off'); //spans displaying information loaded from lightsaber const batteryStatus = document.getElementById('batteryStatus'); const orientationX = document.getElementById('orientationX'); const orientationY = document.getElementById('orientationY'); const orientationZ = document.getElementById('orientationZ'); //services const batteryServiceUuid = 'battery_service'; const motionServiceUuid = 'ef680400-9b35-4933-9b10-52ffa9740042'; const userInterfaceServiceUuid = 'ef680300-9b35-4933-9b10-52ffa9740042'; const soundServiceUuid = 'ef680500-9b35-4933-9b10-52ffa9740042'; //characteristics const batteryCharUuid = 'battery_level'; const orientationCharUuid = 'ef680404-9b35-4933-9b10-52ffa9740042'; const ledCharUuid = 'ef680301-9b35-4933-9b10-52ffa9740042'; const btnCharUuid = 'ef680302-9b35-4933-9b10-52ffa9740042'; const soundConfigCharUuid = 'ef680501-9b35-4933-9b10-52ffa9740042'; const speakerCharUuid = 'ef680502-9b35-4933-9b10-52ffa9740042'; let device, batteryCharacteristic, orientationCharacteristic; let ledCharacteristic, btnCharacteristic, soundConfigCharacteristic, speakerCharacteristic; ``` Now we can connect: ```js connectButton.onclick = async () => { device = await navigator.bluetooth.requestDevice({ filters: [{ namePrefix: 'atc' }], optionalServices: [batteryServiceUuid, motionServiceUuid, userInterfaceServiceUuid, soundServiceUuid] }); const server = await device.gatt.connect(); batteryCharacteristic = await getCharacteristic(server, batteryServiceUuid, batteryCharUuid); orientationCharacteristic = await getCharacteristic(server, motionServiceUuid, 
orientationCharUuid); ledCharacteristic = await getCharacteristic(server, userInterfaceServiceUuid, ledCharUuid); btnCharacteristic = await getCharacteristic(server, userInterfaceServiceUuid, btnCharUuid); soundConfigCharacteristic = await getCharacteristic(server, soundServiceUuid, soundConfigCharUuid); speakerCharacteristic = await getCharacteristic(server, soundServiceUuid, speakerCharUuid); //what to do if the device is disconnected (code in next block) device.ongattserverdisconnected = disconnect; //display changes connect.style.display = 'block'; connectButton.style.display = 'none'; disconnectButton.style.display = 'initial'; }; const getCharacteristic = async (server, serviceUuid, characteristicUuid) => { const service = await server.getPrimaryService(serviceUuid); const char = await service.getCharacteristic(characteristicUuid); return char; }; ``` We also need to cater for disconnecting: ```js disconnectButton.onclick = async () => { await device.gatt.disconnect(); disconnect(); }; const disconnect = () => { device = null; batteryCharacteristic = null; connect.style.display = 'none'; connectButton.style.display = 'initial'; disconnectButton.style.display = 'none'; }; ``` Now let's read the initial percentage of the battery and also listen to the characteristic changes so that we know when the battery level changes: ```js const setUpDevice = async () => { //get initial battery value const batteryValue = await batteryCharacteristic.readValue(); batteryStatus.innerText = batteryValue.getInt8(0); }; const listen = () => { batteryCharacteristic.addEventListener('characteristicvaluechanged', (evt) => { const value = evt.target.value.getInt8(0); batteryStatus.innerText = value; }); batteryCharacteristic.startNotifications(); }; ``` Don't forget to call these two methods from the connect method: ```js connectButton.onclick = async () => { //... 
await setUpDevice(); listen(); }; ``` Test this out by running it using something like [http-server](https://www.npmjs.com/package/http-server). ## Button and LED We need to be able to turn the LED on and off based on whether the button is pressed or not. We do this by listening to the status of the button inside the `listen()` function we created earlier: ```js let ledColour = new Uint8Array([1, 0, 0, 255]); let lightsaberOn = false; const toggleLed = async (toggle) => { if (toggle) { await ledCharacteristic.writeValue(ledColour); lightsaberOn = true; control.style.display = 'block'; off.style.display = 'none'; } else { await ledCharacteristic.writeValue(new Uint8Array([0])); lightsaberOn = false; control.style.display = 'none'; off.style.display = 'block'; } }; const listen = () => { //... btnCharacteristic.addEventListener('characteristicvaluechanged', (evt) => { const value = evt.target.value.getInt8(0); toggleLed(value); }); btnCharacteristic.startNotifications(); }; ``` We should also turn everything off when we start up by calling this function inside our `setUpDevice()` function: ```js const setUpDevice = async () => { //... //turn off when starting up await toggleLed(false); }; ``` Let's also allow our Jedi to change the colour of their lightsaber if they want to. We already have a colour picker and button in the HTML and we can use them like this: ```js const hexToRgb = (hex) => { const r = parseInt(hex.substring(1, 3), 16); //start at 1 to avoid # const g = parseInt(hex.substring(3, 5), 16); const b = parseInt(hex.substring(5, 7), 16); return [r, g, b]; }; colourButton.onclick = async () => { ledColour = new Uint8Array([1, ...hexToRgb(colourPicker.value)]); ledCharacteristic.writeValue(ledColour); }; ``` ## Sound Since we are going to have a set of pre-recorded lightsaber sounds, I thought it would be easiest to use the sample sounds on the Thingy:52. 
To do that we need to change the code we wrote above to start up the speaker:

```js
const setUpDevice = async () => {
  //...
  //turn on speaker
  const dataArray = new Uint8Array(2);
  dataArray[0] = 3 & 0xff;
  dataArray[1] = 1 & 0xff;
  await soundConfigCharacteristic.writeValue(dataArray);
};
```

We are going to use the first sample as the turning on sound and the second as the turning off sound:

```js
const getSampleSound = (sound) => {
  return new Uint8Array([sound & 0xff]);
};

const toggleLed = async (toggle) => {
  if (toggle) {
    //...
    await speakerCharacteristic.writeValue(getSampleSound(0));
  } else {
    //...
    await speakerCharacteristic.writeValue(getSampleSound(1));
  }
};
```

## Accelerometer

Lastly, we are going to use the motion of the lightsaber to trigger a few other sounds. We will first print out the X, Y and Z co-ordinates on the screen so we can see them changing. Add the following to the `listen()` function:

```js
const listen = () => {
  //...
  orientationCharacteristic.addEventListener('characteristicvaluechanged', (evt) => {
    let data = evt.target.value;
    let w = data.getInt32(0, true) / (1 << 30);
    let x = data.getInt32(4, true) / (1 << 30);
    let y = data.getInt32(8, true) / (1 << 30);
    let z = data.getInt32(12, true) / (1 << 30);

    const magnitude = Math.sqrt(Math.pow(w, 2) + Math.pow(x, 2) + Math.pow(y, 2) + Math.pow(z, 2));
    if (magnitude !== 0) {
      x /= magnitude;
      y /= magnitude;
      z /= magnitude;
    }

    playSound(x, y, z);

    orientationX.innerText = `X: ${x.toFixed(2)}`;
    orientationY.innerText = `Y: ${y.toFixed(2)}`;
    orientationZ.innerText = `Z: ${z.toFixed(2)}`;
  });
  orientationCharacteristic.startNotifications();
};
```

The calculation above comes directly from the Nordic repo. I did some reading on what [quaternion rotation](https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation) is, but I will not try to explain it here.
Now we can choose a different sample per movement and play them as the lightsaber moves around:

```js
let position = { x: 0, y: 0, z: 0 };

const playSound = async (x, y, z) => {
  if (lightsaberOn) {
    if (Math.abs(position.x - x) > 0.2) {
      await speakerCharacteristic.writeValue(getSampleSound(2));
    } else if (Math.abs(position.y - y) > 0.2) {
      await speakerCharacteristic.writeValue(getSampleSound(3));
    } else if (Math.abs(position.z - z) > 0.2) {
      await speakerCharacteristic.writeValue(getSampleSound(4));
    }
  }
  position.x = x;
  position.y = y;
  position.z = z;
};
```

We've also added a check in there to make sure that our lightsaber doesn't continue to make sounds when it turns off.

## Summary

Wow, that was quite a lot of code. If you got lost anywhere along the way you can check out the complete code on [GitHub](https://github.com/geryb-bg/lightsaber/tree/master/thingy-poc).

We now have a lightsaber prototype. It has all of the things we require from our lightsaber: lights, sounds, and a dead Jedi switch. It just doesn't really look like a lightsaber, but that's because it's just a prototype, right?

I will definitely be posting more about this, and so will the rest of my team. So keep an eye on this blog and my team's Twitter account for more updates on the complete fully functioning lightsaber.
gerybbg
219,535
The Magic of the Fibonacci Numbers & why we love computing them - part 1
Interview questions regarding the Fibonacci series are very popular, so there are very few developers...
3,724
2020-01-12T09:10:37
https://dev.to/kruzzy/the-magic-of-the-fibonacci-numbers-why-we-love-computing-them-part-1-18gp
computerscience, algorithms, mathematics
Interview questions regarding the Fibonacci series are very popular, so there are very few developers that haven't heard of or computed the sequence in one way or another. They are defined by the following relation:

![recurrence](https://i.imgur.com/zRttr0V.png)

But why is that? Why does everyone love the Fibonacci numbers so much? Over a series of articles, I am going to present some of the magic behind this sequence and why I think these numbers are so popular, as well as compare some ways of computing them.

# Properties

## They appear in natural settings

One beautiful example of where Fibonacci numbers appear in nature is the number of petals some flowers have. Some examples:

* Irises typically have 3 petals

![iris](https://c1.staticflickr.com/1/802/40529003405_6815325824_b.jpg)

* Buttercups have 5 petals

![buttercup](https://c4.wallpaperflare.com/wallpaper/465/38/493/creeping-buttercup-5-yellow-petal-flower-wallpaper-preview.jpg)

* Michaelmas Daisies / New York Asters (Purple Daisies) usually have 55 petals

![daisy](https://photos1.blogger.com/blogger/5361/1488/400/MichaelmasDaisies_2.0.jpg)

## Human bodies exhibit Fibonacci characteristics

The Fibonacci series goes like this: 1, 1, 2, 3, 5, 8 and so on. Well, we have **2** hands. Each with **5** fingers. Each finger has **3** sections, separated by **2** knuckles. All of these fit right in.

## They have very interesting mathematical properties

One interesting property of the Fibonacci sequence is that any positive integer can be written as a unique sum of one or more non-consecutive Fibonacci numbers ([Zeckendorf's theorem](https://en.wikipedia.org/wiki/Zeckendorf%27s_theorem)). Later during the series, we will see a very simple algorithm that computes this.

Another remarkable property is this:

![gcd](https://i.imgur.com/i1Bt0eW.png)

- where gcd denotes the greatest common divisor.
# Computing

## Naive recursive approach

The most basic way of computing a specific Fibonacci number is by using the recursive formula directly - more specifically, by writing a recursive function. I'll attach a simple C++ snippet of the function:

```cpp
int F(int n) {
    if(n <= 1) return n;
    return F(n-1) + F(n-2);
}
```

However, when examining the recursive tree of this function, we get something like this:

![tree](https://i.imgur.com/ZIcVej6.png)

We can clearly see that the function is called multiple times for some numbers, such as F(2) or F(3) - this is one of the possible drawbacks of recursion and teaches us that we should be careful when writing recursive functions as they can "backfire".

Let's analyze this approach in terms of complexity. The recurrence relation is:

![t(n)](https://i.imgur.com/SRxTAe3.png)

(the time needed for computing F(n) equals the time needed for F(n-1) + F(n-2) and some constant)

Looking at the recurrence relation, we guess that the relation might be exponential. So, let's say that:

![xn](https://i.imgur.com/4Hj6w8m.png)

That leads us to:

![xn2](https://i.imgur.com/lv0RgJb.png)

(we won't take the constant into account)

When dividing by x^(n-2) and moving all the terms to the left-hand side, we get:

![x2](https://i.imgur.com/CbibRBB.png)

By solving the equation, we get 2 possible solutions: x = 1.62 or x = -0.62. As x must be positive (it is the base of our exponential), the final solution is x = 1.62, which is an approximation of the golden ratio φ ≈ 1.618.

So, the time complexity of the algorithm above is ~ O((1.62)^n), which is exponential.

## Recursive approach with memoization

This approach can be slightly improved in order to lower the time complexity by using an additional array in which we "remember" the numbers we have already computed:

```cpp
int *fib = nullptr;

int fRec(int n) {
    if(n <= 1) return n;
    return (fib[n] != 0) ? fib[n] : (fib[n] = fRec(n-1) + fRec(n-2));
    /// if we have already computed the n-th fibonacci number, return it.
    /// else, "remember" the result for later use.
}

int F(int n) {
    if(fib != nullptr)
        delete[] fib; /// deleting the whole array
    fib = new int[n+1]; /// dynamically allocating memory for a new array
    for(int i = 0; i <= n; i++) /// setting all the elements to 0
        fib[i] = 0;
    return fRec(n); /// fRec already handles the base cases n <= 1
}
```

The dynamic memory allocation can be skipped if we know approximately how big n is going to be. This concept of "remembering" results of recursive calls is called memoization and can be used to improve the run times of several types of algorithms.

The time complexity of this algorithm is ~ O(n), which is much better than the non-memoization approach. There is also an O(n) space complexity.

That's it for this article. I will continue presenting other properties and methods of computing the Fibonacci numbers during 1 or 2 more articles, so stay tuned!

But, after all, why are Fibonacci numbers so popular in computer science? Well, as you have seen above, they are a pretty simple concept which we can use to understand some programming techniques (not only recursion and memoization, but others that we will see in other articles).
kruzzy
220,173
I'm Leaving a Job I Love, and I'm Ready
I'm a week out from leaving a perfect, full-time, dreamy remote work position. I've been with the com...
0
2019-12-12T22:17:36
https://dev.to/alexlsalt/i-m-leaving-a-job-i-love-and-i-m-ready-2l7p
career, codenewbie, 100daysofcode
I'm a week out from leaving a perfect, full-time, dreamy remote work position. I've been with the company for a little under two years as a customer support specialist and it's entirely possible that I would've stayed in that position for the next ten years if they let me. But the thing is - it's time to move on. Sure, it took a bit of a push in the sense that I had to decide between leaving my expat life in France and moving back to the US or to say goodbye to my beloved job. So I said goodbye, and I'm viewing it as a good thing. Actually, it's a great thing. I love the company and the people there, but now I've been given the ol' nudge that means it's really time to get to the heart of what I truly want to be doing. And right now, that's becoming a software engineer. They say the perfect time to leave something behind is when you're both really sad about what you're leaving and really happy about what's to come. I've reached that point. I'll admit - I had moments over the past few months where motivation was at an all-time low and I was dragging myself around in self-pity. But the moment I decided to actually STOP and view this entire period in my life as an opportunity was when I started climbing up and out of that pit of despair. When my friends and family members ask me how the job hunt is going, it feels almost blasphemous to let them know that I'm actually *not* doing any hunting at the moment. Instead of desperately grasping for just any and every job posting I see, I'm choosing to take my time, commit to building my coding skills and portfolio projects, and trust that I'll be met with the perfect opportunity when the time comes. To me, there's so much power in going about it intentionally rather than reactively. Intentional is setting aside a few hours each morning to work on a coding project and to eventually have a record of all of the skills I've worked hard to build. 
Reactive is applying to any and every odd job and not allowing myself to level up to the sort of career I truly want for myself. In terms of leveling up, I started to imagine myself with one foot in a docked boat and the other foot on the pier. Staying put would mean applying for jobs that are familiar, comfortable, and safe. I'm choosing to set sail. I don't know what's located just beyond the horizon, but you'd better believe I'm hungry to find out.
alexlsalt
220,204
Hello World
Hello and welcome to my new blog! My name is Spencer Pollock and I'm an aspiring software developer....
0
2019-12-13T01:12:26
https://spollock.ca/blog/posts/
helloworld, blog
Hello and welcome to my new blog! My name is Spencer Pollock and I'm an aspiring software developer. I find myself often reading up on new technology and working to use its best practices to make some new things. I jump from project to project, trying to expand my knowledge where possible. Follow me along on my journey, and I hope you can learn while I learn too! Knowledge is power, and sharing is caring.

My focus for this blog is to share my experience. Be it traveling, philosophy, programming or just updates on what I'm working on. I want to keep this an open forum and a place to discuss and learn.

Keep an open mind too. Try and look from new perspectives. Keeping an open mind is a great way to expand it.

Follow me on this journey. Let me know what you think (you can find my contact information somewhere, I'm sure; either my main website or one of my social sites will have it). I encourage all forms of communication, only stating that if I don't get to it ASAP, it's on the list and I want to chat!

---

Thanks for taking the time to read this and get to know me a little bit. Looking forward to going on this journey with all of you.

All the best

- Spencer
srepollock