328,029
What is Template Method Design Pattern?
What is Template Method Design Pattern? - Mahipal Nehra - Mediu...
0
2020-05-05T10:51:33
https://dev.to/j_marathi/what-is-template-method-design-pattern-2i37
webdev, java, programming, codenewbie
{% medium https://medium.com/@mahipal.nehra/what-is-template-method-design-pattern-cf99389c8658 %} **Article Source: https://www.decipherzone.com/blog-detail/template-method-design-pattern**
j_marathi
328,263
How to Write Middleware using FastAPI
For one of my projects, I needed to host an API service on the RapidAPI platform. To make sure that all requests to the application are routed via RapidAPI, I needed to check a special header sent by RapidAPI.
0
2020-05-05T16:23:46
https://learnings.desipenguin.com/post/middleware-fastapi/
python, api, fastapi, rapidapi
---
title: How to Write Middleware using FastAPI
published: true
description: For one of my projects, I needed to host an API service on the RapidAPI platform. To make sure that all requests to the application are routed via RapidAPI, I needed to check a special header sent by RapidAPI.
tags: python, api, FastAPI, RapidAPI
canonical_url: https://learnings.desipenguin.com/post/middleware-fastapi/
---

For one of my projects, I needed to host an API service on the RapidAPI platform. To make sure that all requests to the application are routed via RapidAPI, I needed to check a special header sent by RapidAPI.

RapidAPI forwards each *valid* request to the configured server, but injects an additional header, `X-RapidAPI-Proxy-Secret`. While an attacker could also send this header, its value is known only to the RapidAPI platform and your app.

I deployed the server on Heroku and defined an environment variable `PROXY_SECRET`, which I check against the value sent with the request. Sometimes I need to test the server directly, in which case I simply do not set this variable (as on my local machine) and the check is bypassed.
### Code ###

```python
import os

from fastapi import FastAPI
from starlette.requests import Request
from starlette.responses import PlainTextResponse

app = FastAPI()


@app.middleware("http")
async def check_rapidAPI_proxy_header(request: Request, call_next):
    # Check if the server knows about a valid "secret"
    secret_header = os.environ.get("PROXY_SECRET", None)
    if secret_header:
        headers = request.headers
        # If the header is missing, or does not match the expected value,
        # reject the request altogether
        if (
            "X-RapidAPI-Proxy-Secret" not in headers
            or headers["X-RapidAPI-Proxy-Secret"] != secret_header
        ):
            return PlainTextResponse(
                "Direct access to the API not allowed", status_code=403
            )
    response = await call_next(request)
    return response
```

### Resources ###

* [Uvicorn's ProxyHeadersMiddleware](https://github.com/encode/uvicorn/blob/master/uvicorn/middleware/proxy_headers.py) - related, but not directly useful for me.
* [FastAPI middleware documentation](https://fastapi.tiangolo.com/tutorial/middleware/)
* [RapidAPI Proxy Secret](https://docs.rapidapi.com/docs/headers-sent-by-mashape-proxy2#headers-sent-to-the-request)
mandarvaze
328,299
Text classification with transformers in Tensorflow 2: BERT, XLNet
The transformer-based language models have been showing promising progress on a number of different natural language processing (NLP) benchmarks. The combination of transfer learning methods with large-scale transformer language models is becoming a standard in modern NLP. In this article, we will make the necessary theoretical introduction to transformer architecture and text classification problem. Then we will demonstrate the fine-tuning process of the pre-trained BERT and XLNet model for text classification in TensorFlow 2 with Keras API
0
2020-05-05T17:20:19
https://atheros.ai/blog/text-classification-with-transformers-in-tensorflow-2
machinelearning, python, datascience, webdev
---
title: Text classification with transformers in Tensorflow 2: BERT, XLNet
published: true
description: The transformer-based language models have been showing promising progress on a number of different natural language processing (NLP) benchmarks. The combination of transfer learning methods with large-scale transformer language models is becoming a standard in modern NLP. In this article, we will make the necessary theoretical introduction to transformer architecture and text classification problem. Then we will demonstrate the fine-tuning process of the pre-trained BERT and XLNet model for text classification in TensorFlow 2 with Keras API
tags: machinelearning, python, datascience, webdev
canonical_url: https://atheros.ai/blog/text-classification-with-transformers-in-tensorflow-2
---

## Introduction

Transformer-based language models have been showing promising progress on a number of natural language processing (NLP) benchmarks. The combination of transfer learning methods with large-scale transformer language models is becoming a standard in modern NLP. In this article, we will give the necessary theoretical introduction to the transformer architecture and the text classification problem. Then we will demonstrate the fine-tuning process of the pre-trained BERT model for text classification in TensorFlow 2 with the Keras API.

{% youtube z6Kl52nh04U %}

## Text classification - problem formulation

Classification, in general, is the problem of identifying the category of a new observation. We have a dataset {% katex inline %}D{% endkatex %}, which contains sequences of text in documents as

{% katex %}
D=X_{1}, X_{2},\cdots,X_{N},
{% endkatex %}

where {% katex inline %}X_{i}{% endkatex %} can be, for example, a text segment and {% katex inline %}N{% endkatex %} is the number of such text segments in {% katex inline %}D{% endkatex %}. The algorithm that implements classification is called a **classifier**.
Text classification tasks can be divided into different groups based on the nature of the task:

* **multi-class classification**
* **multi-label classification**

Multi-class classification is also known as a **single-label problem**, i.e. we assign each instance exactly one label. **Multi** in the name means that we deal with at least 3 classes; for 2 classes we use the term **binary classification**. Multi-label classification, on the other hand, is more general and allows us to assign multiple labels to each instance, not just one label per example.

## Why transformers?

We will not go into much detail on the transformer architecture in this post. However, it is useful to know some of the challenges in NLP. There are two important, complementary concepts in NLP:

* <a href="https://en.wikipedia.org/wiki/Word_embedding">word embeddings</a>
* <a href="https://en.wikipedia.org/wiki/Language_model">language model</a>

Transformers are used to build the language model, where the embeddings can be retrieved as a by-product of pretraining.

### Approaches based on RNNs/LSTMs

Most older methods for language modelling are based on RNNs (recurrent neural networks). Simple RNNs suffer from the **vanishing gradient problem** and therefore fail to model longer contextual dependencies. They were mostly replaced by **long short-term memory (LSTM)** networks, also a form of RNN, which can capture longer context in documents. However, an LSTM can process a sequence only unidirectionally, so the state-of-the-art approaches based on LSTMs evolved into bidirectional LSTMs, which read the context both left to right and right to left. There are very successful models based on LSTMs, such as ELMo or ULMFiT, and such models are still valid in today's modern NLP.
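To make the multi-class vs. multi-label distinction concrete, here is a small illustrative sketch (not from the original article) of how the two label schemes are typically represented:

```python
# Multi-class (single-label): each example gets exactly one class id
y_multiclass = [0, 2, 1, 2]  # 4 examples, 3 possible classes

# Binary classification is the 2-class special case
y_binary = [0, 1, 1, 0]

# Multi-label: each example gets a 0/1 indicator per label,
# and any number of labels may be active at once
y_multilabel = [
    [1, 0, 1],  # labels 0 and 2 apply
    [0, 0, 0],  # no labels apply
    [1, 1, 1],  # all labels apply
]

# A multi-class label can always be one-hot encoded into the
# multi-label indicator format (with exactly one 1 per row)
def one_hot(y, num_classes):
    return [[1 if i == c else 0 for i in range(num_classes)] for c in y]

print(one_hot(y_multiclass, 3))
```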
### Approaches based on transformer architecture

One of the main limitations of bidirectional LSTMs is their sequential nature, which makes training in parallel very difficult. The **transformer architecture** solves that by **completely replacing LSTMs with the so-called attention mechanism** (Vaswani et al., 2017). With attention, we see the entire sequence as a whole, so it is much easier to train in parallel. We can model the whole document context, as well as use huge datasets to pre-train in an unsupervised way and fine-tune on downstream tasks.

### State of the art transformer models

There are a lot of transformer-based language models. The most successful ones (as of April 2020) are:

* <a href="https://arxiv.org/abs/1706.03762" title="Attention is all you need">Transformer (Google Brain/Research)</a>
* <a href="https://github.com/google-research/bert" title="BERT">BERT (Google Research)</a>
* <a href="https://openai.com/blog/better-language-models/">GPT-2 (OpenAI)</a>
* <a href="https://arxiv.org/pdf/1906.08237.pdf">XLNet (Google Brain)</a>
* <a href="https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/">CTRL (SalesForce)</a>
* <a href="https://devblogs.nvidia.com/training-bert-with-gpus/">Megatron (NVidia)</a>
* <a href="https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/" title="Turing NLG">Turing-NLG (Microsoft)</a>

There are **slight differences between the models**. BERT has held state-of-the-art results on many NLP tasks, but now it looks like it is surpassed by XLNet, also from Google. XLNet leverages **permutation language modelling**, which trains an autoregressive model on all possible permutations of the words in a sentence. For the purpose of illustration, we will use a BERT-based model in this article.
## BERT

BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) is a method of pretraining language representations. We will not go into much detail, but the main difference from the original transformer (Vaswani et al., 2017) is that **BERT has no decoder**; it **stacks 12 encoders** in the base version and increases the number of encoders for bigger pre-trained models. Such an architecture is different from <a href="https://openai.com/blog/better-language-models/" title="Better Language Models and Their Implications">GPT-2 from OpenAI</a>, which is an autoregressive language model suited for natural language generation (NLG).

### Tokenizer

Official <a href="https://github.com/google-research/bert" title="BERT Github">BERT</a> language models are pre-trained with a **WordPiece vocabulary** and use not just token embeddings, but also **segment embeddings** to distinguish between sequences that come in pairs, e.g. question answering examples. **Position embeddings** are needed to inject positional awareness into the BERT model, as the attention mechanism does not consider positions in context evaluation.

An important limitation of BERT to be aware of is that the **maximum sequence length** is **512 tokens**. For **shorter input** than the maximum allowed size, we need to **add pad tokens [PAD]**; if the sequence is longer, we **need to truncate it**. This limitation matters for longer text segments; see for example this <a href="https://github.com/huggingface/transformers/issues/2295">GitHub issue</a> for further solutions. Also very important are the so-called special tokens, e.g. the **[CLS]** and **[SEP]** tokens. The [CLS] token is inserted at the beginning of the sequence, the [SEP] token at the end.
If we deal with sequence pairs, an additional [SEP] token is added at the end of the second sequence.

![BERT internal representation](https://dev-to-uploads.s3.amazonaws.com/i/l1lb8wivlcxqwsf2ienz.png)

When using the <a href="https://huggingface.co/transformers/">transformers</a> library, we first load the tokenizer for the model we would like to use. Then we proceed as follows:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
max_length_test = 20
test_sentence = 'Test tokenization sentence. Followed by another sentence'

# add special tokens
test_sentence_with_special_tokens = '[CLS]' + test_sentence + '[SEP]'
tokenized = tokenizer.tokenize(test_sentence_with_special_tokens)
print('tokenized', tokenized)

# convert tokens to ids in WordPiece
input_ids = tokenizer.convert_tokens_to_ids(tokenized)

# precalculate the pad length, so that we can reuse it later on
padding_length = max_length_test - len(input_ids)

# attention should focus just on the non-padded tokens, so build
# the mask before padding the ids: 1 for real tokens, 0 for [PAD]
attention_mask = [1] * len(input_ids) + ([0] * padding_length)

# map tokens to the WordPiece dictionary and add pad tokens for
# text shorter than our max length
input_ids = input_ids + ([0] * padding_length)

# token types are needed e.g. for question answering; for our purpose
# we just set 0 everywhere, as we have a single sequence
token_type_ids = [0] * max_length_test

bert_input = {
    "token_ids": input_ids,
    "token_type_ids": token_type_ids,
    "attention_mask": attention_mask,
}
print(bert_input)
```

We can see that the sequence is tokenized and that we have added the **special tokens**, as well as calculated the number of pad tokens needed to bring the sequence to the maximal length of 20. Then we have added **token types**, which are all the same as we do not have sequence pairs.
The **attention mask** tells the model not to focus attention on the [PAD] tokens.

```bash
tokenized ['[CLS]', 'test', 'token', '##ization', 'sentence', '.', 'followed', 'by', 'another', 'sentence', '[SEP]']
{
  'token_ids': [101, 3231, 19204, 3989, 6251, 1012, 2628, 2011, 2178, 6251, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```

In practical coding, we will just use the **encode_plus** function, which does all of those steps for us:

```python
bert_input = tokenizer.encode_plus(
    test_sentence,
    add_special_tokens=True,      # add [CLS], [SEP]
    max_length=max_length_test,   # max length of the text that can go to BERT
    pad_to_max_length=True,       # add [PAD] tokens
    return_attention_mask=True,   # add attention mask to not focus on pad tokens
)
print('encoded', bert_input)
```

The output is the same as that of our code above.

### Pretraining

Pretraining is the first phase of BERT training. It is done in an unsupervised way and consists of two main tasks:

* masked language modelling (MLM)
* next sentence prediction (NSP)

At a high level, in the MLM task we replace a certain number of tokens in a sequence with the **[MASK]** token and then try to predict the masked tokens. There are some additional rules for MLM, so this description is not completely precise; feel free to check the original paper (Devlin et al., 2018) for more details. When choosing sentence pairs for next sentence prediction, 50% of the time we pick the actual sentence that follows the previous one and label the pair **IsNext**; the other 50% of the time we pick an unrelated sentence from the corpus and label the pair **NotNext**. Both tasks can be performed on a text corpus without labelled examples, so the authors used datasets such as BooksCorpus (800M words) and English Wikipedia (2,500M words).
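The core of the MLM masking step can be sketched in a few lines of plain Python. This is an illustrative simplification, not the paper's exact procedure (which also sometimes keeps the chosen token or swaps in a random one), and the masking probability is exaggerated so this short example actually masks something:

```python
import random

MASK = '[MASK]'

def mask_tokens(tokens, mask_prob=0.3, seed=0):
    """Replace ~mask_prob of the tokens with [MASK] and record the
    positions/original tokens the model would be trained to predict."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(MASK)
            targets[i] = tok  # the model must recover this token
        else:
            masked.append(tok)
    return masked, targets

tokens = ['the', 'movie', 'was', 'surprisingly', 'good']
masked, targets = mask_tokens(tokens)
print(masked)   # ['the', 'movie', 'was', '[MASK]', 'good']
print(targets)  # {3: 'surprisingly'}
```

The loss during pretraining is then computed only over the positions recorded in `targets`.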
![BERT pretraining](https://dev-to-uploads.s3.amazonaws.com/i/zkspc7prl7vogiz21hn9.png)

### Fine-tuning

Once we have either pre-trained the model ourselves or loaded an already pre-trained model, e.g. <a href="https://huggingface.co/bert-base-uncased">bert-base-uncased</a>, we can start to fine-tune it on downstream tasks such as question answering or text classification. BERT can be applied to many different tasks by adding a task-specific layer on top of the pre-trained BERT layers. For text classification, we will just add a simple softmax classifier on top of BERT.

![Fine-tuning](https://dev-to-uploads.s3.amazonaws.com/i/tbew6ep3cwsxg5djydwu.jpeg)

The pretraining phase takes significant computational power (BERT base: 4 days on 16 TPUs; BERT large: 4 days on 64 TPUs), so it is very useful to save the pre-trained models and then fine-tune them on a specific dataset. Unlike pretraining, fine-tuning does not require much computational power; it can be done in a couple of hours even on a single GPU. It is recommended to have at least 12 GB of VRAM in order to fit the batch size into memory. When fine-tuning for text classification we can choose from several approaches; see the figure below (Sun et al., 2019).

![Fine-tuning approaches](https://dev-to-uploads.s3.amazonaws.com/i/nh3m3owl561qch1nsp93.png)

## IMDB dataset

We will solve the text classification problem for the well-known <a href="https://ai.stanford.edu/~amaas/data/sentiment/" title="IMDB dataset">IMDB movie review dataset</a>. The dataset consists of 50k reviews, each with an assigned sentiment. Only highly polarizing reviews are considered, and no more than 30 reviews are included per movie. The following are two samples from the dataset:

| Review | Sentiment |
|--------|:---------:|
| One of the other reviewers has mentioned that after watching just 1 Oz episode you'll be hooked. They are right, as this is exactly what happened with me. The first thing that struck me abo... | positive |
| Petter Mattei's "Love in the Time of Money" is a visually stunning film to watch. Mr. Mattei offers us a vivid portrait about human relations. This is a movie that seems to be telling us what money, p... | negative |

A review can be only positive or negative, and only one label can be assigned to each review. This leads us to formulate the problem as **binary classification**. In addition, since we determine the sentiment of each review, we are solving a sub-task of text classification called **sentiment analysis**. Looking at the results achieved so far, we can see that <a href="https://github.com/zihangdai/xlnet">XLNet</a>, as well as <a href="https://github.com/google-research/bert">BERT</a>, are the transformer-based machine learning models that achieve the best results on the <a href="http://nlpprogress.com/english/sentiment_analysis.html">IMDB dataset</a>.

| Model | Accuracy |
|----------|:-------------:|
| XLNet (Yang et al., 2019) | 96.21 |
| BERT_large+ITPT (Sun et al., 2019) | 95.79 |
| BERT_base+ITPT (Sun et al., 2019) | 95.63 |
| ULMFiT (Howard and Ruder, 2018) | 95.4 |
| Block-sparse LSTM (Gray et al., 2017) | 94.99 |

Source: <a href="http://nlpprogress.com/english/sentiment_analysis.html">nlpprogress.com</a>

The other two, ULMFiT (Howard and Ruder, 2018) and Block-sparse LSTM (Gray et al., 2017), are based on LSTMs, not transformer language models. Such approaches have great results as well, but are slowly being replaced for some tasks by transformer language models. BERT and XLNet are consistently in the top positions also on other text classification benchmarks such as <a href="http://nlpprogress.com/english/text_classification.html">AG News</a>, <a href="http://nlpprogress.com/english/text_classification.html">Yelp</a> or <a href="http://nlpprogress.com/english/text_classification.html">DBpedia</a>.
In this article, we will focus on preparing a step-by-step framework for fine-tuning BERT for text classification (sentiment analysis). This framework and code can also be used for other transformer models with minor changes. We will use the smallest BERT model (bert-base-uncased) as an example of the fine-tuning process.

## Fine-tuning BERT with TensorFlow 2 and Keras API

First, the code can be viewed on [Google Colab](https://colab.research.google.com/drive/1934Mm2cwSSfT5bvi78-AExAl-hSfxCbq) as well as on [GitHub](https://github.com/atherosai/python-graphql-nlp-transformers/blob/master/notebooks/BERT%20fine-tuning%20in%20Tensorflow%202%20with%20Keras%20API/BERT_fine_tunning_in_TensorFlow_2_with_Keras_API.ipynb). Let's use the TensorFlow dataset API for loading the IMDB dataset:

```python
import tensorflow_datasets as tfds

(ds_train, ds_test), ds_info = tfds.load('imdb_reviews',
    split=(tfds.Split.TRAIN, tfds.Split.TEST),
    as_supervised=True,
    with_info=True)
print('info', ds_info)
```

The dataset info is as follows:

```python
tfds.core.DatasetInfo(
    name='imdb_reviews',
    version=1.0.0,
    description='Large Movie Review Dataset. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.',
    homepage='http://ai.stanford.edu/~amaas/data/sentiment/',
    features=FeaturesDict({
        'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=2),
        'text': Text(shape=(), dtype=tf.string),
    }),
    total_num_examples=100000,
    splits={
        'test': 25000,
        'train': 25000,
        'unsupervised': 50000,
    },
    supervised_keys=('text', 'label'),
    citation=InProceedings{maas-EtAl:2011:ACL-HLT2011,
      author    = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
      title     = {Learning Word Vectors for Sentiment Analysis},
      booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
      month     = {June},
      year      = {2011},
      address   = {Portland, Oregon, USA},
      publisher = {Association for Computational Linguistics},
      pages     = {142--150},
      url       = {http://www.aclweb.org/anthology/P11-1015}
    },
    redistribution_info=,
)
```

We can see that the train and test datasets are split 50:50 and the examples are in the form of **(text, label)**, which can be further validated:

```python
for review, label in tfds.as_numpy(ds_train.take(5)):
    print('review', review.decode()[0:50], label)
```

```bash
review This was an absolutely terrible movie. Don't be lu 0
review I have been known to fall asleep during films, but 0
review Mann photographs the Alberta Rocky Mountains in a  0
review This is the kind of film for a snowy Sunday after  1
review As others have mentioned, all the women that go nu 1
```

Positive sentiment is represented by **1** and negative sentiment by **0**. Now we need to apply the BERT tokenizer to all the examples, mapping tokens to WordPiece embeddings. As said, this can be done using the <a title="Encode plus Hugging Face" href="https://huggingface.co/transformers/model_doc/bert.html#berttokenizer">encode_plus</a> function.

```python
# map to the expected input of TFBertForSequenceClassification
def map_example_to_dict(input_ids, attention_masks, token_type_ids, label):
    return {
        "input_ids": input_ids,
        "token_type_ids": token_type_ids,
        "attention_mask": attention_masks,
    }, label

def encode_examples(ds, limit=-1):
    # prepare lists, so that we can build up the final TensorFlow dataset from slices
    input_ids_list = []
    token_type_ids_list = []
    attention_mask_list = []
    label_list = []
    if limit > 0:
        ds = ds.take(limit)
    for review, label in tfds.as_numpy(ds):
        bert_input = convert_example_to_feature(review.decode())
        input_ids_list.append(bert_input['input_ids'])
        token_type_ids_list.append(bert_input['token_type_ids'])
        attention_mask_list.append(bert_input['attention_mask'])
        label_list.append([label])
    return tf.data.Dataset.from_tensor_slices(
        (input_ids_list, attention_mask_list, token_type_ids_list, label_list)
    ).map(map_example_to_dict)
```

We can encode the dataset using the following functions:

```python
# train dataset
ds_train_encoded = encode_examples(ds_train).shuffle(10000).batch(batch_size)
# test dataset
ds_test_encoded = encode_examples(ds_test).batch(batch_size)
```

```python
from transformers import TFBertForSequenceClassification
import tensorflow as tf

# recommended learning rates for Adam: 5e-5, 3e-5, 2e-5
learning_rate = 2e-5

# we will do just 1 epoch for illustration, though multiple epochs might be
# better as long as we do not overfit the model
number_of_epochs = 1

# model initialization
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')

# Adam is the recommended optimizer for the classifier
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, epsilon=1e-08)

# we do not have one-hot vectors, so we can use sparse categorical cross entropy and accuracy
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
```

We have chosen a rather small **learning rate of 2e-5** and only **1 epoch**. BERT overfits quite quickly on this dataset, so if we wanted to run 2 or more epochs it would be useful to add some additional regularization layers, or to use, for example, the Adam optimizer with weight decay. Now we have everything needed to start fine-tuning.
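`convert_example_to_feature` is called inside `encode_examples` but its definition is not shown in this excerpt. Given the earlier `encode_plus` discussion, it presumably looks something like the sketch below; the exact signature is an assumption, and `tokenizer` is passed as a parameter here (in the original notebook it is likely a module-level variable):

```python
def convert_example_to_feature(review, tokenizer=None, max_length=512):
    """Tokenize a single review into the dict that encode_examples()
    reads from: input_ids, token_type_ids, attention_mask.

    `tokenizer` is assumed to be a BertTokenizer loaded earlier with
    BertTokenizer.from_pretrained('bert-base-uncased').
    """
    return tokenizer.encode_plus(
        review,
        add_special_tokens=True,     # add [CLS] and [SEP]
        max_length=max_length,       # truncate longer reviews to BERT's limit
        pad_to_max_length=True,      # pad shorter reviews with [PAD]
        return_attention_mask=True,  # mask out the padding for attention
    )
```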
We will use the Keras API **model.fit** method:

```python
bert_history = model.fit(ds_train_encoded, epochs=number_of_epochs, validation_data=ds_test_encoded)
```

We have achieved **over 93% accuracy** on our test dataset:

```bash
4167/4167 [==============================] - 4542s 1s/step - loss: 0.2456 - accuracy: 0.9024 - val_loss: 0.1892 - val_accuracy: 0.9326
```

That looks reasonable in comparison with the current state-of-the-art results. According to (Sun et al., 2019) we can achieve **up to 95.79 accuracy** with BERT large on this task. The only model with better accuracy than BERT large on this task is XLNet from Google AI Brain. XLNet can also easily be used with the transformers library with just minor changes to the code.

## Conclusion

We have developed an end-to-end process for using transformers on the text classification task. We achieved great performance, with the ability to improve further by using the XLNet or BERT large models. We can also improve accuracy with multi-task fine-tuning, hyperparameter tuning or additional regularization. The process can be adjusted to other NLP tasks with just minor changes to the code.

This article was originally published at <a href="https://atheros.ai/blog/text-classification-with-transformers-in-tensorflow-2">https://atheros.ai/blog/text-classification-with-transformers-in-tensorflow-2</a>

> Did you like this post? You can clone <a href="https://github.com/atherosai/python-graphql-nlp-transformers/tree/master/notebooks/BERT%20fine-tunning%20in%20Tensorflow%202%20with%20Keras%20API">the repository with the examples and project set-up</a>. Feel free to send any questions about the topic to david@atheros.ai and subscribe to get more knowledge about building AI-driven systems.
a7v8x
328,392
How to build an ecommerce site on the Jamstack with Snipcart and TakeShape
Follow a step-by-step guide to build your own ecommerce site on the Jamstack using Snipcart and TakeShape
0
2020-05-05T23:08:07
https://www.takeshape.io/articles/how-to-build-an-ecommerce-site-on-the-jamstack-with-snipcart-and-takeshape/
webdev, beginners, jamstack, ecommerce
---
title: How to build an ecommerce site on the Jamstack with Snipcart and TakeShape
published: true
description: Follow a step-by-step guide to build your own ecommerce site on the Jamstack using Snipcart and TakeShape
tags: webdev, beginners, jamstack, ecommerce
canonical_url: https://www.takeshape.io/articles/how-to-build-an-ecommerce-site-on-the-jamstack-with-snipcart-and-takeshape/
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/aeftb9cyr4teknro30p1.jpg
---

## Follow our step-by-step guide and create your own shop in under 30 minutes

Say you're using the Jamstack to make landing pages and marketing websites for your clients when a new client walks in the door and asks for an online store. There's no way a Jamstack site could deliver this, right? Wrong! In fact, when it comes to building an ecommerce site, the Jamstack can't be beat.

In the ecommerce business, conversion optimization is necessary to succeed, and one of the best ways to increase your conversion rate is to have a website that's blazing fast and scales along with your product. In a [study run on cart abandonment](http://blog.conversionconference.com/case-study-how-making-pages-2-seconds-faster-boosted-conversions-by-66/), Tammy Everts found that "to maximize conversions, every page of a transaction—from landing page through to order confirmation page—needs to be… less than [two] seconds."

Serving every page in under two seconds is a tall order for any database-driven site, but it's an easy feat for a static site served from a CDN! What's more, many ecommerce platforms have their own unique style of data management tied to their proprietary website builders, and those sites are much slower than what you'll get on the Jamstack. Instead, using a headless CMS that you can tailor to your own needs and deploying to a global CDN is the best way to get a site that's fast, stable, and converts well.
But how would you provide functionality like a shopping cart, credit card processing, and order handling? Enter [Snipcart](https://snipcart.com/), an ecommerce solution for the Jamstack! Snipcart is well-suited to Jamstack ecommerce because they don't care what kind of front-end you use or how you manage your inventory. When a user clicks the "Add to Cart" button, the data is passed to Snipcart's JavaScript SDK and the rest is handled by them! They provide a shopping cart and checkout flow; a dashboard for handling orders, abandoned carts, and customer data; and an API for more powerful applications.

## Using TakeShape and Snipcart together

In this demo, we'll take a look at using TakeShape's CMS and static site generator to build an ecommerce site, using Snipcart to handle the cart and checkout experience. And we'll do it all in 30 minutes or less!

*Bryan [livestreamed](https://www.youtube.com/watch?v=lIFYpWJZzS0) his experience connecting TakeShape and Snipcart. See how he figured out how it all fits together!*

First, we'll create our ecommerce site by copying TakeShape's "Shape Shop" sample project and installing Snipcart by embedding a script on the page. Then, we'll modify the "Add to Cart" button to work with Snipcart and update our products in TakeShape with some additional fields. Finally, we'll test our new site to make sure it's all working.

## Create the TakeShape sample project

If this is your first time using TakeShape, you'll need to [sign up for a free account](https://app.takeshape.io/signup). Then, create a new project using the Shape Shop template and clone the starter project from GitHub:

```bash
git clone https://github.com/takeshape/takeshape-samples.git takeshape-samples && cd takeshape-samples/shape-shop
```

More detailed instructions, including how to deploy the project to Netlify, are available in [this article about getting started with the store template](https://www.takeshape.io/articles/getting-started-with-the-shop-template/).
## Install Snipcart into your project Once you've got the sample store up and running, it's time to set up a Snipcart account. After you [sign up for a free Snipcart account](https://app.snipcart.com/register), you’ll need to include a CSS file, script tag and additional div on any page that can access the cart following [their basic installation instructions](https://docs.snipcart.com/v3/setup/installation). ![After creating your Snipcart account, you'll be taken to your store's dashboard.](https://dev-to-uploads.s3.amazonaws.com/i/6jel66p3p7ijx7vya7uo.png) If you want to include a link to the shopping cart in your navigation, this will likely end up being most pages on your site. So we’ll add Snipcart’s code to the default layout of the TakeShape project. In the Shape Shop project, this is the file `layouts/default.html`. Snipcart’s CSS file will go in the `<head>` element after our other CSS links: ```html <link rel="stylesheet" href="/stylesheets/base.css"/> <link rel="stylesheet" href="/stylesheets/feature.css"/> <link rel="stylesheet" href="/stylesheets/footer.css"/> <link rel="stylesheet" href="/stylesheets/header.css"/> <link rel="stylesheet" href="/stylesheets/hero.css"/> <link rel="stylesheet" href="/stylesheets/pagination.css"/> <link rel="stylesheet" href="/stylesheets/products.css"/> <link rel="stylesheet" href="/stylesheets/thumb.css"/> <!-- Snipcart CSS file goes here --> <link rel="stylesheet" href="https://cdn.snipcart.com/themes/v3.0.11/default/snipcart.css" /> ``` Then, we’ll add Snipcart’s script and additional `div` right before the `</body>` tag: ```html <script src="/javascripts/main.js"></script> <!-- Snipcart div and JS here --> <div hidden id="snipcart" data-api-key="{{YOUR-SNIPCART-API-KEY}}"></div> <script src="https://cdn.snipcart.com/themes/v3.0.11/default/snipcart.js"></script> </body> ``` We’ll need to replace `{{YOUR-SNIPCART-API-KEY}}` with an API key from Snipcart, which can be found under [Account → API 
Keys](https://app.snipcart.com/dashboard/account/credentials). We’ll start in “Test” mode, which will allow us to use fake credit card numbers to check the purchase flow later.

![Snipcart's API Keys screen](https://dev-to-uploads.s3.amazonaws.com/i/867qq9g5rzw69rl66tu4.png)

Now, we have access to Snipcart's HTML API on any page.

## Use Snipcart’s “Add to Cart” buttons

Next, let's modify our `pages/product/individual.html` template to add Snipcart’s “Add to Cart” button. Snipcart expects to find cart buttons on the page as HTML `<button>` tags instead of `<a>` tags, so that's our first change. We'll replace the anchor that Shape Shop comes with.

```html
<button class="button snipcart-add-item">
  Add to cart
</button>
```

On this button, we have two classes. The `button` class is for styling and is built into the default CSS for Shape Shop. The `snipcart-add-item` class is key to telling Snipcart's JavaScript where the product information is and when to add items to the cart.

From here, we'll need to add additional product details to the button via [data attributes](https://developer.mozilla.org/en-US/docs/Learn/HTML/Howto/Use_data_attributes) with various variables from our TakeShape data. The Shape Shop template comes with a "Product" Content Type that has most of the fields we'll need to make this work. The data is fetched from TakeShape in the `data/product.graphql` file and provided to the template as the “product” variable.
Here's what our buttons will need to look like with that data added: ```html <button class="button snipcart-add-item" data-item-id="{{ product.name }}" data-item-price="{{ product.price }}" data-item-url="{{ product | route('product') }}" data-item-description="{{ product.description }}" data-item-image="{{ product.image.path | image }}" data-item-name="{{ product.name }}"> Add to cart </button> ``` Most of that is finding bits of data about the product and putting it into the proper data attribute that Snipcart is looking for. But if you've done ecommerce before, you might notice a couple potential issues with this code. First, `data-item-id` is a unique identifier for the product, which probably shouldn't be keyed off just a name string. To fix this, I recommend having unique SKU numbers. Most shops will have this built into their inventory management system. We'll need to add this field to our CMS. The other potential problem is the price will always be the same. In our data, we already have a "sale price." We should make sure we're always passing Snipcart the correct price, whether an item is on sale or not. ## Add a SKU field to your Product in TakeShape First, we’ll need to go into the project in TakeShape and edit the Product content type to add a new field with a SKU. You could call this "SKU Number" or "ID," whatever your client expects. Be sure to note the name of the field. ![Adding a SKU field to TakeShape is as simple as dragging the "Single Line" field into your model and then configuring it.](https://dev-to-uploads.s3.amazonaws.com/i/5mro3e9j4jgjqlluttyt.png) Save your change to the content type and then add that new field to the GraphQL query in `data/product.graphql`: ```graphql query { getProductList(sort: [{field: "_enabledAt", order: "desc"}]) { total items { _contentTypeName _enabledAt name skuNumber ... } } } ``` After that, update the data in the “Add to Cart” button in `/pages/product/individual.html`. 
We can add this to our template with a conditional, in case a product doesn't have a SKU: ```twig data-item-id="{{ product.skuNumber if product.skuNumber else product.name }}" ``` We'll use that same conditional syntax to add a sale price, as well, if one is available: ```twig data-item-price="{{ product.salePrice if product.salePrice else product.price }}" ``` With these changes, when a user clicks the "Add to Cart" button, they'll be presented with the cart screen from Snipcart with the item they just added. ## Test your site While working locally, we’ll only be able to test the checkout flow until we reach the “Place Order” button. After that, Snipcart won’t be able to crawl the local site to check the product information. This is a [security feature](https://docs.snipcart.com/v3/security) to make sure someone doesn’t modify prices in DevTools and buy things for cheap on your site! ![If you test your checkout before deploying your site, you'll encounter this Product Crawling Error.](https://dev-to-uploads.s3.amazonaws.com/i/1gyciourru01zlhwud2w.png) To get your site live, use [TakeShape’s 1-click Netlify integration](https://www.takeshape.io/docs/configuring-netlify/) to build and deploy it. Snipcart also has a guide for [setting up ngrok to make local URLs accessible to their service](https://snipcart.com/blog/develop-a-snipcart-powered-website-locally-using-ngrok). If you’re working on a live site already, you can [set up a branch-based URL in Netlify](https://docs.netlify.com/site-deploys/overview/#branch-deploy-controls) to push your new branch to and view there. Back in your [Snipcart account settings](https://app.snipcart.com/dashboard/account/), you'll need to add the domain name for your live shop. In the "Store Configuration" section, navigate to ["Domains & URLs"](https://app.snipcart.com/dashboard/account/domains) and add in the appropriate information there. 
Once you publish your site to the live URL, you’ll see that Snipcart adds the products it finds at the domain—based on the data set in the “Add to Cart” button—to the “Products” section of their CMS. ![Once your shop is live at a publicly accessible URL, Snipcart will populate with products it finds on the site.](https://dev-to-uploads.s3.amazonaws.com/i/llyb4lnwjmxkj98b7mtw.png) Finally, when completing the checkout flow (live or locally) in test mode, you can provide these fake credit card credentials: - Credit Card Number: `4242 4242 4242 4242` - Security Code: `123` - Expiration: any future month/year combination. `12/25`, for instance. - ZIP code: Any ZIP code If everything works as expected, you should have completed a test order to your new Jamstack ecommerce site! Congratulations! ## Where to go from here This article has just scratched the surface of what Snipcart and TakeShape can do together. You can create custom product options, manage store emails, and even add cart information into your header! Since you control the data, the site, and everything else, you have the control.
brob
328,623
Rust Overload Add operator
How and why overload Add operator
0
2020-05-06T07:30:52
https://dev.to/jcaromiq/rust-overload-add-operator-3a9o
rust, beginners, programming
---
title: "Rust Overload Add operator"
description: "How and why overload Add operator"
tags: #rust, #beginners, #programming
published: true
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/z49kqwciar5vqglaf3ir.jpg
---

Can you imagine being able to perform arithmetic operations with your own types in Java? Having two Money instances and adding them together? Well, in Rust it is possible! Let's see a simple example of how to implement it.

<!--more-->

For our example, we want to be able to add amounts of the same currency. For that we will create the ValueObject Money:

```rust
#[derive(Debug, PartialEq)]
enum Currency {DOLLAR, EURO}

#[derive(Debug, PartialEq)]
struct Money {
    currency: Currency,
    amount: u8,
}
```

Now what we would like is to be able to add Money values without having to add the internal amounts and create a new Money by hand. In Rust it is as simple as implementing the `std::ops::Add` trait for our struct:

```rust
impl std::ops::Add for Money {
    type Output = Self;

    fn add(self, other: Self) -> Self {
        Money {
            currency: self.currency,
            amount: self.amount + other.amount,
        }
    }
}
```

This way we can already add Money values to each other:

```rust
#[test]
fn should_add_money_with_same_currency() {
    let ten_dollars = Money {
        currency: Currency::DOLLAR,
        amount: 10,
    };
    let five_dollars = Money {
        currency: Currency::DOLLAR,
        amount: 5,
    };
    let fifteen = Money {
        currency: Currency::DOLLAR,
        amount: 15,
    };
    assert_eq!(ten_dollars + five_dollars, fifteen);
}
```

But of course... what if we are adding dollars to euros? With this implementation the sum would be incorrect, so in our case what we want is for the result of the sum to be a Result type.
For that case, we are going to change our implementation of Add so that the output becomes a `Result<Money, E>` instead of a Money:

```rust
impl std::ops::Add for Money {
    type Output = Result<Self, &'static str>;

    fn add(self, money: Self) -> Self::Output {
        if money.currency != self.currency {
            return Err("Can not operate with different currencies");
        }
        Ok(Money {
            currency: self.currency,
            amount: self.amount + money.amount,
        })
    }
}

#[test]
fn should_add_money_with_same_currency() {
    let ten_dollars = Money {
        currency: Currency::DOLLAR,
        amount: 10,
    };
    let five_dollars = Money {
        currency: Currency::DOLLAR,
        amount: 5,
    };
    let fifteen = ten_dollars + five_dollars;
    assert!(fifteen.is_ok());
    assert_eq!(fifteen.ok().unwrap().amount, 15);
}

#[test]
fn should_not_allow_add_money_with_different_currency() {
    let ten_dollars = Money {
        currency: Currency::DOLLAR,
        amount: 10,
    };
    let five_euros = Money {
        currency: Currency::EURO,
        amount: 5,
    };
    let fifteen = ten_dollars + five_euros;
    assert!(fifteen.is_err());
}
```

The Add operator is not the only operator available for overloading in Rust; the complete list can be seen in the [documentation](https://doc.rust-lang.org/std/ops/index.html#traits)

The complete code example can be downloaded at [github](https://gist.github.com/jcaromiq/aa4f96856354bb0760dd5b28dbd48ca1#file-add_trait-rs) or run on [Rust Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=abb53e17aa6807b0af4f3b8411ed4bed)

_Originally published at [blog.joaquin-caro.es](https://blog.joaquin-caro.es/post/sobrecarga_operador_rust_add/)_
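As an additional sketch (not part of the original post), the same pattern applies to other traits in `std::ops`. Here is a hypothetical `Mul<u8>` overload that scales a `Money` by a plain number, reusing the article's types:

```rust
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Currency {DOLLAR, EURO}

#[derive(Debug, PartialEq)]
struct Money {
    currency: Currency,
    amount: u8,
}

// Overloading `*` via std::ops::Mul<u8> lets us write `money * 3`.
impl std::ops::Mul<u8> for Money {
    type Output = Self;

    fn mul(self, factor: u8) -> Self {
        Money {
            currency: self.currency,
            amount: self.amount * factor,
        }
    }
}

fn main() {
    let five_dollars = Money { currency: Currency::DOLLAR, amount: 5 };
    let fifteen = five_dollars * 3;
    assert_eq!(fifteen, Money { currency: Currency::DOLLAR, amount: 15 });
    println!("{:?}", fifteen);
}
```

Because the right-hand side type is a parameter of the trait (`Mul<u8>`), the operands do not have to be the same type.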
jcaromiq
328,664
Magento 2 Upgrade or Migrate? What to do Next? What Expert Says
Magento 2 is one of the leading CMS platform businesses use to build their eCommerce website. Since i...
0
2020-05-06T08:41:17
https://dev.to/mconnectmedia/magento-2-upgrade-or-migrate-what-to-do-next-what-expert-says-20h1
Magento 2 is one of the leading CMS platforms businesses use to build their eCommerce websites. Since its release, it has taken the world of eCommerce by surprise. Many eCommerce leaders adopted Magento 2, foreseeing its advantages and capabilities in handling eCommerce business.

Many Magento 1 users migrated to Magento 2 due to the powerful features it has over others. Still, many Magento 1 users are confused about upgrading or migrating to Magento 2.

Well, if we look at the two terms, upgrade and migrate, they have different meanings. Upgrading means updating or moving to the latest stable Magento 2 version from your current Magento 2 version, while migrating means updating or moving from Magento 1 to the Magento 2 eCommerce platform.

If you have already migrated from Magento 1 to Magento 2, then you may have to upgrade your Magento 2 platform to stay on top of all the issues and security concerns the previous version had. Magento releases upgrades for its flagship platform quarterly, which improve performance, security, and other things by adding new features and functions. So, upgrade your Magento 2 right away.

[Magento 2 Upgrade or Migrate? What to do Next? What Expert Says](https://www.mconnectmedia.com/blog/magento-2-upgrade-or-migrate-what-to-do-next/) on [Mconnect Media](https://www.mconnectmedia.com/).
mconnectmedia
328,676
split .gitconfig for domains
how to use global .gitconfig for different domains
0
2020-05-06T09:09:02
https://dev.to/irlndts/split-gitignore-for-domains-m8c
git, go
---
title: split .gitconfig for domains
published: true
description: how to use global .gitconfig for different domains
tags: git, go
---

It's possible to use `--local` and `--global` .gitconfigs. Sometimes you need something in between, i.e. your workspace uses GitLab on a private domain and you use GitHub for your own purposes. Since [2.13.0](https://github.com/blog/2360-git-2-13-has-been-released) it is possible to split the logic for different domains.

You need a file for your GitHub (.gitconfig-github):

```
[user]
    name = username
    email = email@example.com
```

Another file for your GitLab (.gitconfig-gitlab):

```
[url "git@gitlab.yourdomain.com:"]
    insteadOf = https://gitlab.yourdomain.com/
[user]
    name = John Johnson
    email = johnjohnson@yourdomain.com
```

Finally, split the configs via .gitconfig:

```
[core]
    excludesfile = /Users/username/.gitignore
[includeIf "gitdir:$GOPATH/src/github.com/"]
    path = /Users/username/.gitconfig-github
[includeIf "gitdir:$GOPATH/src/gitlab.yourdomain.com/"]
    path = /Users/username/.gitconfig-gitlab
```
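To check which identity actually applies, run `git config user.email` inside a repository under each directory. The self-contained sketch below verifies this with a throwaway `HOME` so it never touches your real config; all paths and the email address are made-up examples, and it uses `~`-relative paths (which git expands in `includeIf` patterns) rather than `$GOPATH`:

```shell
# Hypothetical check: requires git >= 2.13
set -e
export HOME="$(mktemp -d)"
mkdir -p "$HOME/src/github.com/demo"

# Identity used only for repositories under ~/src/github.com/
cat > "$HOME/.gitconfig-github" <<'EOF'
[user]
    name = username
    email = email@example.com
EOF

# Global config that conditionally includes the file above
cat > "$HOME/.gitconfig" <<'EOF'
[includeIf "gitdir:~/src/github.com/"]
    path = ~/.gitconfig-github
EOF

cd "$HOME/src/github.com/demo"
git init -q
email="$(git config user.email)"
echo "resolved email: $email"
```

A repository created outside `~/src/github.com/` would resolve no `user.email` at all, which is a handy way to catch commits made with the wrong identity.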
irlndts
328,686
Last 9 instagram photos on your wordpress blog in 3 minutes
Show last 9 photos from your instagram profile in wordpress
0
2020-05-06T09:25:45
https://dev.to/ptkdev/last-9-instagram-photos-on-your-wordpress-blog-in-3-minutes-26bf
wordpress, webcomponents, php, instagram
---
title: Last 9 instagram photos on your wordpress blog in 3 minutes
published: true
description: Show last 9 photos from your instagram profile in wordpress
tags: wordpress, webcomponents, php, instagram
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/ofnx0ufo1jvzxsr8m1b9.png
---

Hi! The other day I told you how I created [my first webcomponent](https://dev.to/ptkdev/instagram-widget-my-first-webcomponent-51ja). Today I wanted to tell you that I have implemented a WordPress plugin that loads the webcomponent into your blog and allows you to insert the [instagram widget](https://github.com/ptkdev-components/webcomponent-instagram-widget) wherever you want: HTML box, post or in the theme.

## 👔 Screenshot

Wordpress default theme + instagram widget:

![WebComponent: InstagramWidget](https://dev-to-uploads.s3.amazonaws.com/i/1ywvulisyhyhvejafzhu.png)

## 🚀 Installation (Wordpress)

1. Download the [wordpress-plugin](https://github.com/ptkdev-components/webcomponent-instagram-widget/raw/master/dist/wordpress/instagram-widget-wordpress-plugin.zip) and install it.
1. Add the code to your HTML widget, example: `Appearance` --> `Widget` --> insert `HTML Widget` and paste the HTML code (replacing `@ptkdev` with your instagram username):

```html
<instagram-widget username="@ptkdev" grid="3x3"></instagram-widget>
```

You can insert this HTML code in posts, widgets, HTML boxes or your theme, wherever you want the Instagram photos box to appear.
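As an illustration, the tag also accepts extra attributes (documented in the options table) to customize the layout; the values here are just examples:

```html
<instagram-widget
	username="@ptkdev"
	grid="3x3"
	items-limit="9"
	border-corners="15"
	force-square="yes">
</instagram-widget>
```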
Resources: [[DEMO](https://codepen.io/ptkdev/pen/WNQOYqy)] [[NPM](https://www.npmjs.com/package/@ptkdev/webcomponent-instagram-widget)] [[GITHUB](https://github.com/ptkdev-components/webcomponent-instagram-widget)] ## 🧰 Options / Attributes | Parameter | Description | Values | Default value | Available since | | --- | --- | --- | --- | --- | | username | Set your instagram username | Your instagram username with or without @ | `@ptkdev` | v1.0.0 | | items-limit | Set the max number of pictures | number: from `0` to `12` | `9` | v1.1.0 | | grid | Set grid aspect ratio | `1x1`, `2x2`, `3x3`, etc... or `responsive` | `responsive` | v1.1.0 | | image-width | Set width of images (NOTE: grid different than `responsive` overwrite this value) | length units: `100%`, `100px`, `100pt` | `100%` | v1.1.0 | | image-height | Set height of images | length units: `100%`, `100px`, `100` | `100%` | v1.1.0 | | border-spacing | Set spacing around images | length units: `5%`, `5px`, `5pt` | `2px` | v2.1.0 | | border-corners | Set border radius of corners: `0`: square / `15`: rounded / `100`: circle | number: from `0` to `100` | `5` | v2.1.0 | | force-square | Force square aspect ratio if you post photos with different size on your instagram | `yes` / `no` | `yes` | v2.4.0 | | cache | Enable/disable cache | `enabled` / `disabled` | `enabled` | v2.1.0 | ## 💫 License * Code and Contributions have **MIT License** * Images and logos have **CC BY-NC 4.0 License** ([Freepik](https://it.freepik.com/) Premium License) * Documentations and Translations have **CC BY 4.0 License** # ❤️ Thanks! Leave a feedback!
ptkdev
328,758
Escaping Improperly Sandboxed Iframes
Thanks to iframe's sandbox attribute, it is possible to specify restrictions applied on content displ...
0
2020-05-06T11:39:17
https://danieldusek.com/escaping-improperly-sandboxed-iframes.html
security, iframe, sandbox, escaping
Thanks to the iframe's sandbox attribute, it is possible to specify restrictions applied on content displayed inside the iframe. The documentation **strongly discourages** using both the `allow-scripts` and `allow-same-origin` values due to the security risks it may introduce. In this blogpost, I am going to explain and demonstrate why.

In [Mozilla's developer documentation](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe) on `<iframe>`, you can find the following remark related to the `allow-scripts` and `allow-same-origin` values of the `sandbox` attribute:

> When the embedded document has the same origin as the embedding page, it is **strongly discouraged** to use both `allow-scripts` and `allow-same-origin`, as that lets the embedded document remove the sandbox attribute — making it no more secure than not using the sandbox attribute at all.

When I first read this note, I thought that escaping would be fairly straightforward:

- Accessing `window.parent` from the page inside the iframe,
- getting the iframe element reference via [Document Object Model (DOM)](https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model/Introduction) functions, such as `.getElementById()` or `.getElementsByTagName()`,
- removing the iframe's `sandbox` attribute via `.removeAttribute("sandbox")`,
- calling the `alert()` function to create a modal window despite `allow-modals` not being set for the iframe, hence escaping the iframe's restrictions.

Unfortunately, this rather naïve approach does not work. From my experiments, the reason is that browsers apply restrictions on the `<iframe>`'s content **when loading the page**. When I subsequently change or remove the sandbox attribute and then call "illegal" functions, the browser will still block them. On the following lines I will demonstrate the simplest solution I have been able to come up with.
Before I get to it, let me explain the terminology I am using - a `child page` refers to a page that is being displayed in the `iframe`, while a `parent page` refers to a page that contains the `<iframe>` element. Let's have a look at the parent page, in `index.html`:

```html
<!DOCTYPE html>
<html>
<head>
	<meta charset="utf-8">
	<title>PoC - Discouraged combination of sandbox attribute values (parent page)</title>
</head>
<body>
	<iframe src="kid.htm" sandbox="allow-same-origin allow-scripts" id="escapeMe"></iframe>
</body>
</html>
```

This is the page that we need to modify from the child page (`kid.htm`). To escape the iframe as I outlined above, I will be popping an `alert()` from the child page inside the parent. One of the simplest solutions I have been able to come up with is to simply create a new iframe in place of the old one, and let the child page know it has been loaded in unrestricted mode. I will walk you through the process step by step.

First, I will obtain a reference to the parent window and verify the `iframe` in question still exists. If the original iframe **is missing**, it means that the child page is loaded from another `iframe` - the one we are going to create in step 2 as a replacement:

```javascript
let parent = window.parent;
if (parent.document.getElementById("escapeMe") != null) {
    // 1. Create replacement iframe
    // 2. Delete the old one
} else {
    // When the original iframe no longer exists, we can assume
    // it is possible to execute our code without restrictions
    alert("This should not have happened.");
}
```

In the second step, I expand the body of the condition to create an unrestricted iframe and to remove the original, restricted one:

```javascript
let parent = window.parent;
if (parent.document.getElementById("escapeMe") != null) {
    // 1. Create replacement iframe
    let replacement = parent.document.createElement("iframe");
    replacement.setAttribute("src", "kid.htm");
    replacement.setAttribute("id", "escapedAlready");
    parent.document.body.append(replacement);

    // 2. Delete the old one (removeChild is the standard DOM call)
    let original = parent.document.getElementById("escapeMe");
    original.parentNode.removeChild(original);
} else {
    // When the original iframe no longer exists, we can assume
    // it is possible to execute our code without restrictions
    alert("This should not have happened.");
}
```

Now, when the parent page loads the child page into the `iframe` with `allow-scripts` and `allow-same-origin`, the child page manages to escape the original iframe's restrictions and execute its code.

Both of the files I used in this demonstration are [available on my Github](https://github.com/dusekdan/RandomSecurity/tree/master/iframeSandboxDiscouragedCombination). You can also try it for yourself on my [Github Pages](https://dusekdan.github.io/RandomSecurity/iframeSandboxDiscouragedCombination/). Can you think of a simpler way to escape? Please let me know!
dusekdan
328,775
McDonald's Card - TailwindCSS in 10 mins
Hi, Watch me coding this with just TailwindCSS under 10 minutes. It's a very simple approach to code...
0
2020-05-06T12:28:14
https://dev.to/justaashir/mcdonald-s-card-tailwindcss-in-10-mins-po8
tailwindcss, css, beginners, webdev
Hi, watch me code this with just TailwindCSS in under 10 minutes. It's a very simple approach to coding an isometric card.

See this codepen:

{% codepen https://codepen.io/justaashir/pen/oNjpmjj %}
justaashir
808,297
🌟 Be the person you needed when you started in the IT world 🌟
Let's talk about mentors ✨ I think the figure of a mentor in our lives and when we start our...
0
2021-08-30T19:30:30
https://dev.to/antoomartini/be-the-person-you-needed-when-you-started-in-the-it-world-4450
motivation, mentalhealth, beginners, career
Let's talk about mentors ✨

I think the figure of a mentor, in our lives and when we start our careers in the IT world, is very important. ⭐

> “A mentor is someone who sees more talent and ability within you, than you see in yourself, and helps bring it out of you.” - Bob Proctor

✨ In my first job, where I was a functional analyst, I didn't have anyone by my side to be my mentor, a guide or someone to teach me. Honestly, **I felt lost**, **scared** and with a lot of uncertainties that many times I didn't resolve because I didn't have anyone to talk to. I learned a lot but **I felt alone** and **full of fears**. ***I didn't feel confident in myself*** 👾

Until then, I didn't know what it was to have a mentor or what one could be. I didn't know of their existence. Until one day I changed my job, my place and also my tasks. I started my way as a developer. And on the first day I was assigned someone to accompany me during my process as a trainee.

I don't think a mentor is a teacher or a friend. It is that person who may even work with you in the future. A colleague. A team worker, maybe.

### “A mentor is someone who allows you to see the hope inside yourself.” — Oprah Winfrey

🌟 **Finding a good mentor early in the learning process can help build confidence.** In ourselves, but also in our professional future.

🌟 A mentor is not only someone who accompanies us on our professional or learning path. In addition to helping us avoid getting stuck technically and helping us to go step by step following the procedures, a mentor is also someone who is interested in helping beginners move forward.

🌟 Some of us, worried about our future, ask our mentors how much we are going to earn. If we have possibilities to travel and even live abroad. With our insecurities, we also worry about whether we are going to be able to keep up with the rest of the people working on the same technologies.

🌟 I was lucky to meet someone very good and brilliant, and I dedicate this post to him.
Hopefully everyone can find someone to be a guide along this long road.

> ***Mentoring builds confidence, both individually and as a group. It is a light in the darkness of the road, especially in unbalanced and hostile environments such as the technology sector.***

🌟 I wish we could all be mentors and help those who are starting out in this world.

The word mentor always reminds me of this phrase that accompanies me every day:

> ### 🌟 Be the person you needed when you started in the IT world

Tell me, would you like to be a mentor?
antoomartini
808,304
What simplicity means in OO design principles ?
Simplicity is one of the Object Oriented Design Principles that helps developers keep the complexity...
14,380
2021-08-30T19:46:15
https://www.cloudnativemaster.com/post/what-simplicity-means-in-oo-design-pattern-how-to-apply-it-in-code
oop, java, design
Simplicity is one of the Object Oriented Design Principles that helps developers keep the complexity low in an object oriented program.

#### Do not code for the future

It says that a class and its methods should be as simple as possible; the code should do exactly what it should do, nothing more. We should refrain from anticipating functionality for the future and implementing it for completeness. It's better to code what is known as of now, and to code in a way that can be easily extended when new functionality is needed.

#### Do not provide many arguments

Sometimes programmers add more arguments to methods to make them more generic so that they can handle a lot of cases. But in reality, parts of the code may never be executed, because not all the possible scenarios the programmer thought of actually happen. Just write what is known to be happening and don't make your method a generic one.

#### Detect reusable logic and refactor

If you anticipate you are writing logic, part of which can be used later in some other cases, break that method into parts and write the reusable logic in separate method(s). Call the reusable method from another method. An easy way to detect this: when during development you need to refactor a certain part once, that means it might have to be modified later too. So it is better to put in the extra effort to refactor it so that a later refactoring is not needed; otherwise just leave it as it is.

You can use design patterns to refactor the code and write a more general solution, but design patterns do come with more code and complexity, so apply them only if the situation demands it.

To read the main blog post with a concrete example, please refer to this [link](https://www.cloudnativemaster.com/post/what-simplicity-means-in-oo-design-pattern-how-to-apply-it-in-code)

To read all my posts related to Object Oriented Design, please refer to this [link](https://www.cloudnativemaster.com/object-oriented-design)
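As a tiny sketch of the "detect reusable logic and refactor" idea (the class and method names are hypothetical, not taken from the linked post), the shared formatting logic is pulled into one helper that both callers reuse:

```java
// Hypothetical example: both report methods need the same header format,
// so that logic lives in one reusable method instead of being duplicated.
public class ReportService {

    // Reusable logic extracted into its own method
    static String header(String title) {
        return "== " + title + " ==";
    }

    static String invoice(String customer) {
        return header("Invoice") + "\nCustomer: " + customer;
    }

    static String receipt(String customer) {
        return header("Receipt") + "\nCustomer: " + customer;
    }

    public static void main(String[] args) {
        System.out.println(invoice("Ada"));
        System.out.println(receipt("Ada"));
    }
}
```

If the header format ever changes, only `header` needs to be touched; the callers stay as they are.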
dibyojyoti
808,311
Remote Onboarding
I'm a big believer in People as the most fundamental part of any startup. In the current state of...
0
2021-08-30T20:08:32
https://dev.to/jorgetovar/short-remote-onboarding-4n28
remote, homebrew, sdkman, people
I'm a big believer in People as the **most fundamental part of any startup**. In the current state of remote work, the onboarding process is challenging and essential to the success of new hires. These are some of my thoughts on making this process smooth.

- Early access to communication tools
- **Onboarding buddy** (someone to ask questions)
- At least two full weeks for onboarding (Culture, Vision, Politics, Security, Courses, ...)
- **Job shadowing** (learn from people with a similar or the same role)
- Asynchronous learning of the Domain (it's a plus to have this documented in Notion or Confluence)
- Template of **user access privileges** and tools to ensure the work is carried out properly (matrix of user privileges based on role)
- In the third week, you should start contributing to your team's work (issues, low-risk tasks)
- Keep learning about the Culture and the Domain
- Make the onboarding document public within the company and encourage new hires to **contribute and make the onboarding process better**
- **Set expectations** as an individual contributor and as a manager (management style, 1:1 cadence, ...)
- Deep dive into the code, the principal abstractions and the hot spots of the new company
- **KPI awareness**: as creative workers it's valuable to know how to do a good job (it keeps people focused and motivated)
- Document the onboarding process
- Take time to learn, invest time in learning, don't be afraid to ask

**Note:** In the case of Mac users, we make the process simple by sharing our [Homebrew](https://brew.sh) formulas, plus the use of [Sdkman](https://sdkman.io/usage) to have different Java versions and [Nvm](https://github.com/nvm-sh/nvm) for different Node versions.
```shell
# List installed top-level formulas and save them
brew leaves > list.txt
cat list.txt

# Show a description for each formula
xargs brew desc < list.txt

# Install a specific Java version with Sdkman
sdk install java 11.0.11.hs-adpt
```

Brew leaves output:

```shell
awscli
clojure
git
go
gh
gradle
groovy
htop
httpie
hub
jmeter
jq
kafka
kotlin
redis
leiningen
maven
terraform
tree
vert.x
wget
make
```

- Install zsh with curl: https://ohmyz.sh/#install, https://github.com/ohmyzsh/ohmyzsh/wiki/Installing-ZSH, https://github.com/zsh-users/zsh-autosuggestions/blob/master/INSTALL.md
- Run `gh auth login` to authenticate with Github
- JetBrains Toolbox: https://www.jetbrains.com/toolbox-app/ - plugins: copilot, key promoter, font16, aws cloudformation, aws toolkit, sonarLint
- Regarding git, don't forget to set up your username, mail, and editor

Add sdk to zsh:

```shell
echo 'source "$HOME/.sdkman/bin/sdkman-init.sh"' >> ~/.zshrc
```

```shell
git config --global core.editor "nano"
```

Finally, you can install all the software that you wanted, and maybe ignore some formulas:

```shell
xargs brew install < list.txt | grep -v ".*full"
```

A great example of how good onboarding should be documented: [Onboarding Gitlab](https://about.gitlab.com/handbook/people-group/general-onboarding/)

### Useful tools

- Visual Studio Code
- Github Desktop
- Postman
- Slack
- Teams
- Python https://www.python.org/downloads/

Tools like Postman allow you to save your information with your Google account. Also remember to import your bookmarks. Finally, I like to set up video and audio to pause on Zoom calls.

### Docker

https://www.docker.com/
jorgetovar
808,316
My Favorite VS Code Extensions
Hey Everyone! I wanted to show you all some of the vscode extensions I use and found really...
0
2021-08-31T05:52:02
https://dev.to/chinmaymhatre/my-favorite-vs-code-extensions-3j3
productivity, programming, vscode, beginners
Hey Everyone! I wanted to show you all some of the VS Code extensions I use and have found really useful.

![gravity-falls](https://user-images.githubusercontent.com/51131670/131395794-348fba52-ea26-4600-9aba-36e0ca0e8730.gif)

---

## indent-rainbow by oderwat

As the name suggests, this extension adds colors to the indentations in your code, which can be useful for formatting your code. It can especially help you when writing Python. This is a really useful and interesting extension.

VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=oderwat.indent-rainbow

![image](https://user-images.githubusercontent.com/51131670/131396543-774b8d2a-3a7a-4e1e-8e99-b78376293063.png)

---

## polacode by P & P

This is an amazing extension if you want to post snippets of your code on your blog or Twitter. Polacode makes it really easy to get pretty snippets of your code. All you have to do is run polacode from your VS Code command palette, paste your code and click the capture button to download the image.

VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=pnp.polacode

![image](https://user-images.githubusercontent.com/51131670/131397300-305ed717-840e-4e6e-b7c3-2fccfea10e6b.png)

---

## REST Client by Huachao Mao

This is a really convenient extension for API developers. You can use this extension as an alternative to Postman. You can make a file with the extension .http and test out your API in VS Code itself. It supports GET, POST, PUT and DELETE requests. One benefit is that the .http file can be pushed to your GitHub repo so that other team members can contribute to it.
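For illustration, a hypothetical `api.http` file might look like this (the URL and payload are made up; `###` separates requests):

```http
### Get all users
GET https://api.example.com/users

### Create a user
POST https://api.example.com/users
Content-Type: application/json

{
  "name": "Ada Lovelace"
}
```

The extension renders a "Send Request" link above each request right in the editor.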
VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=humao.rest-client ![image](https://user-images.githubusercontent.com/51131670/131398109-ce535c46-6352-4806-9895-b13c9e7cf682.png) --- ## Here are some other vscode extensions you might find useful ### Bracket Pair Colorizer 2 VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer-2 ### Live Server VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer ### Auto Rename Tag VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-rename-tag ### Better Comments VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=aaron-bond.better-comments ### ES7 React/Redux/GraphQL/React-Native snippets VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=dsznajder.es7-react-js-snippets ### Material Icon Theme VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=PKief.material-icon-theme ### Live Share VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=MS-vsliveshare.vsliveshare --- Would love to know What other extension do you all use in vscode ? Let me know in the comments!!
chinmaymhatre
808,325
My Docusaurus Pros and Cons List
Here's my pros, could be pros or cons, and cons list for Docusaurus.
0
2021-09-01T15:08:29
https://dev.to/missamarakay/my-docusaurus-pros-and-cons-list-4n0
documentation, developerrelations
---
title: My Docusaurus Pros and Cons List
published: true
description: Here's my pros, could be pros or cons, and cons list for Docusaurus.
tags: documentation, developerrelations
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3hjtbifaqafhxqeikimg.jpg
---

Part of my role as the Head of Developer Experience includes being the Directly Responsible Individual (DRI) of documentation. This is exciting in that I get to craft the documentation experience, but less exciting in that I get to inherit and work through years of not having a DRI of documentation. Yay growth!

I mention this because I wasn't in my role (or at this company) when the engineers decided to move forward with [Docusaurus](https://docusaurus.io/), a React-based documentation framework. I think the choice they made was reasonable, probably the choice I would have made if I was in their shoes, but not the choice I would likely make today.

If this is the first time you are hearing about Docusaurus, I recommend reading their [introduction](https://docusaurus.io/docs) before continuing.

I've had a few folks reach out and ask my opinion on Docusaurus, why I say I wouldn't have chosen it, and how I'm making it work for us. So here's my pros, could be pros or cons, and cons list.

##Pros

The positive aspects.

###Markdown and frontmatter

This just works for us as a team and organization. Markdown is comfortable for most people, and frontmatter helps us add critical metadata like keywords, descriptions, etc. that may not work their way into the page otherwise. This feels like Jekyll or even this editor on Dev (!!!) and I like that. It's a great fit for my team and our engineers (who author our content).

###Flexible to extend with React

Docusaurus is written in React, so you can customize it with React. Not a lot to say here other than this makes me feel like I'm not boxed into a corner.

###Search

Algolia DocSearch is super powerful and I was thrilled to see it as a touted option here for search.
I do have more to say on this topic, but it's unfortunately a con.

##Could be pros or cons

After reading what I outlined for this section, I realize these are mostly not neutral and more con-leaning. What do you think?

###MD files can contain both content and code

See [tabs](https://docusaurus.io/docs/markdown-features/tabs) for an example. Now there is nothing wrong with this, except I like separation of content and functionality, so to me this is more of a con. Maybe this makes me old, but I'd rather have my content in one place and the code running the presentation of the content in another place. Part of this thinking is because, for us, these could be two people maintaining each distinct piece.

###By default, the first characters in your file become the description preview

![Slack links with previews showing short descriptions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70ke5dy0vt9gt0ez4vct.png)

This one snuck up on me. If you use tabs at the top of your page, like the example in the above section, the first characters on your page are code. So you get something fun in your short description preview like {object} or {children}. If you are lucky, your page might start with another header, so you just get one word like "License". Luckily you can fix this by adding a brief sentence or using frontmatter's `description`, but finding these pages seems to require testing in production. My technical writer and I either use Google Search or send links in Slack to see what the short descriptions are. Is there a better way? I haven't found one.

##Cons

These are irritating.

###No built in feedback mechanism

This is huge for my team. We have no way to measure engagement with our individual documentation pages, either through comments, stars, or a thumbs up/down. While we are an open source-minded company and our docs openly accept PRs and issues, we see people more comfortable posting to the forums or Slack about a docs issue or clarification.
Even the link to "Edit this page..." doesn't include our contribution guide, leaving users confused about the process or what to include in the PR initially. Having closely-coupled (or more closely-coupled) feedback mechanisms would be tremendous! We could write a custom plugin for this and it's on our roadmap, but this frequently comes up in discussions about Docusaurus. ###Algolia DocSearch I think Algolia DocSearch is super powerful, but when we were having issues with it, I couldn't figure out where I needed to make a change. Did I need to make the change in my docs source? In the Algolia UI? Turns out, there was a 3rd option unknown to me! Algolia DocSearch maintains a [separate config repo](https://github.com/algolia/docsearch-configs) for all their customers. If you want to update your config, you'll need to [find it](https://github.com/algolia/docsearch-configs/tree/master/configs) in their repo and submit a PR (with some info that you are, in fact, involved with the documentation project you are updating). The only way I figured this out was after stumbling across their docs and finding this [playground](https://docsearch.algolia.com/playground). Entering in our site and search data, the experience WAS TOTALLY DIFFERENT AND EXACTLY WHAT I WANTED. In the process of me updating our config, I found out the config was super outdated and contributing to our overall horrendous search experience we had been desperately trying to fix for months. That config change was merged quick and almost instantly all the search issues we were experiencing just corrected themselves. This experience feels disjointed, but we'll have it documented so we (hopefully) don't have to go through this again. Still unsure when we would know we need to update the config on the Algolia repo. ### What generates the sitemap? Is it Algolia DocSearch? Do you need the plugin? This is related to the above ramblings. It looked like we didn't have a sitemap without adding config to create one. 
Then when looking at the Algolia repo, it includes a parameter for a sitemap, I assume for search? Are they different? Do they refresh the same?

###Trailing slashes...

You have the ability to enable or disable trailing slashes, but somehow we ran into massive issues with trailing slashes added to URLs. The initial page loaded just fine, but every subsequent page may not load, may 404, or may just look like it's loading forever and not really do anything. We disabled trailing slashes and I still see some weirdness when loading links. Is that from a redirect config issue? Is that something with my cache? Is it still this trailing slash thing back to haunt me? I have no idea.

###Limited plugins & integrations are... weird?

Google Analytics and GTAG are available as plugins, but of course, our marketing team uses something different: GTM, Google Tag Manager. Neither of these plugins worked how we wanted them to with GTM. After building a custom plugin, Universal Analytics didn't seem to load when GA4 did just fine. This led to interesting conversations around adding custom event fires for each page (??? hard pass). This then led to all sorts of issues and data confidence concerns. We tweaked the custom plugin, threw it out for one we found online, and then gave up trying to make Universal Analytics work, knowing GA4 is the future anyway.

Right before all of this we were also implementing a cookie consent tool called Osano and it straight up borked the entire site. Not sure I can blame Docusaurus for that bit, but it was frustrating.

Documentation tools and products I've worked with in the past were more mature and offered robust integrations with more enterprise-y tools. Not saying Docusaurus can't get there, it's just not there yet.

###No built in snippets

I really like this feature on most documentation products made for technical writers. Snippets allow you to write once and apply it to many different pages or articles.
This is great for writing your blurb about how to download or install something, access premium or Enterprise features, and other things that tend to appear on many pages across your documentation corpus. Could we build a custom plugin or component to make this happen? Sure. But it's unfortunately not high on the priority list.

##Pulling my thoughts together

I don't hate Docusaurus. My perspective and list will be different than yours. If I was a full-time front end engineer, like in my past life, this would be a great documentation framework for me to use. But unfortunately, I know too much now 😅.

My docs team doesn't have an engineering resource, so while Docusaurus is flexible, I need time from someone with a React skill set or I have to do it myself. Sometimes I get time from the engineers; sometimes it's not a priority higher than their product priorities. Rarely do I have time to do this myself.

I also need to tap into tools I didn't have a hand in choosing: Osano and Google Analytics. While we figured out a way to make this compatible, it took far longer than an OOTB plugin.

All of this to say, we can make it work. Looking ahead, because the content is all Markdown, if we do have to pick up and leave for another framework or documentation product, it will likely be relatively easy to move on. While that didn't make the pro list exactly, it's nice to see we've set ourselves up for a future that doesn't punish us for where we are at today.

Do you or your team use Docusaurus? What are your thoughts?
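P.S. For anyone hitting the short-description gotcha from the "could be pros or cons" list above: the frontmatter fix is just a couple of lines at the top of the Markdown file. A minimal sketch (the values here are made-up placeholders):

```markdown
---
title: License
description: A one-line summary shown in search results and link previews.
---
```

With `description` set, search engines and Slack/social previews use that sentence instead of whatever code or header happens to sit at the top of the page.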
missamarakay
808,473
What's the difference between CSS, SASS, and SCSS?
When I wrote QWERTYBall, I was especially eager to get it working and looking how I wanted. If I...
0
2021-08-31T02:32:51
https://dev.to/mathlete/what-s-the-difference-between-css-sass-and-scss-g2b
css, beginners
When I wrote [QWERTYBall](www.QWERTYBall.com), I was especially eager to get it working and looking how I wanted. If I wanted a certain feature, I'd read a tutorial on how to add it, and do whatever the tutorial said to do. In the process, I ended up with a lot of extra files I didn't need.

In the interest of streamlining the app I wrote all those months ago, I recently revisited the code and found I had CSS files, SASS files, and SCSS files. The problem was that I had no idea what SASS or SCSS was, so I set out to learn. Here's what I found.

## CSS

CSS stands for Cascading Style Sheets. Before CSS, there wasn't any formal separation between the code that defined how a webpage looked and the content featured on that webpage. This lack of separation made it laborious to update the look of a webpage because you had to hunt all over to find and change the fonts, colors, margins, and anything else.

## SASS

SASS stands for Syntactically Awesome Style Sheets. In 2006, SASS was developed to solve the problem of excessive repetition in CSS by allowing for variables, nesting, and mixins (a pre-defined group of styles). SASS also added programmatic features such as arguments, loops, and conditional logic. The authors of SASS also decided to change the syntax a bit by eliminating the need for semicolons and curly braces. Instead, the syntax is "whitespace-sensitive" and must have proper indentation. SASS is a CSS "pre-processor" and outputs traditional CSS, so any slightly modern browser can use it.

## SCSS

SCSS stands for Sassy CSS, and it's a newer syntax for SASS. It's simply CSS with the same feature set as SASS, but it requires semicolons and curly braces just like CSS. In fact, you can convert any CSS file to SCSS by changing the file extension from `.css` to `.scss`, because SCSS is a superset of the syntax of CSS.

## Examples

Here's some CSS for two buttons, followed by how you would accomplish the same thing using SASS and SCSS.
CSS

```css
.button.danger {
  border: 1px solid black;
  background: hsl(0, 100%, 50%);
}
.button.warning {
  border: 1px solid black;
  background: hsl(60, 100%, 50%);
}
```

SASS

```sass
$red: hsl(0, 100%, 50%)
$yellow: hsl(60, 100%, 50%)
$neutral: hsl(0, 0%, 0%)

=myBtn($bdr, $bg)
  border: 1px solid $bdr
  background: $bg

.button
  &.danger
    +myBtn($neutral, $red)
  &.warning
    +myBtn($neutral, $yellow)
```

SCSS

```scss
$red: hsl(0, 100%, 50%);
$yellow: hsl(60, 100%, 50%);
$neutral: hsl(0, 0%, 0%);

@mixin myBtn($bdr, $bg) {
  border: 1px solid $bdr;
  background: $bg;
}

.button {
  &.danger {
    @include myBtn($neutral, $red);
  }
  &.warning {
    @include myBtn($neutral, $yellow);
  }
}
```

Note the `&` parent selector: `&.danger` compiles to `.button.danger`, matching the plain CSS above, whereas nesting `danger` on its own would produce a descendant element selector instead.
mathlete
808,648
Integrate NuxtJS with Appwrite
Speed up your development projects by using nuxtjs and appwrite as back-end
0
2021-08-31T05:35:13
https://dev.to/hrdtr/integrate-nuxtjs-with-appwrite-1o7f
nuxt, vue, appwrite, baas
---
title: Integrate NuxtJS with Appwrite
published: true
description: Speed up your development projects by using nuxtjs and appwrite as back-end
tags: nuxt, vue, appwrite, baas
cover_image: https://github.com/Hrdtr/nuxt-appwrite/blob/main/docs/static/preview-bg-white.png?raw=true
---

### What is Appwrite?

Appwrite is an end-to-end backend server that aims to abstract away the complexity of the common, complex, and repetitive tasks required for building a modern app. Appwrite provides you with a set of APIs, tools, and a management console UI to help you build your apps a lot faster and in a much more secure way. Among Appwrite's different services you can find user authentication and account management, user preferences, database and storage persistence, cloud functions, localization, image manipulation, scheduled background tasks, and more.

### Preparation

Before starting, make sure you have installed Appwrite on the server and that Appwrite is running fine there. If you haven't installed it, please open the [Appwrite documentation](https://appwrite.io/docs) and install it on your server. By the way, the setup process is very easy.

### Getting Started

Let's create a new NuxtJS project:

```bash
yarn create nuxt-app <project-name>
```

or using npm:

```bash
npm init nuxt-app <project-name>
```

After the package is successfully installed, add the Appwrite module for NuxtJS:

```bash
$ cd <project-name>
$ yarn add nuxt-appwrite
```

or using npm:

```bash
$ cd <project-name>
$ npm i nuxt-appwrite
```

Next, add nuxt-appwrite to the modules section in `nuxt.config.js`:

```js
export default {
  ...
  modules: ['nuxt-appwrite']
  ...
}
```

At this point, make sure we have an active project in Appwrite. If not, please log in to your Appwrite console and create a new project, then go to the project settings and copy the value from the project ID field.

Next, add an appwrite object inside the `nuxt.config.js` export and fill it with some options:

```js
export default {
  ...
  modules: ['nuxt-appwrite'],
  appwrite: {
    endpoint: 'https://appwrite.example.com/v1', // appwrite endpoint
    project: '60046530a120d', // project id
  }
  ...
}
```

Great! We have successfully set up the Appwrite Web SDK in NuxtJS.

From here, we can use `this.$appwrite` to access the SDK from client-side methods in NuxtJS (e.g. `mounted()`). For example, we can fetch a database document inside a Vue component like this (note that `getDocument` returns a promise, so the call is awaited inside an `async` hook):

```js
{
  ...
  async mounted() {
    try {
      const res = await this.$appwrite.database.getDocument(collectionID, documentID)
      this.document = res
    } catch (err) {
      console.log(err.message)
    }
  },
  ...
}
```

### Server Side User Action

To maximize the capabilities of NuxtJS, `$appwrite` is also accessible from the NuxtJS context, so we can access the SDK from the server side too (e.g. `asyncData()`). However, making SDK calls in your user's scope from the server is not possible right away, since the HTTP-only cookie used for authentication is saved in the user's browser. That's why the Appwrite Web SDK allows using a JWT for authentication.

There are **additional steps** that must be taken so that our NuxtJS server instance knows who we are (the logged-in user). That way, the server can get the same access rights as the currently logged-in user. Below is example code to set the JWT using the APIs available in the nuxt-appwrite module *(do it directly after the user has successfully logged in)*:

```js
this.$appwrite.account
  .createJWT()
  .then((response) => {
    console.log(response)
    this.$appwrite.utils.setJWT(response.jwt)
  })
  .catch((error) => {
    console.log(error)
  })
```

Once the JWT is set, we can use user-scoped actions in the Nuxt `process.server` context, `asyncData`, and `nuxtServerInit`.
Don't forget to remove the JWT after the user logs out:

```js
this.$appwrite.account
  .deleteSessions('current')
  .then(() => {
    this.$appwrite.utils.removeJWT()
  })
  .catch((error) => {
    console.log(error)
  })
```

By the way, Appwrite has a public [community on Discord](https://appwrite.io/discord). You can join to find out more about Appwrite, and if you run into any problems or difficulties, people there are always ready to help.
hrdtr
808,658
The Week in Review
It was the 4th week of the 100DaysofCode challenge and it turned out to be the most productive week...
0
2021-08-31T05:49:15
https://dev.to/aliasgarkc/the-week-in-review-klm
programming, webdev, coding, 100daysofcode
It was the 4th week of the **100DaysofCode** challenge, and it turned out to be the most productive week of the entire challenge. I learned about REST APIs and defining RESTful routes. To practice defining RESTful routes, I made a project involving CRUD operations where the user could post a comment, edit the comment, and view and delete comments. Though there weren't any authentication or user account features, it was sufficient to get an idea of defining RESTful routes and passing data. In addition to this, I also solved 40 array problems on GeeksforGeeks and LeetCode. I also focused on personal development and made it a habit to read at least 20 pages a day.
aliasgarkc
808,824
Aligning the UX+FE profiles on a product team
If, as a user experience (UX) designer, you have ever thought something like: how did those two...
0
2021-10-27T19:11:41
https://dev.to/adevintaspain/alineacion-de-los-perfiles-uxfe-en-un-equipo-de-producto-3k0a
uxdesign, designsprint, teamwork, peakteam
If, as a user experience (UX) designer, you have ever thought something like: **those two pixels... how did they end up there? Didn't they see it's misaligned?**, or if you are a front-end (FE) developer and have thought something like: **am I really not going to be able to reuse a component over two pixels of _padding_?**, we think you'll be interested in what we're about to tell you. And... yes! We say "we", because both the design and the development of the solution involve two profiles very closely from the start: UX and FE. Together we're going to tell you, **first-hand, how to work collaboratively without either side ending up frustrated**.

Both profiles must become a key alliance to save time, anticipate difficulties, increase commitment, and work in an agile and dynamic way. In short, **parallel, collaborative work.**

In this article you will find **tools and dynamics** to keep in mind to achieve the right alignment and communication in each phase of the project:

* **Discovery track (exploration phase):** The FE profile takes part in analyzing the problem/opportunity with users. This is how they can tie the reasoning behind proposals to real cases, and thus propose initiatives from a technical perspective when defining the solution.
* **Delivery track (production phase):** The UX profile takes part in developing the solution. This is how they come to understand the reasons behind technical limitations, and thus bring flexibility when iterating on them, in order to adjust timelines when delivering value to the user.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/79dzs3gy4vvtzkshcq6q.png)

##The first barrier to remove

A **common mistake** in product teams is to think that the goal of the UX role is to produce the best possible deliverable for development, and that the goal of the development role is to take it and ship an identical copy of that pixel-perfect design: a relay race, and therefore independent, waterfall work. This unwritten rule is the first barrier we must remove from our minds before we can implement the methodology we're here to tell you about: **teamwork from start to finish.**

The **design process cannot be a mystery to the developer** until they receive the deliverable, just as **the development process cannot remain in the shadows for the designer** until they see it in production. **Visibility must be worked on in both directions.**

Through the working method we're going to explain, **FE becomes an evangelist for user problems** after empathizing with users during discovery, while **UX becomes an advocate for the limitations and standards development has to follow**, thanks to understanding the basics of development. The goal is **to achieve an atmosphere of mutual tolerance**.

##Problem context: field research

Once discovery activities are planned, such as interviews or a usability test, they don't have to be limited to profiles directly related to product or business (Product Owner, UX Designer, or Data). **It's important to extend the invitation and involve development profiles**: they don't usually feel part of product decision-making, yet their contribution can be key to reaching a better solution. Why not give them the opportunity to take part in direct contact with the user?

This way **we increase the team's commitment to reaching the proposed solution and reduce alignment time**. Developers empathize with the user's pains and understand first-hand the goal that product and UX are pursuing.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zk57uzprb3kjzgus4i6.png)

**Tip:** When the research in the exploration phase wraps up, review the insights as a group: given the nature of the roles, each will enrich the others, and together they complement one another to produce the design requirements the solution must meet. A useful tool for classifying and sharing these insights is mapping them as must, nice-to-have, and delighter. This helps everyone better understand each one's priority when defining a first version or MVP (Minimum Viable Product).

##Wireframing: component reuse

The conclusions of the exploration phase converge into design requirements to meet and reflect in the final solution.

During the gathering of these criteria, and having seen user problems first-hand, **each profile has unconsciously already imagined possible solutions from their own perspective.** At this starting point it's important to **align expectations** by sharing all those possible solutions in a co-creation workshop. This session helps the team **converge on a proposal** in which development participates from the very beginning.

At Adevinta, both UX and FE work with an open-source React component library: [SUI Components](https://sui-components.vercel.app/). Because its resources are easy to use and iterate on, this library lets us build projects using a wide variety of components that have already been analyzed, defined, and developed.

One of the most important conversations between UX and FE at this point is the one where the components to reuse, iterate on, or create for the proposal are put on the table. **The developer should contribute the limitations, and the designer should assess the impact on the experience. The goal is to find a balance** between component reuse and user experience, so that the latter doesn't end up worse off.

Reusing and standardizing not only **helps the user have a better experience and reduces their learning curve** from a design point of view; from a development point of view **it makes the service more scalable and easier to maintain**. Components allow for **agile, orderly development with a maintainable architecture**, because they work independently and are very simple to use.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84jioqnfwwuunajmb3kr.png)

**Tip:** This co-creation session should end up satisfying the design requirements agreed with product and establishing which components will make up the initial wireframe. Use anything from an A3 sheet of paper to a physical or digital whiteboard (Miro, Figjam, Freehand...) for this exercise, in pure draft style.

##Goodbye to deadlines in design

The deliverable that gathers the design specifications for FE, **the _pixel-perfect_, must not be the baton in a relay race passed from one role to another once finished**. Neither should development's first builds. **FE has to be able to count on UX's flexibility to raise their hand the moment they sense the cost is increasing**. Pivot and iterate collaboratively to adapt the solution together and reach a consensus.

It's important to work in **an atmosphere of transparency**, with no fear of sharing ideas or problems. **_WIP_ design proposals are key to getting _feedback_** from FE and **getting ahead of possible component or feature limitations (it's even important to get that feedback before the _design reviews_, so technical details can be contributed if questions come up). It's just as key to get ahead and see the build in a test environment, so we reach QA with more mature solutions**, saving time on fixes and _bugs_.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5mku43c9y9rx7afp7o35.png)

**Tip:** Small alignments between UX and FE during the process can green-light ideas that UX had judged too costly and was therefore going to discard. Anticipating is another benefit: foreseeing possible obstacles saves development time and reduces delivery risk.

##Test, test, and test

The build is in the pre-production environment, ready to be seen by the rest of the team and one step away from going to production. Test, test, and test. **The work to create the solution has been collaborative** from the start. Both profiles know where the proposal stands. There are no surprises or mismatched expectations, so **feedback can be detailed and given without fear of being a judgment on the work done.**

Resolving all the scenarios in pre-production is largely a task that completes UX's work: having designed it, UX is the one who knows the proposal to _pixel-perfection_. This review produces a set of screenshots of the differences from the design and a list of adjustments. It's the moment to create a visual document that explicitly captures that QA result. For FE, it will be an accelerator when tackling the problem. One step further is highlighting in the document the items that should be a must to resolve before going to production.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ka8vun04gostii0pz7ac.png)

**Tip:** The tool we recommend in this post for creating this deliverable is whichever one the designer is most agile with and can keep up to date in parallel while testing. For example, a design tool like Figma or Sketch, where the final designs already live. In our files, per user flow, we add a page where we collect all the cases found, highlight the errors, and compare the seven differences with the final design.

Now that you know our _way-of-working_, based on transparency and on sharing and working hand in hand, **do you think it would fit your team?** If so, you know how to get going!
saracaan
808,884
Getting Started with Appwrite Realtime for Flutter
Realtime service is one of the most sought after features of Appwrite and it's now ready to play...
0
2021-09-04T06:46:19
https://dev.to/appwrite/getting-started-with-appwrite-realtime-for-flutter-4229
flutter, news, opensource, serverless
The Realtime service is one of the most sought-after features of Appwrite, and it's now ready to play with! It's been a while, as we already had a realtime alpha release and a getting-started tutorial to go with it. In this tutorial, we will dive into the details and understand how to develop a Flutter app leveraging Appwrite's realtime capabilities.

## 📝 Prerequisites

In order to continue with this tutorial, you need to have access to an Appwrite console with a project. If you have not already installed Appwrite, please do so. Installing Appwrite is really simple following Appwrite's official [installation docs](https://appwrite.io/docs/installation). Installation should only take around 2 minutes. Once installed, log in to your console and **create a new Project**.

## 💾 Setup Database

Once you have logged in to the console and selected your project, from the left sidebar in the dashboard click on the **Database** option to get to the database page. Once on the database page, click on the **Add Collection** button.

![Create Collection](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/032v3z2l3piultfmrre9.png)

In the dialog that pops up, set the collection name to **Items** and click on the **Create** button to create the collection. You will be redirected to the new collection's page, where we can define its rules. Define the following rule, then click the **Update** button. Also note down the **Collection ID** from the right side of the settings page, as we will need that later in our code.

- **Name**
  - label: Name
  - Key: name
  - Rule Type: Text
  - Required: true
  - Array: false

![Add Collection Rules](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l0so4ggo1mb4d8vs2eyb.png)

In the permissions, set both the read and write permissions to `*` so that anyone can read and write.

Now that the collection is created, we need to create a user. This user will be used to create sessions when we authenticate with the realtime API.
## ⚙️ Setup Flutter Project and Dependencies

We will begin by creating a new Flutter project. From your terminal, in your projects folder, type the following command to create a new Flutter project.

```bash
flutter create flappwrite_realtime
```

Then we add Appwrite's SDK. To do that, from your terminal, in your newly created project directory, type the following command:

```bash
cd flappwrite_realtime
flutter pub add appwrite
```

This command will add Appwrite's latest Flutter SDK, with the realtime service, as a dependency to your Flutter project. Once you have installed the dependency and run `flutter pub get`, you should be ready to use it.

## ➕️ Add Flutter Platforms

To initialize the Appwrite SDK and start interacting with Appwrite services, you first need to add a new Flutter platform to your project. If you are running on Flutter web, you can simply add a web platform instead of Flutter platforms. To add a new platform, go to your Appwrite console, select your project, and click the **Add Platform** button on the project Dashboard. Choose either the Flutter or web platform. If you choose web, add **localhost** as the host name. If you choose Flutter, from the dialog, choose one of the tabs based on which platform you plan to run on. You can add multiple platforms similarly.

If you choose to add an Android platform, add the details in the dialog box: your app name and package name. Your package name is generally the `applicationId` in your app-level `build.gradle` file. You may also find your package name in your `AndroidManifest.xml` file.

![Add Flutter Platform](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i0p73qo6b1im45xq0f80.png)

By registering a new platform, you are allowing your app to communicate with the Appwrite API.

## 👩‍🔧 Home Page

We will start by creating a simple stateful widget that will list all the items from our items collection, and also allow adding new items as well as deleting existing items.
Our Home page will also connect to Appwrite's realtime service and display changes in the items collection by updating the UI as they happen. So, let's create our **HomePage** widget. Modify the code in **lib/main.dart** as follows: ```dart import 'package:flutter/material.dart'; void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( title: 'FlAppwrite Realtime Demo', theme: ThemeData( primarySwatch: Colors.blue, ), home: HomePage(), ); } } class HomePage extends StatefulWidget { const HomePage({Key? key}) : super(key: key); @override _HomePageState createState() => _HomePageState(); } class _HomePageState extends State<HomePage> { List<Map<String, dynamic>> items = []; TextEditingController _nameController = TextEditingController(); @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text('FlAppwrite Realtime Demo'), ), body: ListView(children: [ ...items.map((item) => ListTile( title: Text(item['name']), )), ]), floatingActionButton: FloatingActionButton( child: Icon(Icons.add), onPressed: () { // dialog to add new item showDialog( context: context, builder: (context) => AlertDialog( title: Text('Add new item'), content: TextField( controller: _nameController, ), actions: [ TextButton( child: Text('Cancel'), onPressed: () => Navigator.of(context).pop(), ), TextButton( child: Text('Add'), onPressed: () { // add new item final name = _nameController.text; if (name.isNotEmpty) { _nameController.clear(); _addItem(name); } Navigator.of(context).pop(); }, ), ], ), ); }, ), ); } void _addItem(String name) { setState(() { items.add({'name': name, 'id': DateTime.now().millisecondsSinceEpoch}); }); } } ``` In the **initState** function of the HomePage, we will create and initialize our Appwrite client, as well as subscribe to realtime changes in documents in our **items** collection. ```dart RealtimeSubscription? 
subscription; late final Client client; initState() { super.initState(); client = Client() .setEndpoint('http://localhost/v1') // your endpoint .setProject('5df5acd0d48c2') //your project id ; subscribe(); } ``` And in the **dispose** method, close the subscription. ```dart dispose(){ subscription?.close(); super.dispose(); } ``` Now let us set up different variables and functions to load the initial data, listen to changes in the collection documents, and update the UI to reflect the changes in realtime. First, initialize our items collection ID and set up a function to load initial data when the application first starts. For that, we will also set up the Appwrite database service. ```dart final itemsCollection = "<collectionId>"; //replace with your collection id, which can be found in your collection's settings page. late final Database database; @override void initState() { super.initState(); client = Client() .setEndpoint('http://localhost/v1') // your endpoint .setProject('5df5acd0d48c2') //your project id ; database = Database(client); loadItems(); } loadItems() async { try { final res = await database.listDocuments(collectionId: itemsCollection); setState(() { items = List<Map<String, dynamic>>.from(res.data['documents']); }); } on AppwriteException catch (e) { print(e.message); } } ``` In order to be able to add data to our collection, we must first create a session. Let's add a login function and call it from our `initState` function. ```dart @override void initState() { super.initState(); //... login(); // .. } login() async { try { await Account(client).createAnonymousSession(); } on AppwriteException catch (e) { print(e.message); } } ``` Now, we will set up our subscribe function that will listen to changes to documents in our items collection. 
```dart void subscribe() { final realtime = Realtime(client); subscription = realtime.subscribe([ 'collections.<collectionId>.documents' ]); //replace <collectionId> with the ID of your items collection, which can be found in your collection's settings page. // listen to changes subscription!.stream.listen((data) { // data will consist of `event` and a `payload` if (data.payload.isNotEmpty) { switch (data.event) { case "database.documents.create": var item = data.payload; items.add(item); setState(() {}); break; case "database.documents.delete": var item = data.payload; items.removeWhere((it) => it['\$id'] == item['\$id']); setState(() {}); break; default: break; } } }); } ``` Finally, let's modify our `_addItem` function to add item to Appwrite's database and see how the view updates in realtime. ```dart void _addItem(String name) async { try { await database.createDocument( collectionId: itemsCollection, data: {'name': name}, read: ['*'], write: ['*'] ); } on AppwriteException catch (e) { print(e.message); } } ``` Let us also modify our `ListTile` widget to add a delete button that will allow us to delete the item. ```dart ListTile( title: Text(item['name']), trailing: IconButton( icon: Icon(Icons.delete), onPressed: () async { await database.deleteDocument( collectionId: itemsCollection, documentId: item['\$id'], ); }, ), ) ``` ## Complete Example ```dart import 'package:appwrite/appwrite.dart'; import 'package:flutter/material.dart'; void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( title: 'FlAppwrite Realtime Demo', theme: ThemeData( primarySwatch: Colors.blue, ), home: HomePage(), ); } } class HomePage extends StatefulWidget { const HomePage({Key? 
key}) : super(key: key); @override _HomePageState createState() => _HomePageState(); } class _HomePageState extends State<HomePage> { List<Map<String, dynamic>> items = []; TextEditingController _nameController = TextEditingController(); RealtimeSubscription? subscription; late final Client client; final itemsCollection = 'COLLECTION_ID'; late final Database database; @override void initState() { super.initState(); client = Client() .setEndpoint('http://localhost/v1') // your endpoint .setProject('YOUR_PROJECT_ID') //your project id ; database = Database(client); login(); loadItems(); subscribe(); } login() async { try { await Account(client).createAnonymousSession(); } on AppwriteException catch (e) { print(e.message); } } loadItems() async { try { final res = await database.listDocuments(collectionId: itemsCollection); setState(() { items = List<Map<String, dynamic>>.from(res.data['documents']); }); } on AppwriteException catch (e) { print(e.message); } } void subscribe() { final realtime = Realtime(client); subscription = realtime.subscribe([ 'collections.<collectionId>.documents' ]); //replace <collectionId> with the ID of your items collection, which can be found in your collection's settings page. 
// listen to changes subscription!.stream.listen((data) { // data will consist of `event` and a `payload` if (data.payload.isNotEmpty) { switch (data.event) { case "database.documents.create": var item = data.payload; items.add(item); setState(() {}); break; case "database.documents.delete": var item = data.payload; items.removeWhere((it) => it['\$id'] == item['\$id']); setState(() {}); break; default: break; } } }); } @override void dispose() { subscription?.close(); super.dispose(); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text('FlAppwrite Realtime Demo'), ), body: ListView(children: [ ...items.map((item) => ListTile( title: Text(item['name']), trailing: IconButton( icon: Icon(Icons.delete), onPressed: () async { await database.deleteDocument( collectionId: itemsCollection, documentId: item['\$id'], ); }, ), )), ]), floatingActionButton: FloatingActionButton( child: Icon(Icons.add), onPressed: () { // dialog to add new item showDialog( context: context, builder: (context) => AlertDialog( title: Text('Add new item'), content: TextField( controller: _nameController, ), actions: [ TextButton( child: Text('Cancel'), onPressed: () => Navigator.of(context).pop(), ), TextButton( child: Text('Add'), onPressed: () { // add new item final name = _nameController.text; if (name.isNotEmpty) { _nameController.clear(); _addItem(name); } Navigator.of(context).pop(); }, ), ], ), ); }, ), ); } void _addItem(String name) async { try { await database.createDocument( collectionId: itemsCollection, data: {'name': name}, read: ['*'], write: ['*']); } on AppwriteException catch (e) { print(e.message); } } } ``` ## 🥂 Conclusion I enjoyed writing this tutorial a lot, and I hope you enjoyed learning and building a Flutter application with the Appwrite Realtime service. The full source code for this application is available on my [GitHub repository](https://github.com/lohanidamodar/flappwrite_realtime). 
Feel free to get back to us if you have any queries or comments. We are excited to see what the community will build with Flutter and Appwrite Realtime. ## 🎓 Learn More - [Getting Started With Flutter](https://appwrite.io/docs/getting-started-for-flutter) - [Flutter Playground](https://github.com/appwrite/playground-for-flutter) - [Appwrite Docs](https://appwrite.io/docs)
lohanidamodar
808,921
Deploy a Django App on AWS Lightsail: Docker, Docker Compose, PostgreSQL, Nginx & Github Actions
So you have written your Django Application and you are ready to deploy it? Although there are...
14,756
2021-08-31T11:55:58
https://dev.to/koladev/deploy-a-django-app-on-aws-lightsail-docker-docker-compose-postgresql-nginx-github-actions-bo6
python, docker, django, github
So you have written your Django Application and you are ready to deploy it? Although there are already existing solutions like Heroku to help you deploy your application easily and quickly, it's always good for a developer to know how to deploy an application on a private server. Today, we'll learn how to deploy a Django App on AWS Lightsail. **This can also be applied to other VPS providers.** ## Table of content - Setup - Add PostgreSQL - Prepare the Django application for deployment - Environment variables - Testing - Docker Configuration - Github Actions (testing) - Preparing the server - Github Actions (Deployment) ## 1 - Setup For this project, we'll be using an already configured Django application. It's a project made for this article about [ FullStack React & Django Authentication: Django REST, TypeScript, Axios, Redux & React Router ](https://dev.to/koladev/django-rest-authentication-cmh). You can directly clone the repo [here](https://github.com/koladev32/django-auth-react-tutorial). Once it's done, make sure to create a virtual environment and run the following commands. ``` cd django-auth-react-tutorial virtualenv --python=/usr/bin/python3.8 venv source venv/bin/activate ``` ### Add PostgreSQL Currently, the project is running on SQLite, which is very good in local and development environments. Let's switch to PostgreSQL. From here on, I assume that you have PostgreSQL installed on your machine and that the server is running. If that's not the case, feel free to check this resource to install the server. Once it's done, let's create the database we'll be using for this tutorial. Open your shell, enter `psql`, and let's start writing some SQL commands. The CREATE DATABASE command lets us create a new database in PostgreSQL. ```SQL CREATE DATABASE coredb; ``` The CREATE USER command lets us create a user for our database along with a password. 
```SQL CREATE USER core WITH PASSWORD '12345678'; ``` And finally, let's grant our new user access to the database created earlier. ```SQL GRANT ALL PRIVILEGES ON DATABASE coredb TO core; ``` Now let's install `psycopg2`, a popular PostgreSQL database adapter for Python. ``` pip install psycopg2 ``` Next step, let's set up the project to use the PostgreSQL database instead of SQLite. ```python DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': BASE_DIR / 'db.sqlite3', } } ``` Change the above code snippet to this: ```python DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'coredb', 'USER': 'core', 'PASSWORD': '12345678', 'HOST': 'localhost', 'PORT': 5432, } } ``` What have we done? - `ENGINE`: We changed the database engine to use `postgresql_psycopg2` instead of `sqlite3`. - `NAME`: is the name of the database we created for our project. - `USER`: is the database user we've created during the database creation. - `PASSWORD`: is the password to the database we created. Now, let's make sure that the Django application is connected to the PostgreSQL database. For this, we'll be running the `migrate` command, which is responsible for executing the SQL commands specified in the migration files. ``` python manage.py migrate ``` You'll have a similar output: ``` Applying auth.0001_initial... OK Applying auth.0002_alter_permission_name_max_length... OK Applying auth.0003_alter_user_email_max_length... OK Applying auth.0004_alter_user_username_opts... OK ... Applying core_user.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK Applying admin.0003_logentry_add_action_flag_choices... OK Applying sessions.0001_initial... OK ``` Now, let's run the server to check if the application is working well. ``` python manage.py runserver ``` You will see something like this when you hit `http://127.0.0.1:8000` in your browser. 
![Django Server Well Running](https://cdn.hashnode.com/res/hashnode/image/upload/v1629734721362/2Wy0WpBJA.png) Bravo for us! We've successfully configured our Django app to use PostgreSQL. Now, let's prepare the application for deployment. ## 2 - Prepare application for deployment Here, we'll configure the application to use env variables and configure [Docker](https://www.docker.com/) as well. ### Env variables It's important to keep sensitive bits of code like API keys, passwords, and secret keys away from prying eyes. The best way to do it? Use environment variables. Here's how to do it in our application. First of all, let's install a python package named `django-environ`. ``` pip install django-environ ``` Then, import it in the settings.py file, and let's initialize environment variables. ```python # CoreRoot/settings.py import environ # Initialise environment variables env = environ.Env() environ.Env.read_env() ``` Next step, create two files: - a `.env` file which will contain all environment variables that `django-environ` will read - and an `env.example` file which will contain the same content as `.env`. The `.env` file is ignored by git. The `env.example` file here represents a skeleton we can use to create our `.env` file on another machine. It'll be visible, so make sure to not include sensitive information. ``` # ./.env SECRET_KEY=django-insecure-97s)x3c8w8h_qv3t3s7%)#k@dpk2edr0ed_(rq9y(rbb&_!ai% DEBUG=0 DJANGO_ALLOWED_HOSTS="localhost 127.0.0.1 [::1]" DB_ENGINE=django.db.backends.postgresql_psycopg2 DB_NAME=coredb DB_USER=core DB_PASSWORD=12345678 DB_HOST=localhost DB_PORT=5432 CORS_ALLOWED_ORIGINS="http://localhost:3000 http://127.0.0.1:3000" ``` Now, let's copy the content and paste it into `env.example`, but make sure to delete the values. 
``` ./env.example SECRET_KEY= DEBUG= DJANGO_ALLOWED_HOSTS= DB_ENGINE= DB_NAME= DB_USER= DB_PASSWORD= DB_HOST= DB_PORT= CORS_ALLOWED_ORIGINS= ``` And now, let's go back to the settings file and add the env variables configurations as well. ```python # ./CoreRoot/settings.py # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = env('SECRET_KEY', default='qkl+xdr8aimpf-&x(mi7)dwt^-q77aji#j*d#02-5usa32r9!y') # SECURITY WARNING: don't run with debug turned on in production! DEBUG = int(env("DEBUG", default=1)) ALLOWED_HOSTS = env("DJANGO_ALLOWED_HOSTS").split(" ") DATABASES = { 'default': { 'ENGINE': env('DB_ENGINE', default='django.db.backends.postgresql_psycopg2'), 'NAME': env('DB_NAME', default='coredb'), 'USER': env('DB_USER', default='core'), 'PASSWORD': env('DB_PASSWORD', default='12345678'), 'HOST': env('DB_HOST', default='localhost'), 'PORT': env('DB_PORT', default='5432'), } } CORS_ALLOWED_ORIGINS = env("CORS_ALLOWED_ORIGINS").split(" ") ``` ### Testing Testing in an application is the first assurance of maintainability and reliability of our Django server. We'll be implementing testing to make sure everything is green before pushing for deployment. Let's write tests for our login and refresh endpoints. We'll also add one test to the `UserViewSet`. First of all, create a file named `test_runner.py` in `CoreRoot` directory. The goal here is to rewrite the `DiscoverRunner`, to load our custom fixtures in the test database. ```python # ./CoreRoot/test_runner.py from importlib import import_module from django.conf import settings from django.db import connections from django.test.runner import DiscoverRunner class CoreTestRunner(DiscoverRunner): def setup_test_environment(self, **kwargs): """We set the TESTING setting to True. 
By default, it's on False.""" super().setup_test_environment(**kwargs) settings.TESTING = True def setup_databases(self, **kwargs): """We set the database""" r = super().setup_databases(**kwargs) self.load_fixtures() return r @classmethod def load_fixtures(cls): try: module = import_module(f"core.fixtures") getattr(module, "run_fixtures")() except ImportError: return ``` Once it's done, we can add the TESTING configurations in the `settings.py` file. ```python # CoreRoot/settings.py ... TESTING = False TEST_RUNNER = "CoreRoot.test_runner.CoreTestRunner" ``` Now, we can start writing our tests. Let's start with the authentication tests. First of all, let's add the URLs and the data we'll be using. ```python # core/auth/tests.py from django.urls import reverse from rest_framework.test import APITestCase from rest_framework import status class AuthenticationTest(APITestCase): base_url_login = reverse("core:auth-login-list") base_url_refresh = reverse("core:auth-refresh-list") data_register = {"username": "test", "password": "pass", "email": "test@appseed.us"} data_login = { "email": "testuser@yopmail.com", "password": "12345678", } ``` Great! We can add a test for login now. ```python # core/auth/tests.py ... def test_login(self): response = self.client.post(f"{self.base_url_login}", data=self.data_login) self.assertEqual(response.status_code, status.HTTP_200_OK) ``` To run the tests, open the terminal and enter the following command. ``` python manage.py test ``` You should see a similar output: ``` Creating test database for alias 'default'... System check identified no issues (0 silenced). . ---------------------------------------------------------------------- Ran 1 test in 0.287s OK Destroying test database for alias 'default'... ``` Let's add a test for the refresh endpoint. ```python # core/auth/tests.py ... 
def test_refresh(self): # Login response = self.client.post(f"{self.base_url_login}", data=self.data_login) self.assertEqual(response.status_code, status.HTTP_200_OK) response_data = response.json() access_token = response_data.get('access') refresh_token = response_data.get('refresh') # Refreshing the token response = self.client.post(f"{self.base_url_refresh}", data={ "refresh": refresh_token }) self.assertEqual(response.status_code, status.HTTP_200_OK) response_data = response.json() self.assertNotEqual(access_token, response_data.get('access')) ``` What we are doing here is pretty straightforward: - Logging in to retrieve the access and refresh tokens - Making a request with the refresh token to gain a new access token - Comparing the old access token and the newly obtained access token to make sure they are not equal. Now let's move to the [Docker](https://www.docker.com/) configuration. ### Dockerizing our app [Docker](https://www.docker.com/) is an open platform for developing, shipping, and running applications inside containers. Why use Docker? It helps you separate your applications from your infrastructure and helps in delivering code faster. If it's your first time working with Docker, I highly recommend you go through a quick tutorial and read some documentation about it. Here are some great resources that helped me: - [Docker Tutorial](https://www.youtube.com/watch?v=eN_O4zd4D9o&list=PLPoSdR46FgI5wOJuzcPQCNqS37t39zKkg) - [Docker curriculum](https://docker-curriculum.com/) #### Dockerfile The `Dockerfile` represents a text document containing all the commands one could call on the command line to create an image. 
Add a Dockerfile to the project root: ``` # pull official base image FROM python:3.9-alpine # set work directory WORKDIR /app # set environment variables ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # install psycopg2 dependencies RUN apk update \ && apk add postgresql-dev gcc python3-dev musl-dev # install python dependencies COPY requirements.txt /app/requirements.txt RUN pip install --upgrade pip RUN pip install --no-cache-dir -r requirements.txt # copy project COPY . . ``` Here, we started with an **Alpine-based Docker Image for Python**. It's a lightweight Linux distribution designed for security and resource efficiency. After that, we set a working directory followed by two environment variables: 1 - `PYTHONDONTWRITEBYTECODE` to prevent Python from writing `.pyc` files to disc 2 - `PYTHONUNBUFFERED` to prevent Python from buffering `stdout` and `stderr` After that, we perform operations like: - Setting up environment variables - Installing the psycopg2 build dependencies - Copying the `requirements.txt` file to our app path, upgrading pip, and installing the python packages to run our application - And lastly, copying the entire project Also, let's add a `.dockerignore` file. ``` env venv .dockerignore Dockerfile ``` #### Docker Compose [Docker Compose](https://docs.docker.com/compose/) is a great tool (<3). You can use it to define and run multi-container Docker applications. What do we need? Well, just a YAML file containing all the configuration of our application's services. Then, with the `docker-compose` command, we can create and start all those services. Here, the `docker-compose.dev.yml` file will contain three services that make our app: nginx, web, and db. This file will be used for development. 
As you guessed : ```yaml version: '3.7' services: nginx: container_name: core_web restart: on-failure image: nginx:stable volumes: - ./nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf - static_volume:/app/static ports: - "80:80" depends_on: - web web: container_name: core_app build: . restart: always env_file: .env ports: - "5000:5000" command: > sh -c " python manage.py migrate && gunicorn CoreRoot.wsgi:application --bind 0.0.0.0:5000" volumes: - .:/app - static_volume:/app/static depends_on: - db db: container_name: core_db image: postgres:12.0-alpine env_file: .env volumes: - postgres_data:/var/lib/postgresql/data/ volumes: static_volume: postgres_data: ``` - `nginx`: [NGINX](https://www.nginx.com/) is an open-source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. - `web`: We'll run and serve the endpoint of the Django application through Gunicorn. - `db`: As you guessed, this service is related to our PostgreSQL database. And the next step, let's create the NGINX configuration file to proxy requests to our backend application. In the root directory, create a `nginx` directory and create a `nginx.dev.conf` file. ``` upstream webapp { server core_app:5000; } server { listen 80; server_name localhost; location / { proxy_pass http://webapp; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; } location /static/ { alias /app/static/; } } ``` Let's add `gunicorn` and some configurations before building our image. ``` pip install gunicorn ``` And add it as a requirement as well in the `requirements.txt`. Here's what my `requirements.txt` file looks like : ```txt Django==3.2.4 djangorestframework==3.12.4 djangorestframework-simplejwt==4.7.1 django-cors-headers==3.7.0 psycopg2==2.9.1 django-environ==0.4.5 gunicorn==20.1.0 ``` And the last thing, let's add `STATIC_ROOT` in the `settings.py` file. #### Docker Build The setup is completed. 
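The text mentions adding `STATIC_ROOT` to `settings.py` without showing the line. A typical form — an assumption on my part, chosen so that it lines up with the `static_volume:/app/static` mount in the compose file and the `alias /app/static/;` block in the nginx config — would be:

```python
# CoreRoot/settings.py (sketch — the actual repo may differ slightly)
from pathlib import Path

# BASE_DIR as generated by django-admin startproject
BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = "/static/"
# nginx serves files from /app/static, so collected static files
# must land in <project root>/static inside the container.
STATIC_ROOT = BASE_DIR / "static"
```

Note that nginx will only find files there once `collectstatic` has been run, so you may want to add `python manage.py collectstatic --noinput` to the container start command.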
Let's build our containers and test if everything works locally. ``` docker-compose -f docker-compose.dev.yml up -d --build ``` Once it's done, hit `localhost/api/auth/login/` to see if your application is working. You should get a similar page. ![Page GET Not allowed](https://cdn.hashnode.com/res/hashnode/image/upload/v1630022210893/4hCPqR6A4.png) Great! Our Django application is successfully running inside a container. Let's move to GitHub Actions to run tests every time there is a push on the `main` branch. ## Github Actions (Testing) [GitHub Actions](https://github.com/features/actions) are one of the greatest features of GitHub. They help you build, test, or deploy your application, and more. Here, we'll create a YAML file named `django.yml` to run some Django tests. In the project root, create a directory named `.github`. Inside that directory, create another directory named `workflows` and create the `django.yml` file. ```yaml name: Django CI on: push: branches: [ main ] pull_request: branches: [ main ] jobs: test: runs-on: ubuntu-latest strategy: max-parallel: 4 matrix: python-version: [3.9] services: postgres: image: postgres:12 env: POSTGRES_USER: core POSTGRES_PASSWORD: 12345678 POSTGRES_DB: coredb ports: - 5432:5432 options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 steps: - uses: actions/checkout@v2 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v2 with: python-version: ${{ matrix.python-version }} - name: psycopg2 prerequisites run: sudo apt-get install python-dev libpq-dev - name: Install Dependencies run: | python -m pip install --upgrade pip pip install -r requirements.txt - name: Run Tests run: | python manage.py test ``` Basically, what we are doing here is setting rules for the [GitHub action workflow](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions), installing dependencies, and running the tests. 
- We make sure that this workflow is triggered only when there is a push or pull_request on the main branch. - We choose `ubuntu-latest` as the OS and specify the Python version on which this workflow will run. - Next, we create a Postgres service; this database will be used to run our tests. - After that, we install the python dependencies and just run the tests. If you push the code to your repository, you'll see something similar when you go to your repository page. ![Django Actions](https://cdn.hashnode.com/res/hashnode/image/upload/v1630024016358/wxAUXlykO.png) After a moment, the yellow colors will turn to green, meaning that the checks have successfully completed. ## Setting up the AWS server I'll be using a [Lightsail server](https://aws.amazon.com/lightsail/) here. Note that these configurations can work with any VPS provider. If you want to set up a Lightsail instance, refer to the AWS [documentation](https://aws.amazon.com/lightsail/). Personally, my VPS is running on Ubuntu 20.04.3 LTS. Also, you'll need [Docker](https://docs.docker.com/engine/install/ubuntu/) and [docker-compose](https://docs.docker.com/compose/install/) installed on the machine. After that, if you want to link your server to a domain name, make sure to add it to your DNS configuration panel. ![Domain name configuration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9k3bz4mq7hya3a19r7b.png) Once you are done, we can start working on the deployment process. ### Docker build script To automate things here, we'll write a bash script to pull changes from the repo and also build the docker image and run the containers. We'll also be checking if there are any incoming changes before pulling and re-building the containers again. 
```bash #!/usr/bin/env bash TARGET='main' cd ~/app || exit ACTION='\033[1;90m' FINISHED='\033[1;32m' NOCOLOR='\033[0m' # Checking if we are on the main branch echo -e ${ACTION}Checking Git repo BRANCH=$(git rev-parse --abbrev-ref HEAD) if [ "$BRANCH" != ${TARGET} ] then exit 0 fi # Checking if the repository is up to date. git fetch HEADHASH=$(git rev-parse HEAD) UPSTREAMHASH=$(git rev-parse ${TARGET}@{upstream}) if [ "$HEADHASH" == "$UPSTREAMHASH" ] then echo -e "${FINISHED}"Current branch is up to date with origin/${TARGET}."${NOCOLOR}" exit 0 fi # If that's not the case, we pull the latest changes and we build a new image git pull origin main; # Docker docker-compose up -d --build exit 0; ``` Good! Log in to your server using SSH. We'll be creating some new directories: one for the repo and another one for our scripts. ``` mkdir app .scripts cd .scripts vim docker-deploy.sh ``` Just paste the content of the preceding script and modify it if necessary. ``` cd ~/app git clone <your_repository> . ``` Don't forget to add the `.`. Using this, git will simply clone the content of the repository into the current directory. Great! Now we need to write the default `docker-compose.yml` file which will be run on this server. We'll be adding an SSL certificate, by the way, so we need to create another `nginx.conf` file. Here's the `docker-compose.yml` file. ```yaml version: '3.7' services: nginx: container_name: core_web restart: on-failure image: jonasal/nginx-certbot:latest env_file: - .env.nginx volumes: - nginx_secrets:/etc/letsencrypt - ./nginx/user_conf.d:/etc/nginx/user_conf.d ports: - "80:80" - "443:443" depends_on: - web web: container_name: core_app build: . 
restart: always env_file: .env ports: - "5000:5000" command: > sh -c " python manage.py migrate && gunicorn CoreRoot.wsgi:application --bind 0.0.0.0:5000" volumes: - .:/app - static_volume:/app/static depends_on: - db db: container_name: core_db image: postgres:12.0-alpine env_file: .env volumes: - postgres_data:/var/lib/postgresql/data/ volumes: static_volume: postgres_data: nginx_secrets: ``` If you noticed, we've changed the `nginx` service. Now, we are using the `docker-nginx-certbot` image. It'll automatically create and renew SSL certificates using the [Let's Encrypt](https://letsencrypt.org/) free CA (Certificate authority) and its client `certbot`. Create a new directory `user_conf.d` inside the `nginx` directory and create a new file `nginx.conf`. ``` upstream webapp { server core_app:5000; } server { listen 443 default_server reuseport; listen [::]:443 ssl default_server reuseport; server_name dockerawsdjango.koladev.xyz; server_tokens off; client_max_body_size 20M; ssl_certificate /etc/letsencrypt/live/dockerawsdjango.koladev.xyz/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/dockerawsdjango.koladev.xyz/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/dockerawsdjango.koladev.xyz/chain.pem; ssl_dhparam /etc/letsencrypt/dhparams/dhparam.pem; location / { proxy_pass http://webapp; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; } location /static/ { alias /app/static/; } } ``` **Make sure to replace `dockerawsdjango.koladev.xyz` with your own domain name...** And no troubles! I'll explain what I've done. ``` server { listen 443 default_server reuseport; listen [::]:443 ssl default_server reuseport; server_name dockerawsdjango.koladev.xyz; server_tokens off; client_max_body_size 20M; ``` So as usual, we are listening on port `443` for **HTTPS**. We've added a `server_name` which is the domain name. We set the `server_tokens` to off to not show the server version on error pages. 
And the last thing, we set the request size to a **max of 20MB**. It means that requests larger than 20MB will result in errors with **HTTP 413** (Request Entity Too Large). Now, let's write the job for deployment in the GitHub Actions workflow. ```yaml ... deploy: name: Deploying needs: [test] runs-on: ubuntu-latest steps: - name: Deploying Application uses: appleboy/ssh-action@master with: host: ${{ secrets.SSH_AWS_SERVER_IP }} username: ${{ secrets.SSH_SERVER_USER }} key: ${{ secrets.SSH_PRIVATE_KEY }} passphrase: ${{ secrets.SSH_PASSPHRASE }} script: | cd ~/.scripts ./docker-deploy.sh ``` Notice the usage of GitHub Secrets here. It allows the storage of sensitive information in your repository. Check this [documentation](https://docs.github.com/en/actions/reference/encrypted-secrets) for more information. We are also using a GitHub action here that requires the name of the host, the username, the key, and the passphrase. You can also use this action with a password but it'll require some configuration. Feel free to check the [documentation](https://github.com/appleboy/ssh-action#setting-up-a-ssh-key) of this action for more detail. Also, notice the `needs: [test]` line. It helps us make sure that the preceding job is successful before deploying the new version of the app. Once it's done, log in to your server via SSH and create a `.env` file. ``` cd app/ vim .env # or nano or whatever ``` And finally, create a `.env.nginx` file. This will contain the required configurations to create an SSL certificate. ```bash # Required CERTBOT_EMAIL= # Optional (Defaults) STAGING=1 DHPARAM_SIZE=2048 RSA_KEY_SIZE=2048 ELLIPTIC_CURVE=secp256r1 USE_ECDSA=0 RENEWAL_INTERVAL=8d ``` Add your email address. Notice here that `STAGING` is set to 1. We will test the configuration first with **Let's Encrypt**'s staging environment! It is important not to set `STAGING=0` before you are 100% sure that your configuration is correct. 
This is because there is a limited number of retries to issue the certificate and you don’t want to wait till they are reset (once a week). Declare the environment variables your project will need. And we're nearly done. :) Make a push to the repository and just wait for the actions to pass successfully. ![Successful Deployment](https://cdn.hashnode.com/res/hashnode/image/upload/v1630170291071/0t26MZ9nt.png) And voilà. I can check https://dockerawsdjango.koladev.xyz/ and here's the result. ![HTTPS expired](https://cdn.hashnode.com/res/hashnode/image/upload/v1630280545520/4f2ttZjfw.png) It looks like our configuration is clean! We can issue a production-ready certificate now. On your server, stop the containers. ``` docker-compose down ``` Edit your `.env.nginx` file and set `STAGING=0`. Then, start the containers again. ``` sudo docker-compose up -d --build ``` Let's refresh the page. ![HTTPS Secure](https://cdn.hashnode.com/res/hashnode/image/upload/v1630281017853/jEnMx60C-.png) And it's working like a charm! :) ## Conclusion In this article, we've learned how to use GitHub Actions to deploy a dockerized Django application on an AWS Lightsail server. Note that you can use these steps on any VPS. And as every article can be made better, your suggestions and questions are welcome in the comment section. 😉 Check the code of this tutorial [here](https://github.com/koladev32/django-aws-docker-github-actions).
koladev
809,192
Explain what is vue , Like Im five
What is Vue?
0
2021-08-31T17:07:36
https://dev.to/pandademic/what-is-vuejs-2h6p
What is Vue?
pandademic
809,292
How to set up CodeBuild test reports in CDK Pipelines (C#)
I'm so happy to get into writing again - we’ve had a few challenging months: we had to self-isolate...
0
2021-09-01T11:52:54
https://oxiehorlock.com/2021/08/31/aws-cdk-adventure-part-3-using-codebuild-reports-in-cdk-pipelines-c/
aws, cdk, devops
I'm so happy to get into writing again - we’ve had a few challenging months: we had to self-isolate several times, the whole family was ill with a stomach bug, and our son is going through the terrible twos. So blogging, talks and working on professional development had to be put on the backburner. I finally had some time to finish writing this blog post about CDK Pipelines I had been working on probably since the beginning of the year. I had been trying to figure out how to make CodeBuild test reports work with CDK Pipelines. Last week when I got back to this and started working on it again, I saw that the API that was used in Developer Preview has been updated (more information on it [here](https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/pipelines/ORIGINAL_API.md)). And now it looks like it is easier to plug in the reports to be used with this high level construct. While the old API is still in use, I will focus on the new API. The purpose of this blog post is to demonstrate the set-up of CodeBuild test reports in CDK Pipelines for C#. I have written a simple .NET Core application which returns the day of the week when you pass in a date in the query string. 
There are also a couple of XUnit tests: ``` public class UnitTests { [Fact] public void DateInPast_ReturnsCorrectResult() { var controller = new HomeController(); var date = new DateTime(1983, 2, 3); var expected = $"{String.Format("{0:d}", date)} was Thursday"; var actual = controller.Get(date) as OkObjectResult; Assert.Equal(expected, actual.Value); } [Fact] public void DateInFuture_ReturnsCorrectResult() { var controller = new HomeController(); var date = new DateTime(2033, 12, 9); var expected = $"{String.Format("{0:d}", date)} will be Friday"; var actual = controller.Get(date) as OkObjectResult; Assert.Equal(expected, actual.Value); } } ``` The file tree looks like this: ![file tree](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nn4gnjtmrxmgwoxyuol0.JPG) ![file tree](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k32fvyz5h6nl5rllm94u.JPG) For the task of creating CodeBuild test reports only without actually deploying the app, we will only work with *CdkPipelinesPipelineStack.cs*. In my case this was the file created automatically on `cdk init`, and it will contain the main pipeline. Firstly, before we build the pipeline, we need to create a connection to our Github repo and get its ARN. I wrote a post about it a while back – [AWS CDK Adventure Part 2: CDK Pipelines and GitHub fun](https://oxiehorlock.com/2021/03/15/cdk-pipelines-and-github-fun/) ``` namespace CdkPipelines { public class CdkPipelinesPipelineStack : Stack { internal CdkPipelinesPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props) { var connectionArn = "arn:aws:codestar-connections:eu-west-1:01234567890:connection/12ae43b8-923e-4a01-ba4e-274454669859"; } } } ``` We then create a report group: ``` var reportGroup = new ReportGroup(this, "MyReports", new ReportGroupProps { ReportGroupName = "MyReports" }); ``` After that, we use the *CodePipeline* construct in the *Amazon.CDK.Pipelines* namespace to create the pipeline. 
If we didn’t want to have any CodeBuild reports, we would set up the pipeline like so: ``` var pipeline = new Amazon.CDK.Pipelines.CodePipeline(this, "WhatDayOfWeekPipeline", new CodePipelineProps { PipelineName = "WhatDayOfWeekPipeline", SelfMutation = false, Synth = new ShellStep("synth", new ShellStepProps() { Input = CodePipelineSource.Connection("OksanaH/CDKPipelines", "main", new ConnectionSourceOptions() { ConnectionArn = connectionArn }), InstallCommands = new string[] { "npm install -g aws-cdk" }, Commands = new string[] { "cd App", "dotnet restore WhatDayOfWeekTests/WhatDayOfWeekTests.csproj", "dotnet test -c release WhatDayOfWeekTests/WhatDayOfWeekTests.csproj --logger trx --results-directory ./testresults", "cd ..", "cdk synth" } }) }); ``` One of the *CodePipelineProps* is *SelfMutation*: when set to false, it’s quite handy when doing development work – you can just run `cdk deploy` and your local changes to the pipeline will be deployed bypassing the GitHub repo. *Synth* property is used to set up the pipeline to pull from the GitHub repo, and also run the commands needed to produce the cloud assembly. In order to set up the reports, we need to customize the CodeBuild project, and it can be done by using *CodeBuildStep* class instead of *ShellStep*. *CodeBuildStepProps* class, in turn, has a *PartialBuildSpec* property, which we can use to define the reports. The reports part of a *buildspec.yml* file usually looks like this: ``` version: 0.2 phases: ... 
reports: XUnitTestResults: file-format: VisualStudioTrx files: - '**/*' base-directory: './testresults' ``` In CDK for C# the value of *PartialBuildSpec* has to be created using *Dictionary<string, object>*, and the reports bit translated to CDK is below: ``` var reports = new Dictionary<string, object>() { { "reports", new Dictionary<string, object>() { { reportGroup.ReportGroupArn, new Dictionary<string,object>() { { "file-format", "VisualStudioTrx" }, { "files", "**/*" }, { "base-directory", "App/testresults" } } } } } }; ``` Another thing that needs to be created to be able to work with CodeBuild test reports is a policy, otherwise you might see an error like this when you try to deploy the stack: ![auth error](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wle1t074486fd5qt1tnh.png) The policy allows several report-related actions on the report group we have created: ``` var policyProps = new PolicyStatementProps() { Actions = new string[] { "codebuild:CreateReportGroup", "codebuild:CreateReport", "codebuild:UpdateReport", "codebuild:BatchPutTestCases", "codebuild:BatchPutCodeCoverages" }, Effect = Effect.ALLOW, Resources = new string[] { reportGroup.ReportGroupArn } }; ``` Next, we can define necessary *CodeBuildStepProps* to set up reports: ``` var step = new CodeBuildStep("Synth", new CodeBuildStepProps { Input = CodePipelineSource.Connection("OksanaH/CDKPipelines", "main", new ConnectionSourceOptions() { ConnectionArn = connectionArn }), PrimaryOutputDirectory = "cdk.out", InstallCommands = new string[] { "npm install -g aws-cdk" }, Commands = new string[] { "cd App", "dotnet restore WhatDayOfWeekTests/WhatDayOfWeekTests.csproj", "dotnet test -c release WhatDayOfWeekTests/WhatDayOfWeekTests.csproj --logger trx --results-directory ./testresults", "cd ..", "cdk synth" }, PartialBuildSpec = BuildSpec.FromObject(reports), RolePolicyStatements = new PolicyStatement[] { new PolicyStatement(policyProps) } }); ``` Now, what is left to do is to use the 
*CodeBuildStep* as the value of the *Synth* property: ``` var pipeline = new Amazon.CDK.Pipelines.CodePipeline(this, "WhatDayOfWeekPipeline", new CodePipelineProps { PipelineName = "WhatDayOfWeekPipeline", Synth = step }); ``` After that we can commit the changes, run `cdk deploy` and check the CodeBuild test report in the console: ![codebuild test reports](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmx98edcm5tsm3ar4ke6.JPG) Beautiful! Useful links: * [AWS Documentation on CDK Pipelines](https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.html) * [CDK Workshop](https://cdkworkshop.com/40-dotnet/70-advanced-topics/100-pipelines.html) * [CDK Pipelines in GitHub](https://github.com/aws/aws-cdk/tree/master/packages/%40aws-cdk/pipelines)
oksanah
809,457
Videogame Text Datasets Release
Background In 2016 I released LibraryofCodexes, a website that aimed to gather videogame...
0
2021-08-31T20:29:37
https://dev.to/davis24/videogame-text-datasets-release-1mb7
datasets, videogames
## Background In 2016 I released [LibraryofCodexes](https://libraryofcodexes.com), a website that aimed to gather videogame text into one uniform place (think in-game notes, books, letters, audio recordings, etc.). This was because I found that I was too engaged in finishing a quest or killing a monster to take the time to read it, and while wikis existed, they can sometimes be tedious to navigate. Ultimately the website has gone through a few iterations since 2016. Most of the original design was stripped away in favor of shifting to an eBook repository, and the database that holds each individual entry is now private. However, I've recently started reading through a few academic papers on Natural Language Processing applied to videogames (it's a rather small domain). I realized that there is a lack of easily accessible text, and I’ve been sitting on a data set for the past few years that just needed to be formatted and released. ## Datasets I've gone ahead and released the full data set in `json` format to [github](https://github.com/Davis24/video-game-text-dataset). This repository, at the time of release, contains a slew of different game series (see full list below). Each videogame has its own README which details what data has been collected, what quirks to expect, and the degree of sanitization. ### Videogame Series List * Assassin's Creed * Baldur's Gate * Battlefield * Crysis * Dead Space * Destiny * Deus Ex * Diablo * Doom * Dragon Age * Dying Light * Fable * Fallout * Gears of War * Horizon Zero Dawn * Kingdoms of Amalur * Mass Effect * Metroid Prime * Middle-Earth * Nier * Red Dead Redemption * Resident Evil * Star Wars: The Old Republic * System Shock * The Division * The Elder Scrolls * The Last of Us * The Witcher * Tomb Raider * Watch Dogs * World of Warcraft Hopefully this can help make someone's research just a little bit easier.
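Since the repository ships plain `json` files, pulling a game's text into Python takes only a few lines. Here is a minimal sketch — the file name, the `text` field, and the list-of-objects layout are assumptions for illustration, so check each game's README for the actual schema:

```python
import json

def load_entries(path):
    """Load one game's JSON dump and return its list of entries."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def total_words(entries, text_field="text"):
    """Rough word count over a corpus -- a quick sanity check before any NLP work."""
    return sum(len(entry[text_field].split()) for entry in entries)
```

For example, `total_words(load_entries("mass_effect.json"))` would give a quick sense of how much text a game contributes (again, the file name here is illustrative).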
I will continue to update the repository in the future as I update [LibraryofCodexes](https://libraryofcodexes.com) with new games.
davis24
913,530
Moms, you're carrying a lot of plates
I see you moms. You’re carrying a lot of plates. Your children. Your work. Your family. Your life....
0
2021-11-30T21:02:03
https://dev.to/bekahhw/moms-youre-carrying-a-lot-of-plates-20eo
career, moms
--- title: Moms, you're carrying a lot of plates published: true date: 2021-11-30 00:00:00 UTC tags: career, moms canonical_url: --- I see you moms. You’re carrying a lot of plates. Your children. Your work. Your family. Your life. Everything. You are carrying more plates than anyone should expect to carry. I know it’s _hard_. I know it feels like you aren’t doing a good job at any of those things. But the truth is you’re amazing. You’re carrying those plates and they are _heavy_. And you’re still pushing forward. You’re tired. I know. It’s exhausting. You will get stronger. But you also have to learn how to hand off some of those plates. You have to know when you’ve tried too much weight and when adding one more will be the one that will strain you too much. As a mother, your identity shifts. You have another human or humans to care for. Your understanding of who you are will change. Your priorities will change. Your past self is a part of who you are, but it’s not _who_ you are anymore. And you’re awesome. And now I’m going to do my favorite thing and mix metaphors: You have to learn how to weed your garden. The things that don’t enrich the soil? Gone. The things that strangle the fruit? Gone. The things that help the plants but only in small doses? Regulate. Protect your garden. Protect yourself. If you give and you give and you give all day long, you are depleting yourself. There will be nothing left to give at some point. You have to find the things that enrich your life so that you can give when you need to. It is ok to take a walk by yourself. It is ok to ask for alone time. It is ok to go to the gym. It is ok to do something that you–and only you–enjoy doing. Because that’s the thing, you have to care for yourself or eventually no one will be cared for. You can’t be empty and continue to give. When you do that, you move on the negative axis; you accumulate exhaustion, frustration, anxiety, resentment, anger. 
Dear moms out there, I encourage you to find things that enrich your lives, to eliminate the things and people who deplete you, and to communicate your frustrations, your emptiness, loneliness, anxieties, to those close to you. To ask others to carry some of your plates. To find communities who can support you. To let yourself _feel_ your feelings. I don’t know if it’s ever _not_ hard to be a mom. But I do know that when we allow _our_ needs to go unmet, we’re heading down a dangerous path. Prioritize. Minimize. Optimize. Boundaries–ok, I just wanted to make that one fit. The point is, don’t try to do everything, master the art of “no,” and let people know when they’ve asked too much of you. Don’t let them minimize your feelings or persuade you that your feelings are wrong. Let that be your red flag. I’ve got you, moms. Make sure you’re enriching yourself too.
bekahhw
809,558
Learn IoT from scratch #5- C/C++ basics for embedded systems
Introduction This is the #5 post of the IoT series that I am writing, in this article I...
14,072
2021-09-08T19:43:18
https://dev.to/josethz00/learn-iot-from-scratch-5-c-c-basics-for-embedded-systems-4oek
iot, tutorial, beginners, computerscience
## Introduction This is the #5 post of the IoT series that I am writing, in this article I will talk about primitive data types in C/C++, native libraries, data structures, memory allocation. It is also very important to know what are the main compilers for each of these languages. To be a good embedded systems engineer you don't need to be **THE BEST** programmer in these languages, but you need to know their details and how they work behind the scenes. In this post I will not teach you everything about these languages, I will just give you an introduction, and you should seek to delve into these technologies by reading books, attending classes or anything like that. &nbsp; ## History I like to know about the history of the technologies that I use at work, at college or in my own studies and researches. I think that histories are very good for us to turn off the **zeros and ones mode**, and chill a little bit. If you don't want to know the history of C and C++ you can just jump this section of the article. ### C The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language, it was released in the early 70s. C was developed to make the Unix OS and its tools development easier, the creator was Dennis Ritchie together with Bell Labs, and with great contributions from Ken Thompson (the creator of Unix). ![Dennis Ritchie and Ken Thompson](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3fpogdkcnawdx7uifc9v.jpg) Initially Dennis started to improve the B language, but these improvements resulted in a completely new language, this language was C, and believe me, the C language was born from a programming language called B :laughing:. C language was first released in 1972, its preprocessor was introduced in 1973 and the language was first standardized at 1989. ### C++ In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes", the predecessor to C++. 
The motivation for creating a new language originated from Stroustrup's experience in programming for his PhD thesis. Initially it would be just a superset of the C language, but Stroustrup wasn't making much progress, so he decided to make it a completely new language. In 1982, Stroustrup was also working at the legendary "Bell Labs", and he started to develop a successor to C with Classes, which he named "C++" (++ being the increment operator in C) after going through several other names. New features were added, including virtual functions, function name and operator overloading, references, constants, and improved type checking. ![Bjarne Stroustrup seated at his desk](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndym0phd7totu5murjs0.jpg) In 1984, Stroustrup implemented the first C++ native libraries. And in 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard. The first commercial implementation of C++ was released in October of the same year. &nbsp; ## Primitive data types ### C #### int Basic type, it stores integer numbers and has a size of at least 2 bytes. Integer values can have modifiers such as: signed, unsigned (only values >= 0) and long (integer with a larger range). ```c int a = 10; // signed unsigned int b = 5; // unsigned long int c = 48492909302020; // long ``` #### float Basic type, it stores floating-point numbers. The float data type is usually referred to as a single-precision floating-point type. ```c float d = 11.987; ``` #### double Basic type, it also stores floating-point numbers. The double data type is usually referred to as a double-precision floating-point type. So a **double** variable is more precise than a **float** one, and it could be more appropriate when dealing with algebra, complex math, etc.
```c double e = 11.987; ``` #### char Smallest and simplest data type in C, the **char** type stores a single character and has a size of 1 byte. ```c char yes = 'y'; ``` #### string (char * or char[]) The C language does not have a string data type, so a string is a sequence of characters terminated with a null character \0. You can use char arrays or char pointers: a char array owns modifiable storage, but the array itself cannot be reassigned to a new string, while a char pointer can be reassigned, but modifying the string literal it points to is undefined behavior. For copying, concatenating, or otherwise manipulating strings you typically use the **string.h** native library. There are many ways to declare a string in C, so let's see some code examples: ```c char a[] = "abcd"; char b[50] = "abcd"; char c[] = {'a', 'b', 'c', 'd', '\0'}; char d[5] = {'a', 'b', 'c', 'd', '\0'}; char * e = "abcd"; // string declaration using pointers char f[100]; // the string was declared, but wasn't initialized ``` #### bool - ??? :thinking: The C language doesn't have "bool" as a primitive data type, so when we want to deal with boolean values, expressions and statements in C, we have to use **integer** values. So let's suppose that we have a statement: if it evaluates to **zero**, the statement is false; if it evaluates to a nonzero value (comparisons yield **one**), the statement is true. Now, to really understand all this boolean stuff, let's see how this would look in C: ```c int f = 10; int g = 20; printf("%d \n", f > g); // 0 -> False printf("%d \n", f < g); // 1 -> True ``` If you are familiar with higher-level programming languages that have a "bool" type, you may have difficulty using 0 and 1 for boolean operations/values. So you can use the **stdbool.h** native library, available since C99, which is the best way to use bool in C.
```c #include <stdio.h> #include <stdbool.h> int main(void) { bool keep_going = true; // Could also be `bool keep_going = 1;` while(keep_going) { printf("This will run as long as keep_going is true.\n"); keep_going = false; // Could also be `keep_going = 0;` } printf("Stopping!\n"); return 0; } ``` #### void Void is considered a data type (for organizational purposes), but it is basically a keyword to use as a placeholder where you would put a data type, to represent "no data". According to the standard, **void** is an incomplete type: you cannot declare a variable with type **void**, but you can use **void** for declaring pointers (`void *` or `void **`) and functions. ```c void myFunc(int a, int b, int c) { c = a + b; } ``` ### C++ #### string The C++ language also doesn't have string as a primitive data type. But you can include the **string** native library, which provides the string type and some functions to manipulate strings; the **string** library of C++ is very similar to the **string.h** library of C. ```cpp #include <iostream> #include <string> using namespace std; int main (void) { string str1 = "Hello"; string str2 = "World"; string str3; // copy str1 into str3 str3 = str1; cout << "str3 : " << str3 << endl; // concatenates str1 and str2 str3 = str1 + str2; cout << "str1 + str2 : " << str3 << endl; return 0; } ``` #### bool In C++, the data type bool has been introduced to hold a boolean value, true or false. The values true or false have been added as keywords in the C++ language. Unlike the C language, C++ has a native bool type, not a constant, a macro, or an enum.
```cpp #include<iostream> using namespace std; int main (void) { int x1 = 10, x2 = 20; bool b1, b2; b1 = x1 == x2; // false b2 = x1 < x2; // true cout << "b1 is = " << b1 << "\n"; cout << "b2 is = " << b2 << "\n"; return 0; } ``` &nbsp; ## Structs (C/C++) X Classes (C++) Structs and classes are similar: with both you can create models and entities, building a structured type with multiple fields, and these fields can have the same or different data types. The C language only supports structs, whereas the C++ language supports both structs and classes. The only difference between a struct and a class in C++ is the default accessibility of member variables and methods: in a struct they are public by default, while in a class they are private by default; in both, you can explicitly mark members as public, protected or private. Also, classes were designed with the object-oriented paradigm in mind, so with them it is much easier to implement concepts like polymorphism, inheritance, abstraction and dependency injection. It is possible to emulate these concepts with plain C structs, but it is much harder and you might have to write more lines of code. ```cpp struct Foo { int x; }; class Bar { public: int x; }; ``` &nbsp; ## Main compilers for C The C programming language has at least 50 known complete compilers, but this list contains the 3 most popular compilers. 1. GCC 2. CLang 3. TurboC ## Main compilers for C++ The C++ programming language has more than 20 known complete compilers, but this list contains the 3 most popular compilers. 1. GCC (g++) 2. CLang (clang++) 3. Intel C++ Compiler (icc) &nbsp; ## Book references If you don't have previous experience using C/C++, I recommend that you go deeper and look for content from other sources, and books are always great sources of knowledge. 1. **The C Programming Language** - "Known as the bible of C, this classic bestseller introduces the C programming language and illustrates algorithms, data structures, and programming techniques.
This book has 2 authors, one of them is Dennis Ritchie, the creator of the C programming language"...nothing better than learn from who has created it, right? 2. **Head First C** - "Head First C provides a complete learning experience for C and structured imperative programming. With a unique method that goes beyond syntax and how-to manuals, this guide not only teaches you the language, it helps you understand how to be a great programmer. You'll learn key areas such as language basics, pointers and pointer arithmetic, and dynamic memory management." 3. **Expert C Programming: Deep C Secrets** - "This book is for the knowledgeable C programmer, this is a second book that gives the C programmers advanced tips and tricks. This book will help the C programmer reach new heights as a professional. Organized to make it easy for the reader to scan to sections that are relevant to their immediate needs." 4. **The C++ Programming Language** - "The C++ Programming Language, Fourth Edition, delivers meticulous, richly explained, and integrated coverage of the entire language, its facilities, abstraction mechanisms and standard libraries. Throughout, Stroustrup presents concise examples, which have been carefully crafted to clarify both usage and program design." 5. **C++: The Ultimate Beginners Guide to Learn C++ Programming** - "How to set up a C++ development environment- The principles of programming that will get you started- The different operations in C++: binary, arithmetic, relational, etc.- Power of C++: operations, switches, loops and decision making - Getting started: syntax, data types, and variables- How to create custom functions in C++- The best practices for coding- A useful glossary at the end- And more..." &nbsp; ## Course references If you don't like to read books, or if you have difficulties to understand them, you can take courses, there are many free options of courses available. I will recommend free and paid courses that I think that are very good. 1. 
<a href="https://cs50.harvard.edu/x/2021/">CS50 Week 1</a> - In week 1 (2nd week) of CS50, a basic C course is taught, introducing the language. 2. <a href="https://www.youtube.com/watch?v=KJgsSFOSQv0">C Programming Tutorial for beginners: freeCodeCamp</a> - This course will give you a full introduction into all of the core concepts in the C programming language in 4 hours. 3. <a href="https://www.youtube.com/watch?v=vLnPwxZdW4Y">C++ Programming Tutorial for beginners: freeCodeCamp</a> - This course will give you a full introduction into all of the core concepts in the C++ programming language in 4 hours. 4. <a href="https://www.youtube.com/watch?v=GQp1zzTwrIg">C++ FULL COURSE For Beginners (Learn C++ in 10 hours): Code Beauty</a> - This is a full C++ programming course. It consists of many lectures whose goal is to take you from beginner to advanced programming level. &nbsp; ## Conclusion We have reached the end of our article; this was a very long post. Knowing C and C++ is essential to being a good IoT developer, because you will be able to work with high-performance systems and embedded systems, and program sensors. Obviously this post doesn't contain everything you need to know, because it's a very dense subject, but I tried to summarize as best I could, to make the information clear and allow you to go deeper into the subject.
josethz00
809,679
Python Software Bundle
Python Software Bundle We’ve all been there. New Year’s resolutions. A goal to learn something...
0
2021-09-01T00:13:20
https://dev.to/haze/python-software-bundle-1o91
python, codenewbie, aws, cloud
![Python Software Bundle](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ad8v7t6c5il93lah9mm4.jpg) [Python Software Bundle](https://www.humblebundle.com/software/python-2021-software?partner=indiekings) We’ve all been there. New Year’s resolutions. A goal to learn something new. Perhaps a coding friend has recommended you learn programming. Whatever the reason, now’s the time to pick up some programming skills. And courtesy of the new Python software bundle, there’s no better time than now to master a new skill and make your computer even more useful! Pick up the awesome programming potential of Python, with software like Mastering PyCharm (2021 Edition), Object-Oriented Programming (OOP) in Python, and PyCharm Professional Edition - 6 months free. Plus, your purchase will support the Python Software Foundation and Women Who Code!
haze
809,689
Accessing AppSync APIs that require Cognito Login outside of Amplify
The Need You have this great Amplify App using AppSync GraphQL. You eventually find that...
0
2021-09-01T01:23:37
https://www.ibd.com/scalable-deployment/aws/access-appsync-outside-amplify-2/
aws, appsync, graphql, serverless
--- title: Accessing AppSync APIs that require Cognito Login outside of Amplify menu_order: 1 post_status: publish tags: aws, appsync, graphql, serverless published: true post_excerpt: Access your AppSync GraphQL APIs that require Cognito Logins with arbitrary tools outside of Amplify Apps canonical_url: https://www.ibd.com/scalable-deployment/aws/access-appsync-outside-amplify-2/ --- ## The Need You have this great Amplify App using AppSync GraphQL. You eventually find that you need to be able to access that data in your AppSync GraphQL database from tools other than your Amplify App. It's easy if your AppSync API is protected only by an API key. But that isn't great security for your data! One way to protect your AppSync data is to use [Cognito User Pools](https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js/#cognito-user-pools). Amplify makes it pretty transparent if you are using Amplify to build your clients. AppSync lets you do really nice [table and record level access control based on logins and roles](https://docs.aws.amazon.com/appsync/latest/devguide/security-authorization-use-cases.html). What happens if you want to access that data from something other than an Amplify-based client? How do you "log in" and get the JWT credentials you need to access your AppSync APIs? ## Use AWS CLI The most general way is to use the AWS CLI to effectively log in and retrieve the JWT credentials that can then be passed in the headers of any requests you make to your AppSync APIs. Unfortunately it's not as easy as just having your login and password. It also depends on how you configured your Cognito User Pool and its related Client Apps. ### Cognito User Pool Client App You can have multiple Client Apps specified for your Cognito User Pool. I suggest having one dedicated to these external applications. That way you can have custom configuration just for this and not disrupt your main Amplify apps. Also, you can easily turn it off if you need to.
![User Pool Client Apps](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/74z6pgmf1qdyqv9wkllr.png "User Pool Client Apps") In my case I created a new client app `shoppabdbe800b-rob-test2` as a way to test a client app with no `App Client Secret`. This makes it easier to access from the command line as you do not have to generate a Secret Hash (I'll describe how to deal with that below). ![App Client Config with no secret](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hipjcnn0e3q4ronqvgi0.png "App Client Config with no secret") If you want to allow admin level access (i.e. a user with admin permission) you need to check `Enable username password auth for admin APIs for authentication (ALLOW_ADMIN_USER_PASSWORD_AUTH)`. If you want to allow regular users to log in, you must also select `Enable username password based authentication (ALLOW_USER_PASSWORD_AUTH)`. The defaults for the other fields should be ok. Be sure to save your changes. ### Minimal IAM permissions As far as I can tell, these are the minimal IAM permissions to make the aws `cognito-idp` command work for admin and regular users of AppSync (replace the Resource arn with the arn of the user pool[s] you want to control): ```json { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "cognito-idp:AdminInitiateAuth", "cognito-idp:AdminGetUser" ], "Resource": "arn:aws:cognito-idp:us-east-1:XXXXXXXXXXXXX:userpool/us-east-1_XXXXXXXXX" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "cognito-idp:GetUser", "cognito-idp:InitiateAuth" ], "Resource": "*" } ] } ``` ### Get the Credentials with no App Client Secret This example is if you did not set the App Client Secret. You should now be able to get the JWT credentials from the AWS CLI.
This assumes you have [set up your](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) `~/.aws/credentials` file (or whatever is appropriate for your command-line environment) so that you have the permissions to access this service.

* When using `ADMIN_USER_PASSWORD_AUTH`:

```bash
aws cognito-idp admin-initiate-auth --user-pool-id us-east-1_XXXXXXXXXX --auth-flow ADMIN_USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username1,PASSWORD=XXXXXXXXXXXXX > creds.json
```

* When using `USER_PASSWORD_AUTH`:

```bash
aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username2,PASSWORD=XXXXXXXXXXXX > creds.json
```

Of course, replace the `XXXX`'s with the actual values:

* `user-pool-id` - The pool id found at the top of the _User Pool Client Apps_ page
* `client-id` - The `client-id` of the `app client` you are using
* `USERNAME` - The username normally used to log in to your Amplify app
* `PASSWORD` - The password normally used to log in to your Amplify app

The results will be written to `creds.json` (omit the `> creds.json` if you just want to see the results in the terminal).

### Get the Credentials when there is an App Client Secret

This assumes you have an app client that has an `app secret key` set. The main thing here is that you need to generate a `secret hash` to send along with the command. 
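If you want to consume `creds.json` from a script rather than by eye, the tokens sit under `AuthenticationResult` in the CLI's JSON output. Here is a small Python sketch; the file name and key names follow the AWS CLI commands above:

```python
# Sketch: read the tokens out of the creds.json produced by the
# `aws cognito-idp ... initiate-auth` commands above.
import json


def extract_tokens(creds: dict) -> dict:
    """Pull the JWTs out of a Cognito InitiateAuth response."""
    result = creds["AuthenticationResult"]
    return {
        # The IdToken is what goes in the Authorization header for AppSync
        "id_token": result["IdToken"],
        "access_token": result["AccessToken"],
        "refresh_token": result.get("RefreshToken"),
    }


# Usage (after running one of the CLI commands above):
#   with open("creds.json") as f:
#       print(extract_tokens(json.load(f))["id_token"])
```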
You can do that by creating a little Python program to generate it for you when you need it:

```python
#!/usr/bin/env python3
import sys
import base64
import hashlib
import hmac

if len(sys.argv) == 4:
    username, app_client_id, app_client_secret = sys.argv[1:4]
    # The secret hash is the base64-encoded HMAC-SHA256 of username + client id,
    # keyed with the app client secret
    message = bytes(username + app_client_id, 'utf-8')
    key = bytes(app_client_secret, 'utf-8')
    secret_hash = base64.b64encode(
        hmac.new(key, message, digestmod=hashlib.sha256).digest()
    ).decode()
    print("SECRET HASH:", secret_hash)
else:
    print("usage:", sys.argv[0], "<username> <app_client_id> <app_client_secret>")
```

Save the file someplace you can execute it from, like `~/bin/app-client-secret-hash`, and make it executable (`chmod a+x ~/bin/app-client-secret-hash`).

You will need:

* `app-client-id` - The `client-id` of the `app client` you are using
* `app-client-secret` - The secret of the `app client` you are using (it's on the App Client page of the User Pool)
* `USERNAME` - The username normally used to log in to your Amplify app

To use:

```bash
~/bin/app-client-secret-hash <username> <app_client_id> <app_client_secret>
```

Where, of course, you replace the arguments with the actual values. The result is a `secret-hash` that you use in the following command to get the actual JWT credentials:

```bash
aws cognito-idp admin-initiate-auth --user-pool-id us-east-1_XXXXXXXXXX --auth-flow ADMIN_USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username3,PASSWORD='secret password',SECRET_HASH='secret-hash' > creds.json
```

You could do the same thing with `USER_PASSWORD_AUTH` if you need that instead:

```bash
aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=rob+admin,PASSWORD=XXXXXXXXX,SECRET_HASH='secret-hash' > creds.json
```

## Using the Credentials

How you use these credentials depends on what tool you are using and how you are trying to access your AppSync APIs. 
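Whatever the client, the request shape is the same. Here is a minimal Python sketch of assembling the request; the endpoint URL, token, and query are placeholders:

```python
# Sketch: assemble an AppSync GraphQL request that carries the Cognito
# IdToken in the Authorization header. URL and token values are placeholders.
import json


def build_appsync_request(appsync_url, id_token, query, variables=None):
    """Return the (url, headers, body) triple for a POST to an AppSync endpoint."""
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        # AppSync expects the raw IdToken from creds.json here, with no "Bearer" prefix
        "Authorization": id_token,
    }
    body = json.dumps({"query": query, "variables": variables or {}})
    return appsync_url, headers, body
```

You can then send the result with whatever HTTP client you prefer.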
### From some JavaScript

You can just add the `IdToken` from `creds.json` as an `Authorization` header when you build the request:

```javascript
function graphQLFetcher(graphQLParams) {
  const APPSYNC_API_URL = "TYPE_YOUR_APPSYNC_URL";
  const credentialsAppSync = {
    Authorization: "eyJraWQiOiI1dVUwMld...",
  };
  return fetch(APPSYNC_API_URL, {
    method: "post",
    headers: {
      Accept: "application/json",
      "Content-Type": "application/json",
      ...credentialsAppSync,
    },
    body: JSON.stringify(graphQLParams),
    credentials: "omit",
  }).then(function (response) {
    return response.json().catch(function () {
      return response.text();
    });
  });
}
```

If you are using some other GraphQL tool that needs to access your AppSync APIs, the tool should have a way for you to supply the token, and it will add it as an `Authorization` header on its own requests. Do let me know if you have examples of tools that make use of this.

## References

* [Explore AWS AppSync APIs with GraphiQL from your local machine](https://aws.amazon.com/blogs/mobile/appsync-graphiql-local/ "Explore AWS AppSync APIs with GraphiQL from your local machine")
* [How do I troubleshoot "Unable to verify secret hash for client <client-id>" errors from my Amazon Cognito user pools API?](https://aws.amazon.com/premiumsupport/knowledge-center/cognito-unable-to-verify-secret-hash/ "How do I troubleshoot "Unable to verify secret hash for client <client-id>" errors from my Amazon Cognito user pools API?")
rberger
809,700
MyUnisoft - the Node.js adventure
Welcome, traveller 👋 Today I'm here to tell you the story of my adventure at MyUnisoft as a...
0
2021-09-13T06:46:31
https://dev.to/fraxken/myunisoft-l-aventure-node-js-12i3
myunisoft, node, javascript
Welcome, traveller 👋 Today I'm here to tell you the story of my adventure at MyUnisoft as back-end technical lead (API & [Node.js](https://fr.wikipedia.org/wiki/Node.js)). It's also the story of my team, which keeps growing by taking on very talented engineers 😍.

If you are a (chartered) accountant, I'm about to take you through a tale that is probably quite different from what you usually read 📰. But don't worry, I'll do my best to explain my world in plain terms.

## Who am I?

I'm Thomas, I'm 27 years old and I've been coding since the age of ten 🐤. I'm in love with code and I've been starting projects since I was very young.

![gif](https://c.tenor.com/y-Z6sLQbm2kAAAAC/umaru-kawaii.gif)

I'm a Node.js and JavaScript expert, very comfortable with topics like security, monitoring and software architecture. If you're curious about my background 👀, I invite you to check out my [LinkedIn](https://www.linkedin.com/in/thomas-gentilhomme/).

## Chapter 1

Let's dive straight into the first chapter 💃.

### Genesis

I joined MyUnisoft in August 2020 to take care of maintaining and evolving the Node.js back-end 🐢. At that point I was the only developer, and my first concern was obviously to prove myself to [Cyril](https://www.linkedin.com/in/cyril-mandrilly/) (CTO) and [Régis](https://www.linkedin.com/in/r%C3%A9gis-samuel-3a910b18/) (CEO).

I started by working on setting up the [Quickbooks](https://quickbooks.intuit.com/) connector, then very quickly moved on to evolving the [partner API](https://github.com/MyUnisoft/api-partenaires) (which would later also serve as the foundation for firm access). 
![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0bozkoxe62njsyo0lkqh.png)

Writing documentation was obviously one of the big items, to guarantee a better experience for our partners (an experience we will keep improving over time).

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zo2tlqe506ws96o33q3m.png)

These first projects gave me a first exposure to the accounting domain, covering several notions such as journals, the chart of accounts, entries, etc. 😵.

> By the way, I'd like to thank Leon Souvannavong, who has helped me a lot on business topics since I joined (as well as the other developers of the accounting back-end team 💖).

### November 2020

A few months go by and we take on a second developer, a work-study student 👯. Having already solid mentoring experience, I wasn't worried about being able to properly guide a beginner.

So we hired [Nicolas Hallaert](https://www.linkedin.com/in/nicolas-hallaert/), who never stopped surprising me with how quickly he adapted and learned ⚡. He and I worked together on various topics such as MyDataRH, SSO, and generic interfaces that you'll find across our various partner integrations.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7snluks4lhuk2bh74zr.png)

My scope keeps expanding and my confidence grows quickly. In the same period, [Oleh Sych](https://www.linkedin.com/in/oleh-sych-41245116a/) joined the Node.js team (a non-French-speaking developer).

> 📌 The only developer I didn't personally pick. At first I was a bit worried, but I was very quickly impressed by his technical level and his responsiveness to my feedback.

We very quickly agreed that he would work on updating and migrating "legacy" code (written by developers who are no longer with us). 
I try to support him and integrate him as well as possible so that the language barrier never holds him back ✔️. Writing these lines today, I can testify to how far we've come together. We're making progress on several projects (Electronic Document Management, Discussion, and Leasing, among others).

### January 2021

After demonstrating my abilities and earning the trust of management, **I officially take the lead of the Node.js team** 🎉. It's a role that suits me well, and I've always enjoyed this kind of responsibility.

![gif](https://c.tenor.com/DGj-M4ERIo0AAAAC/cat-ready-cool-cat-ready.gif)

I work more and more on authentication-related topics 🔑 and quickly take ownership of them. The rest of my time is dedicated to building a new API connector with [Dext](https://dext.com/fr).

### February 2021

A busy period, since we brought two new experienced developers onto the team.

1. The first is my long-time associate [Alexandre MALAJ](https://www.linkedin.com/in/alexandre-malaj-6062b0a6/), with whom I've been working as a pair for more than a decade now 😲.
2. The second is [Cédric LIONNET](https://www.linkedin.com/in/cedric-lionnet-578845121/), who was recommended to us internally. He's transitioning to Node.js after several years of C++. He's a rigorous engineer and a lover of code quality 💎.

These two arrivals were the starting point of what is today the foundation of the Node.js team. **Alexandre** invested hundreds of hours in building an ORM layer (covering 500+ tables and 2,000+ relations). **Cédric**, for his part, contributed heavily to adding unit tests and abstractions that are now actively used across our HTTP services. 
![carbon (3)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wfhuxnuo01n55en30xu5.png)

Drawing on my more than four years of experience managing remote teams, we quickly worked on putting conventions and an effective communication model in place. It's essential to build a good rapport along with solid habits of spoken communication in order to quickly reach a symbiosis of technical and human skills.

### March 2021

I start working on the integration of a new connector with [EmaSphere](https://www.emasphere.com/fr/). When Nicolas isn't in class, he works on SSO integration with Zendesk (support) and 360Learning (MyAcademy). On the side, he works on Google Sheets (dynamic links).

With Alexandre, we decided to launch a DDD ([Domain Driven Design](https://blog.octo.com/domain-driven-design-des-armes-pour-affronter-la-complexite/)) initiative within MyUnisoft. Bringing quality and rigour into how we discuss and design the software is very important to me. Instilling a better understanding of the business in the technical teams would bring enormous value to our customers.

### April 2021

I'm supporting more and more partners very actively 😎. The connector catalogue keeps growing, which genuinely makes me happy 😇.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tv41itzdhq6v2cvq6r1r.png)

And **many more** integrations are still to come before the end of 2021. We are working right now on a substantial update aimed at delivering a set of missing features (settings, logs, ...).

---

With the team, we took part in [Ludum Dare](https://ldjam.com/) 48, which consists of creating a video game in 72 hours. We built a web game using the Pixi.js engine ([project here](https://github.com/fraxken/yu-gi)). 
A very enriching experience that allowed us to get to know each other better and strengthen our bonds.

### May 2021

The team takes on two more developers:

1. [Tan Karasu](https://www.linkedin.com/in/karasu-tan-12641447/), who joins us for a six-month internship. A developer retraining into software who won me over with his mindset and his commitment.
2. [Mark Malaj](https://www.linkedin.com/in/mark-malaj-99b1b8b7/), Alexandre's cousin. We had already had the opportunity to work together for a year, during which I trained him in Node.js. It's naturally a pleasure for me to collaborate with him again at MyUnisoft.

Alexandre and Mark will work in collaboration with [Jean-Claude FORTIER](https://www.linkedin.com/in/jeanclaudefortier/) on the design and development of MyUnisoft's Internal Management. A project that is therefore in good hands.

Tan, for his part, invested a great deal of time in building new abstractions for communicating with our [Redis](https://redis.io/) database. Incidentally, our projects will use the excellent [ioredis](https://github.com/luin/ioredis#readme) package.

### June 2021

I had the opportunity to work on implementing and integrating the [Factur-X](https://fnfe-mpe.org/factur-x/) format for our partners (currently used in production by [EBP](https://www.ebp.com/)). A good opportunity to play with the new TypeScript 4 types to dynamically convert XML structures into clean JSON types.

![carbon (1)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vq6flhw4u6mhufn3beph.png)

I'm feeling somewhat worn out by the various onboardings. It's a first for me to handle so many of them in so little time (even though it remains an excellent experience). 
![gif](https://c.tenor.com/NcibGDKTKQAAAAAd/status-tired.gif)

It's sometimes hard to juggle my personal velocity, which lets me push critical business topics forward, with investing time in supporting my team (which probably improves velocity in the medium to long term).

### August 2021

The team keeps structuring itself 🔨 in the right direction and we're making good progress on our topics. The period is relatively calm because of the various summer holidays 🌞. We nevertheless take on two more experienced developers:

1. [Quentin Lepateley](https://www.linkedin.com/in/quentin-lepateley/), who has been working on the MyUnisoft frontend for a year and a half. So he's hardly a newcomer, and he joins the team already familiar with its members.
2. [Tony Gorez](https://www.linkedin.com/in/tonygorez/), coming to us straight from [Payfit](https://payfit.com/fr/). I've been working with him for a good year now on open-source projects like [NodeSecure](https://github.com/NodeSecure). It's a real pleasure to be able to work with him on the same team!

Quentin is actively working on our migration to the [Fastify.js](https://www.fastify.io/) framework. The idea is to quickly set up a monorepo using the [npm 7 workspaces](https://docs.npmjs.com/cli/v7/using-npm/workspaces) feature to host the various plugins used by our services.

Tony, for his part, will soon be helping me with partner integrations. In the short term he'll work on stabilizing the Quickbooks connector.

{% youtube 4DxyjZH45Jk %}

## My feelings about the team

There's still a long way to go, that's for sure. We need to get to know each other better and understand everyone's strengths and weaknesses. We need to define what our practices and methodologies will be, while obviously taking into account the context and the teams around us. 
But I'm very enthusiastic. We have a great appetite for our craft and a strong motivation to turn MyUnisoft's ambitions into reality.

![image](https://c.tenor.com/KeWDHG0FMYUAAAAd/redline-james-punkhead.gif)

## Onward to a second chapter?

We keep growing, and many challenges lie ahead of us. **More great integrations are still to come**, and I believe MyUnisoft is one of the best French-speaking Node.js teams 💪. I'm proud to lead a group of engineers I appreciate and respect 🙇. I really can't wait to see what we'll accomplish in the coming months 🚀.

---

🙏 Thank you for reading. This article was deliberately stripped of many technical details (but I hope I still managed to hold a bit of your attention). We'll certainly write more articles in the future to tell you about our innovations and technical progress. 🚀🚀🚀
fraxken
809,715
How Can I Achieve This Type of Response, Passing the Data from DTO
How Can I achieve this type of...
0
2021-09-01T03:41:34
https://dev.to/bulbuldeploy/how-can-i-achive-this-type-of-response-pass-the-data-from-dto-3iap
{% stackoverflow 68996509 %}
bulbuldev
809,901
JavaScript - All The Things, Mostly
JavaScript - All The Things, Mostly Curated list of awesome JS resources ...
0
2021-09-01T04:56:02
https://dev.to/gauravbehere/js-know-it-all-5f3b
javascript, programming, tutorial, webdev
## JavaScript - All The Things, Mostly ### Curated list of awesome JS resources <hr> ## Books - [JavaScript: The Good Parts - Douglas Crockford](https://www.oreilly.com/library/view/javascript-the-good/9780596517748/) - [Programming JavaScript Applications - Eric Elliott](https://www.oreilly.com/library/view/programming-javascript-applications/9781491950289/) - [JavaScript: The Definitive Guide, 7th Edition - David Flanagan](https://www.oreilly.com/library/view/javascript-the-definitive/9781491952016/) - [Learning JavaScript Design Patterns - Addy Osmani](https://addyosmani.com/resources/essentialjsdesignpatterns/book/) - [You Don't Know JS: ES6 & Beyond - Kyle Simpson](https://www.oreilly.com/library/view/you-dont-know/9781491905241/) - [Exploring ES6 - Axel Rauschmayer](https://exploringjs.com/es6/) - [High Performance JavaScript - Nicholas C. Zakas](https://www.oreilly.com/library/view/high-performance-javascript/9781449382308/) - [JavaScript for Kids - Nick Morgan](https://www.oreilly.com/library/view/javascript-for-kids/9781457189838/) - [Eloquent JavaScript - Marijn Haverbeke](https://www.oreilly.com/library/view/eloquent-javascript/9781593272821/) - [Effective JavaScript - David Herman](http://effectivejs.com/) <hr> ## Video Tutorials - [What the heck is the event loop anyway? 
| Philip Roberts | JSConf EU](https://www.youtube.com/watch?v=8aGhZQkoFbQ) - [JavaScript Tutorial for Beginners: Learn JavaScript in 1 Hour [2020]](https://www.youtube.com/watch?v=W6NZfCO5SIk) - [JavaScript Tutorial for Beginners - Full Course in 8 Hours [2020]](https://www.youtube.com/watch?v=Qqx_wzMmFeA) - [Learning Functional Programming with JavaScript - Anjana Vakil - JSUnconf](https://www.youtube.com/watch?v=e-5obm1G_FY) - [JavaScript: The Good Parts](https://www.youtube.com/watch?v=hQVTIJBZook) - [ES6 and Beyond Workshop Part 1 at PayPal (Jan 2017)](https://www.youtube.com/watch?v=t3R3R7UyN2Y) - [ES6 and Beyond Workshop Part 2 at PayPal (March 2017)](https://www.youtube.com/watch?v=eOKQDh50ECU) - [Jafar Husain: Async Programming in ES7 | JSConf US 2015](https://www.youtube.com/watch?v=lil4YCCXRYc) - [JavaScript: Understanding the Weird Parts - The First 3.5 Hours](https://www.youtube.com/watch?v=Bv_5Zv5c-Ts) - [Debugging The Web (Chrome Dev Summit 2016)](https://www.youtube.com/watch?v=HF1luRD4Qmk) - [Rediscovering JavaScript by Venkat Subramaniam](https://www.youtube.com/watch?v=dxzBZpzzzo8) - [Franziska Hinkelmann: JavaScript engines - how do they even? 
| JSConf EU](https://www.youtube.com/watch?v=p-iiEDtpy6I) - [Recursion, Iteration, and JavaScript: A Love Story - Anjana Vakil | JSHeroes 2018](https://www.youtube.com/watch?v=FmiQr4nfoPQ) - [Learn JavaScript - Full Course for Beginners](https://www.youtube.com/watch?v=PkZNo7MFNFg) - [Object-oriented Programming in JavaScript: Made Super Simple | Mosh](https://www.youtube.com/watch?v=PFmuCDHHpwk) <hr> ## Courses - [JavaScript Fundamentals Workshop](https://kentcdodds.com/workshops/javascript-fundamentals) - [Learn JavaScript](https://www.codecademy.com/learn/introduction-to-javascript) - [The Complete JavaScript Course 2020](https://www.udemy.com/course/the-complete-javascript-course/) - [JavaScript Tutorials for Beginners](https://www.youtube.com/playlist?list=PL4cUxeGkcC9i9Ae2D9Ee1RvylH38dKuET) - [Getting Started With Javascript | Javascript Tutorial For Beginners](https://www.youtube.com/playlist?list=PLDyQo7g0_nsX8_gZAB8KD1lL4j4halQBJ) - [JavaScript Best Practices](https://www.pluralsight.com/courses/javascript-best-practices) - [JavaScript Tutorials and Courses](https://hackr.io/tutorials/learn-javascript) <hr> ## Useful Blogs/Articles - [Prototypes and Inheritance in JavaScript](https://docs.microsoft.com/en-us/previous-versions/msdn10/ff852808(v=msdn.10)) - [A Guide to Proper Error Handling in JavaScript](https://www.sitepoint.com/proper-error-handling-javascript/) - [Service Workers: an Introduction](https://developers.google.com/web/fundamentals/primers/service-workers) - [10 JavaScript concepts you need to know for interviews](https://dev.to/arnavaggarwal/10-javascript-concepts-you-need-to-know-for-interviews) <hr> ## Websites - [Google Web Developers](https://developers.google.com/web) - [ECMAScript 6](http://es6-features.org/) - [Node JS](https://nodejs.org/en/docs/es6/) - [MDN](https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/JavaScript_basics) - [JavaScript.info](https://javascript.info/) - 
[Superhero.js](http://superherojs.com/) <hr> ## Dev Channels - [JSConf](https://www.youtube.com/c/JSConfEU/videos) - [Google Chrome Developers](https://www.youtube.com/c/GoogleChromeDevelopers/videos) - [web.dev](https://web.dev/learn/) - [JavaScript Daily](https://twitter.com/JavaScriptDaily) <hr> ## Publications/Magazines - [JavaScript on Medium](https://medium.com/tag/javascript) - [JavaScript on Smashing Magazine](https://www.smashingmagazine.com/category/javascript) - [JavaScript on Dev.to](https://dev.to/t/javascript) - [JavaScript Weekly](https://javascriptweekly.com/) - [Node Weekly](https://nodeweekly.com/) <hr> ## Useful Github Links - [33-js-concepts](https://github.com/leonardomso/33-js-concepts) - [JavaScript Questions](https://github.com/lydiahallie/javascript-questions) - [Front-end-Developer-Interview-Questions](https://github.com/h5bp/Front-end-Developer-Interview-Questions/blob/master/src/questions/javascript-questions.md) - [es6-cheatsheet](https://github.com/DrkSephy/es6-cheatsheet) - [clean-code-javascript](https://github.com/ryanmcdermott/clean-code-javascript) <hr> ## People - [BrendanEich](https://twitter.com/BrendanEich) - [Kent C. Dodds](https://twitter.com/kentcdodds) - [Addy Osmani](https://twitter.com/addyosmani) - [Paul Irish](https://twitter.com/paul_irish) - [Douglas Crockford](https://github.com/douglascrockford) - [Ben Awad](https://twitter.com/benawad/) - [Eric Elliott](https://twitter.com/_ericelliott) - [Dan Abramov](https://twitter.com/dan_abramov) - [Marijn Haverbeke](https://twitter.com/MarijnJH) - [Kyle Simpson](https://github.com/getify) - [Wes Bos](https://twitter.com/wesbos) - [Dan Wahlin](https://github.com/DanWahlin) <hr> These resources have helped me a lot in keeping up with JS. If you have a suggestion, please comment. I would love to read & add to this.
gauravbehere
809,999
Logo Maker - logo creator 3d & Graphic Design
Logo maker free is a fully loaded logo creator app for graphic design free. Now design logos for...
0
2021-09-01T07:33:29
https://dev.to/theapostle1997/logo-maker-logo-creator-3d-graphic-design-22pa
Logo Maker Free is a fully loaded logo creator app for free graphic design. Now design logos for gaming, business, esports and company events with the logo editor, using premium shapes, symbols, icons, typography and 3D logo templates. Start designing a logo for your gaming squad with the gaming logo maker to represent your clan or gaming team with a professional yet powerful esports logo, or give your brand a unique identity using the logo maker app for free. Download: https://play.google.com/store/apps/details?id=com.logomaker.esports_senseapplab
theapostle1997
810,058
What to Do If You Learned Nothing from Programming Courses?
A couple of weeks ago, I found this on Reddit: Me after completing Codecademy JavaScript course: I...
0
2021-09-01T09:38:53
https://nicozerpa.com/what-to-do-if-you-learned-nothing-from-programming-courses/
beginners
A couple of weeks ago, I found this on Reddit:

> **Me after completing Codecademy JavaScript course:** I now know everything about JavaScript. You can ask me anything..
> 
> **Another person:** Really? What are prototypes?
> 
> **Me:** Um... They are...Um....

([Source](https://www.reddit.com/r/learnjavascript/comments/p1gvzz/can_we_relate/))

Does this sound familiar? Unfortunately, this is common. **You found the perfect course**, you feel you understand everything, but when you have to apply what you saw in the course, you find out that **you actually learned nothing.** And that can be very frustrating.

Why does this happen? First of all, **coding is a skill.** And like any other skill, the best way to develop it is by doing. **Practicing is the best way to learn how to code.**

Also, in a tutorial, you're basically following the instructor's directions. But when you're coding on your own, you are the one who must decide what to do. Practicing helps you get better at it.

**You should practice on tasks or projects that are slightly challenging.** They should not be too easy, and they shouldn't be too hard. If the tasks are too easy, you won't progress and it will get boring. If they're too hard, you might feel overwhelmed and quit.

If it's hard for you to come up with project ideas to practice on, you can read this article I wrote about the topic: [How to Get Project Ideas to Practice JavaScript](https://dev.to/nicozerpa/how-to-get-project-ideas-to-practice-javascript-3ip3).

--- 

Learning JavaScript is tough, isn't it? My newsletter has tips and insights to level up your JS skills. [Click here to subscribe](https://nicozerpa.com)
nicozerpa
810,191
Software AG Cumulocity IoT Platform – Microsoft Azure Data lake Integration
The Internet of Things (IoT) is one of the driving forces for the increase in today’s data volume and...
0
2021-09-21T14:31:06
https://tech.forums.softwareag.com/t/software-ag-cumulocity-iot-platform-microsoft-azure-data-lake-integration/243986
iot, azure, integration
---
title: Software AG Cumulocity IoT Platform – Microsoft Azure Data lake Integration
published: true
date: 2021-09-01 06:16:24 UTC
tags: #iot, #azure, #integration
canonical_url: https://tech.forums.softwareag.com/t/software-ag-cumulocity-iot-platform-microsoft-azure-data-lake-integration/243986
---

The Internet of Things (IoT) is one of the driving forces behind the increase in today's data volume and diversity. IoT devices create a lot of data as they communicate with each other. This data is used for immediate analysis and action as well as for long-term historical analytics. The IoT platform must allow users to retain consumable data over a period of time so they can analyse trends and make inferences that inform decisions and actions, now or in the future. Data volume, processing speed, and analytical power are some of the critical aspects to consider when storing this data.

A data lake is a system or repository of data stored in its natural/raw format, usually object blobs or files. It can store large amounts of structured, semi-structured, and unstructured data, e.g. raw copies of source system data, sensor data, and social data, as well as transformed data. Cloud data lakes are cost-efficient and scale almost infinitely.

**Azure Data Lake** includes all the capabilities required to make it easy for developers, data scientists and analysts to store data of any size, shape, and speed, and do all types of processing and analytics across platforms and languages. It removes the complexities of ingesting and storing all your data while making it faster to get up and running with batch, streaming and interactive analytics. [Read More](https://azure.microsoft.com/en-in/solutions/data-lake/)

**Cumulocity IoT** is an independent device and application management IoT platform. Some of the key features are:

- Connect devices, and receive and send data securely to the platform.
- Manage your devices: handle device onboarding and lifecycle tasks such as sending software and firmware updates. 
- Support real-time streaming and predictive analytics, helping users visualize results in modern dashboards.
- It is also API-enabled, which allows you to integrate widely across the enterprise and enhance business processes and business applications.

[Read More](https://www.softwareag.cloud/site/product/cumulocity-iot.html)

**Software AG's Cumulocity IoT DataHub**

![image](https://aws1.discourse-cdn.com/techcommunity/original/3X/2/3/2343e0621366d70778411328f3cf74b48f9a895b.png)

Make your IoT data your advantage with Software AG's Cumulocity IoT DataHub. With DataHub, you can bridge the gap between streaming and historical analytics in a way that simplifies processes for IT administrators and enables the business to gain new insights about operations and performance. Some of the key features are:

- **Simplified management of long-term data storage**
- **Lower cost for IoT data storage**
- **Scalable SQL querying of long-term IoT data**
- **Standard interfaces to BI & data science tools**

[Read More](https://www.softwareag.com/en_corporate/resources/what-is/data-lake-iot.html)

The greater advantage lies in using the Software AG on-premises and/or cloud integration platforms in conjunction with the IoT platform. This helps in processing the IoT data before it is loaded into the data lake. It also helps to integrate data from various SaaS applications, which can give a holistic view of the entire company's data in the data lake.

## Software AG webMethods.io

[webMethods.io](http://webMethods.io) Integration is a powerful integration platform as a service (iPaaS) that provides a combination of capabilities offered by ESBs, data integration systems, API management tools, and B2B gateways. 
[Read More](https://www.softwareag.cloud/site/product/webmethods-io-integration.html)

**webMethods CloudStreams Connectors for on-premises integrations**

webMethods CloudStreams helps you create and govern connections between any combination of cloud and on-premises applications, databases and more. **webMethods CloudStreams Provider for Microsoft Azure Data Lake Store** contains predefined CloudStreams connectors that you use to connect to Microsoft Azure Data Lake Store. Using webMethods CloudStreams, you can configure the CloudStreams Microsoft Azure Data Lake Store connector to create directories, folders, and files in your Azure Data Lake Store instance to store and retrieve data.

## Analyze data in Azure Data Lake Storage Gen2 by using Power BI

We can use Microsoft Power BI to analyze and visualize the IoT data stored in Azure Data Lake Storage Gen2. [Read More](https://docs.microsoft.com/en-us/power-query/connectors/datalakestorage)

![image](https://aws1.discourse-cdn.com/techcommunity/original/3X/f/4/f4b51d5afe0d82414ddbe42a18548576460b8507.png)

**Some of the DataHub use cases are:**

- Analysis using BI tools on offloaded historical data identifies usage metrics for equipment, including how often specific equipment was used, how long it was used for, and how intensively it was used.
- Historical analysis helps understand trends in how medical devices are used in the workplace so that future investment can be best focused.
- Analysis of historical data helps systems understand how long pressure testing and initial filling take for specific types of containers, and enables predictions of how much time must be allocated to testing new types of products.

[Read full topic](https://tech.forums.softwareag.com/t/software-ag-cumulocity-iot-platform-microsoft-azure-data-lake-integration/243986)
mariela
810,251
A Closer Look at Lambdas in Kotlin
A lambda is also called an anonymous function, because of its nature of not requiring the declaration of...
0
2021-09-01T12:08:31
https://dev.to/alfianandinugraha/lebih-dekat-dengan-lambda-di-kotlin-2p66
kotlin, lambda, indonesia
A lambda is also called an anonymous function, because of its nature of not requiring a function declaration. The difference between a function and a lambda is that a lambda returns a value directly without needing the `return` keyword. Example code:

```kotlin
fun main() {
    val sayHello = { name: String -> "Hello, I'm $name" }
    println(sayHello("Andi")) // output : "Hello, I'm Andi"
}
```

A lambda can also be written outside the main function. Since a lambda does not need to use `return`, the way to make it multiline is:

```kotlin
fun main() {
    val add = { num1: Int, num2: Int ->
        val result = num1 + num2
        result
    }
    println(add(5, 2)) // output : 7
}
```

## The `it` keyword

`it` refers to the parameter when there is only one. Example:

```kotlin
fun main() {
    // val getMessage: (String) -> String = { name: String -> "Hello from $name" }
    val getMessage: (String) -> String = { "Hello from $it" }
    val message: String = getMessage("Andi")
    print(message) // output : Hello from Andi
}
```

In the example above, the `getMessage` lambda could be created with this code:

```kotlin
val getMessage: (String) -> String = { name: String -> "Hello from $name" }
```

But since this lambda has only one parameter, namely `name`, the parameter can be removed, and to refer to it you simply use the `it` keyword.

```kotlin
val getMessage: (String) -> String = { "Hello from $it" }
```

The requirements for using the `it` keyword are:

- There must be exactly one parameter, no more
- The type must be declared; in the example, the type of `getMessage` is `(String) -> String`

## Empty Parameter

A lambda can be created without using any parameters. For example:

```kotlin
fun main() {
    val sayHi = { "Hi world !" }
    println(sayHi()) // output : Hi world !
}
```
alfianandinugraha
860,155
I switched to iPhone and I'm disappointed
After about 8 years using Android phones and having become somewhat disappointed by how much a...
0
2021-10-11T21:02:03
https://dev.to/jperals/i-switched-to-iphone-and-i-m-disappointed-5bld
ux, ios, android
After about 8 years of using Android phones, having become somewhat disappointed by how pervasive a system it can be even though it is somewhat open-source-based, and also having been close to the UI/UX design world for some time, I decided to give the iPhone a try and experience the superior UI/UX design which, in my mind, it was supposed to have.

Well, now I am not sure why I thought that. Maybe that was true in 2010. The fact is, I am clearly disappointed. Maybe iOS is stuck with designs that its users have been used to since the beginning, whereas as a second player Android was a bit more free (or actually forced) to learn and improve? Whatever the reason, I find that, in terms of UI/UX, I made a bad deal. Here are some examples I noted down:

## Notifications

As probably one of the most interacted-with elements of a smartphone, I would dare to say that notifications are important. And they are remarkably both less functional and less clear than on Android. That is, they offer less functionality and they are also less clear when it comes to signaling how to use that functionality. In more technical words, they offer both fewer "affordances" and worse "signifiers". Some examples:

- Typical actions that you can perform directly on Android notifications, like answering a message, are missing on iOS.
- On the other hand, tapping "Open" on the corresponding notification directly returns the call, which is actually not great because it might not be what the user wanted.
- Direct access to the phone settings is not to be found by swiping down, as it is on Android.
- When a floating notification appears while using the phone, dismissing it works by swiping it up, rather than left as when it appears while the phone is locked. This is inconsistent and took me a very long time to realize.
- To selectively clear notifications, you need to first swipe them left, then tap "Clear". On Android, swiping it to the left already clears it, which seems convenient, so the only apparent justification for this is that besides clearing, two other actions also appear when swiping left: "Manage" and "Show". But "Manage" is for managing notification options for that particular app, which is something I personally have never used so far and in my opinion could well be hidden somewhere else, and "Show" is exactly what I expect to be able to do by just tapping, without having to swipe. To make this a bit worse, "Clear" is translated in Catalan as "Esborrar", that is, "Delete", so I am always fearing that tapping there will actually delete the information from the corresponding app. Probably something similar happens in other languages, too.

## Writing

- When a tooltip with a suggested correction appears, tapping on the suggested word will dismiss the suggestion, whereas tapping on the little close button will apply it! This should work the other way around... right?
- Selecting text is not always easy. The cursor seems to have a strong tendency to select whole words, which you will have to fight against if you want to just modify a part of a word you wrote.

## Alarms

It has taken me literally months to figure out that I could edit alarms. Until then, I was accumulating them and wondering every time why there is no "Edit" option besides "Delete" when swiping left, until I found the "Edit" option at the top left corner of the screen. Why make a button so that it seems to affect all elements, if you can then only select one to edit?

## Camera and pictures

- The picture format is always 4:3 and I haven't figured out how to persist another setting.
- Editing images doesn't keep the original when saving?

## Language support

Here I can speak only for the case of Catalan, which is the language I use. Unfortunately, the state of the Catalan language is clearly worse on iOS compared to Android:

- No swipe keyboard.
- Third-party apps seem to have less Catalan support in general.
- Translation style differs from the established one everywhere else, most notably with the use of infinitives instead of the imperative for calls to action.

In the back of my mind, one of the reasons to get an iPhone was also that I was considering getting a decent smartwatch, and I had arrived at the conclusion that the Apple Watch was the best option. My disappointment now is so big, however, that I don't feel like going further down this path, and I might keep my 62 Euro Amazfit Bip which, after all, fulfills its main purpose: sparing me distractions by showing notifications without having to reach for the phone, even if it doesn't display emojis and "special characters" correctly. The battery lasts for 45 days, and I now refuse to replace it as long as it works.
jperals
812,159
3 powerful marketing concepts to improve your life
There are many people who see marketing as something that can only be applied to companies and...
0
2021-09-03T03:28:16
https://dev.to/microsoftucuenca/3-poderosos-conceptos-de-marketing-para-mejorar-tu-vida-5984
vidadiaria
There are many people who see marketing as something that can only be applied to companies and startups. You have surely heard super successful people say *"You are your own company"*. But then, if I am a company, why not apply marketing to myself? Here I am going to show you three powerful concepts that small businesses used to become big, and also how you can apply them to yourself. Remember: *marketing is not something only successful companies do; it is what companies do to become successful*.

**1- How do I reach my goals?**

There is a video in which Steve Jobs is giving a talk to a group of people, when someone asks him a question (which was really more of an attack), intending to make Jobs look bad, and starts listing errors and flaws he had found in certain software of an Apple product. After finishing his comment, and hoping that he had won, he sits down and keeps quiet. Then Steve replied: "Yes, I suppose you're right." The audience went cold; they expected that question to start a battle of egos, but Steve continued: "The way we design Apple products is starting from the user experience. What do we want the user's experience with this product to be? And from there we work backwards to the technology. That doesn't guarantee you a perfect, error-free technology, but it does guarantee a product that will delight the user and be useful to them."

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ht1kt60ff28894wp1k4o.jpg)

Just like Steve Jobs, what you should do is state your goals in the form of questions and from there work backwards with the plan to achieve them. Instead of saying "I want X", say "How can I get X?" or "What do I have to do, learn or change to get X?". Your mind will start working every time you see the question. Be specific. Instead of stating "I want to lose weight", ask yourself: How can I lose 10 kg by next October 25?

So now you know: state your goal in the form of a specific question. That is not only the product design strategy of Apple and other companies that have left their mark on the world; it is also a strategy you can use to improve your life.

**2- How to beat analysis paralysis**

Have you ever wondered why, in online stores (and some physical ones), there are usually only two or three options for certain products, whether it's three available colors for a phone or two possible subscription plans for some service? This is precisely to avoid analysis paralysis in the customer. Although it might seem that customers like having plenty to choose from, giving the potential consumers of your product many options doesn't work that way; it is better to give them two or three well-studied options that cover the essential aspects decided on after prior research.

In everyday life this translates into having lots of WhatsApp messages or emails that you can't be bothered to answer, having many tasks and not knowing which one to start with, and so on. Now, how do I fight analysis paralysis? The first thing is to prioritize. For example, suppose you have a lot of messages in your email inbox and therefore don't know which ones to answer. You know you have tasks that depend on them, things that interest you, and others that are simply spam, but there is a force stopping you. That's right: you've run into analysis paralysis.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0vpc2h8m0x1igcjr945l.jpg)

Now you are going to prioritize based on certain criteria that matter to you. You can use the previous technique and state it in the form of a question. Suppose you chose three factors to prioritize on, and these are: urgency, difficulty, and enjoyment of doing the task. You can choose whatever factors you want, of whatever kind you want, or use the ones above.

If you have a thousand emails, you are obviously not going to run all thousand through the filter of your factors. Choose the first 5 you see, check which of them meet your factors, and continue like that; if none convinces you, move on to the next 5. The important thing is to start and to keep going. Soon you will find yourself with more direction than you had at the beginning. Perhaps you will continue with your tasks this way, or you will keep changing your selection criteria; the important thing here is to break the analysis paralysis. Don't stop.

**3- Keeping your promises**

You have surely heard about customer loyalty in marketing. If not, here is a quick explanation: what differentiates a consumer from a customer is how many times they buy from you. Someone who buys from you very often is your CUSTOMER and you must treat them well, and that includes keeping the promises you make to them.

How do you start? You do it little by little. Businesses start by offering some little freebie with a purchase, something that doesn't cost them much, or free shipping, etc. That way, without much effort, they promise things that are not hard to fulfill, gradually building the image of a brand that delivers for its customers.

Now, you can apply this to yourself in your own life. What for? Nothing more and nothing less than building better self-esteem. Remember that self-esteem is the reputation you have with yourself, and if businesses use this to improve their reputation with the customer, why not use it on yourself to have better self-esteem?

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45di5kdh9je8citqypc8.jpg)

How? By starting to make yourself small promises that you are going to keep. A very simple exercise is to promise yourself something you know you are going to do anyway; that way you start getting used to keeping your promises. For example, suppose that every day at 4 in the afternoon you drink coffee, and it is already part of your routine. Then write on a sticky note *"I promise that at 4 in the afternoon I will drink coffee"*, and continue like that with things in your life that you already do as habits. The important thing is that you see that you can keep the promises you make to yourself: **we are tricking the mind**. Afterwards you can start to promise yourself things that you don't do yet but want to start doing, like exercise, learning something new, and so on.

Little by little you will get used to keeping your promises, starting with simple ones, and later you will realize that believing in yourself becomes easier, and other people will even trust you more, because they know you are someone who keeps their promises.

Remember to start gently and not overwhelm yourself. You can also mix these three concepts: let's say you already did the previous exercises and now you want to start going to the gym. Then you promise it to yourself, you use point 1 to make the goal clear, and you can use point 2 to choose which gym to go to. It depends on your creativity.

These three concepts are simple and powerful; if you apply them, I assure you that your life will improve. Do it little by little, because that is how habits are forged. Start with one and with something simple; if you fail, keep trying until you succeed. Remember that you are doing it for yourself, to make your life easier. That is what marketing is for.
pabloficial01
812,337
Developers mill
I have wanted to write this article for a while now, and somehow it never happened. I've been...
0
2021-09-03T09:24:13
https://daily-dev-tips.com/posts/developers-mill/
devjournal
I have wanted to write this article for a while now, and somehow it never happened. I've been talking to other developers about the topic of a developer mill, and more people seem to acknowledge this theory. So what exactly is this theory?

## We're all hamsters in a wheel

![Hamster in wheel](https://cdn.hashnode.com/res/hashnode/image/upload/v1629963134269/shLFoySkg.png)

That sounds a bit morbid, right? But hear me out. We get a job. We must run the fastest to spin the wheel, because once we get tired or slow, another hamster will take over and run the wheel for us, spinning us off.

Now you might start to think: hmm, right, I've been part of this, and from my experience, I've been one of those. We get a job and think we have to work ridiculous hours (often unpaid) to please the wheel. And the moment we stop for some water, we get replaced. Because, well, quite frankly, there are enough hamsters around.

And we, as the hamsters, are part of the process, because what would happen if we all stopped running the wheel? Would the world collapse? I don't think so. But then, what's the goal that we should want to achieve? For me, it's about making an impact, not being another hamster in another wheel. Instead: being the hamster that changes the direction of the wheel, or dares to take a break.

Quite frankly, it's a very toxic environment. And we, the hamsters, should stand up for ourselves.

## The nuanced hamster wheel

![Hamster flying off wheel](https://cdn.hashnode.com/res/hashnode/image/upload/v1629963178731/F3p4COzFZ.png)

Luckily there are innovative wheels as well, wheels that want to please us! Hey, it might be one of those wheels that drop treats at the end. And those wheels should be chased. Being in tech is not an easy job. The wheels keep upgrading, and we might not know how to operate them at one point, but that's ok. We can take a step back and relearn the basics of spinning the new wheel.

## Why is this important

I think it's vital to teach newcomers, beginners, and people making a transition into tech that it's ok not to blow themselves up spinning the wheel. The experienced and wise hamsters should educate the young and wild ones on finding a perfect balance. We might call these the guiding hamsters, there to guide those in need of guidance.

Today, my call is for you to evaluate which hamster you are and whether that's where you want to be. If not, let's make that change together and talk about this. 🐹

### Thank you for reading, and let's connect!

Thank you for reading my blog. Feel free to subscribe to my email newsletter and connect on [Facebook](https://www.facebook.com/DailyDevTipsBlog) or [Twitter](https://twitter.com/DailyDevTips1)
dailydevtips1
818,455
Please help me with this haskell program.
Define a function dropOdds :: Int -&gt; Int with the following behaviour. For any positive number m,...
0
2021-09-09T09:05:56
https://dev.to/shashwatseth/please-help-me-with-this-haskell-program-5fjm
haskell, programming, help
Define a function `dropOdds :: Int -> Int` with the following behaviour. For any positive number m, `dropOdds m` is obtained by dropping all the odd digits in m. (If all the digits in the number are odd, the answer should be 0.)

Test cases:

- dropOdds 0 = 0
- dropOdds 8 = 8
- dropOdds 1357 = 0
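The behaviour described above can be sketched directly in Haskell. This is only one possible approach, not a model answer; the helper names `digits` and `fromDigits` are my own, not part of the exercise:

```haskell
-- A possible sketch: extract the digits, keep the even ones, rebuild the number.
-- Helper names (digits, fromDigits) are illustrative, not part of the exercise.
dropOdds :: Int -> Int
dropOdds m = fromDigits (filter even (digits m))
  where
    -- digits 1357 == [1,3,5,7]; digits 0 == []
    digits 0 = []
    digits n = digits (n `div` 10) ++ [n `mod` 10]
    -- foldl rebuilds a number from its digits; the empty list yields 0,
    -- which covers the "all digits are odd" case
    fromDigits = foldl (\acc d -> acc * 10 + d) 0
```

With this sketch, `dropOdds 0`, `dropOdds 8` and `dropOdds 1357` give `0`, `8` and `0` respectively, matching the test cases above, and a mixed input like `dropOdds 1234` gives `24`.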
shashwatseth
818,523
How to Remove Viruses from a Desktop Computer or Laptop with Windows 10, 8 or 7 FOR FREE
Read this article to find out how to remove viruses from a computer or laptop for free and do it in...
0
2021-09-09T11:52:20
https://dev.to/hetmansoftware/how-to-remove-viruses-from-a-desktop-computer-or-laptop-with-windows-10-8-or-7-for-free-4ajj
beginners, testing, tutorial, test
Read this article to find out how to remove viruses from a computer or laptop for free and do it in Windows 10, 8 or 7. Let’s have a look at how it is done, with the example of a popular antivirus.

![Remove Viruses from a Desktop Computer or Laptop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8dnosti45ao6umbl7u49.jpg)

## Are there any free antiviruses?

Many of us would certainly be very skeptical about this issue: are the “free” antiviruses really free, and how effective are they in real life? Yes, such products exist. The quality of their work varies and depends on what you intend to achieve by using such a program.

![Kaspersky Free](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h06ooj2mc8mfpjjthe5s.jpg)

* Kaspersky Free;
* Avast Free Antivirus;
* AVG AntiVirus FREE;
* Avira Free Antivirus;
* 360 Total Security;
* Bitdefender;
* Panda Free Antivirus;
* ZoneAlarm Free Antivirus.

On the other hand, there are also commercial antivirus solutions. Many of them are more effective than their free counterparts, and let you enjoy a free trial period before you need to pay anything. Such products are so numerous that I would hardly like to list all of them.

## How to remove viruses with a free antivirus product

We will search for viruses with the free version of an antivirus program, Malwarebytes. In a more or less similar way, a desktop computer or laptop can be cleaned with any other antivirus, be it free or commercial, or a free trial version of a paid antivirus product. Malwarebytes will protect your computer from rootkits, malware and spyware, block file encryption operations used by ransomware, and ensure reliable protection when working online.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i2eibjbhwjyj9nhiav4c.jpg)

Go to the program’s website (https://www.malwarebytes.com/), download and install it. After installation, antivirus protection will be activated automatically, but you will have to run a scan to detect and delete viruses that may have got into the system earlier.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjzbds2ggsjs5pqkqcxf.jpg)

Start the program by clicking on its icon in the system tray and click on «Scan». The utility will scan your random-access memory, check startup files and the Windows system registry. Then it will start a long check of all drives connected to the system, and run heuristic analysis. After scanning, the program will suggest a list of objects to clean and delete, which protects important files from being deleted automatically by mistake.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ls9tvynav9sn4sg6q81q.jpg)

Unfortunately, virus attacks and further cleaning operations can do irreparable damage to the operating system, programs and documents. Windows can show you error messages, and some programs can stop working or won’t be able to start. If you have system restore points enabled, you can roll the system back to the moment of infection or reset your Windows.

A complete system scan took less than 3 minutes in my case, and several threats were found. Select them and send them to the quarantine so that the system will be safe from now on.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7h0y7t1fxjgzekruk9t.jpg)

You can always recover files from the quarantine if they were sent there by mistake.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3cw5v8z5h8wdwepr2xj7.jpg)

Remember that deleting a threat from the quarantine deletes the actual file from your computer’s hard disk. If you need to recover these files, don’t forget to use [Hetman Partition Recovery](https://hetmanrecovery.com/hard-drive-data-recovery-software). Check out other articles in our blog to find out how to recover files deleted by an antivirus.

## The «Virus & threat protection» tool in Windows 10

I would like to dedicate some more time to describing one more free tool to protect the operating system from threats and viruses – the antivirus integrated into Windows 10. Although it affects system performance considerably, it also does its job properly and it really is a free product. By default, it starts with the operating system and works automatically. However, if you need to manually scan the computer or a removable disk for viruses, you can start it by clicking on the shield-shaped icon in the system tray.

Open the menu «Windows Defender Security Center», jump to «Virus & threat protection» – and here you are, the built-in system antivirus is there.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8m8mxzerfmzrsp9ls3vg.jpg)
hetmansoftware
818,771
How Status Pages Can Help You Retain Customers in This Digital Age
Introduction The impact of 2020 and the COVID-19 pandemic collectively upended customer...
0
2021-09-09T15:09:56
https://statuspal.io/blog/2021-09-06-how-status-pages-can-help-you-retain-customers-in-this-digital-age/
devops, monitoring, statuspage, ux
### Introduction

The impact of 2020 and the COVID-19 pandemic collectively upended customer behavior with sweeping and immediate changes that we are still feeling the effects of now. As a result, people have been forced to live and work differently, which has had a direct impact on consumption patterns and shopping habits.

Consumers have, for example, been displaced from their traditional in-person experiences. So, when faced with the challenges of the pandemic, they adapted quickly to buying groceries online, often buying in larger volumes and bigger items, shifting from in-person to online channels. As a result, customers now spend a significant amount of their time online, not just for shopping but also for entertainment, education and working remotely.

This change in behavior means companies and organizations need to adjust their business models and digital working practices. They need to find new ways to communicate and interact with their customers. These new digital times mean customers have also raised their expectations for an exemplary digital customer experience.

To survive and prosper in this new digital economy, businesses must see the value in focusing on improving their digital customer experience. Providing superior customer care can ultimately lower the cost of customer acquisition and increase customer satisfaction and retention. [Bain & Company research](http://www.bain.com/Images/BB_Prescription_cutting_costs.pdf) found that increasing customer retention rate by just 5% can increase profits by 25 to 95%. And, when [acquiring a new customer](https://hbr.org/2014/10/the-value-of-keeping-the-right-customers) can be 5 to 25x more expensive than retaining an existing one, it's a smart move to invest just as much time and resources in retaining existing customers as in gaining new ones.

Plus, with the rapid rise of “digital native” or “digital first” companies, such as Spotify or Slack, customers are exposed to a simple, streamlined digital user experience. As a result, consumer expectations have quickly normalized around this high level of online interaction, setting high standards ripe for customer churn. Designed from the ground up for digital delivery, these companies “raise the bar” when it comes to online communications, not just for the B2C sector but also for B2B vendors, as buyers expect the same interactions they have when making personal purchases.

To deliver a great customer experience, you need to know your customer: according to Accenture, [80% of brands think they deliver an excellent customer experience, but only 8% of their customers agree](https://www.accenture.com/gb-en/insights/interactive/customer-experience-index).

**With so many customers willing to switch — are we doing enough to meet our customers' digital expectations?**

According to Salesforce, [47% of customers](https://www.salesforce.com/form/pdf/state-of-the-connected-customer-2nd-edition/?d=7010M000000uQVWQA2) say they’ll stop buying from a company if they have a subpar experience. The same study reveals that 76% of customers now say it’s easier than ever to take their business elsewhere.

Customers expect their online experience to be seamless and stable, with the reward that excellent service leads to an increase in customer satisfaction and retention. Happy customers are more likely to be repeat buyers, remain loyal to your brand and be less price sensitive. On the other hand, according to Zendesk research, roughly 50% of customers say they would switch to a new brand after one bad experience ([Zendesk](https://www.zendesk.com/blog/why-companies-should-invest-in-the-customer-experience/)).

**When unavoidable technical issues do pop up, where do you turn?**

Ideally, your business will have a plan in place to monitor your website, for example, and be able to alert the relevant team that an issue has been detected. It could also be a planned maintenance, where you’ll need to inform key stakeholders and customers in advance of any risk of downtime. And hopefully, all of this is done before your customers even notice.

In the situation that your site or application does have an ongoing incident, your first step should be to report it as an open incident and decide on the appropriate next steps, depending on the severity level. And this is where status pages can effectively build trust with your customers and internally within the organization, by communicating quickly and effectively that you are aware something's wrong and you're looking into it.

**So, what are status pages and why use them?**

The idea of a [hosted status page](https://statuspal.io/features/status-page/) is to allow you to communicate critical information about the status of your online service, web application or API to your customers and staff. This can be especially handy when you have a service disruption and might not have any other means to communicate what the status is.

Luckily, downtime doesn’t have to become a customer service nightmare. By keeping users informed, they’re satisfied that they know what is going on, happy that an incident has been flagged and is being addressed. Even though you may not know yet why or what caused the incident, users will take solace in seeing an immediate response. Having your status page hooked up to a [monitoring service](https://statuspal.io/features/monitoring/) can automate this step for you, giving your customers confidence that you’re on top of outages immediately as they start.

While there’s a wide choice of technical tools to help your team flag, track and address incidents, even the best tools can’t replace clear, effective communication with your internal and external stakeholders. Internally, employees need to be kept informed of the status of the core IT services they rely on, especially if those have a habit of going down. Proactively communicating with these users means fewer “what’s the problem” type questions, fewer duplicate IT support tickets, and keeps your IT/DevOps/SRE team happy and focused on fixing instead of reporting. If the incident affects customers, then companies can use a status page to detail the problem and inform customers when to expect the issue to be remedied. Where organizations don't have an incident management process, they expose themselves to unnecessary delays and costs, with the risk of losing dissatisfied customers.

Statuspal aims to solve this subtle but important problem: **status communication and monitoring**. It is not uncommon for sites to go down, no matter how perfectly engineered they are. And when this happens, you’ll want to communicate with your customers and internal stakeholders in a cost-effective and professional way. Statuspal can automate this process for you. Thanks to our integrated monitoring, we can automatically check if your site or web application is down and immediately notify and keep the relevant stakeholders up to date. This saves you hours of support calls and tickets, as well as time taken up with the manual reporting of incidents. Hours that you can instead use to address and remediate incidents in a more timely manner.

[Start your free 14-day trial with Statuspal](https://statuspal.io/registrations/new)
messutiedd
818,838
How to mock the aws-sdk?
Class to be tested import { Lambda } from 'aws-sdk'; export default class Service { ...
0
2021-09-09T17:19:02
https://dev.to/biglucas/how-to-mock-the-aws-sdk-1hpc
jest, test, node, awssdk
# Class to be tested

```typescript
import { Lambda } from 'aws-sdk';

export default class Service {
  public async callLambda(): Promise<void> {
    const lambda = new Lambda();
    const params: Lambda.Types.InvocationRequest = {
      FunctionName: `function-name`,
      InvocationType: 'Event',
    };
    await lambda.invoke(params).promise();
  }
}
```

- Let's suppose that we have a service that calls a lambda using the `aws-sdk` library.
- `aws-sdk` version: `2.546.0`.

# Unit test with the mock

## First way

```typescript
import Service from '../../../../src/api/services/Service';

// jest hoists jest.mock() calls, and the module factory may only reference
// out-of-scope variables whose names begin with `mock`
const mockPromise = {
  promise: jest.fn(),
};

jest.mock('aws-sdk', () => ({
  Lambda: jest.fn(() => ({
    invoke: () => mockPromise,
  })),
}));

describe('callLambda', () => {
  it('should return something... ', async done => {
    const service = new Service();
    const result = await service.callLambda();
    expect(result).toBeUndefined();
    done();
  });

  it('should throw an error... ', async done => {
    // modifying the implementation before calling the function
    mockPromise.promise = jest.fn()
      .mockImplementation(() => Promise.reject(new Error()));
    try {
      const service = new Service();
      const result = await service.callLambda();
      expect(result).toBeUndefined();
    } catch (error) {
      expect(error).toBeDefined();
    }
    done();
  });
});
```

## Second way

```typescript
import { Lambda } from 'aws-sdk';
import Service from '../../../../src/api/services/Service';

// removing the factory function of the first way
jest.mock('aws-sdk');

describe('callLambda', () => {
  // moving the fake to inside our describe block
  // because we don't need it in jest.mock
  const fakePromise = {
    promise: jest.fn(),
  };

  beforeEach(() => {
    // adding the implementation before each test
    (Lambda as any).mockImplementation(() => {
      return {
        invoke: () => fakePromise,
      };
    });
  });

  it('should return something... ', async done => {
    const service = new Service();
    const result = await service.callLambda();
    expect(result).toBeUndefined();
    done();
  });

  it('should throw an error... ', async done => {
    // modifying the implementation before calling the function
    fakePromise.promise = jest.fn()
      .mockImplementation(() => Promise.reject(new Error()));
    try {
      const service = new Service();
      const result = await service.callLambda();
      expect(result).toBeUndefined();
    } catch (error) {
      expect(error).toBeDefined();
    }
    done();
  });
});
```

- Inside the unit tests we can just change the fake promise object or update the `mockImplementation` to simulate the behaviors that we need.
- We can use these approaches to create unit tests for other classes inside the `aws-sdk`.
- `jest` version: `24.9.0`.

# Conclusion

The most difficult part of writing unit tests is creating the mocks for external libraries. The purpose of this article is just to help someone having trouble mocking this kind of library. We have a lot of ways to mock libraries, so feel free to comment and send suggestions.
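Beyond `jest.mock`, the same behaviors can be simulated without any mocking library by injecting the client into the service. Below is a plain-JavaScript sketch of that idea — `FakeLambda`, the constructor parameter, and `demo` are invented for illustration and are not part of the article's original `Service` class:

```javascript
// Instead of patching the module loader, the service accepts its client as a
// dependency, so tests can hand in a hand-rolled fake.
class Service {
  constructor(lambda) {
    this.lambda = lambda; // anything exposing invoke(params).promise()
  }

  async callLambda() {
    const params = { FunctionName: 'function-name', InvocationType: 'Event' };
    await this.lambda.invoke(params).promise();
  }
}

// A fake client that records every invocation and resolves immediately.
class FakeLambda {
  constructor() {
    this.calls = [];
  }
  invoke(params) {
    this.calls.push(params);
    return { promise: () => Promise.resolve() };
  }
}

async function demo() {
  const fake = new FakeLambda();
  const service = new Service(fake);
  await service.callLambda();
  return fake.calls; // the recorded invocation parameters
}
```

To simulate a failure, the fake's `invoke` can return `{ promise: () => Promise.reject(new Error()) }` instead — the same knob the jest mocks above turn, with no framework involved.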
biglucas
819,052
Become A Better Developer Today: Quick Wins
Here are some common mistakes you might be making daily that are slowing you down, holding you back...
0
2021-09-09T20:01:53
https://careerswitchtocoding.com/blog/become-a-better-developer-today-quick-wins
career, programming, beginners
Here are some common mistakes you might be making daily that are slowing you down, holding you back and building tech debt.

🧪 **Not writing tests**. Get used to writing these sooner rather than later. You don’t need to go full Test Driven Development (TDD) but at least be comfortable. I should have started earlier!

📄 **Not documenting code**. Coming back to old code is a nightmare without good docs. I can’t remember how code I wrote last week works, so after 1 year I have no chance.

🧰 **Not breaking out common code into a reusable library**. Reusable code is your toolbox; it saves you time, effort and complexity. Start building your personal toolbox or your team’s toolbox now.

🍱 **Forgetting to split projects into smaller modules**. Good organisation is the best way to keep a codebase sensible and manageable. It doesn’t come for free though, so you have to work at it.

🏛 **Not using external libraries**. Build on other people’s code and don’t write everything from scratch. This lets you move faster, benefit from others’ work, and focus on your core business logic.

💅 **Not using an auto code formatter**. Worrying about how to format and lay out your code should be the last thing on your mind. Get used to auto format on save and you will love it!

🤖 **Not automating**. There is a knack in knowing when to automate: too early in a process and you risk automating the wrong thing and having to change it; too late and you'll have an overly complex process to automate that will take weeks to sort out. If you've done the same thing 3 times in a short period of time (measured in weeks) then it's time to automate it.

### Summary

None of these on their own will derail a software project, but all of them can add friction to the development process and slow you down. Pick one and implement it in your current project today.
### One more win Head over to [Career Switch To Coding](https://careerswitchtocoding.com/) and join my mailing list for regular tips and a free chapter of my book 😀
allthecode
819,053
Simple Scalable Search Autocomplete Systems
Here I'll discuss 4 ways to build a simple scalable search autocomplete service that I know of and...
0
2021-09-10T05:51:06
https://dev.to/mdnurahmed/simple-scalable-search-autocomplete-systems-1j18
go, redis, elasticsearch, sql
Here I'll discuss 4 ways to build a simple scalable search autocomplete service that I know of and some pros and cons of each of them.

## Requirements

- One API for inserting texts/words that were searched by users. For example:
```
POST /insert
{
    "search string" : "Harry Potter and the Prisoner of Azkaban"
}
```
- One API for getting autocomplete suggestions as the user keeps typing. For example:
```
GET /search?SearchString=potter
```
- One API to clear all searches made by the user. For example:
```
DELETE /delete
```
- The system should be able to return the top N frequent relevant queries/searches.
- The system should be horizontally scalable using shards and read replicas.
- The calculations should be in-memory to make the system fast and real-time.

## Way 1: Using SQL

```
SELECT * FROM autocomplete
WHERE search_items LIKE '%{SEARCH_STRING}%'
ORDER BY frequency DESC LIMIT 5;
```

PostgreSQL (and some other SQL databases) has a LIKE operator which can be used to search for patterns in text data. The percent sign (%) matches any sequence of zero or more characters. So %{SEARCH_STRING}% will return all rows in the autocomplete table where {SEARCH_STRING} exists at any position of the search_items field.

We would also want to get rid of less frequent search items before they pile up. There is no direct way to do this. We could have an "expire" field which we would update every time the insert API is called, that is, whenever a user searches something. Another process could periodically delete the expired items.

This can do infix matching (i.e. matching terms at any position within the input). This is the simplest way to do this, but it doesn't actually happen in memory.

## Using Redis Sorted Set

Redis is an in-memory database. So we could try to exploit Redis's data structures to make a search autocomplete system. Salvatore Sanfilippo (antirez), the creator of Redis, wrote about 2 ways it could be done in his old blog post from 2010.
You can read the [original blog post](http://oldblog.antirez.com/post/autocomplete-with-redis.html) but here I'm gonna explain both of the ways. Both of the ways use sorted set data structure of Redis which is implemented using the skip list data structure under the hood. ### Way 2: Using 1 Sorted Set If any 2 members in a Redis's sorted set have an equal score they are sorted lexicographically. For each prefix in the search string, we'll store them in a sorted set with the same score. So, they will be lexicographically sorted in the set. To mark an end word we'll add '*' to it in the end. For example, if we searched "bangladesh" and "bangkok", our sorted set will look like this - ``` 0 b 1 ba 2 ban 3 bang 4 bangk 5 bangko 6 bangkok 7 bangkok* 8 bangl 9 bangla 10 banglad 11 banglade 12 banglades 13 bangladesh 14 bangladesh* ``` Each insertion in the sorted set happens at O(logN) complexity where N is the size of the sorted set. So total complexity of insert each time the insert API is called is O(L*logN) where L is the average length of a search string. We can use the [ZRANK](https://redis.io/commands/zrank) command to find any member's index/position in the sorted set. We then can use [ZRANGE](https://redis.io/commands/zrange) command to fetch M members from that position in O(M) complexity. If any fetched member has '*' in the end and it contains the search string as a prefix, then we can return it as a suggestion. For example, if the user types 'bang' we can find the index of 'bang' in a sorted set, it's 3. Now if we fetch 12 items from position 3 we would fetch - ``` [ bang,bangk,bangko,bangkok,bangkok*, bangl,bangla,banglad,banglade, banglades,bangladesh,bangladesh* ] ``` Among these members bangkok* and bangladesh* has "bang" in the prefix and "*" in the end. So they can go into the suggestion. Our API would return [bangkok,bangladesh]. But this method cannot return top N frequent relevant searches. 
Also, there is no way to get rid of less frequent searched items. On top of that, this cannot do infix matching (i.e. matching terms at any position within the input). [Here is my full code implementing this method](https://github.com/mdnurahmed/autocomplete-with-redis-1) ### Way 3: Using Multiple Sorted Sets For each prefix of the searched string, we'll have 1 sorted set. The searched string will be a member of each of those sorted sets. The score of each member will be the frequency. The complexity of insert here is L*log(N) where L average length of a search string, N is the size of the sorted set. For example, if we searched "bangladesh" 2 times and "bangkok" 3 times, our Redis would look like this ``` |Sorted Set| Members | | ---------|---------------------------| |b | bangladesh:2 , bangkok:3 | |ba | bangladesh:2 , bangkok:3 | |ban | bangladesh:2 , bangkok:3 | |bang | bangladesh:2 , bangkok:3 | |bangk | bangladesh:2 , bangkok:3 | |bangko | bangladesh:2 , bangkok:3 | |bangkok | bangladesh:2 , bangkok:3 | |bangl | bangladesh:2 , bangkok:3 | |bangla | bangladesh:2 , bangkok:3 | |banglad | bangladesh:2 , bangkok:3 | |banglade | bangladesh:2 , bangkok:3 | |banglades | bangladesh:2 , bangkok:3 | |bangladesh| bangladesh:2 , bangkok:3 | ``` So if the user types 'bang' we can fetch top N frequent items easily from the sorted set 'bang' using the [ZREVRANGE](https://redis.io/commands/zrevrange) command as our suggestions are sorted according to frequency. We can also use [EXPIRE](https://redis.io/commands/expire) command to delete sorted sets containing less frequent search items. Now we need to cap the length of the sorted set. We cannot let it reach infinite length. For the top 5 suggestions, keeping 300 elements is quite enough. Whenever a new member would come in the sorted set we'll pop the element with the lowest score and insert the new element with a score equal to the popped element's score plus 1. So this new element would have a chance to rise to the top. 
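The capped-sorted-set logic just described can be sketched in plain JavaScript, with in-memory maps standing in for Redis sorted sets (all names here are invented for the example; this is not code from the linked repository):

```javascript
// "Way 3" sketch: one frequency map per prefix, capped at a fixed size.
const MAX_PER_PREFIX = 300; // cap described above

const prefixSets = new Map(); // prefix -> Map(searchString -> frequency)

function insertSearch(text) {
  for (let i = 1; i <= text.length; i++) {
    const prefix = text.slice(0, i);
    if (!prefixSets.has(prefix)) prefixSets.set(prefix, new Map());
    const set = prefixSets.get(prefix);
    if (set.has(text)) {
      set.set(text, set.get(text) + 1);
    } else if (set.size < MAX_PER_PREFIX) {
      set.set(text, 1);
    } else {
      // pop the lowest-scoring member and enter with its score + 1,
      // so the newcomer has a chance to rise to the top
      let minKey;
      let minScore = Infinity;
      for (const [k, v] of set) {
        if (v < minScore) { minScore = v; minKey = k; }
      }
      set.delete(minKey);
      set.set(text, minScore + 1);
    }
  }
}

function topN(prefix, n) {
  const set = prefixSets.get(prefix) || new Map();
  return [...set.entries()]
    .sort((a, b) => b[1] - a[1]) // highest frequency first
    .slice(0, n)
    .map(([text]) => text);
}
```

Inserting "bangkok" three times and "bangladesh" twice, `topN('bang', 5)` returns them ordered by frequency, mirroring what ZREVRANGE gives us in the Redis version.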
Now, this is a stream-based algorithm. So the accuracy of this algorithm depends on the distribution of input. If all the strings have very close frequency this might not be very accurate. But in the case of searching problems, usually, a small subset occupies a big percentage of all the searches. So in a real-life scenario, this will be very feasible. But this also can not do infix matching. [Here is my full code implementing this method](https://github.com/mdnurahmed/autocomplete-with-redis-2) ## Way 4: Using Elasticsearch There are multiple ways to do it using Elasticsearch. But almost all of them does most of the work at query time and are not guaranteed to be in memory. Except if we map our field as [search_as_you_type](https://www.elastic.co/guide/en/elasticsearch/reference/7.x/search-as-you-type.html) data type. Then this does most of the work at index time by updating in-memory Finite State Transducers which are a variant of the trie data structure. Also, this can do infix matching. This is the most efficient way to solve this problem. our mapping would look like this - ``` { "mappings":{ "properties":{ "search-text":{ "type":"search_as_you_type" }, "frequency":{ "type":"long" }, "expire":{ "type":"long" } } } } ``` "frequency" is the frequency of the search-text, so we can fetch top N relevant searches. "expire" field can be used to delete expired less frequent items by another process. We don't want one search to appear multiple times in the database. One way to solve this is to use the hash of the search string as the document ID. Our insert operation will be an upsert operation where we would increase the "frequency" of the document and update the "expire" field if it already exists or insert the document if it doesn't exist. 
We can take help of the "script" field for this - ``` POST autocomplete/_update/{DOCUMENT_ID} { "script":{ "source":"ctx._source.frequency++;ctx._source.expire={EXPIRE_TIME}", "lang":"painless" }, "upsert":{ "search-text":"{SEARCH_STRING}", "frequency":1, "expire":"{EXPIRE_TIME}" } } ``` In search API we want to fetch the top N relevant element sorted by frequency - ``` GET /autocomplete/_search { "size":5, "sort":[ { "frequency":"desc" } ], "query":{ "multi_match":{ "query":{SEARCH_STRING}, "type":"bool_prefix", "fields":[ "search-text", "search-text._2gram", "search-text._3gram" ] } } } ``` [Here is my full code implementing this method.](https://github.com/mdnurahmed/autocomplete-with-elasticsearch) ## Conclusion : Thanks a lot for taking your time to read it. Let me know what you think of it in the comment. Let me know if there are any other solutions you know of or how I could improve my solutions. Any feedback is welcome.
mdnurahmed
819,059
How to bypass captcha with 2captcha API and Selenium using Javascript
Spam is a big nightmare for website owners. With the dawn of bots, this challenge has never been more...
16,544
2021-09-12T08:44:51
https://tngeene.com/blog/how-to-bypass-captcha-with-2captcha-api-and-selenium
javascript, automation, captcha, node
Spam is a big nightmare for website owners. With the dawn of bots, this challenge has never been more prominent. Completely Automated Public Turing tests to tell Computers and Humans Apart(or CAPTCHA as they are commonly known) were introduced to tackle this issue. The squiggly lines and words of the past are less common nowadays and have been replaced by Google's version 2 of CAPTCHA, known as reCAPTCHA. However, bots are becoming more advanced and can bypass almost any captcha. With a little spare time and, a few resources, we can make a program that bypasses the all too annoying CAPTCHA. In this tutorial, we'll make use of the captcha bypass software, [2captcha](https://2captcha.com/?from=12437369). ![https://res.cloudinary.com/practicaldev/image/fetch/s--TImx3P4Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h2oqd92dh8bo0ute3qvn.jpeg](https://res.cloudinary.com/practicaldev/image/fetch/s--TImx3P4Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h2oqd92dh8bo0ute3qvn.jpeg) ## Why are we doing this? Okay, I know some of you might be wondering, why would we need to bypass captcha? and most importantly; is this even legal? No, it is not illegal to bypass 2captcha. Whatever we'll be building is within the confines of the law. Secondly, this article is intended for anyone; 1. Curious about how you can bypass captcha 2. Building web scrapers that would need to bypass a captcha > *Disclosure: I only recommend products I would use myself and all opinions expressed here are my own. This post may contain affiliate links that at no additional cost to you, I may earn a small commission. ## What is 2captcha API and how does it work? [2Captcha.com](https://2captcha.com/?from=12437369) is a captcha solving service that automates the captcha solving process. They have an API and several packages that wrap around different programming languages. 
All you need is to register on their website, get an API key and make requests to their API. > Note: They charge a small rate for every captcha solved, however, the fee is quite minimal. ## Signup Process and supported languages ![https://res.cloudinary.com/practicaldev/image/fetch/s--JgfXuOWJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdobzdwc4thex7mwocho.png](https://res.cloudinary.com/practicaldev/image/fetch/s--JgfXuOWJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdobzdwc4thex7mwocho.png) ![https://res.cloudinary.com/practicaldev/image/fetch/s--OEXnGqQA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szwihqwt8xxrtpnz64fn.png](https://res.cloudinary.com/practicaldev/image/fetch/s--OEXnGqQA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szwihqwt8xxrtpnz64fn.png) Supported captchas the software solves include; ![https://res.cloudinary.com/practicaldev/image/fetch/s--px3AuiEX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11ybxotkegschanhgq9i.png](https://res.cloudinary.com/practicaldev/image/fetch/s--px3AuiEX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11ybxotkegschanhgq9i.png) ## Requirements and Setup - For this tutorial, we'll utilize the [2captcha API](https://2captcha.com/2captcha-api). You'll need to have a developer account to use. You can head over to [this link and sign up](https://2captcha.com/?from=12437369) - The source code we'll be using can be found [here](https://github.com/tngeene/two-captcha-solver). Setup instructions have been linked in the project README file. Feel free to clone the repository and tinker around. Let's dive in. 
## Automating with Selenium Before we get carried away by CAPTCHA, we need to understand the process our program will follow. I'm going to use [selenium](https://www.selenium.dev/documentation/) and node.js. Selenium is a browser automation service that provides extensions that allow programs to emulate browser interactions. It supports an array of browsers; from chrome, firefox, safari, and so forth, by integrating with their web drivers. For this tutorial, I'll be using chrome driver since chrome is my default browser. To setup selenium and chromedriver, run ``` npm i selenium-webdriver npm i chromedriver ``` This will write to the package.json file initialized in your node application. ## Installing 2Captcha Dependencies Next, we'll need to install 2captcha's node.js package ``` npm i @infosimples/node_two_captcha ``` This package will do the heavy lifting in terms of interacting with 2Captcha API. Finally, we'll install `dotenv` since we'll be having sensitive keys that we need stored in an environment variables file. ``` npm i dotenv ``` ## Project Code Open an `index.js` file at the root of your project directory and paste the following code. We'll go over the code in-depth in the next section. ```javascript require("chromedriver"); require("dotenv").config(); const Client = require("@infosimples/node_two_captcha"); const { Builder, By, Key, until } = require("selenium-webdriver"); const client = new Client(process.env.CAPTCHA_API_KEY, { timeout: 60000, polling: 5000, throwErrors: false, }); const initiateCaptchaRequest = async () => { console.log("solving captcha..."); try { client .decodeRecaptchaV2({ googlekey: process.env.GOOGLE_CAPTCHA_KEY, pageurl: process.env.WEBSITE_URL, }) .then(function (response) { // if captcha is solved, launch selenium driver. 
launchSelenium(response); }); } finally { // do something } }; function sleep(ms) { return new Promise((resolve) => setTimeout(resolve, ms)); } async function launchSelenium(response) { if (response) { console.log("Captcha Solved! Launching Browser instance..."); let driver = await new Builder().forBrowser("chrome").build(); // Navigate to Url await driver.get(process.env.WEBSITE_URL); await driver.findElement(By.id("name")).sendKeys("Ted"); await driver.findElement(By.id("phone")).sendKeys("000000000"); await driver.findElement(By.id("email")).sendKeys("tngeene@captcha.com"); await driver.findElement(By.id("comment-content")).sendKeys("test comment"); const gCaptchResponseInput = await driver.findElement( By.id("g-recaptcha-response") ); await driver.executeScript( "arguments[0].setAttribute('style','type: text; visibility:visible;');", gCaptchResponseInput ); await gCaptchResponseInput.sendKeys(`${response.text}`); await driver.executeScript( "arguments[0].setAttribute('style','display:none;');", gCaptchResponseInput ); await driver.findElement(By.id("send-message")).click(); // wait 8 seconds and close browser window await sleep(8000); driver.quit(); } else { // if no text return request time out message console.log("Request timed out."); } } (async function main() { const response = await initiateCaptchaRequest(); })(); ``` ### Importing Packages ```javascript require("chromedriver"); require("dotenv").config(); const Client = require("@infosimples/node_two_captcha"); const { Builder, By, Key, until } = require("selenium-webdriver"); ``` These lines are essentially referencing the packages we'll be using. We're telling the program that we'll require to use chrome driver, dotenv, the node_two_captcha, and selenium packages. ### Interacting with imported packages ```javascript const client = new Client(process.env.CAPTCHA_API_KEY, { timeout: 60000, polling: 5000, throwErrors: false, }); ``` The first package we need to use is the `node_two_captcha` package. 
The first parameter of the TwoCaptchaClient constructor is your API key from 2Captcha. In our case above, we referenced an environment variable (CAPTCHA_API_KEY). More on this below.

The other parameters are:

- `timeout`: Time (milliseconds) to wait before giving up on waiting for a captcha solution.
- `polling`: Time (milliseconds) between polls to the 2captcha server. 2Captcha documentation suggests this time to be at least 5 seconds, or you might get blocked.
- `throwErrors`: Whether the client should throw errors or just log the errors.

### Environment variables

{% wikipedia https://en.wikipedia.org/wiki/Environment_variable %}

It's advisable to use environment variables to store sensitive information that we wouldn't want getting into unauthorized hands, for example, our 2Captcha API key. Create a `.env` file at the root of your project folder. Modify it to this.

```
CAPTCHA_API_KEY=<api_key_from_2_captcha.com>
WEBSITE_URL=<website_we'll_be_accessing_captcha>
GOOGLE_CAPTCHA_KEY=<google_captcha_site_key>
```

Paste the API key value from the [2captcha](https://2captcha.com/?from=12437369) dashboard into the `CAPTCHA_API_KEY` field.

## Captcha Solving Pseudocode

Now that we have our account set up, we'll do the actual captcha bypassing, which brings us to the next part. For this tutorial, I'll be bypassing the captcha on [this sample form](https://phppot.com/demo/php-contact-form-with-google-recaptcha/)

![https://res.cloudinary.com/practicaldev/image/fetch/s--F4Bz13C_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkgdk3qldwr178k6rtb0.png](https://res.cloudinary.com/practicaldev/image/fetch/s--F4Bz13C_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkgdk3qldwr178k6rtb0.png)

In order to achieve our objective, we'll need our program to follow these steps:

1. Launch a browser tab
2. Fill in the required fields in the form
3. Solve the captcha
4. Send the contact message with the solved captcha

All this will be achieved automatically!! How cool is that?

## Retrieving DOM elements

As you can tell from the form, we will need selenium to access the DOM form elements as it automatically fills the inputs. These are: the name, email, phone, comment, and finally a hidden google captcha field that takes the solved captcha as an input.

For the visible form fields, all we need to do is open up the dev tools and retrieve the individual ids of the form fields. This section does just that.

```javascript
let driver = await new Builder().forBrowser("chrome").build();
await driver.get(process.env.WEBSITE_URL);
await driver.findElement(By.id("name")).sendKeys("Ted");
await driver.findElement(By.id("phone")).sendKeys("000000000");
await driver.findElement(By.id("email")).sendKeys("tngeene@captcha.com");
await driver.findElement(By.id("comment-content")).sendKeys("test comment");
```

What we're telling selenium is to launch the chrome browser and visit the specified URL. Once it does so, find the DOM elements that match the provided ids. Since these are form inputs, auto-populate those fields with the data inside the `sendKeys` function.

All reCAPTCHA v2 forms have a hidden textarea that autofills with the solved captcha code once you click the `i'm not a robot` check. Usually, this has the id set as `g-recaptcha-response`

![https://res.cloudinary.com/practicaldev/image/fetch/s--DRV3vb-q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6v3jdf99wtv974jfhffy.png](https://res.cloudinary.com/practicaldev/image/fetch/s--DRV3vb-q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6v3jdf99wtv974jfhffy.png)

Since selenium simulates human browser inputs, we'll need to make the field visible.
We achieve this with the following code snippet:

```javascript
const gCaptchResponseInput = await driver.findElement(
  By.id("g-recaptcha-response")
);
await driver.executeScript(
  "arguments[0].setAttribute('style','type: text; visibility:visible;');",
  gCaptchResponseInput
);
await gCaptchResponseInput.sendKeys(`${response.text}`);
await driver.executeScript(
  "arguments[0].setAttribute('style','display:none;');",
  gCaptchResponseInput
);
await driver.findElement(By.id("send-message")).click();
```

This section makes the field visible, auto-populates the field with the solved captcha, hides the field again, and finally simulates the button click to send the comment with the solved captcha.

Finally, we'll close the browser tab 8 seconds after the captcha has been solved.

```javascript
// wait 8 seconds and close browser window
await sleep(8000);
driver.quit();
```

All the aforementioned functionality resides in the `launchSelenium()` function. We need to tie it all together with the 2captcha service. From the index.js file, you can see we have an `initiateCaptchaRequest()` function.

```javascript
const initiateCaptchaRequest = async () => {
  console.log("solving captcha...");
  try {
    client
      .decodeRecaptchaV2({
        googlekey: process.env.GOOGLE_CAPTCHA_KEY,
        pageurl: process.env.WEBSITE_URL,
      })
      .then(function (response) {
        // if captcha is solved, launch selenium driver.
        launchSelenium(response);
      });
  } finally {
    // do something
  }
};
```

We are calling the `node_two_captcha` client we'd initialized before. The `WEBSITE_URL` is the webpage of our captcha form. Fill it in the `.env` file. `GOOGLE_CAPTCHA_KEY` is a special identifier found in all web forms having captcha; we can retrieve it by opening dev tools and searching for the `data-sitekey` keyword.
![https://res.cloudinary.com/practicaldev/image/fetch/s--EWcZ2FeR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lp1rxds2suvwbtgrktdn.png](https://res.cloudinary.com/practicaldev/image/fetch/s--EWcZ2FeR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lp1rxds2suvwbtgrktdn.png)

Retrieve the value and paste it as the `GOOGLE_CAPTCHA_KEY` value in the `.env` file. `node_two_captcha` sends this key under the hood to the 2captcha API, which then returns an API response with the solved captcha.

Selenium will only be launched upon a successful captcha solve, which usually takes a few seconds. For reCAPTCHA version 2, the ETA is usually anywhere from 15 seconds to 45 seconds. reCAPTCHA version 3 takes a shorter time. If a request times out, we log the API response.

## Demo

Okay, now your application is set up! It may feel like a lot 😅 but we did a lot of installation. We will now test our application. To do this, run

```
node index.js
```

![https://res.cloudinary.com/practicaldev/image/fetch/s--wo6M1CbL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2rzq2jqq3zbyu8z86pd.gif](https://res.cloudinary.com/practicaldev/image/fetch/s--wo6M1CbL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2rzq2jqq3zbyu8z86pd.gif)

### Conclusion

> It’s always a dilemma, should websites have a better experience and have simple to bypass the CAPTCHA or should websites aggressively protect themselves from bots and have a bad user experience. The war between websites and bots is never over. Whatever verification method websites pull out, it’s just a matter of time when someone figures out how to bypass it. ~ Filip Vitas

In this guide, we were introduced to the 2captcha API, selenium, and a few concepts in 2captcha.
By the end of it, I hope you can apply the knowledge gained to build your own captcha bypass service. I mean, if bots can do it, then so should we! A few next steps would be to add a user interface to input our values. You can also look into using the 2captcha API with your preferred programming language and other tools such as [puppeteer](https://pptr.dev/).

Finally, if you liked the content and would like to use 2captcha, [sign up with this link.](https://2captcha.com/?from=12437369)

If you have any questions, you can always leave a comment below, or reach out on these channels:

1. [personal website](https://tngeene.com/)
2. [Twitter](https://twitter.com/Ngeene_kihiu)

Source code of the demo project can be accessed [here.](https://github.com/tngeene/two-captcha-solver)

Use 2captcha responsibly.

## Sponsors

- [Scraper API](https://www.scraperapi.com?via=teddy44) is a startup specializing in strategies that'll ease the worry of your IP address being blocked while web scraping. They utilize IP rotation so you can avoid detection, boasting over 20 million IP addresses and unlimited bandwidth. Using Scraper API and a tool like [2captcha](https://2captcha.com?from=12437369) will give you an edge over other developers; the two can be used together to automate processes. Sign up on Scraper API and use [this link](https://www.scraperapi.com?via=teddy44) to get a 10% discount on your first purchase.
- Do you need a place to host your website or app? [Digital Ocean](https://www.digitalocean.com) is just the solution you need. Sign up on Digital Ocean using this [link](https://m.do.co/c/eaa803fe4d99) and experience the best cloud service provider.
- The journey to becoming a developer can be long and tormenting; luckily [Pluralsight](http://referral.pluralsight.com/mQh0Nxp) makes it easier to learn. They offer a wide range of courses, with top quality trainers, whom I can personally vouch for.
Sign up using [this link](http://referral.pluralsight.com/mQh0Nxp) and get a 50% discount on your first course.
tngeene
819,220
Pip installation on Windows using python
To get started with using pip, you should install Python on your system. What is pip? PIP...
0
2021-09-10T00:50:54
https://dev.to/pre22/pip-installation-on-windows-using-python-4534
To get started with using pip, you should [install Python](https://www.python.org/downloads/) on your system.

## What is pip?

PIP is a package management system used to install and manage software packages written in Python. It stands for “preferred installer program” or “Pip Installs Packages.”

## Check for pip on your computer

First, check whether you have a working Python with pip installed. This can be done by running the following commands:

```
$ python --version
python 3.N.N
$ python -m pip --version
pip X.Y.Z from ... (python 3.N.N)
```

If that worked, then there is no need to reinstall pip. But if not, you will have to follow the instructions below.

## How to install pip on Windows

pip can be downloaded and installed from the command line by going through the following steps:

1. Download the [get-pip.py](https://bootstrap.pypa.io/get-pip.py) file and store it in the same directory where python is installed.
![windows screen](https://media.geeksforgeeks.org/wp-content/uploads/20200117165504/pip-install-1.jpg)
2. Change the current path of the directory in the command line to the path of the directory where the above file exists.
![screen](https://media.geeksforgeeks.org/wp-content/uploads/20200117165502/pip-change-directory.jpg)
3. Run the command given below:
```
python get-pip.py
```
![screen](https://media.geeksforgeeks.org/wp-content/uploads/20200117165506/pip-installation.jpg)

pip is now installed on your system! Thank you for reading.
pre22
819,244
What is a DAO?
Cryptocurrency, NFT, and gm seem to have become common knowledge among most folks interested in Web3...
0
2021-09-10T01:52:04
https://dev.to/rahat/what-is-a-dao-1ak1
web3, crypto, nft, token
Cryptocurrency, NFT, and gm seem to have become common knowledge among most folks interested in Web3 at the moment. There is a lot of growing chatter now about Decentralized Autonomous Organizations, or DAOs for short.

Let's imagine for a second that you, your neighbor, and random people across town or across the country you have never met all had the ability to make decisions about what a specific organization can do. Maybe you all have an interest in video games and want to decide on what new video game gets the funding to be completed. Maybe you're all investors and want to decide who the next big web3 startup to invest in will be. The point is you all have some similar interests and, without needing to know each other, can work together in determining which creators in the space you're interested in should be funded, or which proposals the organization has brought forward need to be focused on. You all even help decide who works for the organization!

This isn't some fantasy - this is actually happening with DAOs.

The more I've learned about web3 the more it seems that DAOs keep coming up. I've recently joined Nader Dabit's Developer DAO and became a mod there to help with the community. The great thing about this is that I'm seeing how this works and is being built from the ground up, and I'm hoping to learn a few things here.

The main problem I've had in general with DAOs is trying to figure out where to find them and how to participate in them. Developer DAO made that super easy, and everyone in there is quite interested in working together on all things web3. I think I'll be pitching a couple of ideas in there to see how they land.

For now though, I'm looking for more DAOs to be a part of. Do you know of any that are actively looking for new members who can contribute to their growth? Particularly those needing frontend/Solidity devs? Hit me up and let's see what the community has!
rahat
819,267
Experiencing the behavior driven design of using TDD with React Testing Library
TDD in React Test driven development (TDD) is a tool for breaking down complex problems...
0
2021-09-10T04:28:50
https://dev.to/sdiamante13/experiencing-the-behavior-driven-design-of-using-tdd-with-react-testing-library-37op
react, tdd, javascript
## TDD in React

Test driven development (TDD) is a tool for breaking down complex problems into more manageable chunks. This post will explore my journey of applying a TDD approach to website development using React, Jest, and the React Testing Library. In this experiment, I didn't look at the browser for feedback at all. Instead, I got all of my feedback from the automated tests. By focusing on the behavior of the components I'm building, I am able to get a working UI quickly, and I'm able to change its behavior while still verifying its accuracy. Also, I ignore all of the styling until I am happy with the behavior of the system.

## The XP Way

When I started programming professionally, I learned it in an XP way. For more info on Extreme Programming, check out my article on [XP compared to Scrum](https://www.path-to-programming.tech/posts/xp-scrum-compared-pt1/). My career has always been more than a job. On any product I find myself on, I care deeply about the code, design, architecture, and prosperity of the product. One practice I learned and still continue is building software through the use of TDD. Most people have the misconception that TDD is about enforcing tests in our code. But as you will see, it is much more than that.

## Why does TDD work?

It is human nature to want to break down large problems into smaller problems. By focusing on the behavior you would like to create, you step away from the larger problem at hand. Nowadays, there are many talented developers creating life-changing software. The breadth and depth of our software products is immense. By using TDD as a tool, we are able to break these gigantic problems down into one question: what is the simplest thing I can do to make this test pass? We use tests to dream up a behavior that we wish our software would perform, and then that dream becomes a reality. Some people call it red, green, refactor, but you could just as well call it dream, reality, optimize.
## Attempting TDD on Android

When I was on an Android mobile app team early in my career, I wasn't able to apply TDD enough on the app. Something about having the UI there always distracted me. I would lose that flow that us TDD practitioners love to be in. Too much context switching or long-running phases of red will break this flow. On my team, we would style, design, and add business logic all at the same time. It was way too much all at once. Over time I've learned to break down those different parts of the design process. We weren't using testing libraries that check the behavior of the UI. Although we did have some Espresso UI tests, which are much like the React Testing Library, those were not part of our everyday local development. For these reasons, our team, which was actively applying XP practices to a mobile product, was not able to achieve a high level of TDD compared to the backend teams in the portfolio.

## Attempting TDD on React

Recently I have been using TDD to build websites using React and the React Testing Library. Instead of having a browser window open to view my changes, I just execute `npm run test:watch`, which executes `jest test --watch`. Now I have a quick feedback loop! And most importantly, LESS CONTEXT SWITCHING! I can dream up some magical behavior I want my UI to have, and I can let my automated tests drive towards an optimal design. Most newcomers to the practice don't really understand that at the root of TDD it's all about design. By taking small steps, we only leave the danger zone for short amounts of time. The danger zone being that uncomfortable stretch where your tests say that your dream and your reality are not aligned: your software does not work the way you expect it to.

### Let's break down my thought process

1. I want to add new behavior to my website
1. This is my criteria for what will happen when 'x' happens
1. DANGER! The software is not in a working state
1.
Do the simplest possible thing to get back to safety

### Jest test case

Here's a test case I wrote for a task manager application:

```
it('should add new tasks when enter key is pressed', async () => {
  renderComponent();

  addNewTask('Take out the trash');
  addNewTask('Write Blog Post');

  screen.getByLabelText(/Take out the trash/i);
  screen.getByLabelText(/Write Blog Post/i);
});
```

And here are my helper methods so you understand what methods I'm using from the React Testing Library:

```
const addNewTask = (taskName) => {
  const taskInputField = getTaskInputField();
  type(taskInputField, taskName);
  pressEnter(taskInputField);
};

const getTaskInputField = () => {
  return screen.getByRole('textbox', { name: /Add New Task/i });
};

const type = (input, text) => {
  fireEvent.change(input, { target: { value: text } });
};

const pressEnter = (domElement) => {
  fireEvent.keyPress(domElement, { key: 'Enter', keyCode: 13 });
};
```

As a user I want to add a task, and I can accomplish that by typing my task into the input field and pressing the enter key. This test has that same behavior baked into it. After I wrote this test case, I wrote the code necessary to make it happen. Here's a small snippet of the JSX for the Task Manager:

```
return (
  <div>
    <h1>Task Manager</h1>
    <div>
      <label htmlFor="task">Add New Task</label>
      <input
        id="task"
        name="task"
        type="text"
        value={task.name}
        onChange={handleChangeEvent}
        onKeyPress={handleKeyEvent}
      />
    </div>
    <TaskList tasks={tasks} onCompleted={handleCheckBoxEvent} />
  </div>
);
```

## Programming is fun with TDD

For me, TDD gamifies programming. I love playing games, and when I'm practicing TDD it feels like I'm playing a game. It makes programming fun!

## Distracted by UI

One reason I wanted to try this was a problem I've been having lately. While working on building a website, I often get distracted by wanting to style my content before I've even programmed its behavior.
I'll always have a thought like "oh I want this part to be blue... and now let's make this App Bar perfect!" But hey, wait - all of that stuff can wait! So I stop and ask myself... What is it that the user of this product wants it to do? How can my website achieve that behavior? This is where TDD in React really shines. By leaving the styling towards the end, we have guaranteed that the application works the way we expect it to work. And now we can focus on all of the details of the UI, UX, and a11y.

In my opinion, adding styling is more like visual refactoring. The definition of refactoring is restructuring the code to work in a better way without modifying the current behavior of the system. By adding styling to the components last, we are just restructuring the layout of components that have already proved themselves to exhibit the behaviors we designed for them. We are giving them color, depth, and space to harmonize with the other widgets, text, and buttons on the screen.

After exploring TDD in React, I discovered an even better way to do it: Outside-In TDD. Maybe next time!
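As a footnote, the Enter-key behavior exercised by the Jest test above can be sketched without any JSX at all. This is a hedged plain-JavaScript sketch of the underlying task-list state logic — the names (`createTaskStore`, `handleKeyPress`) are my own assumptions mirroring the snippets above, not code from the actual app:

```javascript
// Minimal sketch of the task-list state logic the component's Enter-key
// handler drives. Names are hypothetical; the React component above keeps
// this state in hooks instead.
function createTaskStore() {
  const tasks = [];
  return {
    // Only the Enter key with a non-empty name adds a task,
    // matching the behavior the Jest test asserts.
    handleKeyPress(key, name) {
      if (key === 'Enter' && name.trim() !== '') {
        tasks.push({ name, completed: false });
      }
    },
    list() {
      return tasks.map((t) => t.name);
    },
  };
}

const store = createTaskStore();
store.handleKeyPress('Enter', 'Take out the trash');
store.handleKeyPress('a', 'ignored'); // non-Enter keys do nothing
store.handleKeyPress('Enter', 'Write Blog Post');
console.log(store.list()); // → ['Take out the trash', 'Write Blog Post']
```

Pulling the logic out like this is exactly what makes the red-green loop fast: the behavior is testable with no DOM at all.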
sdiamante13
819,427
A Little guide of Spring Web(MVC) with Custom Security for REST API
Getting started If you want security feature in your REST API App, USE Spring security....
0
2021-09-10T05:30:24
https://dev.to/composite/a-little-guide-of-custom-spring-web-mvc-with-security-for-rest-api-m6g
java, spring, security, tutorial
## Getting started

If you want security features in your REST API app, use Spring Security. It is BETTER THAN a plain `Interceptor` (WebMVC) or `WebFilter` (WebFlux) in your Spring app. I'll show you a small tutorial with code. Let's get started.

### Step 1. `AuthenticationToken`

Prepare your authentication token. If your authentication data is nothing special, just use `UsernamePasswordAuthenticationToken`. Are you using JWT? No worries: you can extend `AbstractAuthenticationToken` to define your own authentication token.

### Step 2. `AuthenticationFilter`

When a client calls your app, you get the authentication information from request entities such as headers or the request body. How do you extract that information from the HTTP request? You define a servlet filter, usually an implementation of `GenericFilterBean` or `OncePerRequestFilter`. I chose `OncePerRequestFilter` because its method works with `HttpServletRequest` and `HttpServletResponse` rather than the more abstract `ServletRequest` and `ServletResponse`.

```java
@Component
public class ApiAuthenticationFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        String principal = "Your Principal (e.g. user ID) from request";
        String credentials = "Your Credentials (e.g. user password) from request";
        UsernamePasswordAuthenticationToken token =
                new UsernamePasswordAuthenticationToken(principal, credentials /*, DEFAULT ROLES */);
        SecurityContextHolder.getContext().setAuthentication(token);
        filterChain.doFilter(request, response);
    }
}
```

### Step 3. `AuthenticationProvider`

Next, you need to verify the authentication information. Implement `AuthenticationProvider` with whatever verification your app needs, such as verifying a JWT signature or looking the user up in your DB.
```java
@Component
public class ApiAuthenticationProvider implements AuthenticationProvider {

    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        Object principal = authentication.getPrincipal();
        Object credentials = authentication.getCredentials();
        // TODO: your verification logic based on principal and credentials.
        // grantedAuthorities: build the REAL roles for the authenticated user here.
        if (verified) {
            return new UsernamePasswordAuthenticationToken(principal, credentials, grantedAuthorities);
        } else {
            return null;
        }
    }

    @Override
    public boolean supports(Class<?> authentication) {
        // ... or equals your own AuthenticationToken class.
        return authentication.equals(UsernamePasswordAuthenticationToken.class);
    }
}
```

### Step 4. `AuthenticationEntryPoint`

If you are building a REST API, you should implement your own `AuthenticationEntryPoint` for failed or unauthorized requests; otherwise you'll get Spring Security's default unauthorized response, which you probably don't want.

```java
@Component
public class ApiAuthenticationEntryPoint implements AuthenticationEntryPoint {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
                         AuthenticationException e) throws IOException, ServletException {
        log.error("Unauthorized!!! message : " + e.getMessage());
        response.setContentType(MediaType.APPLICATION_JSON_VALUE);
        response.setStatus(HttpStatus.UNAUTHORIZED.value());
        try (OutputStream os = response.getOutputStream()) {
            // Write your own unauthorized response here.
            objectMapper.writeValue(os, e);
            os.flush();
        }
    }
}
```

### Step 5. `AccessDeniedHandler`

You also need to implement an `AccessDeniedHandler` to return your own response when authentication succeeds but the user lacks the required roles.
```java
@Component
public class ApiAccessDeniedHandler implements AccessDeniedHandler {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public void handle(HttpServletRequest request, HttpServletResponse response,
                       AccessDeniedException e) throws IOException, ServletException {
        log.error("Forbidden!!! message : " + e.getMessage());
        response.setContentType(MediaType.APPLICATION_JSON_VALUE);
        response.setStatus(HttpStatus.FORBIDDEN.value());
        try (OutputStream os = response.getOutputStream()) {
            // Write your own FORBIDDEN response here.
            objectMapper.writeValue(os, e);
            os.flush();
        }
    }
}
```

### Step 6. `WebSecurityConfigurerAdapter`

Finally, configure your app's security: extend `WebSecurityConfigurerAdapter` and import the security classes you made.

```java
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true, jsr250Enabled = true)
@Import({ ApiAuthenticationFilter.class, ApiAuthenticationProvider.class,
        ApiAuthenticationEntryPoint.class, ApiAccessDeniedHandler.class })
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    private final ApiAuthenticationFilter apiAuthenticationFilter;
    private final ApiAuthenticationProvider apiAuthenticationProvider;
    private final ApiAuthenticationEntryPoint apiAuthenticationEntryPoint;
    private final ApiAccessDeniedHandler apiAccessDeniedHandler;

    @Autowired
    public SecurityConfig() {
        // TODO: Autowire these beans...
        // or use @RequiredArgsConstructor instead if you are using lombok.
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.authenticationProvider(apiAuthenticationProvider);
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.httpBasic().disable()
            .csrf().disable() // required if rest api.
            .cors().and() // required if client is browser.
            // Required if you want stateless authentication method like JWT.
            .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and()
            .exceptionHandling()
            .authenticationEntryPoint(apiAuthenticationEntryPoint)
            .accessDeniedHandler(apiAccessDeniedHandler)
            .and()
            .authorizeRequests()
            .antMatchers("/public/**").permitAll() // for public resources
            .anyRequest().authenticated()
            .expressionHandler(defaultWebSecurityExpressionHandler())
            .and()
            .addFilterBefore(apiAuthenticationFilter, UsernamePasswordAuthenticationFilter.class)
            .formLogin().disable();
    }

    // optional: if you want role names without prefix.
    private DefaultWebSecurityExpressionHandler defaultWebSecurityExpressionHandler() {
        DefaultWebSecurityExpressionHandler result = new DefaultWebSecurityExpressionHandler();
        result.setDefaultRolePrefix("");
        return result;
    }
}
```

Done. Now start your Spring Boot app and check that it works. Happy Coding!
composite
819,452
Add An Advanced File Uploader To Your React.js App - Upload Care
Overview In this article, we are going to integrate Upload Care ( An Advanced File...
14,405
2021-09-10T06:40:51
https://dev.to/suhailkakar/add-an-advanced-file-uploader-to-your-react-js-app-upload-care-487o
javascript, programming, tutorial, webdev
### Overview

In this article, we are going to integrate Upload Care (an advanced file uploader), which includes a drag-and-drop image uploader, direct-link image uploads, and more, into our React.js app.

### Creating a react app

The first step is to create a simple React app, which you can do just by running the command below in your terminal.

```sh
npx create-react-app upload-care
```

This might take a while depending on your computer's specs, but once it is done, go to the newly created directory (in our case `upload-care`) and run `npm start` or `yarn start`. This command will start the development server for your React app. Now open this directory (in our case `upload-care`) in any code editor.

### Cleaning up the project

Once you open the directory in your code editor, you can see that there are many files and folders, but for this project we don't need most of them. Let's go ahead and delete the files we don't need. In the `src` folder, delete all files except `App.js`, `index.js`, and `App.css`. Once you've removed them, delete everything inside of `App.js` and paste the code below instead.

```jsx
import React from 'react'

export default function App() {
  return (
    <div>
      <h1>React x UploadCare</h1>
    </div>
  )
}
```

Also delete everything inside of `index.js` and paste the code below instead.

```jsx
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
  document.getElementById('root')
);
```

And also delete everything inside of `App.css`. Now in the `public` folder, delete everything except `index.html`.
Delete everything inside of `index.html` and paste the code below instead.

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>React x UploadCare</title>
  </head>
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
  </body>
</html>
```

Finally, this is how your folder structure should look 👇

```
📦
├── package.json
├── public
│   └── index.html
├── README.md
├── src
│   ├── App.css
│   ├── App.js
│   └── index.js
└── yarn.lock
```

### Getting an API key

Sign up for an account on [Upload Care's website](https://app.uploadcare.com/accounts/signup/) and click on API Keys in the sidebar.

![screely-1631249553895.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631249565100/h3_4zD8Ex.png)

Copy your public key, as we need it in further steps.

### Installing and Adding Upload Care

Now it is time to install Upload Care in the React application; to do that, simply run

```
npm install @uploadcare/react-widget
```

Once it is installed, you need to import the package into your `App.js`; to do that, simply add this line to the top of your `App.js` code:

```
import { Widget } from "@uploadcare/react-widget";
```

To use the File Uploader component, you can add the code below to your `App.js` or another template of your choice:

```jsx
<p>
  <label htmlFor='file'>Your file:</label>{' '}
  <Widget publicKey='YOUR_PUBLIC_KEY' id='file' />
</p>
```

Finally, this is how your `App.js` should look:

```jsx
import React from "react";
import { Widget } from "@uploadcare/react-widget";

export default function App() {
  return (
    <div>
      <p>
        <label htmlFor="file">Your file:</label>{" "}
        <Widget publicKey="YOUR_PUBLIC_KEY" id="file" />
      </p>
    </div>
  );
}
```

Now, paste your public key in place of `YOUR_PUBLIC_KEY` in the code above. Open your browser and go to `localhost:3000`, and 💥 now you have Upload Care integrated into your app.
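The widget also reports finished uploads through an `onChange` callback that receives the uploaded file's info. Independent of React, the handler's job can be sketched in plain JavaScript — treat the field names here (`name`, `cdnUrl`) as assumptions based on Uploadcare's widget docs, and verify them against the version of `@uploadcare/react-widget` you installed:

```javascript
// Hedged sketch of what an onChange handler might do with the widget's result.
// The fileInfo fields (name, cdnUrl) are assumptions, not verified against
// your installed widget version.
function collectUpload(fileInfo, uploads) {
  uploads.push({ name: fileInfo.name, url: fileInfo.cdnUrl });
  return uploads;
}

const uploads = [];
collectUpload({ name: 'avatar.png', cdnUrl: 'https://ucarecdn.com/your-file-uuid/' }, uploads);
console.log(uploads.length); // → 1
```

In the component, you would wire it up as something like `<Widget publicKey="YOUR_PUBLIC_KEY" onChange={(info) => collectUpload(info, uploads)} />`, keeping the list in React state rather than a plain array.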
If anyone uploads a file using the Upload Care widget, you can view those files in your dashboard.

![screely-1631249899662.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631249909192/E3C-rn7kCp.png)

## Conclusion

I hope you found this article useful; if you need any help, please let me know in the comment section. You can find the complete source code [here](https://github.com/suhailkakar/React-Uploadcare).

If you would like to buy me a coffee, you can do it [here](https://www.buymeacoffee.com/suhailkakar). Let's connect on [Twitter](https://twitter.com/suhailkakar) and [LinkedIn](https://www.linkedin.com/in/suhailkakar/). 👋

Thanks for reading, see you next time
suhailkakar
819,463
Hybrid Cloud with AWS | AWS Whitepaper Summary
In this whitepaper summary (originally published at November 2020), we will navigate through various...
0
2021-09-10T07:51:54
https://dev.to/awsmenacommunity/hybrid-cloud-with-aws-aws-whitepaper-summary-5g9b
aws, hybrid, architecture, cloud
In this whitepaper summary (originally published in November 2020), we will navigate through various offerings from AWS for hybrid technical and organizational adoption of cloud services. AWS is a pioneer in this field, as it understands the need to integrate the cloud, on-premises, and edge infrastructure of existing and potential customers. In addition, and with respect to the huge effort the contributors of this whitepaper made, we will try to update the information and figures that represent the current offerings. Finally, the inspiring use case of Dropbox's adoption of AWS hybrid cloud will be presented.

## Considerations of Hybrid Cloud with AWS

### Create a Hybrid Cloud Strategy

* Ongoing migration to the cloud.
* Ensuring business continuity during disaster.
* Expanding on-premises cloud infrastructure to support low-latency apps.
* Expanding international footprint with AWS.

### Create a Technical Strategy

* Identify the guiding tenets for hybrid cloud architecture.
* Define guiding principles for a planned hybrid cloud implementation.

## Hybrid Cloud Use Cases

### Application Migration to the Cloud

**_VMware Cloud on AWS_** delivers a faster, easier, and cost-effective path to the hybrid cloud while allowing customers to modernize applications, enabling faster time-to-market and increased innovation, especially with the **new Amazon EC2 i3en.metal instances powered by Intel® Xeon® Scalable processors.**

### Cloud Services On-premises

**_AWS Outposts_** is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience.

### Data Center Extension

* _Cloud Bursting_, with offerings of bursting for compute through **Amazon EC2, Amazon ECS, Amazon EKS, and AWS Fargate**, or bursting for storage through **Amazon S3 APIs and AWS Storage Gateway** for block and file storage.
* _Backup and Disaster Recovery_, with offerings such as **Amazon S3 APIs, AWS Storage Gateway, AWS DataSync and AWS Transfer for SFTP**.
* _Distributed Data Processing_, where low-latency or local data resides on-premises, while asynchronous processing, archiving, compliance, business analytics processing and machine learning-based predictions reside on AWS. AWS offerings for these purposes include **AWS Storage Gateway, AWS Backup, AWS DataSync, AWS Transfer Family, Amazon Kinesis Data Firehose and Amazon Managed Streaming for Apache Kafka (Amazon MSK)** for data importing, and leverage **AWS Analytics, AWS Machine Learning, AWS Serverless, AWS Containers** for data processing.
* _Geographic Expansion_, where customers take advantage of **AWS Outposts** in countries where AWS Regions do not exist, and the **AWS Global Infrastructure** in countries covered by AWS Regions.

### Edge Computing

* AWS Snowball, AWS Snowcone and AWS Snowmobile.
* AWS IoT Greengrass, which is an open-source edge runtime and cloud service for building, deploying, and managing device software.
* AWS Wavelength for mobile edge computing applications.

### ISV and Software Compatibility

AWS has built the most complete and proven approach for rapidly migrating tens to thousands of applications to the AWS Cloud to help you leverage your existing on-premises ISV software investments. Find out more at [AWS Marketplace](https://aws.amazon.com/marketplace/).
## Operations and Management Framework for Hybrid Cloud with AWS

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9qp24bnak7ioi096ljg.png)

### Hybrid Cloud Infrastructure

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oklcbwtp38o9gu6xbr37.png)

### Core Services

#### Device and Fleet Management

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q7bpkq24uz8lq08kdrb3.png)

#### Identity, Security and Access Management

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2709nlw1gotteon1c12.png)

### Unified Hybrid Cloud Management

#### Compute Services

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6v54isciq9zsezfr5lf5.png)

#### Storage Services

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ha4p3mojrg70f51zhhj.png)

#### Networking Services

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkrisndlyejzdkmvp4dh.png)

## Dropbox Hybrid Cloud Architecture

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9d39wi3ineyssm2yn6j1.png)

#### References

* Original whitepaper: https://bit.ly/3zLhxSe
* AWS Global Infrastructure: https://aws.amazon.com/about-aws/global-infrastructure/
* Hybrid Cloud with AWS: https://aws.amazon.com/hybrid/
* Dropbox's re:Invent presentation: https://www.youtube.com/watch?v=1_hKrGjYteQ
maltrkawi
819,464
Set Cell Styles and Formatting in Excel with Java
This article describes how to set cell styles, number formatting and font formatting in Excel using Java.
0
2021-09-10T07:11:14
https://dev.to/eiceblue/set-cell-styles-and-formatting-in-excel-with-java-2gjm
java, excel, styles, formatting
---
title: Set Cell Styles and Formatting in Excel with Java
published: true
description: This article describes how to set cell styles, number formatting and font formatting in Excel using Java.
tags: Java, Excel, styles, formatting
//cover_image: https://direct_url_to_image.jpg
---

A cell style is a defined set of formatting characteristics, like fonts and font sizes, number formatting, and cell borders. In Microsoft Excel, you can set cell styles to make some data stand out from the rest or make your spreadsheets more eye-catching. In this article, I am going to describe how to achieve the same programmatically using Java.

## Prerequisite: Add Dependencies

In order to set cell styles and formatting, I use the [Free Spire.XLS for Java](https://www.e-iceblue.com/Introduce/free-xls-for-java.html) library. To begin, you need to add dependencies to include Free Spire.XLS for Java in your Java project. For a Maven project, add the following configuration to the project's pom.xml file.

```xml
<repositories>
    <repository>
        <id>com.e-iceblue</id>
        <name>e-iceblue</name>
        <url>http://repo.e-iceblue.com/nexus/content/groups/public/</url>
    </repository>
</repositories>
<dependencies>
    <dependency>
        <groupId>e-iceblue</groupId>
        <artifactId>spire.xls.free</artifactId>
        <version>3.9.1</version>
    </dependency>
</dependencies>
```

For a non-Maven project, download the Free Spire.XLS for Java pack from [this website](https://www.e-iceblue.com/Download/xls-for-java-free.html), extract the zip file, and add Spire.Xls.jar in the lib folder to the project as a dependency.

## Set Cell Styles and Formatting

Using Free Spire.XLS for Java, you can set cell styles (borders, patterns, gradients, alignments, text orientation, direction, wrapping, shrinking, indentation etc.), number formatting and font formatting (font name, size, style, color, superscript, subscript etc.), as shown in the following example.
```java
import com.spire.xls.*;
import java.awt.*;

public class CellStyles {
    public static void main(String[] args) {
        //Create an Excel file
        Workbook workbook = new Workbook();
        workbook.setVersion(ExcelVersion.Version2016);
        //If you want to load an Excel file, use loadFromFile method
        //workbook.loadFromFile("test.xlsx");

        //Get the first worksheet
        Worksheet sheet = workbook.getWorksheets().get(0);

        int row = 2;

        //Set borders
        sheet.getCellRange(row, 1).setText("Borders");
        sheet.getCellRange(row, 2).getBorders().setLineStyle(LineStyleType.Thin);
        sheet.getCellRange(row, 2).getBorders().getByBordersLineType(BordersLineType.DiagonalUp).setLineStyle(LineStyleType.None);
        sheet.getCellRange(row, 2).getBorders().getByBordersLineType(BordersLineType.DiagonalDown).setLineStyle(LineStyleType.None);
        sheet.getCellRange(row, 2).getBorders().setColor(Color.RED);

        //Set pattern
        sheet.getCellRange(row += 2, 1).setText("Pattern");
        sheet.getCellRange(row, 2).getCellStyle().setFillPattern(ExcelPatternType.Angle);
        sheet.getCellRange(row, 2).getCellStyle().setPatternColor(Color.GREEN);

        //Set gradient effect
        sheet.getCellRange(row += 2, 1).setText("Gradient");
        sheet.getCellRange(row, 2).getStyle().getInterior().setFillPattern(ExcelPatternType.Gradient); //Not applicable for Excel 97-2003
        sheet.getCellRange(row, 2).getStyle().getInterior().getGradient().setForeColor(Color.CYAN);
        sheet.getCellRange(row, 2).getStyle().getInterior().getGradient().setBackColor(Color.BLUE);
        sheet.getCellRange(row, 2).getStyle().getInterior().getGradient().twoColorGradient(GradientStyleType.Horizontal, GradientVariantsType.ShadingVariants1);

        //Set number formatting
        sheet.getCellRange(row += 2, 1).setText("Number Formatting");
        sheet.getCellRange(row, 2).setNumberValue(1234.5678);
        sheet.getCellRange(row, 2).setNumberFormat("$#,##0.00");

        //Set font formatting
        sheet.getCellRange(row += 2, 1).setText("Font Formatting");
        sheet.getCellRange(row, 2).setText("Hello World");
        sheet.getCellRange(row, 2).getStyle().getFont().setFontName("Consolas");
        sheet.getCellRange(row, 2).getStyle().getFont().setSize(14);
        sheet.getCellRange(row, 2).getStyle().getFont().isItalic(true);
        sheet.getCellRange(row, 2).getStyle().getFont().setUnderline(FontUnderlineType.Single);
        sheet.getCellRange(row, 2).getStyle().getFont().setColor(Color.BLUE);

        //Set superscript (the code to set subscript is very similar)
        sheet.getCellRange(row += 2, 1).setText("Superscript");
        sheet.getCellRange(row, 2).getRichText().setText("a2 + b2 = c2");
        ExcelFont font = workbook.createFont();
        font.isSuperscript(true);
        //Set font for specific characters
        sheet.getCellRange(row, 2).getRichText().setFont(1, 1, font);
        sheet.getCellRange(row, 2).getRichText().setFont(6, 6, font);
        sheet.getCellRange(row, 2).getRichText().setFont(11, 11, font);

        //Set text alignment
        sheet.getCellRange(row += 2, 1).setText("Text Alignment");
        sheet.getCellRange(row, 2).setText("Center Aligned");
        sheet.getCellRange(row, 2).getStyle().setHorizontalAlignment(HorizontalAlignType.Center);
        sheet.getCellRange(row, 2).getStyle().setVerticalAlignment(VerticalAlignType.Center);

        //Set text orientation
        sheet.getCellRange(row += 2, 1).setText("Text Orientation");
        sheet.getCellRange(row, 2).setText("25 degree");
        sheet.getCellRange(row, 2).getStyle().setRotation(25);

        //Set text direction
        sheet.getCellRange(row += 2, 1).setText("Text Direction");
        sheet.getCellRange(row, 2).setText("Direction");
        sheet.getCellRange(row, 2).getStyle().setReadingOrder(ReadingOrderType.LeftToRight);

        //Set text wrapping
        sheet.getCellRange(row += 2, 1).setText("Text Wrapping");
        sheet.getCellRange(row, 2).setText("Wrap Extra-long Text into Multiple Lines");
        sheet.getCellRange(row, 2).getStyle().setWrapText(true);

        //Set text shrinking
        sheet.getCellRange(row += 2, 1).setText("Text Shrinking");
        sheet.getCellRange(row, 2).setText("Shrink Text to Fit in the Cell");
        sheet.getCellRange(row, 2).getStyle().setShrinkToFit(true);

        //Set indentation
        sheet.getCellRange(row += 2, 1).setText("Indentation");
        sheet.getCellRange(row, 2).setText("Two");
        sheet.getCellRange(row, 2).getStyle().setIndentLevel(2);

        //Set row height
        for (int rowCount = 1; rowCount <= sheet.getLastRow(); rowCount++) {
            sheet.setRowHeight(rowCount, 25);
        }

        //Set column width
        sheet.setColumnWidth(1, 20);
        sheet.setColumnWidth(2, 20);

        //Save the result file
        workbook.saveToFile("StylesAndFormatting.xlsx", ExcelVersion.Version2016);
    }
}
```

The following is the output Excel file after setting cell styles and formatting:

![Set Cell Styles and Formatting in Excel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yuz2hy5p49bz2vn1elv.png)
eiceblue
819,571
Hardening Backend Applications with NGINX App Protect - Part 1 - Installing NGINX Plus and NGINX App Protect
*This article uses NGINX Plus on Proen Cloud, which carries a monthly subscription...
0
2021-09-10T10:02:15
https://dev.to/terngr/backend-applications-nginx-app-protect-31n1
nginx, waf, nginxplus, proen
*This article uses NGINX Plus on Proen Cloud, which has a monthly subscription fee.*

There has been quite a lot of news about attacks lately. A tool that protects at the application level is the Web Application Firewall (WAF). When requests come in, they pass through the WAF first; if a request matches the signatures configured on the WAF, it is treated as an attack, the request is blocked, and the attack never reaches the backend application.

Today we will look at how to install and use the NGINX WAF. Normally it must be installed and configured by an expert for the WAF to truly protect your web application, but here we will install it on Proen Cloud, which can provision a standards-based WAF that is easy to get started with.

Let's start from the login page. Go to https://app.manage.proen.cloud/

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61iaaa35wa9rqvm7yvgh.jpg)

Then install NGINX Plus first by clicking NEW ENVIRONMENT and selecting NGINX Plus. In this step you can also name your environment; you will get a subdomain under proen.cloud.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3i44dil1fd4pmqi4eubt.jpg)

For the Backend Application, deploy the application you want to protect. Docker images are supported, or you can choose from the provided templates; in this example we choose .NET Core. Then click Create.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gufqzbj9m9vc8j3n4mqk.jpg)

Once Proen Cloud has finished creating the environment, test it by clicking the link.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jmor9sj1hxt6tqtwdjsz.jpg)

The .NET Core backend application is now accessible; requests go through NGINX Plus first and are then forwarded to .NET Core.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9herccynpj21ysl1dfa3.jpg)

If you want to use your own domain rather than a Proen subdomain, you can add a public IPv4 address and point your DNS at it. On the Public IPv4 line, click the + sign.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tyxoo4zejlgpvimle4s5.jpg)

Enter the number of IPv4 addresses you want (here, 1) and click Apply.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndswd1gjgt6dzhin7bv7.jpg)

Test access via the public IPv4 address, or point DNS at this IP and access the application through your own domain.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7znpvb1lorynlfsnewtd.jpg)

Now we will install the NGINX WAF onto NGINX Plus to protect the backend, i.e. the .NET Core application. On the Load Balancer line, click the Add-Ons button.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kwe8r88l92c2pmhzwkcx.jpg)

Select the Add-On "NGINX Plus App Protect" and click Install.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7g9a4g1osuvcihlvq563.jpg)

A window with details will appear; click Install once more.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2voyp1r7dktrvh9kon4j.jpg)

Wait for the installation to finish, and the NGINX WAF is ready to protect your web application. Test by calling the .NET application again; it still responds normally.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7znpvb1lorynlfsnewtd.jpg)

Now test by simulating a Directory Traversal attack, attempting to access the htpasswd file.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yetr2k1bqyc7jyh21r85.jpg)

The request is blocked at the NGINX layer, so the backend application stays safe. Each block is assigned a Support ID; if a user reports that a block seems wrong, this Support ID can be passed to the security team to check the logs. Looking up Support ID 15593246396323256717 shows a Directory Traversal violation at path ../etc/.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9l53ehfga0itdtkaens.jpg)

With just this, we have a WAF helping to keep our web application safer. In the next article we will look at more advanced configuration, such as shipping logs to ELK or another syslog server, and enabling stricter signature sets, so strict that once turned on we can no longer reach our application at all and have to enable access gradually, piece by piece, allowing only what is strictly necessary.

---

Series: Securing Backend Applications with NGINX App Protect

Part 1 - Installing NGINX Plus and NGINX App Protect https://bit.ly/napproen
Part 2 - Tuning NGINX App Protect - transparent mode https://bit.ly/napproen-ep2
Part 3 - Tuning NGINX App Protect - Data Guard https://bit.ly/napproen-ep3
Part 4 - Tuning NGINX App Protect - HTTP Compliance https://bit.ly/napproen-ep4

---

Next week: the next protection mechanism. Follow along at

FB Page: นั่งเล่น NGINX https://web.facebook.com/NungLenNGINX
FB Group: come chat and exchange knowledge with us, NGINX Super User TH https://web.facebook.com/groups/394098015436072

Get started at https://app.manage.proen.cloud/ (a support team is available). Another channel: nginx@mfec.co.th
terngr
819,706
Top 3 Advantages of Microsoft Dynamics 365 for Manufacturing Businesses
The process of digitalization has impacted the manufacturing industry immensely. Technology has completely revolutionized the entire supply chain of manufacturing businesses.
0
2021-09-10T10:25:37
https://dev.to/navid_awan/top-3-advantages-of-microsoft-dynamics-365-for-manufacturing-businesses-45dh
microsoftdynamics365, manufacturingindustry, dynamics365crm, dynamics365erp
---
title: Top 3 Advantages of Microsoft Dynamics 365 for Manufacturing Businesses
published: True
description: The process of digitalization has impacted the manufacturing industry immensely. Technology has completely revolutionized the entire supply chain of manufacturing businesses.
tags: MicrosoftDynamics365, ManufacturingIndustry, Dynamics365CRM, Dynamics365ERP
//cover_image: ![Microsoft Dynamics365 for Manufacturing Industry](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jductyg4t11fgkemqayv.jpg)
---

The process of digitalization has impacted the manufacturing industry immensely. Technology has completely revolutionized the entire supply chain of manufacturing businesses. From daily production to commercial performance and achieving stronger bottom lines, intuitive IT solutions are helping manufacturers achieve operational efficiency.

Factories shying away from digitalizing their manufacturing operations and ignoring the major advances in machine learning and big data analytics will feel the rub in the near future. If you own a manufacturing organization and are still dependent on outdated technology, then your competitors will surely win your customers over. You need integrated manufacturing software such as **Microsoft Dynamics 365** that gives you the [Dynamics ERP](https://www.confiz.com/microsoft-dynamics-erp) (Enterprise Resource Planning) and [CRM (Customer Relationship Management)](https://www.confiz.com/microsoft-dynamics-crm) applications necessary to tackle your growing challenges in the competitive digital market.

**Microsoft Dynamics 365** is your one-stop solution for all your manufacturing needs. It combines all the functionalities needed to manage equipment, inventory, teams, and processes in factories. This software solution is designed to help manufacturers digitalize their business operations and have a better understanding of their business and customer needs. Interested?
Let us discuss the top three advantages of [Dynamics 365 for the manufacturing industry](https://www.confiz.com/microsoft-dynamics-manufacturing). ##Project Management## Manufacturing businesses always have a wide array of operations running in their organization. It gets frustrating when your tasks are not being completed due to a badly improvised project management strategy. According to reports, a staggering $122 million is wasted from every $1 billion invested in the United States, due to poor project performance. You will never be able to achieve your project goals if you do not have any software helping you visualize accurate revenue metrics, effort, and costs all along your supply chain. ##Operational Visibility## You need to have clear visibility in the entire manufacturing process because this is the main factor that drives business excellence. Collecting, integrating, and visualizing global supply chain data is precisely what gets you clear visibility into end-to-end operations. You will be surprised to know that a study found 94% of supply chain leaders maintaining that digital transformation has the power to fundamentally change supply chains, but only 44% of them had a strategy ready back in 2018. You can improve access to your supply chain data using Dynamics 365 as it has all the features and modules required to enhance communication between production, warehousing, customer service, supply, and sales. It will also help you establish collaboration across the numerous departments in your factory and connect your different business systems to offer you a better view of all operations. ##Empowered Employees## Your employees need to be empowered to feel happy at work. When they are happy, they will offer you maximum productivity. Sadly, research found that 53% of Americans were unhappy at work. This is a high number, and you can rest assured that some of these people will also be in your manufacturing organization. 
When your employees are empowered with perfect tools and comprehensive data, you will see a substantial betterment in their productivity and efficiency in daily tasks. **Dynamics 365** offers you easy access to role-specific tools which you can offer to your employees. These tools will also give you a 360-degree view of your entire business along with the productivity of your employees. Your staff will get an opportunity to enhance their working style and adapt to fit in with the requirements of the modern world of manufacturing. ##Ending words## Manufacturers can use [Dynamics 365](https://www.confiz.com/microsoft-dynamics) to improve the quality of products, meet the varying needs of customers, and shorten time-to-market. This software solution helps you improve visibility over your manufacturing process, boost efficiency of your employees and machines alike and eventually lower costs. **Microsoft Dynamics 365 for the manufacturing industry** offers a thorough insight into the manufacturing lifecycles and warehousing specifics. This refined software suite of customizable apps and modules brings together all your manufacturing business processes into a single comprehensive solution that empowers you with purpose-built apps to help you manage finances, sales, customers, and supply chain. Having important data about your manufacturing business at your fingertips will allow you to take the right decisions at the right time. This ensures greater productivity and profitability. If you want to use [Microsoft Dynamics 365](https://www.confiz.com/microsoft-dynamics) to streamline operations and achieve operational excellence in your manufacturing business, you can opt for the services of official Dynamics 365 partners such as Confiz. These organizations have qualified teams who help you with implementation, migration, and customization.
navid_awan
819,727
Identifying the Best Options for Sticker Printing
Stickers are very popular in today's world. They are used to express political opinions,...
0
2021-09-10T10:50:20
https://dev.to/stikeronlinebandung/mengidentifikasi-pilihan-terbaik-untuk-percetakan-stiker-34ja
buatstiker, cetakstiker, printstiker, stikeronline
Stickers are very popular in today's world. They are used to express political opinions, to show support for organizations, to back charities and events, and for self-expression. Stickers come in many shapes and sizes, from small to extra large, and in almost any color configuration you want, from single-color stickers to foil and rainbow effects. Using stickers to promote, support, and express opinions has been very popular for decades now. However, if you want to order custom stickers, you need to know a thing or two about finding the right sticker printing company. What should you know?

Turnaround Speed

Unless you order your stickers six months before the date you actually need them, turnaround speed is a very important consideration. You need to know how long it takes from the initial order until you receive your sticker shipment. Not knowing how long to expect can put you in a very bad situation, especially if the stickers are for a date-sensitive use. Therefore, make sure you choose a sticker printing company with a good turnaround time. Ideally, you should look for a company that offers a time frame that fits your needs, not the other way around. In fact, you can find sticker companies that offer 24-hour printing, which ensures you can have your stickers in time for your event.

High-Quality Stickers

Whether you are ordering stickers for an event, bumper stickers to support a political candidate, or stickers for a fundraiser, you need to pay attention to sticker quality. Choosing a company that does not have a reputation for making stickers of the highest quality is not a good idea. Problems you may encounter include faded colors, poor-quality sticker material, bad design, and stickers losing their adhesion within a short time. However, working with a sticker printing company that offers top-quality stickers and the best custom production process will give you a much better experience. Simply put, you will receive high-quality stickers with colors and adhesion that stand the test of time.

Personal Contact with Customer Service

In this digital age, it is increasingly common to do business with companies without ever actually speaking to a real person. However, when you order stickers from a sticker printing company, this is usually not a good idea. Talking to a real person offers many benefits and ensures that you can enjoy the smoothest possible transaction and get the best stickers out of the deal. Therefore, make sure the sticker company you choose offers a means of direct contact, whether by email, chat, or phone. Talking to a real person about your sticker design, your order, and the time frame needed for the custom job to be finished on time is an important consideration here.

Questions about Returns

Although you may hope never to have to go through a return process, make sure you choose a sticker printing company with a good returns policy. Even if you never need to return your stickers because of a problem, dealing with a company that is willing to work with you through the return process can be very helpful. Ideally, you will find a company that offers a full refund for any damage or any work you are not satisfied with.

Following these guidelines will help ensure you find the best sticker printing company for your needs, whether you want to support a charity at your church, promote a blood drive, promote your band, or anything else.

<a href="https://stikeronline.com/">Buat Stiker Bandung</a>
stikeronlinebandung
819,782
Hello Python through Docker and Kubernetes
Going from an application to a containerized application can have many benefits. Today we are taking...
0
2021-09-17T11:18:17
https://dev.to/itminds/hello-python-through-docker-and-kubernetes-379d
docker, kubernetes, virtualization
Going from an application to a containerized application can have many benefits. Today we are taking the step further and looking at how we can go from a small application all the way to deploying it on Kubernetes. We are going to take a small application, build a container image around it, write Kubernetes manifests and deploy the whole thing on Kubernetes. This is a practical guide, so I encourage you to follow along with your own application in your favorite programming language. Here we will use Python.

## Application

We start out with a small Hello World Python REST API in Flask. The following Python dependencies are needed:

* flask_restful
* flask

which can be installed with `pip3 install flask flask_restful`. Our application is a simple Python script `hello-virtualization.py`:

```
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class HelloWorld(Resource):
    def get(self):
        return "Hello Python!"

api.add_resource(HelloWorld, '/')

if __name__ == '__main__':
    app.run(host='0.0.0.0')
```

Our app simply exposes an endpoint that returns "Hello Python!" and can be started with `python3 hello-virtualization.py`. You should see the following in your terminal:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7d2qublj3ztfbp2shiit.png)

We can see that the server is listening on port 5000, and can confirm this by sending a request: `curl 10.0.0.11:5000`

Great! We now have a very simple application that responds when we call it. Let's wrap this in a container image.

## Docker

First you need to get Docker installed on your machine. If you are on Windows or macOS, [Docker Desktop](https://www.docker.com/products/docker-desktop) should do the trick. Next we will wrap the application in a Docker image. To do this, create a file called `Dockerfile` in the same folder as your application. In the Dockerfile we will start by setting a base image for our own container image.
Since we need Python we will use `python:3.8-buster` by typing: `FROM python:3.8-buster`. Next we need to include the packages needed for our application to run. We do this by writing `RUN pip3 install flask flask_restful`; just as when we fetched the packages for our local system, we now fetch them into the image. We then specify a working directory, `WORKDIR /app`, which is where all commands we run will be executed from. Copy the source code into the working directory: `COPY hello-virtualization.py /app`, declare that the container listens on port 5000 with `EXPOSE 5000`, and finally set the command that will be executed when we run the container image: `CMD ["python3", "hello-virtualization.py"]`

The final Dockerfile should look like this:

```
FROM python:3.8-buster

RUN pip3 install flask flask_restful

WORKDIR /app

COPY hello-virtualization.py /app

EXPOSE 5000

CMD ["python3", "hello-virtualization.py"]
```

We then need to build the container image and give it the name `hello-virtualization` for reference:

`docker build . -t hello-virtualization`

will build our container image. To run the image we simply type:

`docker run hello-virtualization`

The command will give an output very similar to when we executed the script using only Python. Validate that the application works by curling the endpoint that is printed by the command. (If the curl fails, for example on Docker Desktop, run `docker run -p 5000:5000 hello-virtualization` instead; `-p` publishes the container's port so you can curl `localhost:5000` from the host.)

## Kubernetes

If you use Docker Desktop, Kubernetes can be enabled through the UI. If you do not use Docker Desktop, there are multiple alternatives for setting up a Kubernetes cluster for development, like:

* [minikube](https://minikube.sigs.k8s.io/docs/start/)
* [microk8s](https://microk8s.io/)
* [KiND](https://kind.sigs.k8s.io/)

For this demo all of them will work. Kubernetes needs to fetch the container image that we just built from a container registry. To make a solution that works for all the different setups, we will push the container image to a remote container registry.
The easiest way to do this is to create an account on [Docker Hub](https://hub.docker.com/) and authenticate by typing `docker login` in your terminal. When this is done we need to retag the container image so that the Docker CLI knows that we want to push the image to a container registry owned by the user we just created. To tag the container image:

`docker tag hello-virtualization:latest <DOCKER_HUB_USERNAME>/hello-virtualization:latest`

To push the image to Docker Hub:

`docker push <DOCKER_HUB_USERNAME>/hello-virtualization:latest`

Now we are ready for some Kubernetes. Kubernetes resources are managed through manifest files written in YAML. The smallest unit in Kubernetes is called a Pod, which is a wrapper around one or more containers. When you want to deploy a Pod in Kubernetes, you would often use another resource called a Deployment to manage the rollout of the Pod and its replication factor. Create a new file called `hello-virtualization-deployment.yaml` to create a Deployment for our app:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-virtualization-deployment
  labels:
    app: hello-virtualization
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-virtualization-label
  template:
    metadata:
      labels:
        app: hello-virtualization-label
    spec:
      containers:
      - name: hello-virtualization-container
        image: <DOCKER_HUB_USERNAME>/hello-virtualization:latest
        ports:
        - containerPort: 5000
```

A few things to notice:

```
  replicas: 1
```

This tells the Deployment that we want 1 instance of our container running.

```
  selector:
    matchLabels:
      app: hello-virtualization-label
```

tells the Deployment that it shall control the template with the label `app: hello-virtualization-label`.

```
  template:
    metadata:
      labels:
        app: hello-virtualization-label
```

is where we define the template and set the label that binds it to the Deployment.

```
  template:
    ...
    spec:
      containers:
      - name: hello-virtualization-container
        image: <DOCKER_HUB_USERNAME>/hello-virtualization:latest
        ports:
        - containerPort: 5000
```

Here we configure that this template uses the container image that we built, and that the container listens on port 5000 when created. To deploy this on Kubernetes we use the tool `kubectl`:

`kubectl apply -f hello-virtualization-deployment.yaml`

You should be able to see the Pod being created by typing `kubectl get pods`:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bybqsgu9kpliuaqdxdik.png)

Kubernetes needs to pull the image and start the container in the Pod, but after a few seconds the Pod should have a Running status.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1pzku1j5z21zwcc53t5l.png)

The Pod has an IP address within the Kubernetes cluster that we can use to test that the application works. To get the IP address, type `kubectl get pods -o wide`:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgtvfsb0n716eliq4qj3.png)

Since this IP address is only reachable from within the Kubernetes cluster, we need to jump into a Pod:

`kubectl run my-shell --rm -i --tty --image curlimages/curl -- sh`

Now we can curl the IP address and see that the application works. This is not a stable solution for many reasons. First, the IP of the Pod is short-lived, meaning that the Pod with the application will not have the same IP every time we start it. Try to kill the Pod with `kubectl delete pod <POD_NAME>`. The Pod's name is listed when you type `kubectl get pods`. If you type `kubectl get pods -o wide` after having deleted the Pod, you will see that there is a Pod running with a similar name and a new IP address. This happens because we told Kubernetes that we wanted 1 replica of the Pod, and therefore Kubernetes will make sure that there is always 1 Pod alive with our application. If our application crashes, Kubernetes will simply deploy a new one!
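Kubernetes replaces a container on its own when the process exits, but it cannot see a hung process unless we tell it how to check. As a hedged sketch (the probe path and timings below are illustrative choices, not part of the original article), a liveness probe can be added to the container spec in the Deployment so Kubernetes periodically calls the Flask endpoint and restarts the container when it stops answering:

```yaml
# Excerpt of the Deployment's template.spec (illustrative values)
containers:
- name: hello-virtualization-container
  image: <DOCKER_HUB_USERNAME>/hello-virtualization:latest
  ports:
  - containerPort: 5000
  livenessProbe:
    httpGet:
      path: /               # the app's only endpoint
      port: 5000
    initialDelaySeconds: 5  # give Flask time to start before the first probe
    periodSeconds: 10       # probe every 10 seconds
```

Without a probe, Kubernetes only reacts when the container process itself dies; the probe also catches a process that is alive but no longer serving requests.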
The IP, however, will not be the same, so we need to be able to contact our application another way. Introducing Kubernetes Services. Create a file `hello-virtualization-service.yaml`:

```
apiVersion: v1
kind: Service
metadata:
  name: hello-virtualization-service
  labels:
    app: hello-virtualization-label
spec:
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-virtualization-label
```

A few things to notice:

```
spec:
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
```

tells our Service that when we contact it on port 5000 it should forward the communication to port 5000 on the target using TCP.

```
spec:
  ...
  selector:
    app: hello-virtualization-label
```

informs the Service that the target is actually the Pods with the label `app: hello-virtualization-label`, just like we did in the Deployment. Labels and label selectors are how we bind resources in Kubernetes. We deploy the Service just as we did the Deployment:

`kubectl apply -f hello-virtualization-service.yaml`

To see that the Service was deployed, type `kubectl get svc`:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkrk6krvi6gkl8uckhc6.png)

We then test that the Service works by trying to contact our application through it. Once again jump into a Pod on the cluster:

`kubectl run my-shell --rm -i --tty --image curlimages/curl -- sh`

This time we will not use the Pod's IP, but the name of the Service to contact our application:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7ayop2lj9occumvjs1f.png)

Inside the Kubernetes cluster you can always use the names of Services to communicate between Pods.
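A Service like the one above is only reachable from inside the cluster, because the default Service type is ClusterIP. As a hedged sketch (the name and the `nodePort` value are illustrative choices, not from the original article), a NodePort variant additionally exposes the app on a fixed port of every cluster node, so it can be curled from the host machine running a local development cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-virtualization-nodeport
  labels:
    app: hello-virtualization-label
spec:
  type: NodePort
  ports:
  - port: 5000        # port of the Service inside the cluster
    targetPort: 5000  # containerPort of the matched Pods
    nodePort: 30050   # port opened on every node; must be in the default 30000-32767 range
    protocol: TCP
  selector:
    app: hello-virtualization-label
```

Alternatively, `kubectl port-forward svc/hello-virtualization-service 5000:5000` forwards a local port to the Service without changing any manifests, which is often enough for testing.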
jacobcrawford
821,879
29 Projects To Help You Practice HTML CSS Javascript 2021
Today we will go into learning about UI Page projects to increase design ability and how to apply...
0
2021-09-12T22:20:21
https://www.niemvuilaptrinh.com/article/29-project-giup-ban-thuc-hanh-html-css-javascript-2021
html, css, javascript, beginners
Today we will explore UI page projects that sharpen your design skills and show how to apply HTML, CSS, and Javascript to real website development!

#Responsive Social Platform UI

![Responsive Social Platform UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Responsive%20Social%20Platform%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/RwWKYMO default-tab=html,result %}

#Fox News Templates

![Fox News Templates](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Fox%20News%20Templates%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/havardob/pen/GRjPywY default-tab=html,result %}

#Netflix Landing Page Clone

![Netflix Landing Page Clone](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Netflix%20Landing%20Page%20Clone%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/bradtraversy/pen/yWPONg default-tab=html,result %}

#Book Store UI

![Book Store UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Book%20Store%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/JjGKKrP default-tab=html,result %}

#Project Management Dashboard UI

![Project Management Dashboard UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Project%20Management%20Dashboard%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aybukeceylan/pen/OJRNbZp default-tab=html,result %}

#Microsoft Homepage Clone

![Microsoft Homepage Clone](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Microsoft%20Homepage%20Clone%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/bradtraversy/pen/ZEGGNRb default-tab=html,result %}

#Task Manager UI with CSS Grid

![Task Manager UI with CSS Grid](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Task%20Manager%20UI%20with%20CSS%20Grid%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/QWyPMgq default-tab=html,result %}

#File Sharing Web App

![File Sharing Web App](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/File%20Sharing%20Web%20App%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aybukeceylan/pen/yLOxRyG default-tab=html,result %}

#Messaging App UI with Dark Mode

![Messaging App UI with Dark Mode](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Messaging%20App%20UI%20with%20Dark%20Mode%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/ZEbXoRZ default-tab=html,result %}

#Booking App UI

![Booking App UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Booking%20App%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aybukeceylan/pen/pobaKGX default-tab=html,result %}

#Job Search Platform UI

![Job Search Platform UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Job%20Search%20Platform%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/jOqdNbm default-tab=html,result %}

#Skateboard Video Platform

![Skateboard Video Platform](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Skateboard%20Video%20Platform%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/LYRKpWe default-tab=html,result %}

#Instagram re-design

![Instagram re-design](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Instagram%20re-design%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/qeNvRM default-tab=html,result %}

#VideoCall App UI

![VideoCall App UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/VideoCall%20App%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aybukeceylan/pen/pobbEYB default-tab=html,result %}

#Gym Website - Tailwind Starter Kit

![Gym Website - Tailwind Starter Kit](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Gym%20Website%20-%20Tailwind%20Starter%20Kit%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/bradtraversy/pen/zYqVgXO default-tab=html,result %}

#Task Management Dashboard UI

![Task Management Dashboard UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Task%20Management%20Dashboard%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aybukeceylan/pen/gOpbRPO default-tab=html,result %}

#Internal Video Platform UI

![Internal Video Platform UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Internal%20Video%20Platform%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aybukeceylan/pen/VweooYQ default-tab=html,result %}

#Gmail Redesign

![Gmail Redesign](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Gmail%20Redesign%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aybukeceylan/pen/xxKqyVO default-tab=html,result %}

#Chat App UI

![Chat App UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Chat%20App%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aybukeceylan/pen/gVmZmJ default-tab=html,result %}

#Responsive-Webpage

![Responsive-Webpage](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Responsive-Webpage%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/wLOejj default-tab=html,result %}

#Dashboard Design with Flexbox

![Dashboard Design with Flexbox](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Dashboard%20Design%20with%20Flexbox%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/TurkAysenur/pen/YmVYYR default-tab=html,result %}

#Services Section

![Services Section](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Services%20Section%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/ahmadnasr/pen/KKpvNGY default-tab=html,result %}

#Spotify Artist Page UI

![Spotify Artist Page UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Spotify%20Artist%20Page%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/alowenthal/pen/rxboRv default-tab=html,result %}

#Twitter Client UI in CSS + HTML

![Twitter Client UI in CSS + HTML](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Twitter%20Client%20UI%20in%20CSS%20+%20HTML%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/marceloag/pen/fDmtq default-tab=html,result %}

#Responsive Movie App UI

![Responsive Movie App UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Responsive%20Movie%20App%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/nicklassandell/pen/soAyr default-tab=html,result %}

#Twitch Redesign Mockup

![Twitch Redesign Mockup](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Twitch%20Redesign%20Mockup%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/colewaldrip/pen/aqpRmQ default-tab=html,result %}

#Task Management UI

![Task Management UI](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Task%20Management%20UI%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/aaronmcg/pen/GRjaRva default-tab=html,result %}

#Facebook Profile Page UI Concept

![Facebook Profile Page UI Concept](https://niemvuilaptrinh.ams3.cdn.digitaloceanspaces.com/UI/Facebook%20Profile%20Page%20UI%20Concept%20%281%29.png)

You can see the results below.

{% codepen https://codepen.io/himalayasingh/pen/bxoBZZ default-tab=html,result %}

Related Articles:

[HTML Practice Projects for Beginners](https://us.niemvuilaptrinh.com/article/21-html-css-projects-for-beginners)
[Front End Developer Tools](https://us.niemvuilaptrinh.com/article/45-front-end-developer-tools)
[Free Coding Practice Sites](https://us.niemvuilaptrinh.com/article/11-webiste-to-practice-code-online)
haycuoilennao19
820,025
Doodle 0.6.0 Supports Desktop
Doodle is a pure Kotlin UI framework for the Web (and Desktop), that lets you create rich...
0
2021-09-10T15:42:51
https://dev.to/pusolito/doodle-0-6-0-supports-desktop-25hl
kotlin, webdev, javascript, showdev
[Doodle](https://github.com/nacular/doodle) is a pure Kotlin UI framework for the Web (and Desktop) that lets you create rich applications without relying on Javascript, HTML or CSS. Check out the [documentation](https://nacular.github.io/doodle) and [tutorials](https://nacular.github.io/doodle-tutorials) to learn more. ## Highlights include ### Desktop Support (Alpha) Doodle now supports Desktop and leverages Skia for fast, accurate rendering. This means apps can target desktop via the JVM. Support is still early and not ready for production. There are some missing features, like Accessibility, and others that are only partially implemented (e.g. drag-drop). However, overall support is sufficiently complete to begin testing with. So please try this out and report bugs. A key goal for Doodle is to provide as much cross-platform code sharing as possible. That's why Web and Desktop share the same rendering model, and therefore widgets. All widgets written in common code can be used on both platforms. * Desktop and Web share the same rendering model and widgets * Apps written in `common` code can be fully shared between Web and Desktop ### Kotlin 1.5.0 Support Kotlin support has been moved from 1.4.x to 1.5.30.
## APIs - New support for [font family lists](https://github.com/nacular/doodle/blob/master/Core/src/commonMain/kotlin/io/nacular/doodle/drawing/FontLoader.kt#L16) when loading Fonts - New UI Dispatcher ([web](https://github.com/nacular/doodle/blob/master/Browser/src/jsMain/kotlin/io/nacular/doodle/coroutines/Dispatchers.kt#L8), [desktop](https://github.com/nacular/doodle/blob/master/Desktop/src/jvmMain/kotlin/io/nacular/doodle/coroutines/Dispatchers.kt#L9)) for work that must be done on the UI thread - New ImageModule ([web](https://github.com/nacular/doodle/blob/master/Browser/src/jsMain/kotlin/io/nacular/doodle/application/Modules.kt#L130), [desktop](https://github.com/nacular/doodle/blob/master/Desktop/src/jvmMain/kotlin/io/nacular/doodle/application/Modules.kt#L87)) to encapsulate image support. - [ScrollPanelBehavior](https://github.com/nacular/doodle/blob/master/Core/src/commonMain/kotlin/io/nacular/doodle/controls/panels/ScrollPanel.kt#L26) now has access to [ScrollPanel](https://github.com/nacular/doodle/blob/master/Core/src/commonMain/kotlin/io/nacular/doodle/controls/panels/ScrollPanel.kt#L55) children, layout, etc.
pusolito
820,110
How to attach an external JavaScript file to the HTML file?
Originally posted here! To attach an external JavaScript file to the HTML template, we can use the...
0
2021-08-21T00:00:00
https://melvingeorge.me/blog/attach-external-javascript-file-to-html-file
html
--- title: How to attach an external JavaScript file to the HTML file? published: true tags: HTML5 date: Sat Aug 21 2021 05:30:00 GMT+0530 (India Standard Time) canonical_url: https://melvingeorge.me/blog/attach-external-javascript-file-to-html-file cover_image: https://melvingeorge.me/_next/static/images/main-0c6576df43a6d0e08ea60c2b2a9abdd0.jpg --- [Originally posted here!](https://melvingeorge.me/blog/attach-external-javascript-file-to-html-file) To attach an external JavaScript file to the HTML template, we can use the `script` tag and then use the `src` attribute and define the path to the external JavaScript file inside this attribute. ### TL;DR ```html <!DOCTYPE html> <html lang="en"> <!-- Define the path to the external JavaScript code using script tag and the src attribute --> <script src="js/app.js"></script> <body> Hello World from HTML </body> </html> ``` For example, first let's write a basic `index.html` template which outputs the `Hello World from HTML`. It can be done like this, ```html <!DOCTYPE html> <html lang="en"> <body> Hello World from HTML </body> </html> ``` Now let's write the external JavaScript code which logs `Hello World from JavaScript`. It can be done like this, ```js // External JavaScript file with code console.log("Hello World from JavaScript"); ``` Let's also name this JavaScript file as `app.js` and place it under the `js` directory or folder. So the structure of both the `JavaScript` and `HTML` files may look like this, **Current files structure** ```bash - index.html - js - app.js ``` Now to include the `app.js` file in the HTML file, we can use the `script` tag and then use the `src` attribute to define the path to the JavaScript file.
It can be done like this, ```html <!DOCTYPE html> <html lang="en"> <!-- Define the path to the external JavaScript code using script tag and the src attribute --> <script src="js/app.js"></script> <body> Hello World from HTML </body> </html> ``` - **You don't have to write the `script` tag before the `body` tag. You can insert the `script` tag anywhere in the HTML template and it will work.** Now if you look in the browser console, you can see the `Hello World from JavaScript` output, which shows us that the JavaScript code has been executed successfully. See the above code live in [repl.it](https://replit.com/@melvin2016/attach-external-javascript-file-in-the-HTML-template#index.html). That's all 😃! ### Feel free to share if you found this useful 😃. ---
melvin2016
820,116
How to Seed Data Fast with the Faker Gem ⚡️🏃🏻💨
Table Of Contents Introduction What is Faker? Installation &amp; Usage Conclusion ...
0
2021-09-10T18:38:59
https://dev.to/maxinejs/seed-data-fast-with-the-faker-gem-nej
ruby, javascript, database, devops
## Table Of Contents * [Introduction](#1) * [What is Faker?](#2) * [Installation & Usage](#3) * [Conclusion](#4) ### Introduction <a name="1"></a> Chances are you're here because you saw the word combination *Seed Data Fast*, and I don't blame you! Creating a database is enough work itself, so coming up with custom seed data can become an unnecessary and time-consuming task. But all thanks to the Ruby **[Faker gem](https://github.com/faker-ruby/faker)**, seeding data can be done in a **quick**, **easy**, and **fun** way! ### What is Faker? <a name="2"></a> Faker is a Ruby gem written by Jason Kohles. Like many of us, Jason got sick of spending time writing out seed data, so he made a gem to make all of our lives easier. *Thanks, Jason!* Faker comes with a handful of generators that allow you to generate fake data such as names, emails, phone numbers, addresses, Twitter posts, job titles, and more! There are also methods available to provide you with [unique data](https://github.com/faker-ruby/faker#ensuring-unique-values). ### Installation <a name="3"></a> *This is a Ruby Gem and will only work for Ruby applications.* First, install the Ruby Faker Gem. ``` gem install faker ``` Once the gem has successfully installed, head over to the <code>seeds.rb</code> file, and require the gem at the top of the file. ```ruby require 'faker' ``` You're ready to go, all that's left to do is... *Seed*. *That*. *Data*. In your <code>seeds.rb</code> file, go ahead and write a small script using the Faker gem. ```ruby # generate 10 users 10.times do username = Faker::Esport.player name = Faker::Name.unique.name profession = Faker::Job.title email = Faker::Internet.unique.email address = Faker::Address.full_address phone = Faker::PhoneNumber.unique.cell_phone User.create(username: username, name: name, email: email, profession: profession, address: address, phone: phone ) end ``` Once you've created a beautiful script containing all your lovely data, seed it!
In your terminal run: ``` rails db:seed ``` You can check everything was seeded correctly by confirming your data is present within the rails console, or if you have your server up and running, you can check your routes. *Note: If no seed data shows up, see that you are meeting all validations in your model that may be prohibiting the data from being created in the first place.* There you have it! ✨*Data*✨ If you need to create data that there are not necessarily generators for, get creative with ones that already exist! As you can see in the example script provided above, there was no username generator, so the Esport generator with the <code>.player</code> method was used instead. Most of the generators provide multiple methods for various types of data, as well as unique values. ### Conclusion <a name="4"></a> Creating seed data can be a tedious task, but it doesn't have to be! The Faker gem is fantastic for fast, simple, and sometimes funny seed data. If you have any alternative ways/gems to seed data, feel free to share them below! Happy Seeding! 🌱
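If you are curious how a "unique" generator can avoid repeats, here is a minimal, dependency-free sketch in plain Ruby. To be clear, `TinyFaker` and its tiny name pool are made up for illustration; the real gem's internals differ, but the idea is similar: track what has been handed out and raise once the pool runs dry.

```ruby
# A toy illustration of unique fake-value generation -- NOT the Faker gem.
require 'set'

class TinyFaker
  NAMES = %w[Ada Grace Alan Edsger Barbara Dennis].freeze

  def initialize(seed = 42)
    @seen = Set.new
    @rng  = Random.new(seed) # fixed seed keeps the output reproducible
  end

  # Return a name that has not been handed out before; raise once the
  # pool is exhausted (Faker's unique generators raise in a similar way).
  def unique_name
    candidates = NAMES.reject { |n| @seen.include?(n) }
    raise 'unique pool exhausted' if candidates.empty?
    name = candidates.sample(random: @rng)
    @seen << name
    name
  end
end

faker = TinyFaker.new
users = 3.times.map { { name: faker.unique_name } }
puts users.inspect # three hashes, each with a distinct name
```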
maxinejs
820,146
Getting Started With Angular Material
A lot of developers are using Angular Material in their Angular applications. But what is the best...
14,345
2021-09-10T19:23:36
https://hasnode.byrayray.dev/getting-started-with-angular-material
A lot of developers are using Angular Material in their Angular applications. But what is the best way to add the library, and why do you want to use it? ![divider-byrayray.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629890886208/NhHYvPmBA.png) ## Table Of Contents - [What Is Angular Material?](#what-is-angular-material) - [Why Use Angular Material?](#why-use-angular-material) - [How To Add Angular Material?](#how-to-add-angular-material) - [How To Use Angular Material?](#how-to-use-angular-material) - [How To Load All Angular Material Components At Once](#how-to-load-all-angular-material-components-at-once) ![divider-byrayray.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629890886208/NhHYvPmBA.png) ## What Is Angular Material? ![Screenshot_2021-09-02_at_15.59.01.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631300625484/dmyoLS4_h.png) Angular Material is an Angular Component library built and maintained by Google. It's a component library filled with a ton of easy-to-use Angular components. The library includes components like a [datepicker](https://material.angular.io/components/datepicker/overview), [input elements](https://material.angular.io/components/input/overview), [toggle switches](https://material.angular.io/components/slide-toggle/overview), [tables](https://material.angular.io/components/table/overview), and [a lot more](https://material.angular.io/components/categories). Components support customization in various ways. You can use their pre-built themes or build your own with a custom color scheme. ![divider-byrayray.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629890886208/NhHYvPmBA.png) ## Why Use Angular Material? Angular Material is updated simultaneously with Angular, which is one of the best advantages of using Angular Material and not other component libraries. Every time Google brings a new update for Angular, it will update Angular Material simultaneously.
When you update your Angular application with `ng update`, it will also update Angular Material simultaneously, which is pretty handy. With Angular Material, you know for sure, as long as Google keeps developing Angular, it will keep Angular Material up-to-date. All the components have been tested for a long time. I've been an Angular Material user for a long time, but I've never had an actual error in an Angular Material component. But if you don't like the style of Angular Material, you can also change their styling. Picking another Angular component library is an excellent alternative if you want something different. ![divider-byrayray.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629890886208/NhHYvPmBA.png) ## How To Add Angular Material? Before installing Angular Material in an existing project, we have to make sure you have installed the Angular CLI. If you haven't, run this command. ```bash npm install -g @angular/cli ``` Let's start with installing Angular Material in an existing project. ```bash ng add @angular/material ``` When you perform the command above, you will get a few configuration options to choose from. Make the choice you want. After this process, you can use Angular Material in your Angular project. ![divider-byrayray.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629890886208/NhHYvPmBA.png) ## How To Use Angular Material? Angular Material components can be used by importing the module. For example, if you want to use the [checkbox component](https://material.angular.io/components/checkbox/overview), you have to import the following module in the `app.module.ts` if you're going to use it in all the components across the entire application. ```tsx import {MatCheckboxModule} from '@angular/material/checkbox'; ``` This code can be found on every component page in the API tab. Now you can go to a component where you want to use your imported component.
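Before moving on to the component itself, here is a sketch of what a full `app.module.ts` could look like after adding the import above. This is illustrative, not CLI-generated output: `AppComponent` is the default component name, `BrowserAnimationsModule` is the animations module the `ng add` step typically wires up, and `FormsModule` is what makes an `[(ngModel)]` binding on the checkbox work.

```tsx
// app.module.ts -- illustrative sketch of where the Material module lands
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { FormsModule } from '@angular/forms';
import { MatCheckboxModule } from '@angular/material/checkbox';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    BrowserAnimationsModule, // Material components animate; ng add usually adds this
    FormsModule,             // needed for [(ngModel)] in the checkbox example
    MatCheckboxModule,       // the Material module being demonstrated
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```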
Check the examples tab for an example of the components and code samples on how to use the component. ```html <mat-checkbox class="example-margin" [(ngModel)]="checked">Checked</mat-checkbox> ``` ![divider-byrayray.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629890886208/NhHYvPmBA.png) ## How To Load All Angular Material Components At Once There is no default way to load all Angular Material component modules at once. I think there is a good reason for that. You can create an Angular Module to import all the Angular Material modules and import that module in your `app.module.ts`. The question is, are you going to use all the Angular Material components in your application? I don't think so because you're going to waste a whole lot of data that the user needs to download. I think it's wiser to load the module of the Angular Material component in the Angular Module where you need it, and not load them all at once. But if you want to do it, check out this [Gist on Github](https://gist.github.com/pimatco/d5b1891feb90b60ca4681011b6513873) which has all the available Angular Material modules for you. {% gist https://gist.github.com/pimatco/d5b1891feb90b60ca4681011b6513873 %} ![divider-byrayray.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629890886208/NhHYvPmBA.png) ## Conclusion Angular Material offers a great set of well-tested and configurable Angular components. The most significant benefit is that it's developed alongside Angular by the Angular team. I'm looking forward to seeing all the projects you build with Angular Material! ![divider-byrayray.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629890886208/NhHYvPmBA.png) ## Thanks! ![hashnode-footer.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629789655319/nBF6anHH4w.png) *I hope you learned something new or are inspired to create something new after reading this story! 🤗 If so, consider subscribing via email (scroll to the top of this page) or follow me here on Hashnode.
* > Did you know that you can create a [Developer blog like this one, yourself](https://hashnode.com/@devbyrayray/joinme)? It's entirely for free. 👍💰🎉🥳🔥 *If I left you with questions or something to say as a response, scroll down and type me a message. Please send me a [DM on Twitter @DevByRayRay](https://twitter.com/@devbyrayray) when you want to keep it private. My DM's are always open 😁*
devbyrayray
820,262
No ARIA > Bad ARIA
There are many misconceptions surrounding Web Accessibility, most of the times fueled by lack of knowledge (or interest) in the matter. This article is a collection of some of those accessibility misconceptions or myths.
14,558
2021-09-10T21:44:23
https://alvaromontoro.com/blog/67989/myths-about-web-accessibility#no-aria-bad-aria
a11y, webdev
--- title: No ARIA > Bad ARIA published: true description: There are many misconceptions surrounding Web Accessibility, most of the times fueled by lack of knowledge (or interest) in the matter. This article is a collection of some of those accessibility misconceptions or myths. tags: a11y,webdev cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hehcxw23j2eoyq2n2ep.jpg canonical_url: https://alvaromontoro.com/blog/67989/myths-about-web-accessibility#no-aria-bad-aria series: Myths about Web Accessibility --- <aside><blockquote><small><strong>There is an <a href="https://dev.to/alvaromontoro/myths-about-web-accessibility-237k">all-in-one article including every part from this series</a> (if you want prefer to read it all at once instead of "by installments")</strong></small></blockquote><hr/></aside> Before all accessibility experts start crying foul and cursing my name, let me clarify something: __**No ARIA is better than bad ARIA**__. ARIA is not supported by all browsers/screen readers, and it should be a last resort. The way to go should be using semantic HTML when possible. Unfortunately, using semantic HTML is not always possible and not enough to cover all the cases needed for a good experience. For example, there are widgets and patterns (e.g., tab panels again) that cannot be done using semantic elements and, in those cases, ARIA is a must. The myth/misconception of "No ARIA > Bad ARIA" is that it leaves out an important part of the equation: where does "Good ARIA" go? And the answer is actually quite simple: <figure><center><strong>Good ARIA &gt; No ARIA &gt; Bad ARIA<br/>&nbsp;</strong></center></figure> We can all agree that bad ARIA is bad. But no ARIA isn't ideal either: the solution to providing a bad experience (bad ARIA) should not be providing a subpar experience (no ARIA). There should be a good experience too! If a kid is learning to ride a bike and struggles and falls, we don't tell them, "stop! don't ever ride a bike!" 
Instead, we teach them. We encourage them to keep learning and trying... until they can do it by themselves. <figure> ![A little kid pushes a bicycle with the help of an adult](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jpmvqtmo6xqptfgnk8cg.jpeg) <figcaption>Sorry, Timmy. You fell once. No point in trying again. (picture: <a href="https://www.pexels.com/photo/full-body-of-father-and-child-in-protective-helmet-pushing-bike-along-road-3932890/"> Tatiana Syrikova</a>)<br/>&nbsp;</figcaption> </figure> "No ARIA > bad ARIA" perpetuates a false dichotomy. There's good ARIA, too. And we can learn ARIA, practice ARIA, improve ARIA... We won't be good at the beginning, but we will get better and provide a better experience with time and practice than with no ARIA.
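To make the "good ARIA" side concrete, here is a sketch of the tab panel pattern mentioned earlier, following the WAI-ARIA Authoring Practices. The ids and labels are made up for illustration, and the JavaScript that moves focus and toggles `aria-selected`/`hidden` is omitted:

```html
<!-- Tabs cannot be expressed with semantic HTML alone; these roles and
     properties tell assistive technology how the widget is wired together. -->
<div role="tablist" aria-label="Settings">
  <button role="tab" id="tab-general" aria-selected="true" aria-controls="panel-general">
    General
  </button>
  <button role="tab" id="tab-privacy" aria-selected="false" aria-controls="panel-privacy" tabindex="-1">
    Privacy
  </button>
</div>
<div role="tabpanel" id="panel-general" aria-labelledby="tab-general" tabindex="0">
  General settings…
</div>
<div role="tabpanel" id="panel-privacy" aria-labelledby="tab-privacy" tabindex="0" hidden>
  Privacy settings…
</div>
```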
alvaromontoro
820,764
What is Open Source Debt? And How to repay it?
If you are a developer, then I would say you're in debt to unknown people. Don't worry it is good...
0
2021-09-11T14:26:40
https://rajvirsingh1313.hashnode.dev/what-is-open-source-debt-and-how-to-repay-it
hacktoberfest, opensource, opensourcedebt
If you are a developer, then I would say you're in debt to unknown people. Don't worry, it is good debt and there's nothing to be worried about whilst you repay it. But, yea, it is a never-ending one. Let me explain # What is Open Source First of all, what is open source? I would assume you know, but if you are just getting into this new world of programming, here's the explanation. > Open-source software is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. ![](https://c.tenor.com/smOFBj4VakkAAAAC/spongebob-rainbow-open-source-opensource-linux.gif) Those were the first lines I could find when searching for open source. So now that you know what open source is, let's talk about the good never-ending debt you are in. # What is Open Source Debt? Let me break this down into a few pieces ## Why do people create things for free? Open source is all about accessibility for everyone, with ease. I am good with examples so let me show one example of my own. I am working on this project named [Elecrue](https://github.com/RajvirSingh1313/elecrue). I started this project and made it public for three reasons. First, I created it for myself because I was looking for good starter code for electron-react in JS but I didn't find one, so I created one myself. Second, I wanted to repay my open-source debt, as [React](https://github.com/facebook/react), [Electron](https://github.com/electron/electron), and [Vue](https://github.com/vuejs/vue) are open source. And the third, a bit selfish, one is that I wanted to beef up my resume.
By doing so, I helped a lot of other developers who were having the same problem; now they can use it to create other things and make more amazing things open source, like [Evan You](https://github.com/yyx990803) did by creating [Vue](https://github.com/vuejs/vue) open-source, and now hundreds upon hundreds of developers and companies use it to create more amazing things. In this great loop, everyone makes their contributions. ## How to repay Open Source Debt? ![](https://c.tenor.com/vqtfwk0H9VgAAAAC/im-finally-gonna-be-able-to-pay-off-all-the-money-i-owe-stan.gif) Now that you understand the debt, let's talk about the ways you can repay it. There are many ways of repaying this debt, so let's talk about some of the most common ones > If you want someone to give you something good then you need to give something good too 1. **By writing quality code for a problem and then sharing it**:- As the statement itself explains, you can do like me or Evan You and write code for a problem or a fun project that you think will both make your resume good and help someone 2. **By Educating others**:- Another way is to educate others. There are many ways to do this, like creating tutorials on YouTube, writing blogs, or making Github repositories for storing and sharing learning material like [IoT Course from Microsoft](https://github.com/microsoft/IoT-For-Beginners) (which I am learning IoT from). 3. **By Helping Others**:- Open Source is all about helping each other, so if you happen to find a bug in your favorite library or framework, then create an issue on its repository. If you have a solution for that bug, make a pull request on it. By doing so you are helping yourself and others, and it will add a big plus to your resume. 4. **By Sponsoring the creator or project**:- I used Evan You as an example a lot of times, so let's use him as the last example too.
If you have seen his [Github profile](https://github.com/yyx990803) and [Vue Github Page](https://github.com/vuejs/vue) then you would have noticed that there is an option to sponsor a project or the developer; it means you can pay the developer or the project via Github to back up the developer. As a project grows large like Vue, it needs a lot of maintenance, so often developers don't have the time or energy to maintain the project, as there is a lot of work and time they need to pour in. So the people or companies who use the project pay the developer a small amount so the developer keeps maintaining the project. As October is coming up, there is a month-long celebration called [Hacktoberfest](https://hacktoberfest.digitalocean.com/), to promote open source. I think it is a good way to remember to repay your open-source debt. To learn more about it, check out their website:- https://hacktoberfest.digitalocean.com/ That's it. I hope I explained this great, never-ending good debt well. If you liked it, then share this article; if not, then don't forget to give me feedback so I can improve myself. Have a good day, Rajvir Singh
rajvirsingh1313
820,933
How to build your virtual workspace
In this article I will teach how to use Docker containers as a development workspace using a real...
0
2021-09-11T18:04:56
https://dev.to/abdorah/how-to-build-your-virtual-workspace-84
docker, github, vscode, tutorial
> In this article I will teach you how to use `Docker` containers as a development workspace using a real-world example. I will go through multiple `Dev Ops` related topics. However, this is still an example that I have had the opportunity to work on during my open-sourcing journey. ## What do we want to achieve? One of the wonderful open source projects that I have got the opportunity to help with is [**the One Programming language**](https://github.com/One-Language). The goal of this project is to create a programming language named **One**. To build the project and run tests you must have a list of dependencies that are hard to install and configure on your machine, e.g. `Make`, `LLVM`, etc. Moreover, we wanted to make it easy for developers to get involved and contribute to the project easily. That's why we considered having a docker image to build the code and run tests as a priority. Hence, we created this beautiful [image](https://hub.docker.com/r/onelangorg/one) for our organization. In this article I am going to show you how we made it and also how you can make your own development image. ## Build the `Docker` image First things first, we need to build the image. Indeed there is nothing special in this section, because we will only write a [`Dockerfile`](https://github.com/One-Language/One/blob/master/Dockerfile) for our image. Yet, what makes this image special are the pieces of software it includes. Generally, you ought to set up the packages required to run your project and your tests, alongside a version control system like `git`.
In my case, I included the following packages in my lightweight `alpine` base image: ```dockerfile FROM alpine:latest LABEL The One Programming Language # LLVM version ARG LLVM_VERSION=12.0.1 # LLVM dependencies RUN apk --no-cache add \ autoconf \ automake \ cmake \ freetype-dev \ g++ \ gcc \ libxml2-dev \ linux-headers \ make \ musl-dev \ ncurses-dev \ python3 py3-pip \ git ``` Next, I set up the remaining packages, like `LLVM` and `pre-commit`. The latter is a powerful framework for managing and maintaining multi-language `pre-commit` hooks. It is an important addition to your open source project, since `Git hook` scripts are useful for identifying simple issues before submission to code review. We run our hooks on every commit to automatically point out issues in code such as missing semicolons, trailing whitespace, and debug statements. Pointing these issues out before code review allows a code reviewer to focus on the architecture of a change while not wasting time with trivial style nitpicks. ```dockerfile # Build and install LLVM RUN wget "https://github.com/llvm/llvm-project/archive/llvmorg-${LLVM_VERSION}.tar.gz" || { echo 'Error downloading LLVM version ${LLVM_VERSION}' ; exit 1; } RUN tar zxf llvmorg-${LLVM_VERSION}.tar.gz && rm llvmorg-${LLVM_VERSION}.tar.gz RUN cd llvm-project-llvmorg-${LLVM_VERSION} && mkdir build WORKDIR /llvm-project-llvmorg-${LLVM_VERSION}/build RUN cmake ../llvm \ -G "Unix Makefiles" -DLLVM_TARGETS_TO_BUILD="X86" \ -DLLVM_ENABLE_PROJECTS="clang;lld" \ -DCMAKE_BUILD_TYPE=MinSizeRel \ || { echo 'Error running CMake for LLVM' ; exit 1; } RUN make -j$(nproc) || { echo 'Error building LLVM' ; exit 1; } RUN make install || { echo 'Error installing LLVM' ; exit 1; } RUN cd ../..
&& rm -rf llvm-project-llvmorg-${LLVM_VERSION} ENV CXX=clang++ ENV CC=clang # pre-commit installation RUN pip install pre-commit ``` Now that everything is perfectly configured, you can copy your project directory, build the code, and run your tests while showing significant logs: ```dockerfile # Work directory setup COPY . /One WORKDIR /One # CMake configuration & building RUN mkdir build RUN cmake --no-warn-unused-cli -DCMAKE_EXPORT_COMPILE_COMMANDS:BOOL=TRUE -DCMAKE_BUILD_TYPE:STRING=Debug -DCMAKE_C_COMPILER:FILEPATH=/usr/bin/gcc -DCMAKE_CXX_COMPILER:FILEPATH=/usr/bin/g++ -H/One -B/One/build -G "Unix Makefiles" RUN cmake --build ./build --config Debug --target all -j 6 -- # Change directory to build WORKDIR /One/build # Running example input.one RUN ./lexer ../src/input.one log RUN cat log # Running tests RUN ./lexer_test RUN ./parser_test RUN ./argument_test # Tests Dashboard CMD ctest --output-on-failure ``` ## Deploy it to `DockerHub` To do so, you will need a `DockerHub` account. Yet only your account username and credentials are required, as we are going to deploy it using `GitHub Actions`. Similarly to `pre-commit`, using `GitHub Actions`, or any `CI/CD` tool, is a good `Dev Ops` practice. Especially since we are going to configure our image to run `pre-commit` hooks, build the code, run tests, and deploy the new image to `DockerHub`. In fact, you will only need to make very minor changes to the following [`GitHub Workflow`](https://github.com/One-Language/One/blob/master/.github/workflows/docker-image.yml) to use it in any other project. Let's begin by configuring the `GitHub Workflow` that will run on every push or pull request: ```yaml name: Dockerize One Programming language on: push: branches: [master] pull_request: branches: [master] jobs: build-deploy: name: Build and Publish Docker image runs-on: ubuntu-latest ``` Next, we will add steps to configure the needed `GitHub Actions` to deploy to `DockerHub`. Particularly, you won't need any other `GitHub Actions`.
That is because you already have a `Dockerfile` with all the prerequisites! ```yaml steps: - name: Checkout code uses: actions/checkout@v2 - name: Set up QEMU uses: docker/setup-qemu-action@v1 - name: Set up Docker Buildx uses: docker/setup-buildx-action@v1 ``` We shall continue by signing into our `DockerHub` account: ```yaml - name: Login to DockerHub uses: docker/login-action@v1 with: username: ${{secrets.DOCKER_HUB_USERNAME}} password: ${{secrets.DOCKER_HUB_PASSWORD}} ``` Before we go to the next step, you need to add `secrets.DOCKER_HUB_USERNAME` and `secrets.DOCKER_HUB_PASSWORD` to your `Github` repository: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwufx6cnb9h344tlppzs.png) Finally, publish your new image named `onelangorg/one:latest` to `DockerHub`: ```yaml - name: Build and Push to DockerHub uses: docker/build-push-action@v2 with: context: . push: true tags: onelangorg/one:latest ``` Don't forget to configure cache so that you won't need to go through all the unnecessary configuration steps every time. Also, this will decrease the run time dramatically. In my case without cache the run time is about two hours, but with cache it often doesn't surpass one minute and a half! ```yaml cache-from: type=registry,ref=onelangorg/one:latest cache-to: type=inline ``` Consequently, you will create a `Docker` [repository](https://hub.docker.com/r/onelangorg/one) in your docker account. ## Use it as a Workspace In this section you will need to pull the `docker` image from `DockerHub` and have `VSCode` with `Remote-Containers` installed: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/elmj46w3491kbqx3df54.png) This awesome extension allows getting into the `Docker` container itself, by opening a `VSCode` window inside it.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmmzpd6mgy2a35pxf5fy.png) After opening the new window attached to your container, you can open the development directory: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rhnitytlgeoy3v8j9yql.png) And here you go: you have a workspace configured and ready to use! ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5j8jg8wnkaooq4giyfo.png) ## Conclusion Now that you have come to the end of this article, you can see how important it is to use `Docker`, `DockerHub`, and `GitHub Actions`, as well as how easy they are to use. These technologies help developers to be more productive and not bother with the repetitive configuration of the workspace. On every pull request, we get an updated `Docker` image with clean code and successfully run tests thanks to `pre-commit`, `Github Actions`, and cache.
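One last tip: instead of attaching to the container by hand each time, `Remote-Containers` can also open the project straight into the published image through a `.devcontainer/devcontainer.json` file. Here is a minimal sketch; the field names follow the VS Code dev container format of the time, the image tag is the one published above, and the extension id is just an example:

```json
{
  "name": "One Programming Language",
  "image": "onelangorg/one:latest",
  "workspaceFolder": "/One",
  "extensions": ["ms-vscode.cpptools"]
}
```

With this file committed, VS Code offers to "Reopen in Container" automatically, so every contributor lands in the same workspace.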
abdorah
820,951
Arch Linux Install Guide
I made this guide to easily install Arch Linux without having to go to many different links just to...
0
2021-09-11T18:55:00
https://dev.to/pigges/arch-linux-install-guide-42c6
I made this guide to easily install *Arch Linux* without having to go to many different links just to get it working. Then I decided to share it here for others who may be new to Linux and may not completely understand the *[Arch Wiki](https://wiki.archlinux.org/title/Installation_guide)*. ![Arch Logo](https://raw.githubusercontent.com/Pigges/Arch-Linux-Install-Guide/main/img/arch-logo.png "Arch Linux") #### This is a guide about how to install *Arch Linux* from start to finish in one place. If you follow these steps you will end up with a clean install of *Arch Linux* without any graphical environment. ### Keep in mind that this guide is made for *BIOS* and not *UEFI* systems * Check if you have a *UEFI* system: ```shell $ ls /sys/firmware/efi/efivars ``` * If the command shows you the directory without any error, then you have a *UEFI* system. But you may still be able to follow this guide to some extent. * If you got an error that looks like this, then you are good to go and can follow this guide. ``` ls: cannot access '/sys/firmware/efi/efivars': No such file or directory ``` ## Pre install steps ### 1. Set the keyboard layout * Find keyboard layout: ```shell $ ls /usr/share/kbd/keymaps/**/*.map.gz ``` * Set the keyboard layout, for example: ```shell $ loadkeys sv-latin1 ``` ### 2. Connect to the internet * Check that your network interface is listed and enabled: ```shell $ ip link ``` * Verify that it works: ```shell $ ping pigges.xyz ``` ### 3. Update the system clock * Enable the clock service: ```shell $ timedatectl set-ntp true ``` * Check the service status: ```shell $ timedatectl status ``` ### 4.
Partition the disks
* Check what disks are connected:
```shell
$ lsblk
```
|Example for `lsblk`: |
|:-:|
|![lsblk Example](https://raw.githubusercontent.com/Pigges/Arch-Linux-Install-Guide/main/img/lsblk-example.png "lsblk Example")|
* Enter fdisk:
```shell
$ fdisk /dev/sda
```
* Example partition layout for a 100GB disk:

| NAME | SIZE | TYPE | MOUNT |
|:-------|:-------:|:----:|--------:|
| sda | `100G` | disk | |
| ├─sda1 | `200M` | part | `/boot` |
| ├─sda2 | `12G` | part | `[SWAP]`|
| ├─sda3 | `30G` | part | `/` |
| └─sda4 | `57.8G` | part | `/home` |

### 5. Format the partitions
* Make a filesystem for all the partitions except for the `SWAP` partition:
```shell
$ mkfs.ext4 /dev/sdaX
```
* Make and enable `SWAP`:
```shell
$ mkswap /dev/sda2
$ swapon /dev/sda2
```

### 6. Mount file systems
* Mount the root partition to `/mnt` (in the example layout above, the root partition is `/dev/sda3`):
```shell
$ mount /dev/sda3 /mnt
```
* Create directories for the other partitions:
```shell
$ mkdir /mnt/boot
$ mkdir /mnt/home
```
* Mount the other partitions:
```shell
$ mount /dev/sda1 /mnt/boot
$ mount /dev/sda4 /mnt/home
```

## Install steps

### 1. Run the install command with `pacstrap`
```shell
$ pacstrap /mnt base base-devel linux linux-firmware nano
```

### 2. Generate fstab file and Chroot to the disk
* Generate an fstab file:
```shell
$ genfstab -U /mnt >> /mnt/etc/fstab
```
* Chroot into the disk:
```shell
$ arch-chroot /mnt
```

## Post install steps

### 1. Set Time Zone
* Check Region and City:
```shell
$ ls /usr/share/zoneinfo # Get the REGION
$ ls /usr/share/zoneinfo/REGION # Get the CITY
```
* Set Region and City:
```shell
$ ln -sf /usr/share/zoneinfo/REGION/CITY /etc/localtime
```
* Run hwclock to generate `/etc/adjtime`:
```shell
$ hwclock --systohc
```

### 2.
Setup localization
* Edit `/etc/locale.gen` and uncomment the locales you may need:
```shell
$ nano /etc/locale.gen
```
Example for `/etc/locale.gen`:
```
en_US.UTF-8 UTF-8
sv_SE.UTF-8 UTF-8
```
* Generate the locales:
```shell
$ locale-gen
```
* Create and set up the `/etc/locale.conf` file:
```shell
$ nano /etc/locale.conf
```
Example for `/etc/locale.conf`:
```
LANG=en_US.UTF-8
```
* Set the keyboard layout:
```shell
$ nano /etc/vconsole.conf
```
Example for `/etc/vconsole.conf`:
```
KEYMAP=sv-latin1
```

### 3. Network configuration
* Install and enable `networkmanager`:
```shell
$ pacman -S networkmanager
$ systemctl enable NetworkManager
```
* Create the `/etc/hostname` file:
```shell
$ nano /etc/hostname
```
Example for `/etc/hostname`:
```
hostname # change to your liking
```
* Edit the `/etc/hosts` file:
```shell
$ nano /etc/hosts
```
Example for `/etc/hosts`: Change "hostname" to your hostname
```
# Static table lookup for hostnames.
# See hosts(5) for details.

127.0.0.1        localhost
::1              localhost
127.0.1.1        hostname.localdomain        hostname
```

### 4. Root password
* Set a root password:
```shell
$ passwd
```

### 5. Setup a bootloader 'GRUB'
* Install `grub`:
```shell
$ pacman -S grub
$ grub-install --target=i386-pc /dev/sda
```
* Configure `grub`:
```shell
$ grub-mkconfig -o /boot/grub/grub.cfg
```
* If you get this warning:
```
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
```
Edit the `/etc/default/grub` file:
```shell
$ nano /etc/default/grub
```
* Add this line:
```
GRUB_DISABLE_OS_PROBER=false
```
|Example for `/etc/default/grub`: |
|:-|
|![Grub Example](https://raw.githubusercontent.com/Pigges/Arch-Linux-Install-Guide/main/img/grub-example.png "Grub Example")|

Then run this again:
```shell
$ grub-mkconfig -o /boot/grub/grub.cfg
```

### 6.
Setup a user * Create the user: ```shell useradd -m -G wheel user #change user ``` * Set a user password: ```shell $ passwd user ``` * Edit the `sudo` config so users of the `wheel` group can use sudo: ```shell $ EDITOR=nano visudo ``` * Find the line where it says "`# %wheel ALL=(ALL) ALL`" and uncomment it. ### 7. Exit and shutdown * Exit from `chroot`: ```shell $ exit ``` * Unmount the disk: ```shell $ umount -R /mnt ``` * Shutdown the computer: ```shell $ shutdown now ``` ### 8. Done! You can now remove the install media and boot into your newly made arch install and be prompted with a login. |BTW I use Arch| |:-:| | ![BTW I use Arch](https://raw.githubusercontent.com/Pigges/Arch-Linux-Install-Guide/main/img/btw-i-use-arch.png "BTW I USE ARCH")|
pigges
821,071
Angular Security - Serve application locally over HTTPS
This article will walk you through setting Angular to use locally-trusted development certificate with ``mkcert``.
0
2021-09-09T00:00:00
https://0xdbe.github.io/AngularSecurity-ServeApplicationLocallyOverHttps/
security, appsec, angular
--- layout: post title: Angular Security - Serve application locally over HTTPS date: '2021-09-09' description: This article will walk you through setting Angular to use locally-trusted development certificate with ``mkcert``. published: true categories: [Angular] tags: Security, AppSec, Angular canonical_url: https://0xdbe.github.io/AngularSecurity-ServeApplicationLocallyOverHttps/ ---

When you develop an Angular application, you will come to a point where you need to serve it on localhost over HTTPS. This is often the case if you need to interact with an identity provider such as Facebook, Auth0, ... And by the way, testing locally with HTTPS could be useful to detect mixed content issues that can break a production HTTPS website.

This article will walk you through setting up Angular to use a locally-trusted development certificate with ``mkcert``. This is an easy way to expose an application over HTTPS without security warnings, and without worrying too much about OpenSSL.

## Install mkcert

[mkcert](https://github.com/FiloSottile/mkcert) is a simple tool for making locally-trusted development certificates. This tool is written by Filippo Valsorda, cryptographer and Go security leader.

Follow the instructions on https://github.com/FiloSottile/mkcert#installation to install this tool.

## Create local certificate authority

Run this command to create a new local certificate authority (CA):

```shell
mkcert -install
```

This command does the following:

- Generates the CA certificate and its key, and stores them in an application data folder in the user home, such as ``~/.local/share/mkcert/``
- Adds this new certificate authority to the trust stores (system, Firefox, Chrome, ...). So, all certificates issued with this CA will be trusted by your browser

Be aware that the ``rootCA-key.pem`` file that ``mkcert`` automatically generates gives complete power to intercept secure requests from your machine. So, do not share it to stay safe against MITM (Man-in-the-middle) attacks.
## Get certificate

Run the following commands from the Angular project directory:

```shell
mkdir tls
mkcert \
  -cert-file ./tls/localhost-cert.pem \
  -key-file ./tls/localhost-key.pem \
  -ecdsa \
  localhost 127.0.0.1 ::1
```

This certificate will expire in 3 months. In order to make renewal easier, we can create a shortcut for this command in ``package.json``:

```json
{
  "scripts": {
    "start": "npm run cert && ng serve",
    "cert": "mkdir -p ./tls && mkcert -cert-file ./tls/localhost-cert.pem -key-file ./tls/localhost-key.pem -ecdsa localhost 127.0.0.1 ::1"
  }
}
```

Thus, you get a new certificate each time you start your application.

Don't forget to add ``tls/*`` in ``.gitignore`` to prevent publication of your private key.

## Serve application

In order to serve the Angular application securely, add the ``ssl``, ``sslCert`` and ``sslKey`` options to the ``serve`` command:

```shell
ng serve \
  --ssl=true \
  --sslCert=./tls/localhost-cert.pem \
  --sslKey=./tls/localhost-key.pem
```

To avoid writing these options each time we want to run this app over HTTPS, we can set them in the ``angular.json`` file:

```json
{
  "serve": {
    "builder": "@angular-devkit/build-angular:dev-server",
    "options": {
      "ssl": true,
      "sslCert": "tls/localhost-cert.pem",
      "sslKey": "tls/localhost-key.pem"
    }
  }
}
```

Like so, you can easily serve your app by using ``npm start`` and then browse [https://localhost:4200/](https://localhost:4200/) without any security warning. But keep in mind that this should be used only for local development, not for production.
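As a quick sanity check (this assumes ``openssl`` is installed on your machine; ``mkcert`` itself does not require it), you can inspect the generated certificate to confirm its subject, expiry date, and the names it is valid for:

```shell
# Inspect the mkcert-generated development certificate (path from the steps above).
CERT=${CERT:-./tls/localhost-cert.pem}
if [ -f "$CERT" ]; then
  # Print the subject and expiry date.
  openssl x509 -in "$CERT" -noout -subject -enddate
  # Print the Subject Alternative Names (localhost, 127.0.0.1, ::1).
  openssl x509 -in "$CERT" -noout -text | grep -A1 "Subject Alternative Name"
else
  echo "certificate not found: $CERT"
fi
```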
0xdbe
821,084
Programmer/Technical Podcasts
Podcasts are a great way to pass the time when you’re doing chores, doing your daily commute, or even...
0
2021-09-11T21:57:55
https://jamienordmeyer.net/2021/09/11/programmer-technical-podcasts/
general, podcast
--- title: Programmer/Technical Podcasts published: true date: 2021-09-11 21:52:09 UTC tags: General,podcasts canonical_url: https://jamienordmeyer.net/2021/09/11/programmer-technical-podcasts/ ---

Podcasts are a great way to pass the time when you’re doing chores, doing your daily commute, or even walking the doggos. This will be a short post on my part, but I thought it would be worth posting which podcasts I listen to regularly and why. Since this is a technical blog, I’ll stick to the technical podcasts. But each of these is a podcast where I’m always looking forward to the next episode!

## Windows Weekly

[Windows Weekly | Microsoft Tech Podcast | Windows, Office, Xbox | TWiT](https://twit.tv/shows/windows-weekly)

While there are really awesome things about both Mac OS and Linux, I’m primarily a Windows user. I’m a huge fan of the Microsoft Surface line of computers, and while I hope that they’ll someday be better at running Linux natively, they’re obviously primarily Windows machines. That said, I do also like Windows a lot, despite some questionable decisions made by Microsoft over the past few years (sorry, Microsoft, I do NOT need or want ads or Candy Crush on my computer). I also use Windows at my office being a .NET developer (thanks to .NET Core, the requirement for Windows is diminishing, but since I have to support legacy apps written with the classic .NET Framework, Windows it is for now). That said, it’s great to keep up to date on what’s going on with it.

Windows Weekly is hosted by [Mary Jo Foley](https://twit.tv/people/mary-jo-foley), [Paul Thurrott](https://twit.tv/people/paul-thurrott), and [Leo Laporte](https://twit.tv/people/leo-laporte) on the awesome [Twit.tv](https://twit.tv/) network. The show is a fantastic combination of informative and funny and covers a wide range of topics. Despite being called Windows Weekly, they really are covering Microsoft overall, including Office, Teams, Visual Studio, Servers, Azure, and so forth.
The three hosts have a natural chemistry that adds to the information that they’re sharing, and often ends up with some hilarious moments. I also appreciate that, while they’re fans of both Microsoft and Windows, they’re not apologists for either in any way. They’re honest. They’re blunt. And their arguments are well thought out. They’ll praise Microsoft when they deserve it, and they’ll rip them apart when they deserve it. Check out the show if you never have and have any interest in the what and why of Microsoft. ## .NET Rocks [.NET Rocks! vNext (dotnetrocks.com)](https://www.dotnetrocks.com/) Hosted by [Carl Franklin](https://twitter.com/carlfranklin) and [Richard Campbell](https://twitter.com/richcampbell), .NET Rocks has been around since the beginning of .NET itself, and is one of the oldest podcasts in existence, technical or otherwise, with show #1 being recorded in August 2002, and their most recent episode being episode #1756! They routinely have guests on the show that are well known in the industry as pioneers and/or advocates, and cover a wide range of topics related to .NET development, and will sometimes cover other development topics as well. One of my personal favorite topics covered is that, from time to time, Richard will research a subject and present the details in “geek out” episodes that typically don’t have anything to do with development, but are just fun “side adventures” into other topics. He’s done geek outs on BBQ, cold fusion, and aerospace technology, just to name a few. ## The .NET Core Podcast [The .NET Core Podcast (dotnetcore.show)](https://dotnetcore.show/) This .NET Core focused podcast is hosted by [Jamie Taylor](https://about.me/thejamietaylor), who works out of the United Kingdom. The content of this podcast, as you might guess, focuses primarily on .NET Core, the awesome cross-platform version of .NET. 
It clearly hasn’t been around as long as .NET Rocks has, but in its time, Jamie has produced a set of very high quality shows with fantastic guests and great content!

## What The Tech

[What The Tech – Technology Podcast – GFQ Network](https://gfqnetwork.com/shows/whatthetech/)

What the Tech is hosted on the [GFQ Network](https://gfqnetwork.com/), and stars [Andrew Zarian](https://twitter.com/AndrewZarian) and the aforementioned Paul Thurrott. In What the Tech, Andrew and Paul cover a much wider range of technologies than the other podcasts that I’ve mentioned above. While they do talk about Windows and Microsoft, they also cover Google, Apple, Amazon, Facebook, etc., whatever the tech news of the day may be. Just as with Paul’s work on Windows Weekly, he and Andrew do this in a fairly even-handed way, calling out bad practices where they see them, and affording praise where it is deserved. The show is about delivering details and ideas about what is going on, not about generating preferences one way or another.

## Suggestions?

What about you? What technical podcasts do you listen to, and why?
nordyj
866,581
When I am trying to logout from screen ,it is giving me the _id not defined error ...
"Please anybody can solve the issue. × TypeError: Cannot read property '_id' of null ,why this error...
0
2021-10-17T12:37:32
https://dev.to/amrita1295/when-i-am-trying-to-logout-from-screen-it-is-giving-me-the-id-not-defined-error--1i2j
"Please anybody can solve the issue. × TypeError: Cannot read property '_id' of null ,why this error is coming.This is my full code of reactjs and finding this error when I am logging out of the screen.Please anybody can solve this error. This is React js program ... " ``` import React,{useContext,useRef,useEffect,useState} from 'react' import {Link ,useHistory} from 'react-router-dom' import {UserContext} from '../App' import styled from 'styled-components' import {GrClose} from 'react-icons/gr'; import {CgProfile} from 'react-icons/cg'; import {FiCamera} from 'react-icons/fi'; import {BiHomeAlt} from 'react-icons/bi' import M from 'materialize-css' import ScrollToTop from './screens/ScrollToTop/index' import '../App.css' import GoToTop from './screens/ScrollToTop'; const StyledButton = styled.button` font-size: 1.7rem; ` const StyledIcon = styled.div` font-size: 1.5rem; margin-left: -10px; margin-top: 2px; @media only screen and (max-width: 800px) { margin-left: -15px; } ` const StyledIcon1 = styled.div` font-size: 1.5rem; margin-left: -35px; margin-top: 2px; ` const Navbar = () => { const searchModal = useRef(null) const [search,setSearch] = useState('') const [userDetails,setUserDetails] = useState([]) const {state,dispatch} = useContext(UserContext) const history = useHistory() useEffect(()=>{ M.Modal.init(searchModal.current) },[]) const renderList = ()=>{ if(state){ return [ <li key="1"><StyledIcon1><i data-target="modal1" className="large material-icons modal-trigger" style={{color:"black"}}>search</i></StyledIcon1></li>, <li key="2"><Link to={state?"/":"/signin"}><StyledIcon><BiHomeAlt /></StyledIcon></Link></li>, <li key="3"><Link to="/profile"><StyledIcon><CgProfile /></StyledIcon></Link></li>, <li key="4"><Link to="/create"><StyledIcon><FiCamera /></StyledIcon></Link></li>, <li key="5"> <button className="btn #c62828 red darken-3 ok" onClick={()=>{ localStorage.clear() dispatch({type:"CLEAR"}) history.push('/signin') M.toast({html: "Logged out 
successfully",classes:"#43a047 green darken-1"}) }} > Logout </button> </li> ] } else{ return [ <li key="6"><Link to="/signin">SignIn</Link></li>, <li key="7"><Link to="/signup">SignUp</Link></li> ] } } const fetchUsers = (query)=>{ setSearch(query) fetch('/search-users',{ method:"post", headers:{ "Content-Type":"application/json" }, body:JSON.stringify({ query }) }).then(res=>res.json()) .then(results=>{ setUserDetails(results.user) }) } return ( <nav className="navbar"> <div className="nav-wrapper white"> <Link to={state?"/":"/signin"} className="brand-logo left">Instagram</Link> <ul id="nav-mobile" className="right"> { renderList() } </ul> </div> <div id="modal1" className="modal style-width" ref={searchModal} style={{color:"black"}}> <div className="modal-footer"> <StyledButton className="modal-close waves-effect waves-white btn-flat right" onClick={()=>setSearch('')}><GrClose /></StyledButton> </div> <div className="modal-content"> <input type="text" placeholder="Search users" value={search} onChange={(e)=>fetchUsers(e.target.value)} /> { userDetails && userDetails.length>0 ? <ul className="collection"> {userDetails.map(item=>{ return ( <Link to={item._id !== state._id ? "/profile/"+item._id:'/profile'} onClick={()=>{ M.Modal.getInstance(searchModal.current).close() setSearch('') }}><li className="collection-item"> <img className="search-image" src={item.pic} alt={item._id}/> <span className="search-name">{item.username}</span> <span className="search-email">{item.email}</span> {/* <span className="search-name">{item.name}</span> */} </li></Link> ) })} </ul> : <ul>No result found</ul> } </div> </div> <ScrollToTop /> <GoToTop /> </nav> ) } export default Navbar ``` ) ![this is the error coming](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fbmg1fmnpr5viqqg39sr.png)
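Not part of the original question, but a likely cause can be sketched outside React: after `dispatch({type:"CLEAR"})` on logout, the `state` from `UserContext` becomes `null`, yet the search-result `Link` still evaluates `item._id !== state._id`, which throws. Guarding the access with optional chaining avoids the crash (an illustrative sketch, assuming the reducer really sets the state to `null`):

```javascript
// Minimal reproduction of the error, outside React: after logout the
// context state is null, but the render still dereferences state._id.
const item = { _id: "abc123" };
let state = null;

let threw = false;
try {
  // This is what `item._id !== state._id ? ... : ...` does after logout:
  const brokenTo = item._id !== state._id ? "/profile/" + item._id : "/profile";
} catch (err) {
  // "TypeError: Cannot read property '_id' of null"
  threw = err instanceof TypeError;
}

// Fix: optional chaining yields undefined instead of throwing when state is null.
const to = item._id !== state?._id ? "/profile/" + item._id : "/profile";

console.log(threw, to); // prints: true /profile/abc123
```

The same guard (`state?._id`, or an explicit `state && ...` check) applied inside the `Link` `to` expression should stop the logout crash.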
amrita1295
821,223
EKS Anywhere: The What, The Why and The How
AWS recently made the headlines with the launch of Amazon EKS Anywhere, a new and much-awaited...
0
2021-09-13T02:00:44
https://dev.to/abhaykrishna/eks-anywhere-the-what-the-why-and-the-how-1h67
kubernetes, showdev, aws, tutorial
AWS recently made the headlines with the launch of Amazon EKS Anywhere, a new and much-awaited deployment option for Amazon Elastic Kubernetes Service (EKS). But what is it and how can you benefit from it? Read on to find out! # The What Amazon EKS Anywhere is an open-source offering through which customers can host and operate secure, reliable Kubernetes clusters on-premises. It allows you to stay completely off AWS infrastructure (why, you don't even need an AWS account to get started) while offering a cluster management experience on par with EKS. EKS Anywhere builds on the strengths of [Amazon EKS Distro](https://github.com/aws/eks-distro), the same open-source distribution of Kubernetes that is used by Amazon EKS on the cloud, thus fostering consistency and compatibility between clusters both on AWS as well as on-premises. # The Why This section covers the motivation for using EKS Anywhere. To understand better how EKS Anywhere may be more suited to customer needs, we will first need to understand the high-level architecture of EKS clusters. An Amazon EKS cluster consists of two primary components: * The Amazon EKS control plane, consisting of nodes running components such as the Kubernetes API Server, Controller Manager, Scheduler, `etcd`, etc. * Worker nodes that are registered with the control plane and run customer workloads. The control plane is provisioned on AWS infrastructure in an account managed by EKS, while the worker nodes run in customer accounts, thus providing a managed Kubernetes experience on AWS. However, some customers may have applications that need to run on-premises due to regulatory, latency, and data residency requirements as well as requirements to leverage existing infrastructure investments. With EKS Anywhere, both control plane and application workloads run on the customer infrastructure, thus providing complete flexibility to the cluster administrator. 
Also, customers can make use of the [EKS Connector*](https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html) to connect EKS Anywhere clusters running on their infrastructure to the EKS console, for a centralized view of their on-premises clusters and workloads along with EKS clusters. *\*In public preview* # The How EKS Anywhere currently supports customer-managed vSphere infrastructure provider as the production-grade deployment environment for Kubernetes clusters, with bare-metal support coming in 2022. For local development and testing, it also supports the Docker provider, wherein the control plane and worker nodes are provisioned as Docker containers. The Docker provider is not intended to be used in production environments. In this section, I shall demonstrate a step-by-step walkthrough of creating an EKS Anywhere cluster with the Docker provider. Fasten your seatbelts for an *EKS*-iting adventure! ## Installation At its core, EKS Anywhere provides an installable CLI `eksctl-anywhere` that allows users to create a fully-functional Kubernetes cluster in a matter of minutes. The CLI is provided as an extension to `eksctl`, a command-line tool for creating clusters on Amazon EKS. These two binaries and a running Docker environment are all you need to create an EKS Anywhere cluster. You can install both `eksctl` and `eksctl-anywhere` directly using Homebrew on MacOS and Linux. In addition, it is a good idea to install `kubectl` for interacting with your cluster post-creation ```shell brew install aws/tap/eks-anywhere brew install kubectl ``` ## Cluster creation The first step in creating an EKS Anywhere cluster is to generate a cluster config for the desired infrastructure provider. This is a manifest containing the cluster spec that allows you to declaratively manage your EKS Anywhere cluster. Before we proceed, let us give our cluster a suitable name that will be used as a reference for all future operations. 
```shell export CLUSTER_NAME=eks-anywhere-test ``` The following command generates the cluster config for the Docker provider, with default replica counts, networking and external `etcd` configurations. ```shell eksctl anywhere generate clusterconfig $CLUSTER_NAME -p docker ``` Running the above command will generate the following output. ``` apiVersion: anywhere.eks.amazonaws.com/v1alpha1 kind: Cluster metadata: name: eks-anywhere-test spec: clusterNetwork: cni: cilium pods: cidrBlocks: - 192.168.0.0/16 services: cidrBlocks: - 10.96.0.0/12 controlPlaneConfiguration: count: 1 datacenterRef: kind: DockerDatacenterConfig name: eks-anywhere-test externalEtcdConfiguration: count: 1 kubernetesVersion: "1.21" workerNodeGroupConfigurations: - count: 1 --- apiVersion: anywhere.eks.amazonaws.com/v1alpha1 kind: DockerDatacenterConfig metadata: name: eks-anywhere-test spec: {} --- ``` If desired, you may modify the spec as per your requirements. EKS Anywhere supports both stacked and unstacked `etcd` topologies, with the latter being the default. If you prefer to use stacked `etcd`, you can remove the `externalEtcdConfiguration` section from the spec. For the purpose of this tutorial, we shall use the default values generated by the command. In order to use the config for cluster operations, the cluster config must be stored in a file. ```shell eksctl anywhere generate clusterconfig $CLUSTER_NAME -p docker > $CLUSTER_NAME.yaml ``` Now for the fun part - actually creating the cluster! ```shell eksctl anywhere create cluster -f $CLUSTER_NAME.yaml ``` The above command will kick-start the cluster creation and update the progress on each step in the creation workflow. A detailed explanation of the workflow is provided [here](https://anywhere.eks.amazonaws.com/docs/concepts/clusterworkflow/). Optionally, you can set an appropriate verbosity level (0 through 9) using the `-v` flag for more verbose logging and for a deeper understanding of what is going on behind the scenes. 
```shell Performing setup and validations ✅ Docker Provider setup is valid Creating new bootstrap cluster Installing cluster-api providers on bootstrap cluster Provider specific setup Creating new workload cluster Installing networking on workload cluster Installing storage class on workload cluster Installing cluster-api providers on workload cluster Moving cluster management from bootstrap to workload cluster Installing EKS-A custom components (CRD and controller) on workload cluster Creating EKS-A CRDs instances on workload cluster Installing AddonManager and GitOps Toolkit on workload cluster GitOps field not specified, bootstrap flux skipped Writing cluster config file Deleting bootstrap cluster 🎉 Cluster created! ``` Woot, we have created our first EKS Anywhere cluster! The whole process should take around 8-15 minutes or so. The CLI creates a folder with the same name as the cluster and places a kubeconfig file with Admin privileges inside this folder. This kubeconfig file can be used to interact with our EKS Anywhere cluster. ```shell export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig ``` Let us look at the pods to verify that they are all running. 
```shell $ kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE capd-system capd-controller-manager-659dd5f8bc-wj4hl 2/2 Running 0 1m capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-69889cb844-m87x8 2/2 Running 0 1m capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-6ddc66fb75-hz4hm 2/2 Running 0 1m capi-system capi-controller-manager-db59f5789-sjnv5 2/2 Running 0 1m capi-webhook-system capi-controller-manager-64b8c548db-kwntg 2/2 Running 0 1m capi-webhook-system capi-kubeadm-bootstrap-controller-manager-68b8cc9759-7zczt 2/2 Running 0 1m capi-webhook-system capi-kubeadm-control-plane-controller-manager-7dc88f767d-p7bbk 2/2 Running 0 1m cert-manager cert-manager-5f6b885b4-8l5f9 1/1 Running 0 2m cert-manager cert-manager-cainjector-bb6d9bcb5-jr7x2 1/1 Running 0 2m cert-manager cert-manager-webhook-56cbc8f5b8-47wmg 1/1 Running 0 2m eksa-system eksa-controller-manager-6769764b45-gw6sp 2/2 Running 0 1m etcdadm-bootstrap-provider-system etcdadm-bootstrap-provider-controller-manager-54476b7bf9-8fr2k 2/2 Running 0 1m etcdadm-controller-system etcdadm-controller-controller-manager-d5795556-d9cmz 2/2 Running 0 1m kube-system cilium-operator-6bf46cc6c6-j5c8v 1/1 Running 0 2m kube-system cilium-operator-6bf46cc6c6-vsf79 1/1 Running 0 2m kube-system cilium-q4gg6 1/1 Running 0 2m kube-system cilium-xgffq 1/1 Running 0 2m kube-system coredns-7c68f85774-4kvcb 1/1 Running 0 2m kube-system coredns-7c68f85774-9z9kn 1/1 Running 0 2m kube-system kube-apiserver-eks-anywhere-test-29qnl 1/1 Running 0 2m kube-system kube-controller-manager-eks-anywhere-test-29qnl 1/1 Running 0 2m kube-system kube-proxy-2fx4g 1/1 Running 0 2m kube-system kube-proxy-r4cc8 1/1 Running 0 2m kube-system kube-scheduler-eks-anywhere-test-29qnl 1/1 Running 0 2m ``` Using the following command, we can fetch the container images running on our pods, and verify that the control plane images, i.e., API server, Controller Manager, etc are all vended by EKS 
Distro. ```shell kubectl get pods -A -o yaml | yq e '.items[] | .spec.containers[] | .image' - | sort -ur ``` ```shell public.ecr.aws/eks-anywhere/brancz/kube-rbac-proxy:v0.8.0-eks-a-1 public.ecr.aws/eks-anywhere/cluster-controller:v0.5.0-eks-a-1 public.ecr.aws/eks-anywhere/jetstack/cert-manager-cainjector:v1.1.0-eks-a-1 public.ecr.aws/eks-anywhere/jetstack/cert-manager-controller:v1.1.0-eks-a-1 public.ecr.aws/eks-anywhere/jetstack/cert-manager-webhook:v1.1.0-eks-a-1 public.ecr.aws/eks-anywhere/kubernetes-sigs/cluster-api/capd-manager:v0.3.23-eks-a-1 public.ecr.aws/eks-anywhere/kubernetes-sigs/cluster-api/cluster-api-controller:v0.3.23-eks-a-1 public.ecr.aws/eks-anywhere/kubernetes-sigs/cluster-api/kubeadm-bootstrap-controller:v0.3.23-eks-a-1 public.ecr.aws/eks-anywhere/kubernetes-sigs/cluster-api/kubeadm-control-plane-controller:v0.3.23-eks-a-1 public.ecr.aws/eks-anywhere/mrajashree/etcdadm-bootstrap-provider:v0.1.0-beta-4.1-eks-a-1 public.ecr.aws/eks-anywhere/mrajashree/etcdadm-controller:v0.1.0-beta-4.1-eks-a-1 public.ecr.aws/eks-distro/coredns/coredns:v1.8.3-eks-1-21-4 public.ecr.aws/eks-distro/kubernetes/kube-apiserver:v1.21.2-eks-1-21-4 public.ecr.aws/eks-distro/kubernetes/kube-controller-manager:v1.21.2-eks-1-21-4 public.ecr.aws/eks-distro/kubernetes/kube-proxy:v1.21.2-eks-1-21-4 public.ecr.aws/eks-distro/kubernetes/kube-scheduler:v1.21.2-eks-1-21-4 public.ecr.aws/isovalent/cilium:v1.9.10-eksa.1 public.ecr.aws/isovalent/operator-generic:v1.9.10-eksa.1 ``` Upon retrieving the nodes, we can see that our cluster has one control plane ("master") node and one worker node as specified in our manifest. 
```shell
$ kubectl get nodes
NAME                                      STATUS   ROLES                  AGE   VERSION
eks-anywhere-test-29qnl                   Ready    control-plane,master   4m    v1.21.2-eks-1-21-4
eks-anywhere-test-md-0-7796db4bdd-4wmd5   Ready    <none>                 3m    v1.21.2-eks-1-21-4
```

To log onto a node, we can simply run

```shell
docker exec -it <node name> bash
```

## Testing

Let us test our EKS Anywhere cluster by deploying a simple Nginx service.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-anywhere-nginx-test
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: public.ecr.aws/nginx/nginx:latest
        ports:
        - containerPort: 80
```

We can create the Nginx workload using the following command.

```shell
kubectl apply -f eks-anywhere-nginx-test.yaml
```

This will provision 3 pods for our application in the `default` namespace.

```shell
NAME                                       READY   STATUS    RESTARTS   AGE
eks-anywhere-nginx-test-7676d696c8-c5ths   1/1     Running   0          1m
eks-anywhere-nginx-test-7676d696c8-c76lf   1/1     Running   0          1m
eks-anywhere-nginx-test-7676d696c8-m25r5   1/1     Running   0          1m
```

To test our application, we can use the following command to forward port 80 of the deployment's pods to port 8080 on our host machine.

```shell
$ kubectl port-forward deploy/eks-anywhere-nginx-test 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
```

Then, when we navigate to `localhost:8080` on the browser, we are greeted by the Nginx welcome page.

![Nginx welcome page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3mixbk8eszp0i9smzmyo.png)

Alternatively, we can fetch the contents of the site using `curl`.
```shell $ curl localhost:8080 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> ``` Thus, we have successfully created and tested our EKS Anywhere cluster. If you wish to go one step further, you can deploy the Kubernetes Dashboard UI for your cluster using the instructions [here](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/). ## Cluster deletion After testing, the cluster can be deleted using the command ```shell eksctl anywhere delete cluster -f $CLUSTER_NAME.yaml ``` ## Conclusion That brings us to the end of this walkthrough. Thank you very much for reading and I hope you will give EKS Anywhere a spin. The complete documentation is available [here](https://anywhere.eks.amazonaws.com/). If you are interested in contributing, please open an issue or pull request on the [EKS Anywhere GitHub repo](https://github.com/aws/eks-anywhere). Let me know your thoughts in the comments below. If you have more questions, feel free to reach out to me on [LinkedIn](https://www.linkedin.com/in/abhayk96/) or [Twitter](https://twitter.com/abhay_krishna96).
abhaykrishna
821,328
Serverless projects over the years
I decided to write a post about the serverless architectures I have created over the years. When I...
0
2021-09-12T08:04:24
https://dev.to/aws-builders/serverless-projects-over-the-years-1me
aws, serverless, lambda, eventbridge
---
title: "Serverless projects over the years"
cover_image: "https://jimmydqv.com/assets/img/post-serverless-over-the-years-thumb.png"
tags: aws, serverless, lambda, eventbridge
published: true
---

![image](https://jimmydqv.com/assets/img/post-serverless-over-the-years-thumb.png)

I decided to write a post about the serverless architectures I have created over the years. When I tried to pick one to write about, I realized it was going to be impossible to pick just one. I created my first [AWS Lambda][lambda-link] function in late 2015, just months after the service became generally available. At re:Invent, [AWS Step Functions][step-functions-link] was announced; I was there when it happened. Just months after that we were running our first Step Function in production at Sony. With the introduction of [Amazon EventBridge][eventbridge-link] it has just exploded. So I decided to write a brief post about some of the architectures my different teams and I have built over the years.

Just to be clear: some of the architectures in this post are obsolete now due to the introduction of other services and functionality, but they were relevant back when they were created. Therefore I decided to write about them anyway. So buckle up, now we start!

## 2017 - The SQS Poller

This is one of the architectures that is now obsolete. We built and used this pattern back at Sony in 2017. We needed to poll work from an SQS queue and invoke a Lambda function to perform the actual work. This was way before SQS could invoke Lambda functions, so that was not an option.

The solution consists of an "Auto Scaler" Step Function that is invoked every 5 minutes while there are messages in the queue. If the number of messages increased from 0, the Auto Scaler was triggered directly. The Auto Scaler checks the number of messages in the queue and how many instances of the worker Step Function are running, and decides the required scaling action.
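That scaling decision can be sketched as a small pure function. The sketch below is illustrative only: the function name, the one-worker-per-10-messages ratio, and the worker cap are my assumptions, not the original implementation.

```python
def required_scaling_action(queue_depth: int, running_workers: int,
                            messages_per_worker: int = 10,
                            max_workers: int = 20) -> int:
    """Return how many workers to start (positive) or stop (negative).

    A worker is assumed to handle `messages_per_worker` queued messages,
    and the fleet is capped at `max_workers`.
    """
    if queue_depth == 0:
        # Drain the fleet when the queue is empty.
        return -running_workers
    # Round up: 1..10 messages -> 1 worker, 11..20 -> 2 workers, ...
    desired = min(max_workers, -(-queue_depth // messages_per_worker))
    return desired - running_workers
```

For example, with 25 messages queued and 1 live worker (one whose DynamoDB heartbeat is less than 5 minutes old), the function asks for 2 more workers to be started.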
To keep track of all instances of the worker Step Function, each one was assigned a unique ID by the Auto Scaler when it was invoked and started. A register of all running workers was kept in a DynamoDB table. A worker was considered alive as long as it had reported "home", i.e. updated the DynamoDB table within the last 5 minutes.

The worker Step Function started by updating a field in the DynamoDB table, the call "home". Next it polled a message from the SQS queue and invoked a Lambda function to do the actual work. It was possible to do both synchronous and asynchronous work. Below is a picture of the overall architecture.

![image](https://jimmydqv.com/assets/img/post-serverless-over-the-years-sqs-poller-pattern.png)

## 2020 - GitLab Auto Scaled Self Hosted Runners

At the end of 2020 I decided to build something quite different. There was a need for an auto-scaled solution to run self-hosted GitLab runners. Since there was a need to build Docker images, this had to be done on plain old EC2 instances, since building Docker in Docker is not recommended. But at the same time some parts were plain Node builds, and for those it would be interesting to build in a different way than on EC2. The fun part was that Docker support in Lambda was released during re:Invent 2020, which made a perfect match.

GitLab can call a webhook on different events. These events can be used to determine if a build needs to be started, if it just finished, and so on. The webhook was set up with a direct integration between API Gateway and EventBridge. Here EventBridge was used to be able to invoke different Step Functions based on the event type.

When the webhook was called with an event that a new job needed to be started, EventBridge would invoke a Step Function to determine the type of runner that was needed. Metadata attached to the job in the GitLab pipeline would be used to either start a new EC2 instance or invoke a Lambda function to do the build.

When the webhook was called with an event that a job had finished, EventBridge would trigger a different Step Function that would terminate the EC2 instance, if the job had run on one. The really fun part here was that building in a Lambda function worked really great; the number of builds that could run in parallel was out of this world. The entire setup is available on [GitHub][github-gitlab-link], and below are the overall architecture pictures.

![image](https://jimmydqv.com/assets/img/post-serverless-over-the-years-gitlab-runners-start.png)
![image](https://jimmydqv.com/assets/img/post-serverless-over-the-years-gitlab-runners-stop.png)

## 2021 - Serverless IoT

Fast forward to the present day and a serverless IoT architecture I'm working on. In this design the IoT devices post events to AWS IoT Core, which triggers an IoT Rule. The IoT Rule invokes a Lambda function that posts the IoT events into EventBridge. I can say that it's a bit annoying having this Lambda function, as it doesn't do anything other than pass the data along. But at the time of writing there is no direct integration between IoT Core and EventBridge, so I have no choice.

Different EventBridge rules will then invoke the different Step Functions that are registered as targets. One Step Function enriches and stores the data; a different one invokes an alarm if needed. EventBridge plays a huge role here, making it possible to route events to different services in a simple and managed way. On top of everything there is a webapp that displays the data. The webapp fetches the data over GraphQL; of course AWS AppSync is used for that.

So what are these IoT devices then? It's a BBQ thermometer that I'm building using a Raspberry Pi running AWS IoT Greengrass. More on this will come in later blog posts. Below is an overall picture of the architecture.

![image](https://jimmydqv.com/assets/img/post-serverless-over-the-years-serverless-iot.png)

## Favorite Service?
Do I have a favorite among all of the serverless services AWS offers? I would say yes! [Amazon EventBridge][eventbridge-link] is right at the top. Why? Easy: it enables decoupling between serverless services. A service producing data can just post events onto a common event bus and then doesn't need to know, or care, about the consumers.

As an example, we have a producer service that stores data in an Amazon S3 bucket. A different service needs to read the created files, process the data, and send it over to an external system. With EventBridge the producer can push a message to EventBridge that a new file is available. Yes, you can use the default event bus and rely on CloudTrail events that files were created, but that adds some extra latency that needs to be considered. Sure, you can use SNS and SQS instead of EventBridge, but I don't feel that becomes as nice a solution as with EventBridge. So, EventBridge is by far my favorite service, since it makes decoupling so simple and elegant!

## Summary

These were just three of the serverless architectures I have built during the last 6 years. One thing is sure: there are more coming! I will never stop being serverless first. There are just so many benefits with serverless that it's impossible to stop. Hope you enjoyed the read and got some inspiration!

[step-functions-link]: https://aws.amazon.com/step-functions/
[eventbridge-link]: https://aws.amazon.com/eventbridge/
[lambda-link]: https://aws.amazon.com/lambda/
[github-gitlab-link]: https://github.com/JimmyDqv/gitlab-runners-on-aws
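The routing EventBridge does for these architectures can be approximated in a few lines. The matcher below is a toy illustration of the idea (a field matches when the event's value is one of the values the pattern allows, recursing into nested objects); it is not the service's full pattern language, and the rule and event shown are made up.

```python
def rule_matches(pattern: dict, event: dict) -> bool:
    """Toy EventBridge-style matching: every key in the pattern must be
    present in the event, and the event's value must be one of the
    values listed in the pattern. Nested dicts are matched recursively."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not isinstance(event[key], dict) or not rule_matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True

# Hypothetical rule: route thermometer alarms to the alarm Step Function.
alarm_rule = {"source": ["iot.thermometer"], "detail": {"alarm": [True]}}
```

A consumer registered behind such a rule never sees events it didn't ask for, which is exactly the decoupling described above.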
jimmydqv
821,346
Announcing @jnxplus/nx-gradle
I like to use Nx workspace to manage my projects. I don't put all my projects in one workspace but I...
0
2021-09-12T12:01:09
https://dev.to/gridou/how-to-add-spring-boot-and-gradle-multi-project-builds-capabilities-to-your-nx-workspace-53cd
nx, springboot, gradle
I like to use Nx workspace to manage my projects. I don't put all my projects in one workspace, but use one workspace per multi-module project. It works nicely with Node frameworks like Angular, React, Nest... But when it comes to other languages and frameworks, we need to use custom plugins.

In this article I will use my plugin `@jnxplus/nx-gradle` to add support for Spring Boot and Gradle multi-project builds to an Nx workspace. `@jnxplus/nx-gradle` adds to the Spring Boot world an opinionated way to integrate Spring Boot apps/libs inside an Nx workspace using Gradle multi-project builds. Let me show you how to use it.

We start by creating a workspace. Open a terminal and run this command to create a new workspace:

```bash
npx create-nx-workspace@latest
```

When asked, provide my-org as the name and choose an empty workspace:

```bash
devs> npx create-nx-workspace@latest
npx: installed 48 in 3.278s
√ Workspace name (e.g., org name) · my-org
√ What to create in the new workspace · empty
√ Use Nx Cloud? (It's free and doesn't require registration.) · No

> NX Nx is creating your workspace.
```

Now, in the same terminal, go inside the my-org folder:

```bash
cd my-org
```

### 1. Install the plugin

In the workspace root run this command to install the plugin:

```bash
npm install --save-dev @jnxplus/nx-gradle
```

### 2. Add Spring Boot and Gradle wrapper support

The following command adds Spring Boot and Gradle support (Gradle wrapper and config files) to the workspace. This only needs to be performed once per workspace.

```bash
nx generate @jnxplus/nx-gradle:init
```

I choose the version of Java supported by my operating system and the default value for the Gradle root project:

```bash
my-org> nx generate @jnxplus/nx-gradle:init
√ Which version of Java would you like to use? · 11
√ What rootProjectName would you like to use? · boot-multiproject
CREATE checkstyle.xml
CREATE gradle/wrapper/gradle-wrapper.jar
CREATE gradle/wrapper/gradle-wrapper.properties
CREATE gradle.properties
CREATE gradlew
CREATE gradlew.bat
CREATE settings.gradle
UPDATE nx.json
UPDATE .gitignore
```

As you can see, the command added the following files:

* `checkstyle.xml` for linting.
* The Gradle wrapper and Gradle executables for Windows and Linux: using the Gradle wrapper, we can distribute/share a project so that everybody uses the same Gradle version and functionality (compile, build, install...) even if Gradle has not been installed.
* `gradle.properties`: this file contains the Java, Spring Boot and dependency-management versions that we will use for all apps and libs inside the Nx workspace.
* `settings.gradle`: here we will add our apps and libs later so Gradle will be able to perform its tasks.

We also updated the `nx.json` file to add the plugin for the dep-graph feature, and `.gitignore` so we can ignore Gradle build and cache folders.

### 3. Usage

Generate an application:

```bash
nx generate @jnxplus/nx-gradle:application my-app
```

When asked, provide answers or choose the defaults:

```bash
my-org> nx generate @jnxplus/nx-gradle:application my-app
√ What groupId would you like to use? · com.example
√ What version would you like to use? · 0.0.1-SNAPSHOT
√ Which packaging would you like to use? · jar
UPDATE workspace.json
UPDATE nx.json
CREATE apps/my-app/build.gradle
CREATE apps/my-app/src/main/java/com/example/myapp/HelloController.java
CREATE apps/my-app/src/main/java/com/example/myapp/MyAppApplication.java
CREATE apps/my-app/src/main/resources/application.properties
CREATE apps/my-app/src/test/java/com/example/myapp/HelloControllerTests.java
CREATE apps/my-app/src/test/resources/application.properties
UPDATE settings.gradle
```

Check `settings.gradle`; here we added an entry for my-app:

```bash
rootProject.name = 'boot-multiproject'
include('apps:my-app')
```

Build the app:

```bash
nx build my-app
```

If you look carefully at the console, you will see that we run the command:

```bash
Executing command: gradlew.bat :apps:my-app:bootJar
```

Since we chose `jar` packaging, we run this command. For war packaging, we run `bootWar`.

Serve the app:

```bash
nx serve my-app
```

Here, under the hood, we run the `bootRun` command. Open http://localhost:8080 to see the app working:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxhxo65wmsoq3m8sjppm.PNG)

To test the app run this command:

```bash
nx test my-app
```

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nr1g48yrse7qqpe2z4k0.PNG)

What I like to do to see if the test command is really working is to break a test and run it again. So change the test and add a comma between Hello and World:

```java
@Test
public void shouldReturnHelloWorld() throws Exception {
    this.mockMvc.perform(get("/")).andDo(print()).andExpect(status().isOk())
            .andExpect(content().string(containsString("Hello, World")));
}
```

Now rerun the same command:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08kf1id8zzdap3sekus9.PNG)

Perfect, now the test target is failing. Revert the change, and this time let's generate a library.

To generate a library use this command:

```bash
nx generate @jnxplus/nx-gradle:library my-lib
```

When asked, provide answers or choose the defaults:

```bash
my-org> nx generate @jnxplus/nx-gradle:library my-lib
√ What groupId would you like to use? · com.example
√ What version would you like to use? · 0.0.1-SNAPSHOT
UPDATE workspace.json
UPDATE nx.json
CREATE libs/my-lib/build.gradle
CREATE libs/my-lib/src/main/java/com/example/mylib/HelloService.java
CREATE libs/my-lib/src/test/java/com/example/mylib/HelloServiceTests.java
CREATE libs/my-lib/src/test/java/com/example/mylib/TestConfiguration.java
UPDATE settings.gradle
```

Check `settings.gradle`; here we added an entry for my-lib:

```bash
rootProject.name = 'boot-multiproject'
include('libs:my-lib')
include('apps:my-app')
```

Like an app, we can build it and test it.

Build:

```bash
nx build my-lib
```

Test:

```bash
nx test my-lib
```

But we can't run it with the serve target:

```bash
my-org> nx serve my-lib

> NX ERROR Cannot find target 'serve' for project 'my-lib'
```

Hope you like this article. Give this plugin a try :)

You can find the GitHub code here: https://github.com/khalilou88/my-org
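Each generated project ends up registered in `settings.gradle` by turning its workspace path into a Gradle project path. Roughly, the mapping looks like this (my own sketch of the convention, not the plugin's actual code):

```python
def to_gradle_include(project_root: str) -> str:
    """Map an Nx project root like 'apps/my-app' to the
    settings.gradle entry include('apps:my-app')."""
    gradle_path = project_root.strip("/").replace("/", ":")
    return "include('{}')".format(gradle_path)
```

That is why `apps/my-app` and `libs/my-lib` appear as `include('apps:my-app')` and `include('libs:my-lib')` above, and why `nx build my-app` runs the task `:apps:my-app:bootJar`.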
gridou
821,351
New way of bundling JS/CSS in Rails
Apologies for misleading you with a vague title, but I wouldn't call this approach the new way,...
0
2021-09-12T09:53:16
https://dev.to/abeidahmed/new-way-of-bundling-js-css-in-rails-3661
rails, tailwindcss, tutorial, beginners
Apologies for misleading you with a vague title; I wouldn't call this approach the new way, because under the hood `webpacker` and `sass-rails` were doing all of the bundling without us needing to tinker with the configuration. For simpler applications this was rock solid, but for complex applications `webpacker` kind of stood in the way. But all is good now since the advent of two gems: [`jsbundling-rails`](https://github.com/rails/jsbundling-rails) and [`cssbundling-rails`](https://github.com/rails/cssbundling-rails). So let's see what these gems have to offer.

We'll start with a bare-bones version of Rails without any JavaScript.

```bash
rails new myapp --skip-javascript
```

After initializing the Rails app, go to the Gemfile and add the two gems that we'll be using.

```ruby
gem 'jsbundling-rails'
gem 'cssbundling-rails'
```

Run `bundle install` after that. In this demo, we'll be using `webpack` and `tailwindcss` as the example.

```bash
./bin/rails javascript:install:webpack
./bin/rails css:install:tailwind
```

These generators should create a `Procfile.dev`, `webpack.config.js`, and more. Go to `app/javascript/application.js` and write some JavaScript code. Now run `./bin/dev` and, with any luck, your JavaScript and CSS should be bundled and compiled into the `app/assets/builds` directory as two files, namely `application.css` and `application.js`. This is so much better, because now we can use

```erb
<%= stylesheet_link_tag "application", "data-turbo-track": "reload", media: "all" %>
<%= javascript_include_tag "application", "data-turbo-track": "reload", defer: true %>
```

in our `layouts/application.html.erb` file. Having `webpack.config.js` in our root directory is even better. It's all plain vanilla `webpack` without any fancy wrappers around it, and hence we can customize it to our heart's content.

However, the above-mentioned approach to using `tailwind` in our app is limited. For example, we cannot use `@layer` or any other fancy `tailwind` syntax, because that depends on `postcss` and other jazz. Again, for simpler applications it's fine, but a complex one may need those syntaxes, and you may want to write `scss` instead of plain `css`.

Although we can easily revert the changes that we made above, for ease of understanding let's spin up a new Rails app instead.

```bash
rails new myapp2 --skip-javascript
```

This time we'll only be installing the `jsbundling-rails` gem.

```bash
./bin/rails javascript:install:webpack
```

As usual, the generator will create the `Procfile.dev`, `webpack.config.js`, and more. In this approach to installing TailwindCSS, we'll be installing `postcss` and its plugins to handle all situations.

```bash
yarn add tailwindcss autoprefixer css-loader mini-css-extract-plugin postcss postcss-import postcss-loader
```

That was quite a lot, but you can add or remove these libraries depending on your application's needs. Our approach will be the same: we'll output two files, namely `application.css` and `application.js`, into the `app/assets/builds` directory, so that we can use `stylesheet_link_tag` and `javascript_include_tag` in our `layout/application.html.erb` file.

Go to `app/assets/stylesheets/application.css`, rename the file to `application.scss`, and paste in the following lines of code.

```scss
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';
```

Let's configure our `webpack.config.js`:

```js
const path = require('path')
const webpack = require('webpack')
const MiniCssExtractPlugin = require('mini-css-extract-plugin')

module.exports = {
  mode: 'production',
  entry: {
    application: [
      './app/javascript/application.js',
      './app/assets/stylesheets/application.scss',
    ],
  },
  output: {
    filename: '[name].js',
    path: path.resolve(__dirname, 'app/assets/builds'),
  },
  module: {
    rules: [
      {
        test: /\.s[ac]ss$/i,
        use: [MiniCssExtractPlugin.loader, 'css-loader', 'postcss-loader'],
      },
    ],
  },
  plugins: [
    new MiniCssExtractPlugin({
      filename: '[name].css',
    }),
    new webpack.optimize.LimitChunkCountPlugin({
      maxChunks: 1,
    }),
  ],
}
```

As you can see, we have defined two entry points inside the `application` array. If you have separate assets for your admin page or a marketing page, you can define another array of entry points, and `webpack` will spit out the compiled assets into separate files. Also, take note of the `plugins` array: `MiniCssExtractPlugin` is responsible for compiling the stylesheet and then spitting it out into the defined directory.

Now the final piece of the puzzle is to configure `postcss` and `manifest.js`. Create a file called `postcss.config.js` in the root of the project directory and write down the following lines.

```js
module.exports = {
  plugins: [
    require('postcss-import'),
    require('tailwindcss'),
    require('autoprefixer'),
  ],
}
```

```js
// app/assets/config/manifest.js
//= link_tree ../images
//= link_tree ../builds
```

Now run `./bin/dev` and check the `app/assets/builds` directory for your compiled assets.

It wasn't that hard, was it? I'm excited about where Rails is heading. We started with `hotwire` and now we have new asset bundling options. I wonder what's next. Now is the time to hop into the Rails ecosystem if you haven't already.

Cover picture: https://unsplash.com/photos/cvBBO4PzWPg
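Webpack derives the emitted filenames by substituting each entry name into the `[name]` templates in `output.filename` and the `MiniCssExtractPlugin` options, so a single `application` entry yields `application.js` and `application.css` in `app/assets/builds`. A quick illustration of that mapping (my own helper, not webpack code):

```python
def emitted_files(entries: dict,
                  js_template: str = "[name].js",
                  css_template: str = "[name].css") -> list:
    """List the files a config like the one above would emit,
    one JS and one CSS file per entry point."""
    files = []
    for name in entries:
        files.append(js_template.replace("[name]", name))
        files.append(css_template.replace("[name]", name))
    return files
```

Adding a hypothetical `admin` entry alongside `application` would therefore produce `admin.js` and `admin.css` as well, each loadable with its own `stylesheet_link_tag` / `javascript_include_tag`.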
abeidahmed
821,363
Code splitting in SCSS
Last post I talked about what code splitting was and how you'd do it, and gave an example with CSS....
14,574
2021-09-12T10:11:55
https://dev.to/nicm42/code-splitting-in-scss-4jg
scss
Last post I talked about what code splitting was and how you'd do it, and gave an example with CSS. This post I'm going to talk about how to do it in SCSS.

We're going to stick with our simple website that has three sections: header, main, footer. And we're going to be using a bundler to put all the files into one CSS file.

## Partials

You can't just write a load of SCSS files and stick them together, you have to write partial files. To do this, you name your file starting with an underscore, e.g. `_header.scss` rather than `header.scss`. That's it, there's nothing different you need to do inside the file.

## Index file

You then need a file that tells it where to find all those partial files you used. This one isn't named with an underscore. You can call this file whatever you like, e.g. `index.scss`. Then inside it you have:

```scss
@use 'header';
@use 'main';
@use 'footer';
```

Your files will be called `_header.scss`, `_main.scss`, `_footer.scss`. In your index you don't use the `.scss` extension or the underscore.

Then the bundler will look at the index file, find the three files you've told it to use, stick them together in one file and convert them to CSS.

## More complicated sites

If you have a file with your variables in and another file with your mixins, it gets a little more complicated. I recommend Coder Coder's video; she goes through it all, why it should be done that way and how it works.
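The resolution rule behind `@use 'header'` is mechanical: Sass looks for a file whose name is the given string with a leading underscore and a Sass extension, falling back to the non-partial name. Sketched in Python below as an illustration of the naming convention only; Sass's real resolver also handles index files and load paths.

```python
def partial_candidates(use_name: str) -> list:
    """Filenames a Sass compiler will consider, in order, to satisfy
    `@use '<use_name>'` -- partials (leading underscore) first."""
    return [
        f"_{use_name}.scss",
        f"_{use_name}.sass",
        f"{use_name}.scss",
        f"{use_name}.sass",
    ]
```

So `@use 'header'` finds `_header.scss` without you ever writing the underscore or extension in the index file.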
nicm42
821,381
Java - Storing Data (Part 2: Data Types)
Hey, there👋 Welcome back to Part 2 of Java Learnings Vault (If you haven't checked out Part 1, you...
14,575
2021-09-12T10:59:52
https://dev.to/gauravgupta/java-storing-data-part-2-data-types-2j3h
java, beginners, programming
Hey, there👋 Welcome back to Part 2 of Java Learnings Vault (if you haven't checked out Part 1, you can go right [here](https://rakurai.hashnode.dev/java-the-beginning-part-1-the-theory)). In this blog post, I'll be telling you a bit about the various data types of Java. Before starting with what data types are, we first need to know what variables are.

## What are Variables?

As we know, data is an important part of a program, basically the main ingredient of a program, and we need to store this data somewhere, right? We use variables to store this data.

![int.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631430515713/iTt7WUViJ.png)

The image above is an example of an integer variable (we'll discuss this in a moment) named "number" which stores the value 20.

## What are Data Types?

As data need not always be an integer, i.e. a number, there are different types of data, like a string (ex: HashNode), a decimal number (ex: 3.14), or simply a character (ex: z). For Java to know what type of data we are providing, we use "Data Types". So the above example can simply be given in the format:

![datatype format.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631430957234/v78mxcqrH.png)

A formal definition of data types is:

> Data types specify the different sizes and values that can be stored in the variable.

Data types are broadly categorized into 2 categories:

1. **Primitive Data Types**: The primitive data types include int, char, boolean, float, double, long, etc.
2. **Non-Primitive Data Types**: The non-primitive data types include String, Arrays, etc.

In this blog post, we'll be discussing just the primitive data types.

### Primitive Data Types

The primitive data types are the most basic in Java. Now, let's talk about the different primitive data types.

**Boolean**: This data type is used to store only 2 possible values: true and false. It is usually used as a flag to track simple true/false conditions. The Boolean data type specifies 1 bit of information.

![Boolean.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631438058671/1dJtsW4zg.png)

**Integer**: This data type is used to store 32-bit signed whole numbers (signed numbers include both negative and positive values) in the range -2^31 to 2^31 - 1. The Integer data type takes 4 bytes of space.

![int2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631438746092/Q7olLNY2V.png)

**Byte**: The byte data type can store whole numbers from -128 to 127. This data type takes 1 byte of space (just like its name).

![byte.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631438755479/dz6F-vvjd.png)

**Short**: The short data type can be used to store whole numbers from -32768 to 32767. This data type takes 2 bytes of space.

![short.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631440370303/Y8tG9Q2Fb.png)

**Long**: The long data type is used when we need to store values beyond the range int can support. This data type takes 8 bytes of memory space.

![Long.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631439499500/iLHqiQaVJ.png)

> You may have noticed in the image that the value is followed by an uppercase 'L'. This is done to explicitly state that the number is of the long data type and not an int. The ending need not be an uppercase 'L' but can even be a lowercase 'l'; however, a lowercase 'l' can often be mistaken for '1' (one), so uppercase is usually preferred.

**Float**: The float data type is used to store decimal numbers. Its value range is unlimited. The float data type takes 4 bytes of space. ([source](https://www.javatpoint.com/java-data-types#:~:text=754%20floating%20point.-,Its%20value%20range%20is%20unlimited.,-It%20is%20recommended))

![float.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631440945397/djA4SuMQr.png)

> Note that we follow the value with an 'f' or 'F' to explicitly state it as a floating-point number.
**Double**: The double data type is also used to store decimal numbers. It also has an unlimited value range. ([source](https://www.javatpoint.com/java-data-types#:~:text=754%20floating%20point.-,Its%20value%20range%20is%20unlimited,-.%20The%20double%20data)) But it shouldn't be used to store precise values like currency. The double data type takes up 8 bytes of space. ![double.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631442463882/V9TYMxgUo.png) **Character**: The char data type is used to store a single character. We surround the value of this data type with single quotes (' '). ![char.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631441457809/eXWjs7wvI.png) ### Why so many data types to store whole numbers and decimals?🤔 After seeing these data types, you may be wondering why do we even need so many data types just to store a whole number? Why not just use long to store all whole numbers? The reason is simple, if you look again I have mentioned that each data type takes a different amount of memory space. So, using the long data type to store all whole numbers results in the use of a larger amount of memory. To fully utilize memory we use different data types. The same reason applies to the use of float and double. > In simple terms, why would you want to use a box, the size of a fridge, just to store a small candy? So, we use appropriate-sized containers to store the items to efficiently manage space. ## Summary To sum up data types and their classification you can have a look at the below diagram. ![datatypes.PNG](https://cdn.hashnode.com/res/hashnode/image/upload/v1631443375251/mxei_xOFX.png) To see the amount of memory size each data type takes, you can have a look at the table below. ![datatype size.PNG](https://cdn.hashnode.com/res/hashnode/image/upload/v1631443426272/55ZXg7wpD.png) This marks the end of this blog post. Any feedback and suggestions are greatly appreciated. 
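All the signed integer ranges quoted above follow from one rule: an n-bit two's-complement type stores values from -2^(n-1) to 2^(n-1) - 1. A quick check of that rule, written in Python just to verify the numbers:

```python
def signed_range(bits: int) -> tuple:
    """Min and max value of an n-bit signed two's-complement integer."""
    return (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)

# byte = 8 bits, short = 16, int = 32, long = 64
for name, bits in [("byte", 8), ("short", 16), ("int", 32), ("long", 64)]:
    print(name, signed_range(bits))
```

Running it reproduces exactly the ranges listed for byte (-128 to 127), short (-32768 to 32767) and int (-2^31 to 2^31 - 1).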
If my posts are helping you, do comment and show some love by sharing them with your friends.💖 PS: You can also find me on [HashNode](https://hashnode.com/@Rakurai) Bye, until next time👋
gauravgupta
821,526
What is DB:transaction and how to use it in laravel
Hello, in this blog we are going to see for what purpose and why we use DB:transaction and advantage...
0
2021-09-12T13:42:05
https://dev.to/snehalkadwe/what-is-db-transaction-and-how-to-use-it-in-laravel-1i4m
laravel, beginners, womenintech, codenewbie
Hello, in this blog we are going to see what DB::transaction is used for, why we use it, and the advantages of using it.

**What is a Database Transaction?**

Database transactions are provided by the DB facade to run a set of operations within a database transaction. This gives us the powerful ability to safely perform a set of data-modifying SQL queries, such as insert, update and delete. It is made safe because we can easily roll back all queries made within the transaction at any time.

**Why do we use it?**

Let us consider that we have an application on which an admin can see all the posts and their users, which are associated with each other. When the admin deletes a post/user that is dependent on the other, and any one of the operations fails, we need to roll back the previously successful operation to prevent error-causing issues and send an error message back to the admin.

Let us see an example:

`Issue causing scenario`

```php
// delete user and all of its posts
$user = auth()->user();
$user->posts();
$user->delete();
```

In the above example the user will be deleted but its posts are not, which will cause errors wherever we use a post through its user_id; those posts also remain in the database, now unused and unnecessary. To prevent this we use a DB transaction: if the posts are not deleted we cannot delete their user (because the operations are dependent on each other) and the transaction will be rolled back. We can now use the code below.

```php
// delete user and all of its posts
DB::transaction(function () {
    $user = auth()->user();
    $user->posts()->delete();
    $user->delete();
});
```

This is how we can handle the issue with DB::transaction. If both operations succeed, the transaction is committed and a success message can be returned.

Thank you for reading. :unicorn: :unicorn: :lion: 😍
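The same all-or-nothing behaviour can be demonstrated outside Laravel. The sketch below uses Python's built-in `sqlite3`, where the connection's context manager plays the role of `DB::transaction`; it is an illustration of the concept only, not Laravel code, and the table layout is made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
INSERT INTO users VALUES (1, 'alice');
INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second');
""")

def delete_user_with_posts(con, user_id, fail=False):
    """Delete a user and their posts atomically.

    `with con` commits on success and rolls back on any exception,
    just like the closure passed to DB::transaction. `fail=True`
    simulates the second operation failing mid-transaction.
    """
    try:
        with con:
            con.execute("DELETE FROM posts WHERE user_id = ?", (user_id,))
            if fail:
                raise RuntimeError("simulated failure")
            con.execute("DELETE FROM users WHERE id = ?", (user_id,))
        return True
    except RuntimeError:
        return False
```

After a simulated failure the posts reappear (the delete was rolled back); after a clean run, both the user and the posts are gone together.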
snehalkadwe
821,529
Image Creation, Management, and Registry(Part 2)
Tagging Docker Images Docker tags convey useful information about a specific image...
0
2021-09-24T12:20:03
https://dev.to/dporwal/image-creation-management-and-registry-part-2-514n
docker, linux, devops, microservices
### Tagging Docker Images

Docker tags convey useful information about a specific image version/variant. They are aliases to the ID of your image, which often looks like this: 8f5487c8b942

_Assigning a tag while building an image_

```shell
docker build -t demo:v1 .
```

![tagwhilebuilding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/66oevbn09oar0jucqwfi.png)

_Assigning a tag to an image built without one_

Let's build an image without a tag.

```shell
docker build .
```

![imagewithouttag](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9fdul4imevpfqv4fwok.png)

Now let's assign a tag to the existing untagged image.

```shell
docker tag adc07a47930e demo:v2
```

![tagtonontagimage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9cld0l6hi2x6ppebzl19.png)

_Adding a tag to an already tagged image_

This will create another image with the same IMAGE ID.

```shell
docker tag demo:v2 demo2:v3
```

![existingtagofimage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b256kouqt0fqyxa6f2p1.png)

### Docker Commit

Whenever you make changes inside a container, it can be useful to commit the container's file changes or settings into a new image.

_By default, the container being committed and its processes will be paused while the image is committed._

Syntax:

```shell
docker container commit CONTAINER-ID myimage01
```

Here I created a container containing a context01.txt file in the root directory, committed the container to an image, and then created another container from that image, where we can see the same file/changes present.

![imagecomitting](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8pi3p9ioln0dpubjg8np.png)

We can also define commands while committing images from containers. The ***--change*** option will apply Dockerfile instructions to the image that is created.

Supported Dockerfile instructions:

* CMD | ENTRYPOINT | ENV | EXPOSE
* LABEL | ONBUILD | USER | VOLUME | WORKDIR

Command:

```shell
docker container commit --change='CMD ["ash"]' modified-container
```

![commandcommit](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3tbq76wfliek4up6a5mn.png)

### Docker Image Layers

A Docker image is built up from a series of layers. Each layer represents an instruction in the image's Dockerfile. Here is the best resource I found for the presentation.

![Docker Layer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1z78h3fi0c3xzo6gde3k.png)

The container has a R/W layer on top, which is connected to the base image. Or we can say that one base image is connected to different containers through their writable layers. Let's see some scenarios: we will build a Docker image using a Dockerfile and check its layers.

Dockerfile:

```dockerfile
FROM ubuntu
RUN dd if=/dev/zero of=/root/file1.txt bs=1M count=100
RUN dd if=/dev/zero of=/root/file2.txt bs=1M count=100
```

Command to check the layers of the image, where _layerdemo1_ is the image name:

```shell
docker image history layerdemo1
```

The screenshot below clearly shows that the 2 layers have the same size, as each layer does not include the size of the previous layer. Every layer has its own size according to the command or the changes it makes.

![layerdemo1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5v63hvqjy8xk5e0l00k.png)

Now let's try to remove these files. I found that 2 new layers were created with *0B*, but the image still takes the same *282MB*. So basically here we have 5 layers:

Layer1 -> FROM ubuntu
Layer2 -> RUN dd if=/dev/zero of=/root/file1.txt bs=1M count=100
Layer3 -> RUN dd if=/dev/zero of=/root/file2.txt bs=1M count=100
Layer4 -> RUN rm -f /root/file1.txt
Layer5 -> RUN rm -f /root/file2.txt

Only Layer4 and Layer5 deal with removing the files, but the files are still there in Layer2 and Layer3.
That is the reason the image still has the same size.

Dockerfile:

```dockerfile
FROM ubuntu
RUN dd if=/dev/zero of=/root/file1.txt bs=1M count=100
RUN dd if=/dev/zero of=/root/file2.txt bs=1M count=100
RUN rm -f /root/file1.txt
RUN rm -f /root/file2.txt
```

![layerdemo2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i87878a7umfaimy78ele.png)

When we need to remove files right after creating them, we can use ***&&*** to run both commands in one go. This keeps the image size down.

```dockerfile
FROM ubuntu
RUN dd if=/dev/zero of=/root/file1.txt bs=1M count=100 && rm -f /root/file1.txt
RUN dd if=/dev/zero of=/root/file2.txt bs=1M count=100 && rm -f /root/file2.txt
```

### Managing Images using CLI

The best practice for image commands is the form:

```
docker image <command>
```

* docker pull ubuntu == docker image pull ubuntu
* docker images == docker image ls
* docker image build
* docker image history
* docker image import
* docker image inspect
* docker image load
* docker image prune
* docker image push

![dockerImageCLI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3328zrmkkk4ha6w2dxh.png)

### Inspecting Docker Images

A Docker image contains lots of information, including:

* Creation Date
* Command
* Environment Variables
* Architecture
* OS
* Size

The docker image inspect command allows us to see all the information associated with a Docker image.

Suppose we need to get a particular field from the inspect data, e.g. _Hostname_. We can use:

```shell
docker image inspect ubuntu | grep 'Hostname'
docker image inspect ubuntu --format='{{.Id}}'
```

Certain fields have a parent-child structure. For example, ContainerConfig contains Hostname, Domainname, etc. However, the following command only prints the values, not the keys:

```shell
docker image inspect ubuntu --format='{{.ContainerConfig}}'
```

If you want both the keys and the values:
```shell
docker image inspect ubuntu --format='{{json .ContainerConfig}}'
```

If you just want the hostname value, you can use the command below to filter it out of the inspect data:

```shell
docker image inspect ubuntu --format='{{.ContainerConfig.Hostname}}'
```

![imageInspect](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/huttfah416t7y0oo259a.png)

### Docker Image Prune

The docker image prune command allows us to clean up unused images. By default, the command below will only clean up dangling images.

***Dangling image = an image without tags that is not referenced by any container***

To prune all images that are not referenced by any container, use:

```shell
docker image prune -a
```

If you only want to remove images that have no tag associated, use:

```shell
docker image prune
```

Before pruning we had these images:

![before prune](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p858p1yrkhbqig7x6m57.png)

After running the prune command:

![after prune](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ufr24srx8nbityzmgx42.png)

The images that were not referenced by any container got removed. In the screenshot below, the image without a tag ( _<none>_ ) is a dangling image, but it can't be pruned because it has containers associated with it.

![imageprune](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4k9ilj35snt15x8e1vox.png)

### Flattening Docker Images

Flattening means merging an image's many layers into a single layer. As we know, ubuntu has many layers; let's merge them all into one.
One approach for this is to *export and import through a container*.

Commands:

```shell
docker export myubuntu > myubuntudemo.tar
cat myubuntudemo.tar | docker import - myubuntu:latest
```

![layerscompress](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb563g3p1e8cunng1dyj.png)

### Building Docker Registry

A registry is a stateless, highly scalable server-side application that stores and lets you distribute Docker images. Docker Hub is the simplest example that all of us must have used.

There are various types of registries available, which include:

Docker Registry
Docker Trusted Registry
Private Repository (AWS ECR)
Docker Hub

To push an image to a central registry like Docker Hub, there are three steps:

* 1. Authenticate your Docker client to the Docker registry.

Reference for setting up a Docker registry: https://hub.docker.com/_/registry/

```shell
docker run -d -p 5000:5000 --restart always --name registry registry:2
```

![setupdockerregistry](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0gj55o4pybeg22bgaei5.png)

* 2. Tag the Docker image with the registry repository and an optional image tag.

```shell
docker tag myubuntu:latest localhost:5000/myubuntu
```

* 3. Push the image using the docker push command.

To push a Docker image to ***AWS ECR***, see: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html

```shell
docker push localhost:5000/myubuntu
```

![dockerpush](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5n9z7p4qfptfv4fe0v6h.png)

Now, let's pull the image from the registry. For that, we first untag the registry-tagged image and then delete it locally.

```shell
docker pull localhost:5000/myubuntu
```

![dockerpull](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fozzdim19jz8ym0mukul.png)

### Pushing Docker Image to Docker Hub

I have an account on Docker Hub with one repository.
![dockerhubrepo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/03n9xtmtkrod7d94f12m.png)

First we log in to Docker Hub from the CLI, then create the tag and push it. Before pulling the image back, I removed the tag from the local image and removed all containers associated with it, and then pulled the image from the Docker Hub repository.

```shell
docker login
docker tag busybox deepakporwal95/mydemo:v1
docker push deepakporwal95/mydemo:v1
docker pull deepakporwal95/mydemo:v1
```

***Pushing our custom image to Docker Hub.***

![pushingtodockerhub](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ylb4uxcjgn32jrjenrj.png)

***Pulling the image from Docker Hub.***

![PullingdockerImage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkwsup11b3p6wtwdmn6x.png)

### Searching and Filtering Images from Docker Hub

| Description |Command |
|:-----|:--------:|
| Search for Busybox image | docker search busybox |
| Search for Busybox image with Max Result of 5 | docker search busybox --limit 5 |
| Filter only official images | docker search --filter is-official=true nginx |

Searching for nginx images:

![searchnginx](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9x2ttuiivlbf9329vnp.png)

We can limit the number of results:

![limit search](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1rqcoejp597zwc844wmw.png)

Searching for images on Docker Hub returns many results. We can narrow those results with the three supported filters:

1. stars
2. is-automated
3. is-official

This brings us only the specific results we need.

![niginx official](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/syjoirsygccneej8ty8b.png)

### Moving Images Across Hosts

Suppose we want to send a Docker image to other hosts or instances from an admin box or master server. In this case we save the image as a tar archive, transfer it to the host, and finally load the image on the host.
![movingimageaccroos](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u60rh7uyc7emwd82las5.png)

### Cache in Docker

While building an image, Docker reuses the cache of each layer that has already been built. Here are the Dockerfile and requirements.txt that we will be using for this use case.

***Dockerfile***

```dockerfile
FROM python:3.7-slim-buster
COPY . .
RUN pip install --quiet -r requirements.txt
ENTRYPOINT ["python", "server.py"]
```

***requirements.txt***

```
certifi==2018.8.24
chardet==3.0.4
Click==7.0
cycler==0.10.0
decorator==4.3.0
defusedxml==0.5.0
```

![Requirements](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erjzx6gbjfrly9755ncz.png)

Here are the commands we will use to build the images:

```shell
docker build -t without-cache .
docker build -t with-cache .
```

***without cache***

![withoutcache](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2zo5xp70bl1iylv8ux7.png)

***with cache***

![with cache](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tibdgqoxq7027wqohda.png)

References:
[Official Docker](https://docs.docker.com/get-docker/)
[Udemy Course](https://www.udemy.com/course/docker-certified-associate/)

Credit: [Zeal Vora](https://in.linkedin.com/in/zealvora)

[Prev: Image Creation, Management, and Registry(Part 1)](https://dev.to/dporwal/image-creation-management-and-registry-part-1-pk9)
[Next: Docker Networking](https://dev.to/dporwal/docker-networking-5ef0)
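Addendum on the cache example: the Dockerfile used there copies the whole build context before running pip, so any source change invalidates the pip layer's cache. A common variant (a sketch assuming the same server.py/requirements.txt layout as in the example, not part of the course material) copies the dependency list first:

```dockerfile
# Sketch: copy only requirements.txt first so the pip install layer stays
# cached when only the application source changes.
FROM python:3.7-slim-buster
COPY requirements.txt .
RUN pip install --quiet -r requirements.txt
# Source changes only invalidate the layers from here on.
COPY . .
ENTRYPOINT ["python", "server.py"]
```

With this ordering, rebuilding after editing server.py reuses the cached pip layer instead of reinstalling every dependency.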
dporwal
821,724
SHARINGAN
A post by raedon707
0
2021-09-12T17:42:54
https://dev.to/raedon707/sharingan-5090
codepen, sharingan, css, html
{% codepen https://codepen.io/raedon707/pen/GRmYzMG %}
raedon707
821,741
Unit Testing - Pipes
(To follow along, download the project from Github and use the master branch). In the following...
13,802
2021-09-12T18:31:54
http://angular-tips.com/blog/2021/09/unit-testing-pipes/
angular, testing
(To follow along, download the project from [Github](https://github.com/Foxandxss/datepicker-tutorial) and use the master branch). In the following sections we are going to develop a `Calendar`. It will allow us to see the current month or navigate to a specific date. As mentioned in the introduction, the Calendar is cumbersome to manually test. We need to check the current month to see that we don't have repeated or missing days. We need to check that the algorithm doesn't degrade if we generate a Calendar for a date in the future. We also need to check that February is generated correctly for both leap and non-leap years. Sounds like a waste of time in my book. The first thing we are going to develop is the Calendar's header. We give it a date and we get something like `September of 2021`. Yes, we are speaking about a pipe here. Pipes are the easiest component in Angular and they are also the easiest to test. We are going to follow a TDD approach in these tutorials, so let's open our `calendar.pipe.spec.ts` and code a basic skeleton. File: `libs/calendar/src/calendar.pipe.spec.ts`: ```typescript import { CalendarPipe } from './calendar.pipe'; describe('CalendarPipe', () => { let pipe: CalendarPipe; beforeEach(() => { pipe = new CalendarPipe(); }); }); ``` Here we are importing our pipe and creating an instance before each test. Let's code one test: File: `libs/calendar/src/calendar.pipe.spec.ts`: ```typescript import { CalendarPipe } from './calendar.pipe'; describe('CalendarPipe', () => { let pipe: CalendarPipe; beforeEach(() => { pipe = new CalendarPipe(); }); it('transforms 2021/06 to "June of 2021"', () => { expect(pipe.transform('2021/06')).toBe('June of 2021'); }); }); ``` We defined our API for this pipe in the test. We give it a string with the date and it returns us another string with the date in English. > Note: To run the test, type: `npm run test:all -- --watch` so it runs all tests and enables watch mode. This obviously fails. 
Even when our pipe exists, it just returns null. ![first test fails](http://angular-tips.com/images/posts/testing/pipes/1.png) Let's code it: File: `libs/calendar/src/calendar.pipe.ts`: ```typescript import { Pipe, PipeTransform } from '@angular/core'; @Pipe({ name: 'calendar', }) export class CalendarPipe implements PipeTransform { transform(value: string): string { const dateParts = value.split('/'); const date = new Date(+dateParts[0], +dateParts[1]); return `${date.toLocaleDateString('en-us', { month: 'long'})} of ${date.getFullYear()}`; } } ``` We split the date in two strings that we use to create a new date object. Then we construct a string out of it. The test passes correctly: ![first test still fails](http://angular-tips.com/images/posts/testing/pipes/2.png) Ah, it doesn't. ... Oh, I see, dates in Javascript are 0-based, so if we send `06` it means July and not June. Since it is a good practice to provide an easy API, we are going to modify our code so the API is **not** 0-based. File: `libs/calendar/src/calendar.pipe.ts`: ```typescript transform(value: string): string { const dateParts = value.split('/'); const date = new Date(+dateParts[0], +dateParts[1] - 1); return `${date.toLocaleDateString('en-us', { month: 'long'})} of ${date.getFullYear()}`; } ``` ![first test pass](http://angular-tips.com/images/posts/testing/pipes/3.png) Much better. > Fun fact: This API is quite odd. Having to split a string in two, cast the strings into numbers and then create a date object is, well, not too smart. As we follow through this course, we will see many times that this API is horrible. This is a good proof that well-tested code doesn't imply that it is better or easier to use. It just means that it works as expected. 
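As an aside, the same transform logic can be sketched without relying on locale data at all, by using an explicit month table. This is a hypothetical standalone function for illustration (the `calendarTransform` name and the extra range checks are not part of the article's pipe):

```typescript
// Locale-independent sketch of the transform, using an explicit month table
// instead of toLocaleDateString. Hypothetical helper, not the article's pipe.
const MONTHS = [
  'January', 'February', 'March', 'April', 'May', 'June',
  'July', 'August', 'September', 'October', 'November', 'December',
];

function calendarTransform(value: string): string {
  const parts = value.split('/');
  if (parts.length !== 2) {
    return 'Unknown Date';
  }
  const year = Number(parts[0]);
  const month = Number(parts[1]); // 1-based, matching the API above
  if (!Number.isInteger(year) || !Number.isInteger(month) || month < 1 || month > 12) {
    return 'Unknown Date';
  }
  return `${MONTHS[month - 1]} of ${year}`;
}

console.log(calendarTransform('2021/06')); // June of 2021
```

Dropping the locale dependency makes unit tests deterministic across environments that ship with different ICU data.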
With our first test working, let's use the API in other ways to see if it behaves: File: `libs/calendar/src/calendar.pipe.spec.ts`: ```typescript it('transforms 2040/8 to "August of 2040"', () => { expect(pipe.transform('2040/8')).toBe('August of 2040'); }); ``` ![second test pass](http://angular-tips.com/images/posts/testing/pipes/4.png) Without the extra 0, it still works as expected. What happens if we pass a malformed date? File: `libs/calendar/src/calendar.pipe.spec.ts`: ```typescript it('transforms 2021 to "Unknown date"', () => { expect(pipe.transform('2021')).toBe('Unknown Date'); }); ``` ![third test fail](http://angular-tips.com/images/posts/testing/pipes/5.png) `Invalid Date of NaN`. Yeah, that is what I get when I input my holidays. Jokes aside, let's fix that: File: `libs/calendar/src/calendar.pipe.ts`: ```typescript import { Pipe, PipeTransform } from '@angular/core'; @Pipe({ name: 'calendar' }) export class CalendarPipe implements PipeTransform { transform(value: string): string { const dateParts = value.split('/'); if (dateParts.length !== 2) { return 'Unknown Date'; } const date = new Date(+dateParts[0], +dateParts[1] - 1); return `${date.toLocaleDateString('en-us', { month: 'long' })} of ${date.getFullYear()}`; } } ``` We simply check if the string is malformed and if so, we return an error string. ![all test pass](http://angular-tips.com/images/posts/testing/pipes/6.png) ## Conclusions Testing pipes is really easy. It is not any different from our `Calculator example`. We instantiate it, we write some tests and that is it. Next, we will code our Calendar's heart, the service.
foxandxss
882,145
Laravel 8 with Bootstrap (Part 2)
Before we developed the laravel site with bootstrap (Part 1) Click Here.This time we developing the...
0
2021-10-30T16:21:45
https://dev.to/jsandaruwan/laravel-8-with-bootstrap-part-2-4hg8
Previously, we developed a Laravel site with Bootstrap (Part 1) [Click Here](https://dev.to/jsandaruwan/laravel-8-with-bootstrap-58bm). This time we are developing our website's Livewire table with Bootstrap.

![Image Bg](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bu18szanpu95r1zxjab6.jpeg)

We will use users for this task. That is the easiest way to show you how to implement a Livewire table with Bootstrap. Let's go.

First, open the terminal and go to our project folder:

```
cd bootstrap-app
```

Then install today's magical package for our project:

```
composer require rappasoft/laravel-livewire-tables
```

Now we can create the Livewire component:

```
php artisan make:livewire user-component --inline
```

###### Tip: When creating a Livewire component, if you want only the component class without a Blade file, you can use --inline.

Our Livewire component extends our new magical package.

![Image magical](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fwg9pi6o22m5sfg5hr3.png)

You can see it implements the columns function, which returns our Livewire table columns. Then we build the query function, which returns the query. That's today's magic.

Oh no, we are not done yet.

```
php artisan vendor:publish --provider="Rappasoft\LaravelLivewireTables\LaravelLivewireTablesServiceProvider" --tag=livewire-tables-config

php artisan vendor:publish --provider="Rappasoft\LaravelLivewireTables\LaravelLivewireTablesServiceProvider" --tag=livewire-tables-views

php artisan vendor:publish --provider="Rappasoft\LaravelLivewireTables\LaravelLivewireTablesServiceProvider" --tag=livewire-tables-translations
```

Use these to publish all the package config, views and translations.

In config/livewire-tables.php we can change the CSS framework (tailwind, bootstrap-4, bootstrap-5).

![Image table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jbjlg9bj0i9m5p894lwy.png)

Let's meet again for a brand new tutorial.
jsandaruwan
822,127
How to receive the message by telegram bot sendMessage
broswer input url:api.telegram.org/bot/sendMessage?chat_id=&amp;text=TEST my bot in telegram will...
0
2021-09-13T06:24:38
https://dev.to/stevehan/how-to-receive-the-message-by-telegram-bot-sendmessage-582a
When I enter the URL api.telegram.org/bot<my token>/sendMessage?chat_id=<my chat id>&text=TEST in the browser, my bot in Telegram receives the message "TEST". I want to know if there is another way to receive the content "TEST" (like using a different URL endpoint, or via Python/PHP code).
stevehan
822,330
What is Diode? An Introductory Guide
The diode is a crucial device in electrical engineering. Diodes come in a range of forms and sizes...
0
2021-09-13T08:21:28
https://dev.to/elizabethjones/what-is-diode-an-introductory-guide-4k05
The diode is a crucial device in electrical engineering. Diodes come in a range of forms and sizes, and they can be used for a range of applications. Diodes are typically found in rectifiers, which are used to convert AC to DC. Among the applications are energy conversion, radio modulation, logic gates, temperature monitoring, and current management. This article will go over the basics of everything you need to know about diodes.

## Definition of a Diode

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8jtdkz9bgub3p66ar6yx.png)

A diode is a device that permits current to flow in one direction and offers very high resistance to current travelling in the opposite direction. It has two terminals: an anode and a cathode. The anode is the positive terminal, and the cathode is the negative terminal. As a result, current can flow from the positive terminal to the negative terminal.

## Symbol of a Diode

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eopq9yyvnqm8oylzob54.png)

The arrow depicts the direction of conventional current flow when the diode is forward biased. The anode is attached to the p side, whereas the cathode is attached to the n side.

By doping pentavalent (donor) impurities into one region of a silicon or germanium crystal block and trivalent (acceptor) impurities into another, we can build a basic PN junction diode. These dopings form a PN junction in the middle of the block. A PN junction can also be created using a special fabrication method that joins a p-type and an n-type semiconductor together. The anode is the terminal attached to the p-type side; the cathode is the terminal attached to the n-type side.

## Diode Working

When the two semiconductor regions are joined, a momentary transfer of electric charge occurs, leading to the formation of a depletion layer. Since no voltage is applied to either terminal, this is called the zero-bias state.
In operation, the diode has two additional biasing states:

* Forward biased
* Reverse biased

### Diode as Forward Biased

Although the PN junction formed at the boundary of the two regions is thin, it is strong enough to prevent free electrons from crossing. So, if you apply some external voltage to push these electrons, they will be able to break through this barrier and enter the P-type region.

### Diode as Reverse Biased

When the polarity of the applied voltage is reversed, that is, when the battery's positive terminal is connected to the cathode (-) and the negative terminal to the anode (+), the depletion region expands. The diode is said to be reverse biased when it blocks current from passing through it. It then behaves like an open switch.

## Characteristics of a Diode

For a clearer understanding, both the static and dynamic characteristics of a diode are discussed here. Looking at the diagram below, you won't need an elaborate interpretation, but here are some keywords needed to describe a diode's characteristics.

The main static characteristics of a diode are:

* Forward voltage VF
* Forward current IF
* Reverse voltage VR
* Reverse current IR

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ychdsvjgv1wtx9if46f.png)

The useful region of rectification diodes is indicated by the orange broken lines in the schematic illustration. It is the region within the permissible IF level, and within the breakdown-voltage span in the reverse direction.

The area enclosed by the green broken lines is the usable region for Zener diodes. Other diodes cannot be used in this region, and if it is entered with no IR limiting, device failure can ensue.
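The forward-bias part of the static curve described above is commonly modelled by the ideal (Shockley) diode equation, I = Is·(e^(V/(n·Vt)) − 1). Here is a small Python sketch; the saturation current, ideality factor and thermal voltage below are illustrative assumptions, not values from this article:

```python
import math

def diode_current(v, i_s=1e-12, n=1.0, vt=0.02585):
    """Current (A) through an ideal diode at voltage v (V).

    i_s: saturation current, n: ideality factor, vt: thermal voltage at ~300 K.
    All default values are illustrative assumptions.
    """
    return i_s * (math.exp(v / (n * vt)) - 1.0)

# Forward bias: current grows exponentially with voltage.
print(diode_current(0.6))
# Reverse bias: current saturates near -i_s, matching the tiny IR on the curve.
print(diode_current(-1.0))
```

This reproduces the shape of the static characteristic: an exponential rise in forward bias and a near-constant, tiny leakage current in reverse bias (until breakdown, which this simple model does not capture).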
The two main dynamic characteristics are:

* The reverse recovery time trr
* The static capacitance Ct

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/845wlesbe14qzgthltwc.png)

trr is the time needed, after voltage has been applied in the forward direction and IF is flowing, for the voltage direction to be reversed and IR to return to its steady state, usually zero.

The capacitance Ct is the diode's own capacitance; it behaves similarly to a capacitor.

## Types of Diode

There are several types of diodes, some of which are:

* [Zener Diode](https://www.theengineeringprojects.com/2019/09/what-is-the-zener-diode.html)
It is a component that operates on the Zener breakdown principle. Invented by Clarence Zener in 1934, it works like a regular diode in forward bias, letting current pass. In reverse bias, however, it only conducts once the applied voltage reaches the Zener breakdown value.

* Light Emitting Diode (LED)
These diodes convert electrical energy into light energy. The first batch was manufactured in 1968. In the forward-bias state they undergo electroluminescence.

* Schottky Diode
The junction is formed by joining a semiconductor with a metal, which reduces the forward voltage drop to a minimum. Metals such as chromium, platinum, and tungsten serve as the anode (positive terminal), while the n-type silicon acts as the cathode.

* [Shockley Diode](https://www.allaboutcircuits.com/textbook/semiconductors/chpt-7/shockley-diode/)
It was among the first semiconductor devices created. The Shockley diode comprises four layers and is also known as a PNPN diode. It is similar to a thyristor without a gate terminal. Because there is no trigger input, it can only be made to conduct by raising the forward voltage.
* Tunnel Diode
It is used as a high-speed switch, with switching times in the nanosecond range. Owing to the tunneling effect, it operates very quickly in the microwave frequency range. It is very heavily doped.

* Varactor Diode
Also called varicap diodes, they function like variable capacitors and are mostly operated in the reverse-bias state. They are valued for their ability to vary the capacitance within a circuit while keeping the current flow steady.

* Peltier Diode
Heat is generated at a semiconductor's P-N junction and flows from one side to the other. This flow happens in only one direction, the same as the direction of current flow.

## Applications of a Diode

* In waveform clippers, tools used to reduce the amplitude of a waveform.
* In demodulation, to extract the amplitude of a signal.
* To regulate current flow.
* In rectifiers, which are built using diodes to convert AC signals to DC signals.

## Conclusion

All diodes have their own advantages and uses. Some are widely used in numerous ways across several domains, while others are only used for limited applications.
elizabethjones
822,355
How to Become a Better Programmer
How to become a better programmer? Practice, practice, practice, practice, practice, practice,...
0
2021-09-13T09:24:15
https://dev.to/bigcoder/how-to-become-a-better-programmer-17g0
programming, productivity
How to become a better programmer? Practice, practice, practice, practice, practice, practice, practice, practice. Programming is one of those skills where you can always get better. But how do you get better at an already complicated skill? Most programmers will agree that there is always room for improvement. This is the way it should be. We strive to make ourselves better because we want to be better. Of course, there are also reasons beyond that: we want to get promotions, we want to get raises, we want to get featured on Hacker News, we want our code to be used. No doubt you've heard people repeating the mantra: "Practice makes perfect". ## Master vim Most programmers use their keyboard when they code. But to be “good”, you have to use it much more than that. You should be using your keyboard like a reflex. You should be able to think in code and code in code. You should know the editor you use fluently. vim is my favorite editor. I use it every day, and it has completely changed the way that I program. You can [learn vim](https://vim.is) online, master the keyboard shortcuts and focus on your code. There are a lot of good reasons to use vim. It’s ubiquitous. If you are on a Linux machine or even if you are on OSX, vim is probably already there for you. The resource footprint is low, it’s incredibly configurable, and most importantly, it’s highly efficient. ![vim versus soydev](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wfvu762tcbc3amw28g9b.jpeg) ## Tutorials The best way to learn to code is to code! However, that’s easier said than done. Career paths are littered with pitfalls that get in the way of someone learning how to code. Luckily, there are ways to navigate these hazards and have a clearer path toward success. There are many tutorials and courses online in places like [udemy](https://udemy.com), [youtube](https://youtube.com) and others. 
Of course, watching videos doesn't make you a great programmer, but it can help you learn what's possible. ![deleting code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yqkdrzwsnfq95jd7ddr3.jpeg) ## Side Projects Taking on a side project is a great way to get into new technologies and programming paradigms. Here are some things that I learned through my experiences: 1. Choose something you're already interested in. 2. Automate what would otherwise be manual. 3. Use a lightweight framework. 4. Don't be afraid to experiment. 5. Expect to learn more than you plan for. Take advantage of the time you have to learn new technologies. ## Learn Linux Learn about Linux and how it can make you a better programmer. Linux is an operating system. You can use it to surf the web, listen to music, or watch movies. But you can also do much more with it. Linux is used very often in things like servers, which are powerful computers that store and send information back and forth to other computers. Linux is also used to make smartphones work. Key to understanding how to use Linux is the [Linux shell](https://bsdnerds.org/what-is-linux-shell/) ## Learn multiple languages and technologies Learning new programming languages is the most obvious thing to do, but you also need to learn about things outside of your program. I've known many programmers who are great at one language, but fall short when it comes to others. The reason for this is that there are various parts of programming that are generally common to all languages. If you can understand these concepts, you'll be able to pick up new languages very quickly (and you'll also gain a lot of insight into how programming works). ![programming](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gakxkax7yw41lafzbjkw.jpeg)
bigcoder
822,610
JSON logging in Elixir using a Custom Formatter
Elixir logging is awesome and extensible. You can write your own backend or a Formatter and make...
0
2021-09-13T16:06:29
https://dev.to/darnahsan/logging-json-messages-in-elixir-using-a-custom-formatter-25ld
elixir, json, logging, plug
Elixir logging is awesome and extensible. You can write your own backend or a formatter and make endless combinations with unlimited possibilities. JSON logs are the norm nowadays and there are plenty of ways to get JSON logs in #elixir.

Ink :muscle: is an awesome hex package to make that happen. Add it to your project and all your logs will become JSON. There were 2 shortcomings for me, though:

1. It only writes to stdout. That works in most cases, but when you have to write to a file it hits a wall.
2. Not an Ink issue per se, but a Plug "feature": Plug logs 2 entries per request, the first for the endpoint invoked and the second for the response and execution time.

The 2nd issue makes it difficult to have all the information about a request in a single object, and it also repeats data such as request-id and process. It doesn't even add the user-agent to the request log; to do that you need to add it yourself in the metadata. Fret not, there is a solution to the Plug request logging :sunglasses: plug_json_logger FTW.

{% github bleacherreport/plug_logger_json %}

plug_json_logger solves the 2nd issue, as it combines both Plug log entries and even adds the user agent. It almost solves the 1st one as well: since it works as a formatter, it can be integrated with any backend and used with `LoggerFileBackend` to output to a file. Almost too good, but there is a catch, and it ain't that it uses Poison 😅: if a log message is not from Plug, it gets printed out as plain text with no formatting, no JSON 😱.

Once again, fret not: this is where the power of Logger's custom formatters comes into play. You can define a formatter that formats your non-JSON messages before they are output to your backend. Here is a quick and dirty formatter I hacked together after reading the timber.io blog (link in the gist).
{% gist https://gist.github.com/ahsandar/5e38031b494730658b8d9f1d3271d7f2 %} `plug` request log ``` {"function":"log_message/4","level":"info","line":119,"message":{"client_ip":"X.X.X.X","client_version":"okhttp/4.8.1","date_time":"2021-09-13T11:26:15Z","duration":0.251,"handler":"N/A","method":"POST","params":{"glob":[]},"path":"/query","request_id":null,"status":200},"module":"Elixir.Plug.LoggerJSON","timestamp":"2021-09-13T11:26:15.794Z"} ``` `non plug` log ``` {"function":"ip/0","level":"info","line":17,"message":{"msg":"Ipify detected IP: X.X.X.X"},"module":"Elixir.Services.Ipify.Api","timestamp":"2021-09-13T11:27:49.324Z"} ``` And you make this all happen by adding the config below: ``` use Mix.Config config :plug_logger_json, filtered_keys: ["password", "authorization"], suppressed_keys: ["api_version", "log_type"] config :logger, backends: [{LoggerFileBackend, :info_log}], utc_log: true, handle_otp_reports: true # configuration for the {LoggerFileBackend, :info_log} backend config :logger, :info_log, format: {Formatter.Log, :format}, metadata: :all, path: "/src/log/app.log", level: :info ``` The Elixir Logger is just amazing and super flexible, with tons of formatters and backends available, and if you don't find something satisfying your needs, just hack one out. Hope this gave you enough info to hack your own solution.
darnahsan
822,623
To GraphQL or not to GraphQL? Pros and Cons
So you want to know if GraphQL is a great fit for your project? Using GraphQL can be a consequential...
0
2021-09-13T15:45:20
https://slicknode.com/blog/graphql-or-not-graphql-pros-and-cons/
graphql, architecture, security
So you want to know if GraphQL is a great fit for your project? Using GraphQL can be a consequential decision, for better or worse. If you have never used GraphQL in a complex project, how are you supposed to know if you are going to regret it later or celebrate yourself for making the right call? It can be hard to see through all the hype articles like *"GraphQL is the successor of REST"* or *"GraphQL is overkill for most projects"* and understand what GraphQL would mean for your particular project.

I want to save you some trouble with this post and put everything I have learned about GraphQL in the past years into one blog post that can help you make a more informed decision.

Can you trust my opinion? Maybe; always make your own judgment! For reference, here are a few things I have worked on in the GraphQL ecosystem:

- I have built thousands of GraphQL APIs in the process of creating [Slicknode](https://slicknode.com), a framework and headless CMS to rapidly create GraphQL APIs
- I have written [graphql-query-complexity](https://github.com/slicknode/graphql-query-complexity), the most popular open-source library to secure GraphQL servers written in NodeJS against DoS attacks. It is also used by some of the big open-source frameworks like TypeGraphQL and NestJS
- I have migrated a 12-year-old codebase to GraphQL
- I have ported a [library to create Relay compatible GraphQL APIs](https://github.com/ivome/graphql-relay-php) from JavaScript to PHP
- I have removed GraphQL from a service where it turned out to be not a great fit

So keep in mind that I am a huge GraphQL fan, but I'll try my best to talk about the tradeoffs as well.

## Is GraphQL Just Hype?

At this point, it is probably safe to say that GraphQL is here to stay. In the years following its open-source release in 2015, it has gained an incredible amount of traction. It has been adopted by a large number of Fortune 500 companies, and those applications don't tend to disappear overnight.
GraphQL has also sparked the interest of investors: Businesses that build products around GraphQL have raised hundreds of millions of dollars in venture capital. There are now GraphQL clients and servers available in all major programming languages, and the GraphQL library for JavaScript alone currently has 5.5 million weekly downloads. The tooling around GraphQL has been a joy to work with and gives you a lot of options to get creative and solve real-world problems.

## What is GraphQL?

[The official website](https://graphql.org/) describes GraphQL as "A query language for your API". The process is explained as "Describe your data ➜ Ask for what you want ➜ Get predictable results".

I like to think of it as **the SQL for APIs**. When you work with SQL, you write your SQL query in a declarative way, describe what data you want to load or change, and then let the SQL server figure out the best way to perform the actual operation. You don't care what is happening under the hood, like which blocks are read from disk, caches, or networks; this is all nicely hidden from you. When there is a new release of your SQL server available, you can safely update the database and profit from all the performance improvements, etc. without having to update a single SQL query in your codebase. If new features were added, you can use those features in new parts of your application, but you don't have to update existing parts of your application, as the old queries will still work the same.

This is very similar to how GraphQL works, only for APIs: You write a GraphQL query in a declarative way, send it to your server, and the server figures out the best way to load that data. It is then returned in the exact format that you specified in your query. Now, **if you need different data, you just change the query, NOT the server**! This gives API consumers unprecedented powers. You can send any valid query to your GraphQL server, and it will on-demand return the correct response.
It's like having unlimited REST API endpoints without changing a single line of code in your backend.

A good analogy: **if REST is like ordering a la carte in a restaurant, GraphQL is the all-you-can-eat buffet**. You can mix and match any dish that is served at the buffet on a single plate and take as much or as little as you want. With REST, the dish always has the same portion size; you first have to ask the waiter if they could combine multiple dishes, they have to check back with the kitchen, and they might come back telling you that they don't do that, thinking you are an annoying customer for not respecting their menu.

Also, when you want to add new capabilities to your GraphQL server, you just add more fields and types to your schema. All the existing GraphQL queries in your client applications keep working without requiring any changes. This enables you to evolve your APIs and client applications independently, but we will look at that in more detail later.

If you are not that familiar with GraphQL yet, I recommend checking out the [introduction on the official GraphQL website](https://graphql.org/learn/), so you can see some GraphQL queries and schema definitions in action.

## Why GraphQL?

So why was GraphQL created in the first place and what problems was it supposed to solve? GraphQL was initially created at Facebook to solve some major challenges with the API communication between the servers and their plethora of client applications for all kinds of device sizes, operating systems, web apps, etc. As you can imagine, evolving APIs at that scale is extremely hard, especially if you are using a REST-style architecture. There is [a documentary about GraphQL](https://www.youtube.com/watch?v=783ccP__No8) where the GraphQL creators and other early adopters talk about why they created and adopted GraphQL.

Let's look at the most important reasons in some more detail.
## Overfetching

A problem with REST APIs is that you have a predefined response format that is pretty inflexible. One URL always returns the requested resource in its entirety. Sure, there are ways to mitigate that problem, like passing the fields to include in the response via parameters, etc., but this is not standardized, therefore needs documentation, and it has to be implemented in the backend, which adds unnecessary complexity.

This can become a problem over time, especially as you evolve your API and add new features or deprecate obsolete data. For example: Let's say you want to add a new field with the current online status to the user object returned by the REST endpoint of the user:

```
GET https://example.com/users/2

{
  "username": "ivo"
}
```

You could simply add the field to the response, which then becomes:

```json
{
  "username": "ivo",
  "isOnline": true
}
```

This is great and works. However, the problem with this approach is that now the online status is sent to every single client application even if they don't need it. Do this a few times and you end up with bloated, mostly useless responses that you send to every client, consuming their bandwidth and making your application slower over time.

## Underfetching

Another challenge you might be familiar with when building rich user interfaces is something called underfetching: You don't get all the data you need to display in a single API request and have to call the API again to load related data. This adds additional server roundtrips, increases the latency, and can lead to poor user experience.
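Both problems boil down to who decides the response shape. With GraphQL the client decides, via field selection; the core of that idea can be mimicked in a few lines of plain JavaScript (no GraphQL library involved; the record and field names here are made up for illustration):

```javascript
// The full user record as the server stores it (illustrative data).
const user = { username: "ivo", isOnline: true, email: "ivo@example.com" };

// GraphQL-style field selection in miniature: the client names the fields
// it wants and receives exactly those, nothing more.
function select(record, fields) {
  return Object.fromEntries(fields.map((field) => [field, record[field]]));
}

console.log(select(user, ["username"]));             // old client: no isOnline
console.log(select(user, ["username", "isOnline"])); // new client: opts in
```

A client that never asks for `isOnline` is unaffected when the field is added, which is exactly why adding fields is safe and overfetching goes away.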
Let's look at a very simple example: Say you want to create the backend for a blog post detail page where you want to display the following data:

```json
{
  "post": {
    "title": "GraphQL is awesome!",
    "text": "GraphQL solves so many problems...",
    "author": {
      "name": "Ivo"
    },
    "comments": [
      {
        "text": "100% !!!",
        "author": {
          "name": "John"
        }
      },
      {
        "text": "Couldn't agree more",
        "author": {
          "name": "Jane"
        }
      }
    ]
  }
}
```

When you want to implement that with REST, you have to ask a few questions:

- Do you include the author in the response data of the post? Why not put it in a dedicated `/user/23` API resource to avoid redundancy and only return a reference in the post response?
- What about comments? Do you return them as well?
- What about the author of the comments?
- Where does it end?

To keep it [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) you might implement the response like this:

```json
{
  "post": {
    "title": "GraphQL is awesome!",
    "text": "GraphQL solves so many problems...",
    "author": "/user/1",
    "comments": "/comments?post=345"
  }
}
```

When the client gets this response, it does not contain all the data that we need. We have to make additional requests to fetch the author and the comments, which adds additional latency. We could create a custom API endpoint where we return the data exactly in the shape that we need. But that might add redundancy in our backend and make the API less flexible (a mobile application might not want to load the comments initially).

GraphQL eliminates this problem by giving frontend developers the ability to request exactly the data that they need on-demand and let the GraphQL server do the heavy lifting of loading (or not loading) references automatically.

## Feature Deprecation

As your project evolves and requirements change, you might want to deprecate a feature and remove it from your API to not have to maintain obsolete or redundant services.
This can be a major challenge with a REST architecture, especially in more complex projects. Do you release a new version for every feature that is removed? How do you make sure a removed feature is not still in use by some client application? When can you shut down the old API versions? In a lot of cases, it is easier and cheaper to just keep the old feature in place than to go through the significant engineering effort of implementing a solid migration strategy. The problem is, you are forcing all this useless data through the bandwidth-limited connections of your users, with no easy way out, or you have to maintain multiple versions of your API. GraphQL has a built-in way to solve this problem. You can just mark a field as deprecated and add information on how to migrate client applications. This information is available to all client applications in a standardized way, you can run a script in your CI pipeline that automatically checks if deprecated fields are used and migrate your client applications accordingly. As soon as all deprecation notices are fixed in your client applications, you can safely remove the field from your GraphQL API. The benefits are obvious: - No need to create, run and maintain multiple versions of your API. Just one GraphQL API that evolves with your project over time. - An automated and self-documenting way to manage feature deprecations. ## Decoupling Frontend & Backend Development By introducing GraphQL in a project, you are eliminating an enormous amount of friction by completely decoupling frontend and backend development. Any number of frontend applications can be developed and changed completely independent of the backend. There is no need to create or update a specific REST endpoint for a particular view, the power shifts to the frontend developers as they can just request data on demand. 
To get back to the restaurant analogy: The chefs can just place a new dish on the buffet and users can come mix and match it with anything else they pick up when they fill their plate. There is no coordination needed. Compared to the REST-style a-la-carte ordering, changing the menu to combine multiple dishes requires coordination with the chef, and possibly other restaurant staff.

As an example of what this can mean in practice: In one project I was working on, a team created an entire mobile application without changing anything in the GraphQL backend.

## The GraphQL Killer-Feature

There is one feature of GraphQL that gets way too little attention and is oftentimes not even mentioned in articles examining the pros and cons of GraphQL. In my opinion, this is **the killer feature that makes GraphQL invaluable**, especially in larger projects.

All the GraphQL advantages that we have looked at so far can somehow be worked around, with some ugly compromises to be fair, but nothing was a show stopper. However, I am not aware of any widely adopted technology that even comes close to how GraphQL solves this particular problem:

**Co-Location Of Data Dependencies**

Let's look at an example to illustrate the problem. We have a React component somewhere in a codebase where we display the user name:

```javascript
export function UserName({user}) {
  return (
    <span>{user.username}</span>
  );
}
```

Now we want to display the online status next to the username. What sounds like a simple task can quickly escalate into a nightmare. It raises all kinds of questions:

- Is that online status even available in the user object?
- How do you know or find out? Is there documentation available?
- If the data is coming from an API, how do you make sure it is included in every single API endpoint that returns the user object for the component?
You need a vast amount of knowledge that is not readily available where you want to implement the feature, and also not necessarily related to the task at hand. For a developer that is new to the codebase, this can be particularly problematic. You might have to identify and change lots of API endpoints and include the online status in all the API responses that include the user object. This becomes a complete non-issue with GraphQL APIs because you can co-locate your data dependencies with your frontend components using GraphQL fragments: ```javascript export const UserNameFragments = { user: gql` fragment UserName on User { username #Just add the online status to the fragment here: isOnline } ` } export function UserName({user}) { return ( <span>{user.username} ({user.isOnline ? "online" : "offline"})</span> ); } ``` You define the data dependencies right in your UI components with GraphQL fragments. The parent components can then include those fragments in their GraphQL queries that load the data, and you have a guarantee that the `UserName` component receives the online status, no matter where in your application it is located. This makes it incredibly easy to extend your application no matter how complex it is. You don't have to have any knowledge about the rest of the codebase and can confidently implement a feature without leaving the component. With the right tooling, you even get autocomplete functionality, type validation, and documentation in your IDE. ## Performance & Security With great power comes great attack surface. By exposing a GraphQL API to the internet, you are giving clients an enormous amount of power that can have big implications with regards to security and performance. Clients have access to all your data and functionality at once, on-demand. This significantly increases your attack surface and can easily be exploited if not considered from the start. Let's look at a few problematic queries... 
Load a ridiculous amount of data:

```graphql
query LotsOfPosts {
  posts(first: 100000000) {
    title
  }
}
```

Load deeply nested data that requires millions of DB queries:

```graphql
query DeeplyNestedData {
  user(id: 2) {
    name
    friends {
      name
      friends {
        name
        friends {
          name
        }
      }
    }
  }
}
```

Launch a brute force attack in a single request:

```graphql
mutation BruteForcePassword {
  attempt1: login(email: "victim@example.com", password: "a")
  attempt2: login(email: "victim@example.com", password: "b")
  # ...
  attempt100000: login(email: "victim@example.com", password: "xxxxx")
}
```

The problem is that those queries are not prevented by commonly available rate limiters. You can send a single request to a GraphQL server that completely overwhelms the server. To prevent such queries to GraphQL APIs, I wrote [graphql-query-complexity](https://github.com/slicknode/graphql-query-complexity), an extensible open-source library that detects and rejects pathological queries before they consume too many resources on the server. You can assign each field a complexity value, and queries that exceed a threshold will be rejected. In [Slicknode](https://slicknode.com) this protection is added automatically based on the number of nodes that are being returned.

Another common approach is to register an allow-list of queries that are permitted and reject all other queries to the API. This might be more granular and secure than dynamic rules, but it limits the flexibility of your GraphQL API and you have to keep your query registry up to date with all your client applications, which requires additional setup and maintenance.

Optimizing the requests to your internal data stores can be another challenge. The responsibility to optimize the data loading process lies 100% with the GraphQL API and can't easily be offloaded to a CDN or reverse proxy.
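Back to query protection for a moment: the dynamic-rules idea can be illustrated with a toy depth guard in plain JavaScript. This is purely illustrative; real servers (and graphql-query-complexity) walk the parsed GraphQL AST and use per-field complexity values, while this sketch just models the query as a nested object of requested fields:

```javascript
// Maximum nesting depth of a field-selection tree (illustrative sketch).
function depth(selection) {
  let max = 0;
  for (const value of Object.values(selection)) {
    const d = value && typeof value === "object" ? 1 + depth(value) : 1;
    if (d > max) max = d;
  }
  return max;
}

// Reject a query before executing it if it nests too deeply.
function assertDepth(selection, limit) {
  if (depth(selection) > limit) {
    throw new Error(`Query depth ${depth(selection)} exceeds limit ${limit}`);
  }
}

// The "friends of friends of friends" query from above, as a field tree:
const deeplyNested = {
  user: {
    name: true,
    friends: { name: true, friends: { name: true, friends: { name: true } } },
  },
};

assertDepth({ user: { name: true } }, 5); // fine, executes
// assertDepth(deeplyNested, 3);          // would throw before hitting the DB
```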
With a REST API, you have very good control over how many and what database queries are executed, thanks to the limited scope of the REST endpoint. With GraphQL a client can request any number of objects that might have to be loaded from different DB tables. [Slicknode](https://slicknode.com) automatically combines multiple requested objects into a single database query by analyzing the GraphQL query and generating the SQL query dynamically, but your average ORM is probably not equipped to do that out of the box. This is also related to the N+1 problem, where nested queries make the number of database requests explode. If you want to learn more about this problem, I recommend [this video](https://www.youtube.com/watch?v=OQTnXNCDywA) and checking out [dataloader](https://github.com/graphql/dataloader), a library released by Facebook to help with batching queries and solving this problem. ## Caching vs GraphQL Caching is always a hard challenge and that can be especially true for GraphQL servers. A lot of the tools we usually rely on don't work well with GraphQL out of the box. Take CDNs for example. The most common way to deploy GraphQL APIs is via an HTTP server. You then send your GraphQL requests via a POST request to the API and retrieve your response. The problem is that POST requests are not cached by default in the most common CDNs. You could potentially also send your requests via a GET request, but you will quickly hit the request size limit as GraphQL queries can get huge. If you are using a query allow list, you can send a query ID or hash to the server instead of the full query to get around this limit. Cache invalidation can also be more challenging with GraphQL APIs. All the queries are usually served via the same URL, so invalidating resources by URL is off the table. Furthermore, one data object can be included in any number of cached responses. 
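One way out is to tag each cached response with the objects it contains and invalidate by tag instead of by URL. A toy sketch in plain JavaScript (illustrative only; the `Type:id` tag format and the class name are made up, not any particular library's API):

```javascript
// Toy tag-indexed response cache: invalidation happens per object tag,
// not per URL, so one mutation can evict every response containing it.
class TaggedCache {
  constructor() {
    this.responses = new Map(); // cacheKey -> cached response
    this.byTag = new Map();     // tag (e.g. "User:2") -> Set of cacheKeys
  }

  store(key, response, tags) {
    this.responses.set(key, response);
    for (const tag of tags) {
      if (!this.byTag.has(tag)) this.byTag.set(tag, new Set());
      this.byTag.get(tag).add(key);
    }
  }

  get(key) {
    return this.responses.get(key);
  }

  // Drop every cached response that contained the given object.
  invalidateTag(tag) {
    for (const key of this.byTag.get(tag) ?? []) {
      this.responses.delete(key);
    }
    this.byTag.delete(tag);
  }
}

const cache = new TaggedCache();
cache.store("query1", { user: { name: "Ivo" } }, ["User:2"]);
cache.store("query2", { post: { authorId: 2 } }, ["Post:5", "User:2"]);
cache.invalidateTag("User:2"); // both responses are evicted
```

After a mutation touches user 2, invalidating `User:2` drops every cached response that contained that user, no matter which query produced it.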
One strategy to solve this is to attach cache tags to responses and then later invalidate responses based on those tags instead of URLs. This is the approach that is used in the [Slicknode](https://slicknode.com) Cloud to cache GraphQL responses around the globe. A great way to add caching to your GraphQL APIs is to add a layer behind the GraphQL API itself and implement it in front of your data sources. Combine this with the dataloader mentioned in "Performance & Security" and you can fully customize the cache behavior. ## Single Point of Failure One thing to keep in mind is that your GraphQL API becomes your API Gateway replacement as the way to access all your functionality, data, and services. If the GraphQL API goes down, your entire application is offline. This is not that different from a REST architecture, but good to know that the GraphQL API will be a critical part of your infrastructure and treat it accordingly. ## Best (and worst) Use-Cases for GraphQL If you have a hammer, every problem looks like a nail. It is really easy to fall in love with all the benefits that GraphQL provides. It makes your life as a frontend developer so much easier compared to previous technologies. But some types of applications are more suitable for GraphQL than others. I have burnt my fingers too and have removed GraphQL from some parts of an application, while for other applications I would highly recommend it. In my experience, the best use-case for GraphQL is the purpose it was originally built for: Providing the data and functionality for rich user interfaces. A central place that contains all your data and functionality in one unified GraphQL API, easily accessible for any number of teams, always up to date, and self-documenting. It dramatically reduces the complexity of your frontend code. 
Where you previously had to implement lots of API calls with all the complexity that asynchronous functionality entails (loading states, error handling, etc.), you can now simply define your data dependencies and let GraphQL take care of the rest. You can validate all your API calls at build time and implement solutions with end-to-end type safety. Even though GraphQL is awesome to use as a single developer, the bigger your application and team become, the more you'll enjoy working with GraphQL. So when should you consider alternatives to GraphQL? I would consider alternatives for applications where you want to physically separate different services. GraphQL is great for combining a lot of functionality in one place. This might be a problem if you want to isolate certain services for example at the network or hardware level and only make them accessible to a subset of services. You might be better off looking at other architectures. ## Conclusion GraphQL is an awesome addition to a developer's toolbelt, especially for powering user interfaces. It is a joy to work with and I am excited to see the GraphQL ecosystem gaining more and more traction. To make it easier for developers to build GraphQL APIs I created [Slicknode](https://slicknode.com). It automates all the hard parts and gets you up and running in minutes. Come join the awesome GraphQL community! I also have some pretty big news to share about [Slicknode](https://slicknode.com) soon, so make sure to [follow me on Twitter](https://twitter.com/intent/user?screen_name=ivomeissner) and subscribe to the newsletter so you'll be the first to know.
ivomeissner
822,684
Web Frameworks
COMMONALITIES They are component-based JavaScript frameworks. StencilJS, however, is just a...
0
2021-09-13T16:58:17
https://dev.to/yuellian/web-frameworks-53m1
COMMONALITIES

1. They are component-based JavaScript frameworks. StencilJS, however, is just a toolchain that creates projects.
2. VueJS, React, and Stencil use a virtual DOM, while Angular uses the browser's DOM.
3. StencilJS and Angular both use TypeScript files.

DUPLICATE/OVERLAPPING

1. package.json file
2. All have a source directory

EASIEST DX

React. It offers a lot of flexibility and is easy to test. It also has a strong community, so if a coder needs help, there is a large group of developers and supporters who might've already answered their question online.

PREFERENCE

I like the Angular framework because it is a full-fledged solution; however, it is best suited for large-scale applications. I would prefer using React for our projects because it is really flexible and great for lightweight apps. It is easier to get started with and has a relatively small file size (100 KB).

ORGANIZATION'S BOILERPLATE

https://github.com/IST402/boilerpoint

MY BOILERPLATE FOR ANGULAR

https://github.com/yuellian/boilerpoint
yuellian
823,919
CodePen in VS Code
CodePen & VSCode CodePen is a useful and liberating online code editor for...
0
2021-09-14T22:48:52
https://dev.to/rrodrigues345/codepen-no-vs-code-1mcj
codepen, vscode, alura, imersaodev
## CodePen & VSCode

CodePen is a useful and liberating online code editor for developers of any skill level, and particularly empowering for people who are learning to program. Using just your browser, it lets you write code, mainly in front-end languages like HTML, CSS and JavaScript, and see the results as you build. Beyond that, it is above all a social development environment, since it provides an easy way to share your project with the dev community.

However, in some cases you may want to export your project to another source-code editor, such as Visual Studio Code, developed by Microsoft, and practice in a different editor.

## First Steps

### Create a folder to store the project on your computer:

![project folder](https://github.com/rrodrigues345/rrodrigues345.github.io/raw/main/codepen-to-vscode/02-criar-pasta-projeto.png)

### Open the folder with VS Code:

Right-click the folder and choose the option "Open with --> Visual Studio Code".

![open with vscode](https://github.com/rrodrigues345/rrodrigues345.github.io/raw/main/codepen-to-vscode/01-abrir-com-vscode.png)

### Create the files to import the content

VS Code will open the project folder, but there are no files in it yet.

![create files](https://github.com/rrodrigues345/rrodrigues345.github.io/raw/main/codepen-to-vscode/03-criar-arquivos.png)

So let's create 3 files, corresponding to the 3 CodePen panels containing the HTML, the CSS and the JavaScript:

- index.html;
- style.css;
- app.js;

Once that is done, paste the CodePen content into the corresponding file. On CodePen the content was kept separate; in VS Code we need to reference the contents so they are linked together. We do this by editing the index.html file. In index.html, we edit the `<head>` and add the path to the `style.css` file.

In the `<body>` we add the reference to `app.js`. See the image, where I highlighted the code in yellow:

![edit html](https://github.com/rrodrigues345/rrodrigues345.github.io/raw/main/codepen-to-vscode/04-editar-html-01.png)

Check the result by opening the index.html file in your browser. In this example, we will use Google Chrome:

![open with google](https://github.com/rrodrigues345/rrodrigues345.github.io/raw/main/codepen-to-vscode/05-abrir-com-google.png)

See how nicely it opens in Google Chrome:

![Viewing the site in Google Chrome](https://github.com/rrodrigues345/rrodrigues345.github.io/raw/main/codepen-to-vscode/05-abrir-com-google2.png)

Now you can practice in VS Code as well and later export your projects back to CodePen. It's up to you! This tip is also useful when you will be without internet for a while; nothing better than having your files available *offline* =)

## Be my pocket friend!

- [twitter](https://twitter.com/rrodrigues345)
- [instagram](https://www.instagram.com/rrodrigues345/)

## References:

- https://www.alura.com.br/artigos/codepen-o-que-e-e-como-usar
- https://www.youtube.com/watch?v=xvkuNF_8Coc
- https://www.youtube.com/watch?v=j6S1Izj5mqM
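The wiring described above amounts to one `<link>` tag in the `<head>` for the stylesheet and one `<script>` tag at the end of the `<body>` for the JavaScript. A minimal `index.html` might look like this (illustrative sketch; the title is a placeholder, and the markup copied from CodePen goes inside `<body>`):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>My CodePen project</title>
    <!-- Stylesheet pasted from the CSS panel of CodePen -->
    <link rel="stylesheet" href="style.css" />
  </head>
  <body>
    <!-- HTML pasted from the HTML panel of CodePen goes here -->

    <!-- JavaScript pasted from the JS panel of CodePen -->
    <script src="app.js"></script>
  </body>
</html>
```

Placing the `<script>` just before `</body>` means the DOM already exists when the script runs, which mirrors how CodePen executes the JS panel.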
rrodrigues345
829,343
Learning Javascript
Hey everyone i am learning javascript , looking for open source contribution . Target of FOSS ASIA .
0
2021-09-17T08:05:03
https://dev.to/arq0017/learning-javascript-1l73
Hey everyone, I am learning JavaScript and looking for open source contributions. My target is FOSS ASIA.
arq0017
841,239
Using Lottie animations with React
A Lottie Animation is a lightweight, fast animation file with high-quality motion; faster than a .gif...
0
2021-09-26T16:57:45
https://dev.to/_tnkshcc/lottie-animations-react-3190
A Lottie Animation is a lightweight, fast animation file with high-quality motion: faster than a .gif, a stack of stacked PNGs, or even video, which makes for a large file. The modern web demands fast page loads while still keeping visual flair, and Lottie animations answer exactly that need.

This article will not cover creating a Lottie animation file; instead it covers using a Lottie file with React. That means, of course, that you need a Lottie file (a .json file) already created. Lottie provides an online tool for this, the Lottie Editor, as an alternative to creating one in Adobe After Effects.

![lottie editor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxu2aj07p0e6faba29w4.png)

[Create a Lottie Animation file with the Lottie Editor](https://lottiefiles.com/editor)

Once we have created our .json file, its contents are in JSON format. The other important piece is rendering our .json file as an animation on the page; for that we will use a library called react-lottie.

![react-lottie](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m7m7meuz86gd8ee7w274.png)

Start a React project with create-react-app; in this example the app is called app-use-lottie.

> If you have never used create-react-app, you can read my earlier article: [Starting a project with React TypeScript + Tailwind CSS](https://dev.to/_tnkshcc/react-typescript-tailwind-css-4j59)

Install react-lottie, together with its TypeScript type definitions, with:

```
npm i react-lottie
npm i --save-dev @types/react-lottie
```

![@types/react-lottie](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n68tv3wkc7npsxk8jkqg.png)

Import Lottie from react-lottie along with our .json file, here arrow-down.json:

```javascript
import Lottie from "react-lottie";
import LottieExample from "./arrow-down.json";
```

![import](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qlm7ysqbd2b6zkvsyat8.png)

Add the part that renders the Lottie file:

* options: the settings that control how the Lottie behaves
* loop: a Boolean that controls whether the animation repeats
* autoplay: a Boolean that controls whether the animation starts automatically
* animationData: the imported .json file
* height: the height of the rendered animation
* width: the width of the rendered animation

```javascript
<Lottie
  options={{
    loop: true,
    autoplay: true,
    animationData: LottieExample,
  }}
  height={400}
  width={400}
/>
```

If nothing goes wrong, the animation rendered from the Lottie file we created will appear:

![Animation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jc0ied91atzbhuu43yrk.png)
_tnkshcc
842,736
Favicon for Next.js and TypeScript
I'm still learning Next.js and after having hard time to find out of the box solution to create...
0
2021-09-27T20:36:19
https://dev.to/jcubic/favicon-for-next-js-and-typescript-9gk
react, nextjs, javascript, typescript
---
title: Favicon for Next.js and TypeScript
published: true
description:
tags: reactJS, NextJS, JavaScript, TypeScript
//cover_image: https://direct_url_to_image.jpg
---

I'm still learning Next.js, and after having a hard time finding an out-of-the-box solution to create a favicon that works everywhere, and after doing it myself, I've decided to write a quick article about it.

## What is a favicon

A favicon is a small icon that is usually shown in the browser tab next to the title of the site. Some operating systems display it differently, like Android or macOS.

## Best way to get a favicon that will just work

The best way to get a favicon that works on every device is to use a generator that does it for you. I personally use [Real Favicon Generator](https://realfavicongenerator.net/); I trust that it creates a favicon for every possible use case. To generate a favicon, you just upload an image. If you want the best possible favicon, you can generate separate ones for macOS and Windows, which often need different backgrounds. I personally always create the favicon (which is just based on the logo) in a vector format (using the Free Libre Open Source program [Inkscape](https://inkscape.org/)). When you generate the favicon, remember to use the `/favicon` directory.

## Where to put the files

The files should be extracted into the `/public/` directory of the Next.js project, so the files will be in `/public/favicon/`. If you didn't use the `/favicon` path when creating the icons, you will need to create it.

## Next.js component

Now you need to add the favicon that you've generated with the Favicon Generator.
The best idea is to create a component called `<Favicon/>` that you can use in the rest of the application:

```tsx
// /components/Favicon.tsx
const Favicon = (): JSX.Element => {
  return (
    <>
      {/* copy paste the html from generator */}
    </>
  );
}

export default Favicon;
```

When you copy-paste the HTML, it will look similar to this:

```tsx
// /components/Favicon.tsx
type FaviconProps = {
  name: string;
};

const Favicon = ({ name }: FaviconProps): JSX.Element => {
  return (
    <>
      <link rel="apple-touch-icon" sizes="180x180" href="/favicon/apple-touch-icon.png"/>
      <link rel="icon" type="image/png" sizes="32x32" href="/favicon/favicon-32x32.png"/>
      <link rel="icon" type="image/png" sizes="16x16" href="/favicon/favicon-16x16.png"/>
      <link rel="manifest" href="/favicon/site.webmanifest"/>
      <link rel="mask-icon" href="/favicon/safari-pinned-tab.svg" color="#5bbad5"/>
      <meta name="apple-mobile-web-app-title" content="Snippit"/>
      <meta name="application-name" content={name}/>
      <meta name="msapplication-TileColor" content="#ffc40d"/>
      <meta name="theme-color" content="#ffffff"/>
    </>
  );
}

export default Favicon;
```

You may need to close each tag so that it is proper JSX.

## Using component

To use the new `<Favicon/>` component, you need to update the `_document.tsx` file. Here is a base document that you can use and extend, or just modify by adding `<Favicon/>` into the `<Head>` tag. You also need to provide the name of your app in the `name` prop. 
```tsx
// /pages/_document.tsx
import Document, {
  Head,
  Html,
  Main,
  NextScript,
  DocumentContext
} from "next/document";
import Favicon from '../components/Favicon';

class MyDocument extends Document {
  static async getInitialProps(ctx: DocumentContext): Promise<Record<string, unknown> & {html: string}> {
    const initialProps = await Document.getInitialProps(ctx);
    return { ...initialProps };
  }

  render(): JSX.Element {
    return (
      <Html>
        <Head>
          <meta charSet="utf-8" />
          <Favicon name="My Awesome Page"/>
        </Head>
        <body>
          <Main />
          <NextScript />
        </body>
      </Html>
    );
  }
}

export default MyDocument;
```

And that's it. If you want better performance, you can use compression when generating the favicon.

If you like this post, you can follow me on Twitter at [@jcubic](https://twitter.com/jcubic) and check my [home page](https://jakub.jankiewicz.org/). And here you can find some [NextJS jobs](https://www.whatjobs.com/jobs/nextjs).
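As a side note, the generator also emits the `site.webmanifest` file referenced by the `<link rel="manifest">` tag in the component above. Its exact contents depend on what you entered in the generator, but it is a small JSON file along these lines (the names, colors, and icon paths here are illustrative placeholders, not values the generator is guaranteed to produce):

```json
{
  "name": "My Awesome Page",
  "short_name": "Awesome",
  "icons": [
    {
      "src": "/favicon/android-chrome-192x192.png",
      "sizes": "192x192",
      "type": "image/png"
    },
    {
      "src": "/favicon/android-chrome-512x512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ],
  "theme_color": "#ffffff",
  "background_color": "#ffffff",
  "display": "standalone"
}
```

If you edit it by hand, keep the icon paths pointing into `/favicon/`, since that is where the files were extracted.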
jcubic
937,415
JavaScript setInterval and setTimer
These are both timers in JavaScript. For some reason I always used to get these two confused, even...
0
2021-12-27T10:45:38
https://dev.to/nicm42/javascript-setinterval-and-settimer-5ia
javascript
These are both timers in JavaScript. For some reason I always used to get these two confused, even though there's a clue in the name as to which does which.

## setInterval

This is used to do something repeatedly after a certain amount of time.

```javascript
setInterval(runFunction, 1000)
```

This will run the function called `runFunction` every 1000 milliseconds, i.e. every second. It will keep doing it until you tell it to stop. To stop it you use `clearInterval`, but you have to have given the `setInterval` a name first.

```javascript
let interval;

document.querySelector('.startButton').addEventListener('click', function() {
  interval = setInterval(runFunction, 1000)
})

document.querySelector('.stopButton').addEventListener('click', function() {
  clearInterval(interval)
})

function runFunction() {
  console.log('Running!')
}
```

This will print "Running!" to the console every second after you press the startButton, and stop once you press the stopButton.

You don't have to run a named function from `setInterval`; you can use an anonymous function:

```javascript
setInterval(
  function() {
    console.log('Running!')
  },
  1000
)
```

Or with an arrow function:

```javascript
setInterval(
  () => {
    console.log('Running!')
  },
  1000
)
```

## setTimeout

This is used to do something once after a certain amount of time and then stop. So this will print "Running!" to the console once after 1 second:

```javascript
setTimeout(
  () => {
    console.log('Running!')
  },
  1000
)
```

Similarly, you can clear the timeout afterwards:

```javascript
const timeout = setTimeout(runFunction, 1000)

function runFunction() {
  console.log('Running!')
  clearTimeout(timeout)
}
```

## Conclusion

setInterval and setTimeout are very similarly structured. The main difference is that setTimeout runs once after the timer times out, and setInterval runs multiple times on the interval set.
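The two can also be combined. Here is a small sketch (the `countdown` helper is my own, not part of any library) that uses `setInterval` to tick repeatedly and `clearInterval` to stop itself once a condition is met, so it behaves like a repeating timer with a built-in stop:

```javascript
// A repeating timer that cancels itself: setInterval + clearInterval.
function countdown(ticks, intervalMs, onTick, onDone) {
  let remaining = ticks;
  const id = setInterval(() => {
    onTick(remaining);     // report the current count
    remaining -= 1;
    if (remaining === 0) {
      clearInterval(id);   // stop the repeating timer
      onDone();            // runs once, like a setTimeout callback would
    }
  }, intervalMs);
  return id;               // caller can clearInterval(id) early if needed
}

// Example: print 3, 2, 1 at 100 ms intervals, then "Done!"
countdown(3, 100, n => console.log(n), () => console.log('Done!'));
```

Returning the interval id means the caller can still cancel the countdown early with `clearInterval`, exactly as with the start/stop buttons above.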
nicm42
845,427
An Epic, Excellent, Eclectic Episode with Kiran Oliver
Relicans host Aaron Bassett talks to Technical Community Builder with Camunda, Kiran Oliver, about...
0
2021-09-29T13:42:41
https://dev.to/newrelic/an-epic-excellent-eclectic-episode-with-kiran-oliver-ibe
podcast, speaking, kubernetes, devrel
[Relicans](https://therelicans.com) host [Aaron Bassett](https://twitter.com/aaronbassett) talks to Technical Community Builder with [Camunda](https://camunda.com/), [Kiran Oliver](https://twitter.com/kiran_oliver), about giving [The Diana Initiative](https://www.dianainitiative.org/) keynote this year on how to pick yourself up when you've been down. They also talk about [how to get started contributing to Kubernetes](https://kubernetes.io/docs/contribute/), being neurodivergent, and making webcomics and [candlesticks](https://starcrossedsundries.com/) with their wife! Should you find a burning need to share your thoughts or rants about the show, please spray them at devrel@newrelic.com. While you're going to all the trouble of shipping us some bytes, please consider taking a moment to let us know what you'd like to hear on the show in the future. Despite the all-caps flaming you will receive in response, please know that we are sincerely interested in your feedback; we aim to appease. Follow us on the Twitters: [@PolyglotShow](https://twitter.com/PolyglotShow). {% podcast https://dev.to/polyglot/an-epic-excellent-eclectic-episode-with-kiran-oliver %} Jonan Scheffler: Hello and welcome to [Polyglot](https://twitter.com/polyglot), proudly brought to you by [New Relic's](https://newrelic.com) developer relations team, [The Relicans](https://therelicans.com). Polyglot is about software design. It's about looking beyond languages to the patterns and methods that we as developers use to do our best work. You can join us every week to hear from developers who have stories to share about what has worked for them and may have some opinions about how best to write quality software. We may not always agree, but we are certainly going to have fun, and we will always do our best to level up together. You can find the show notes for this episode and all of The Relicans podcasts on [developer.newrelic.com/podcasts](developer.newrelic.com/podcasts). 
Thank you so much for joining us. Enjoy the show.

Aaron Bassett: Hello, everyone. Welcome to another episode of the [Polyglot Podcast](https://twitter.com/PolyglotShow). On this episode, I'm joined by [Rin Oliver](https://twitter.com/kiran_oliver). Rin is a writer, journalist, producer, speaker, also a candlestick maker as well as Technical Community Builder with [Camunda](https://camunda.com/). Hello and welcome to the show, Rin.

Kiran Oliver: Thank you so much for having me. I really appreciate it.

Aaron: I'm so glad you could be here. I've been really looking forward to this interview because I do have to ask, and you probably get asked this all the time, but the candlestick maker. I know you have it on your site and in brackets (no, really). So, can you fill us in on that? What kind of candlesticks is it that you make? It seems like a really interesting hobby.

Kiran: Yeah. My wife and I actually have a small business. We called it [Starcrossed Sundries](https://starcrossedsundries.com/), and we do have a [Shopify](https://www.shopify.com/) store. I'm going to shamelessly plug that. We make candles, we make wax melts, and it's pretty great. We took a little break. We actually lost my stepdad earlier in August.

Aaron: I'm very sorry to hear that.

Kiran: It's okay. We're not currently open to a lot of new orders, but we will take some. We just have a longer-than-usual turnaround time at the moment. But we make a whole bunch of candles and wax melts. We're hoping to explore a lot of different fandoms out there in pop culture in the future. Right now, we do a line about some [Harry Potter](https://en.wikipedia.org/wiki/Harry_Potter) stuff.

Aaron: Wow. [laughs]

Kiran: We do a lot of secret menu. Yes, we do a lot of Harry Potter candles. We do some stuff that's just…

Aaron: I have to admit, my partner is a huge Harry Potter fan. So I will definitely be getting to check out your...you said you got an Etsy store?

Kiran: We have a Shopify store, actually. 
Aaron: Shopify. Kiran: It's [starcrossedsundries.com](https://starcrossedsundries.com/). And if you go to [my Twitter](https://twitter.com/kiran_oliver), I've got it all over there if you look on that. It should be actually on [my website](https://ckoliver.com/) as well. But yeah, I make candles and wax melts along with my wife, and it's pretty fun. We really enjoy it here. So you earn some change. Aaron: I love that kind of thing where people have something that's like a craft or a hobby that they're really interested in that they can share with other people. There are lots of these kinds of platforms now to do it. But I should have asked as well; that’s not your primary business. You are a developer advocate. Is that correct? Sorry, community builder. Kiran: I am a Technical Community Builder, yes indeed, at [Camunda](https://camunda.com/). I am a part of their developer experience team, which is under the umbrella of development relations. Aaron: For everybody listening, can you give us like, what is [Camunda](https://camunda.com/)? What's the elevator pitch there? Kiran: Basically, the elevator pitch of [Camunda](https://camunda.com/) is automate any process anywhere. That is our elevator pitch. It's end-to-end orchestration, business collaboration. We're very developer-friendly. We've got open architecture, and we've actually got [CamundaCon](https://www.camundacon.com/) coming up, and you can register now for that. That is live. And you go to [camundacon.com](https://www.camundacon.com/), and that's on September 22nd and 23rd, 2021, if this episode comes up. Aaron: Oh, so that is coming up pretty soon, or it's in the past depending on when people are listening to the podcast. [laughs] Kiran: Or it's in the past, depending on when you're listening to this. It's either about to happen or has already happened, either one. [laughs] Aaron: If you're listening to this after September 2021, then check it out in 2022. [laughs] Kiran: Exactly. 
Check out CamundaCon 2022. Yes indeed. Aaron: So it's all by server orchestration then. Kiran: It's more business process model automation. It's a lot of what we do, process automation for the digital enterprise. And it's about automation and improving processes that provide better customer experiences and increase business agility. So that is [Camunda](https://camunda.com/) in a nutshell. Aaron: Yeah, I've seen there's a lot of different stuff covered on this, everything from IoT to microservices to AI. Kiran: That is correct. Aaron: Yeah. You really are trying to live up to this automate any process anywhere. Kiran: You can indeed. You can automate anything. Aaron: With such a broad surface area, what is your key focus then, or do you have one as the Technical Community Builder? Is there an area that you are really keen about or that you focus on at work, or is it just you're really across the whole board? Kiran: I am really cross-functional in my role. I touch a little bit of everything. My main project for this year, when I started at [Camunda](https://camunda.com/), was working on our [Camunda Community Hub](https://github.com/camunda-community-hub), and that is a centralized location for all of our community-contributed extensions. So if it's built by the community or even built by [Camunda](https://camunda.com/), but an open-source community maintained project, it lives in the community hub now. So we have a central location where people can find these community extensions. And it's really awesome. It's a lovely little [GitHub](https://github.com/) organization, and it shows your extension lifecycle so you know if something's incubating, if it's just a proof of concept, if it's unmaintained and needs a maintainer, et cetera. Aaron: So, what kind of extensions are people contributing to that then? Is there one in particular that you've really been impressed by or? Kiran: I'm impressed with all of them. 
Aaron: [laughs] Kiran: But if I have to pick some, we have our [Keycloak](https://github.com/keycloak/keycloak) extension, which is pretty great. Aaron: You said [Keycloak](https://github.com/keycloak/keycloak). Kiran: [Keycloak](https://github.com/keycloak/keycloak). Yes indeed. And we have… Aaron: What is that one? That sounded interesting. Kiran: It is a [Camunda](https://camunda.com/) identity provider plugin. It's an open-source identity access management platform including features such as user federation, identity brokering, and social login. So that is what that does. The way that [Keycloak](https://github.com/keycloak/keycloak) is described in its README is that single sign-on is sufficient if you want only authentication but have no further advanced security roles. If you need to use [Camunda IdentityService](https://docs.camunda.org/javadoc/camunda-bpm-platform/7.3/org/camunda/bpm/engine/IdentityService.html) APIs or want to see users in groups showing up in cockpit, a customer identity provider needs to be implemented as well, and that's what [Keycloak](https://github.com/keycloak/keycloak) does. [Keycloak](https://github.com/keycloak/keycloak) provides the basis and is the identity management solution that provides a read-only identity provider. So it's a fully integrated solution for using [Keycloak](https://github.com/keycloak/keycloak) as an identity provider and Camunda receiving users and groups from [Keycloak](https://github.com/keycloak/keycloak). Aaron: I can see here [Keycloak](https://github.com/keycloak/keycloak) as well is open source too. Kiran: It is, yeah. Yes indeed. Aaron: It's a [Red Hat](https://www.redhat.com/) project. Is that right? Kiran: I honestly don't know that part. Aaron: Oh, maybe it's just sponsored by them. It may not be under their umbrella. Kiran: It's sponsored by [Red Hat](https://www.redhat.com/), yeah. Aaron: So it's open source. Obviously, you have this community hub. Kiran: Yes. Yes. 
Aaron: And so you're integrating with things like [Keycloak](https://github.com/keycloak/keycloak), which are open source as well. Kiran: That is correct, yes. Aaron: So is open source a really key part of [Camunda](https://camunda.com/) or their philosophy on things or how they approach stuff? Kiran: I would say yes, it is, and it's part of my job, actually. I've been doing some research in the potential for [Camunda](https://camunda.com/) to join platforms such as [the TODO Group](https://todogroup.org/). And that's something that I've been evaluating and is moving onto the next steps in that evaluation process. There are components of [Camunda](https://camunda.com/) that are open source. We have [BPMN.io](https://bpmn.io/) open-source community hub, open-source, and some of our code is source-available. So it's something that I think that we definitely have in front of mind. And I think it's something that is definitely going to grow in the future. Aaron: So it is a difficult kind of balance. Obviously, my role here with [New Relic](https://newrelic.com/), a lot of what we do is open source, but we have a lot of proprietary code as well. Kiran: Exactly. Yes. Aaron: And it's very much the same even with my previous role at [MongoDB](https://www.mongodb.com/) where you have the community edition, which is obviously open-source, but then they have their own managed cloud, which has some proprietary features, et cetera. It is interesting trying to balance that and make sure you're giving back to the community as much as you can while also protecting your own commercial interests, et cetera, et cetera. Kiran: Exactly. Aaron: It's a difficult line to walk sometimes. [laughs] Kiran: A very delicate balance. Aaron: Yes, very delicate balance. So you say it's definitely something that's in the forefront of your mind, especially with this [Camunda Community Hub](https://github.com/camunda-community-hub). Kiran: Absolutely. 
Aaron: As a Technical Community Builder, what do you think are the key ways that you can support these communities then through your role? Kiran: So one of the things that I've done involves working on things like release automation tools. I've been partnering with our infrastructure team to make some steps toward automated releases for people working in Java using [GitHub Actions](https://github.com/features/actions). And we're working on, hopefully, potentially in the future crafting some [GitHub Actions](https://github.com/features/actions) to work with, for example, [React JS](https://reactjs.org/), [Python](https://www.python.org/), other things such as that. And also, we did a new release yesterday of the [Community Action Maven Release](https://github.com/camunda-community-hub/community-action-maven-release), which is our [GitHub Action](https://github.com/features/actions) for automated releases to [Sonatype Nexus](https://www.sonatype.com/products/repository-pro). And that actually introduced [Aqua Security](https://www.aquasec.com/). [Trivy security scanning](https://www.aquasec.com/products/trivy/) was our previous release. We introduced that, and you can run that in a bash script. And in the new release, we actually introduced the option to have those results uploaded to the security tab in [GitHub](https://github.com/) so that people are aware if your extension fails a security test what exactly it did. If it had a critical or high vulnerability, it will populate those results to the GitHub security tab. Aaron: Nice. So I'm just trying to make sure I get this automate any process anywhere and what the flow is then for working with [Camunda](https://camunda.com/) but looking at it working with different extensions and languages and integrating with [GitHub Actions](https://github.com/features/actions). And I'd take it with the security...is that part of GitHub's pull request or just the notification you get on a repo in general? 
It looks like a lot of different moving parts. Is there a standard process of getting started with [Camunda](https://camunda.com/), or is it really dependent on what you're trying to automate? Or where would you recommend people get started? Kiran: I would recommend that there are actually...we have a lot of really good getting started guides, actually. If you go to our website, there is [camunda.com](https://camunda.com/), and you go to [camunda.com/developers/getting-started/](https://camunda.com/developers/getting-started/). Yeah, [camunda.com/getting-started](https://camunda.com/developers/getting-started/). And we have some quick starts. We have wonderful tutorial videos, and we have amazing documentation. We also have a lot of developer resources and an amazing newsletter you can sign up for. So we have wonderful getting started guides, and some tutorials that are rolling out, which I think will help people get up and running quickly and efficiently. Aaron: I'm just looking for some of them as well. It really does cover such a wide range of different things in there with the quick starts and the documentation. Kiran: It sure does. Aaron: It is very detailed. So I know it's different at different companies. I'm not sure if this falls under developer relations there, but whoever is involved, they're doing a great job. Kiran: I agree. Aaron: But it's not just [Camunda](https://camunda.com/). We don't want to talk about it the whole time about that as well. You are a conference speaker; would that be fair to say too? Kiran: That is correct. Yes, absolutely. Aaron: How have you found it recently with this move away from in-person conferences? Kiran: I actually enjoy virtual conferences. I recently did a closing keynote at [The Diana Initiative 2021](https://www.dianainitiative.org/). That was all virtual, but the conference platform was really interactive, and everyone was very talkative. And it was actually really great to have that virtual experience. 
And I've had as warm a reception virtually if not warmer than I would've had in person. So I see no difference personally. I've also been a speaker at some really great events. I've spoken at [MozFest](https://www.mozillafestival.org/). I've spoken over at [The Docks](https://docksexpo.com/), Portland. I've spoken at [Deserted Island DevOps](https://desertedisland.club/) this year. I've done a lot of virtual talks this year, and they've all been really great. And that's because of people that were there attending, and without attendees, you've got nothing. So I think that it's been really great to see people come out and support these speakers. And to have such an invested audience has been really nice. Aaron: It's obviously been very different for a lot of organizers as well. And I think people have really been stepping up to the mark to get these conferences out and ensure that they're still happening. So I'm consistently and constantly impressed by all the work many times from volunteers that are going into keeping these conferences alive during these times. But I did want to just ask you about [The Diana Initiative](https://www.dianainitiative.org/) because it sounds like a wonderful thing that you're doing here in this conference, helping those underrepresented in information security. Kiran: Yes, indeed. I'm actually also an advisory board member there this year. So I've been helping out a lot behind the scenes as well. It's been really great to get to see the inner workings of that and to get to help with that sort of thing. It's been really wonderful, and I'm very pleased to have been a part of it. Aaron: Is it an event that's been going on for a while? Kiran: Yes. Oh gosh, yes. It's been going on for years now, many years. I'm pretty sure that it's been going on for, I want to say, at least five, possibly ten years, if not more. Aaron: Oh, wow. So that is pretty old then in terms of tech conferences. 
[laughs] Sometimes, they tend to appear and disappear pretty quickly. Kiran: The first event was in 2016. So it's been going on for six years now. Aaron: Nice. And it does sound like a really great thing they're doing there with it. And you said you're on the advisory board as well. Kiran: I am indeed, yes. Aaron: So what kind of things does that entail and on the lead up to one of these conferences? Kiran: That was a lot of talking about how to set up things like the career villages, making sure that volunteers had all of their information they needed, a little fine-tuning of language on the website, lots of document review, lots of fine-tuning the verbiage and making sure contracts were right, et cetera. Aaron: People just don't realize the amount of admin that goes on behind the scenes for a lot of these kinds of things. Kiran: Lots of admin. Aaron: I've not been involved in actually organizing conferences. But with the [Django Software Foundation](https://www.djangoproject.com/foundation/), we get a lot of requests for grants and sponsorship of different conferences. And obviously very involved with [DEFNA](https://www.defna.org/), The Django Events Foundation for North America, which is a separate body. But yeah, just hearing some of the things that they have to deal with, everything from organizing vendors to dealing with code of conduct, it is such a wide range that these volunteers give their time to. So to be part of organizing one of those conferences, as I said earlier, I've always been constantly impressed with people who volunteered time for this. So I just want to make sure I'm thanking you on the stream as well for taking part in that. To ask you about your talk then, you said you keynoted? Kiran: Yes, I was the closing keynote. Yes, indeed. That is available on YouTube. [I gave a closing keynote called Rising From the Ashes](https://www.youtube.com/watch?v=SlitPrPIH-A). 
And it was about essentially how to pick yourself up when you've been down and failure is temporary. While today might be your worst day, it's not the end of the world. And it won't be the absolute worst. It will get better. Aaron: [chuckles] That is a great sentiment to carry. I think we've all been there at some point or other. And you say that that's already on YouTube. [People can go watch that talk](https://www.youtube.com/watch?v=SlitPrPIH-A). Kiran: Yes, everyone could. Aaron: Wonderful. We'll link to [The Diana Initiative](https://www.dianainitiative.org/) in the show notes as well. And is it linked from there, your keynote? Kiran: [It is actually up on YouTube](https://www.youtube.com/watch?v=SlitPrPIH-A), and I will happily send it your way. It's pretty great. I'm very, very pleased about it. It's really awesome. And a lot of people actually have sent me amazing messages about it. Just the amount of people that have reached out to me and said, "Your keynote changed my life," has been great. I've gotten messages from people that have said things on it. Aaron: That's so nice. Kiran: They said, "The closing keynote was life-changing." I've had people say, "I've been struggling with unemployment for a year, and your keynote inspired me to keep going," et cetera. That's been really great. And it's been really nice to see that it's made an impact on people. I've had so many messages, and the outpouring of love and support and people saying, "This really impacted me. Thank you," has been so cool. Aaron: Yeah, that must be so lovely to hear. Getting any kind of feedback sometimes as a speaker can be difficult but getting something as lovely as that, that's really sweet. It makes you feel very good. Kiran: It does, indeed. It really does. It was really awesome. Aaron: I don't want to speak for anybody else here, but that's why a lot of us got into developer relations and why we do what we do is because we do want to help people. Kiran: I agree completely. 
Yeah, I do. I want to help people. I want to make sure that they have the best experience they can. I want to make sure that they are empowered and enabled to do what they do in a way that makes them feel good and makes them feel accomplished. And I want to make sure that they are having a good experience and that they feel valued, and their contributions are respected more than anything. Aaron: That's so important as well. And it harks back to what we were talking about with open source. People are giving up their time, and ensuring that they do feel that their contributions are respected, and are important, and are valued is credibly important. Kiran: Absolutely. I agree. I think that's very important, and I think that people need to be aware that there are people that are giving their time, and they're giving their resources. And for example, not even necessarily contributors, volunteers, anyone behind the scenes that makes something happen or anyone that's taking part in these events or helping out just value people's contributions; it's important. Aaron: Yeah. So you've mentioned you've been on different conferences. You're saying many people might've seen you from [KubeCon](https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/). Is that a community that you're also very involved with? Kiran: Yes and no. I'm a little less involved this year. Aaron: So I have to admit the whole [Kubernetes](https://kubernetes.io/) community is not one that I'm super familiar with. I know I'm working for [New Relic](https://newrelic.com/) and probably should be something a little more. I'm very embedded in the [Python](https://www.python.org/) community. Kiran: That's actually something I could help you with because actually, the part of [Kubernetes](https://kubernetes.io/) I'm most active with is contributor experience and getting contributors involved in [Kubernetes](https://kubernetes.io/), and they always need help. 
And if that's something you're interested in, I would love to help you get involved in contributing to [Kubernetes](https://kubernetes.io/). There are a lot of wonderful people there that make the contributor experience better. Aaron: So for somebody like myself then who would be brand new to [Kubernetes](https://kubernetes.io/), what would be the onboarding? Where should I start? Kiran: I would say first things first, read [the contributor guide](https://kubernetes.io/docs/contribute/). Read the contributor guide. It's so important, and it's so useful. And everything you need to know to get started is there. If you read one thing, absolutely read the contributor guide. Aaron: So [contributor guide](https://kubernetes.io/docs/contribute/), and then it seems like such a broad, very technical technology to come into. Kiran: It is, but it isn't in a lot of ways in that it's very technical, but you've got to remember that just because it's a technical project doesn't mean that skills from all realms aren't needed. We still need documentation. We still need localization. We still need community and contributor experience. We need people to do things like organize our contributor [YouTube](https://www.youtube.com/) channels, et cetera. And we need people to...there's an upstream marketing team. Write blogs. Talk to the SIGs, et cetera. Join a SIG, get involved. There are so many SIGs in Kubernetes that you can join. If you have an interest, there's probably a SIG about it. Aaron: I think probably for me, the most important question I do have to ask about contributing to Kubernetes is what do I need to contribute to get one of the funky sailor hats? Kiran: The sailor hats, oh gosh. Aaron: [laughs] Kiran: I think what those were something really special for the contributors or the SIG cheerleads for a particular year. I think that was a custom thing. I actually don't know. I've been wondering that myself. 
I have an in on sailor hats because my dad's actually a ship captain. Aaron: Wow. Kiran: So at any point, I could go back home and grab a real ship captain's hat, so I'm fine. They have the little gold bars on them. They're pretty great. Yeah, love them. Aaron: For anybody who is not aware, Kubernetes' logo there is...there's probably a term for this. I'm getting it wrong here. [laughs] It's like a steering wheel for a ship. There’s probably an actual term for that that's not just steering wheel. [laughs] Kiran: There is, yes. Aaron: But whatever it is, this is how nautical I am. [laughs] Kiran: It's actually a helm. [laughs] That's why [The Helm Project](https://github.com/helm) is called [The Helm Project](https://github.com/helm). Aaron: That makes sense. Yeah, the logo for [Kubernetes](https://kubernetes.io/) is the helm is what I'm learning. There's definitely some kind of nautical-themed swag and things that have been out before. And I was just noticing looking at the contributors' website there are lots of people with these kinds of sailor hats. Kiran: Yeah, those were something special. I know that much. It was something really special, and I think it was...I wish I remembered. I'm actually in this picture, which is really funny. [laughter] I have way different hair now, but I am in this picture. And it's fun to me because I look at this, and I see a whole bunch of people that I know that I only see a couple of times a year, but I love them all so much. And they're so great. And if you're a part of this open-source community, these are some really awesome people. And people don't come to [the Kubernetes community](https://kubernetes.io/community/) for [Kubernetes](https://kubernetes.io/). They come because it's a welcoming, fun community that people really enjoy being a part of and that people are so nice. They're just so nice. 
I have met so many wonderful people, and I highly recommend that people check it out because it's a wonderful, wonderful place to be. Aaron: It sounds very much like the reasons why I enjoy [the Python community](https://www.python.org/community/) so much. So I definitely will check it out. And honestly, one of the things that's been so hard with the lack of in-person conferences for me is that lack of the hallway track and getting to see my friends. Kiran: Exactly, the hallway track. I love it. I love the hallway track. I miss the hallway track. It's pretty great. Aaron: Yeah, it's so strange to be like, yeah, my friends are on like six different continents. And I get to see them at conferences, and that's really it. So having lack of travel for a while has been difficult and missing everybody. For any listeners here who haven't heard the term hallway track before, it's the conference that happens between the talk, so in the hallways as you're walking from one stage to the next. Would that be fair to say? Is that a good summarization? Kiran: I would say that, or it's the one that happens when you're waiting for talks to happen or during lunch, or when you're waiting between things that you have to do between panels. Aaron: I don't want to call anybody out on the stream, but there was quite a well-known figure in the [Python community](https://www.python.org/community/) who was actually saying, "Whenever you go to [PyCon](https://pycon.org/), skip the talks, and just go there to see people and to make these connections and talk. Because all the talks will be recorded and you can view them online later," which was a quite controversial, I have to say, take. Being a speaker yourself, you'll know that one of the worst feelings is the speaker is talking to an empty room. Kiran: That is the absolute worst. Yes, indeed. Aaron: Yeah. 
So encouraging people to only go for hallway talk as much as I personally love the hallway track, I can't indulge that thought of only going to a conference for that. Go and support the speakers as well. They put an awful lot of work into their talks. Be in the room. Kiran: Absolutely. I agree completely. Please support your speakers. Aaron: And there's something different about seeing a talk in person than watching a video, at least for me. Kiran: I agree completely. I think there's a lot of difference, and I actually haven't given an in-person talk since 2019. It was my very first one. Aaron: Wow. Kiran: So I think I've gotten a lot better at giving talks since then. So I'm actually really looking forward to those. This will be my first non-virtual talk since 2019. Aaron: I have to admit that giving talks in person and doing pre-records, I actually find pre-records a lot harder. Kiran: I like pre-records, but I do find them very difficult. They are not my favorite thing. Aaron: I'm not sure if you're the same as me but knowing that I can re-record something...If I'm giving it live, I'm giving it live. If I make a mistake, if I stumble a word, if I um or ah too much, I just press on. There's nothing else I can do. But with a pre-record and knowing that I can just stop the recording and go back and start again, I must record some of my talks maybe 20, 30 times. Kiran: Yep. I've done that. I've done that. And I have a lot of clips on my work computer right now that are just me getting three minutes in saying something expletive and then just stopping the recording. Aaron: [laughs] I was tempted at one point to put together almost a blooper reel of the fumbles that I have during pre-records that cause me to stop. Kiran: I should do that. That sounds fun. I could do that. Aaron: [laughs] My partner can comprehend. I don't know if you do this. I know some people have different ways of recording it, but I have friends who will record talks, pre-records in sections. 
So they'll break it up by maybe 5 or 10 minutes or by topic area. So if they do fumble, they can just re-record that section. Kiran: That seems like a way better way to do it. Aaron: Yeah, I 100% agree, and I just can't do it that way myself. [laughs] My partner thinks I'm putting a lot more work on myself in doing it this way, but I just can't seem to get the flow right if I'm restarting from a particular section. Which if you get 54 minutes into an hour-long talk and then flub something and have to restart, it is heartbreaking. Kiran: Yeah, I think I'm going to have to start doing that, though. It sounds like a much better avenue for me. [laughs] Aaron: I did get one great tip on it, though. It never started natural for me whenever I would stop in between and re-record something. And a friend who does a lot of recordings for audiobooks pointed out that it was the way in which I was breathing because, in some recordings, I would breathe out and then stop the recording and start the recording again. And at the start of recording, I was breathing out again. And it's like, you don't notice the fact that you never breathed in in between those. But whenever you're listening to it subconsciously, you can pick up on those kinds of things. Kiran: That's a good point, yes. I know the other thing I've noticed too is just being aware of the fact that you need to breathe; that’s a big one. Aaron: [laughs] Yeah. Kiran: I tend to not breathe, and I just need to remind myself to slow down so I can breathe, and it'll still be there. If they leave, they leave. Aaron: Getting the speed right can be difficult as well. I'm originally from Ireland. We talk very fast. Having to learn to slow myself down, especially when you're nervous. Kiran: Same. I'm from New England. We do the same thing. You and I are talking about the same speaking cadence. I'm like, ooh, someone else that speaks fast. That's great. [laughs] Aaron: [laughs] You're originally from New England. Kiran: Yes. Yes. 
Aaron: You spent some time in New Zealand, I believe, as well. Kiran: That is correct. I spent four years in New Zealand. I am a permanent resident, and my wife is a citizen. Aaron: Oh, wow. It looks like a beautiful country. I've never been. Kiran: It's very pretty. We lived in Auckland. We lived in Mission Bay. It was pretty great. Aaron: And now you're back in the U.S. Kiran: Sure. Yes, indeed. We just bought a house. We live in the Northwest corner of Louisiana. Aaron: I don't think I've been to Louisiana. I've been to a lot of different states. Before COVID, we did a drive from Seattle to Florida. Kiran: Oh, cool. Aaron: Trying to get probably the farthest you can do in the U.S. [laughs] Kiran: Wow. That is a lot of driving, ooph. Aaron: But I don't think we stopped in Louisiana. The other thing I was going to ask just before we run out of time here is you do have this five-year goal list on your site. Kiran: I sure do. Aaron: So the first one there is to write a successful webcomic. Kiran: Yeah. [chuckles] It sure is. Aaron: How is that coming along? Kiran: That's coming along. It's happening kind of. It's in the works I guess I could say. My wife and I are collaborating on it, but we keep getting distracted by other webcomics we want to make. So that's a little bit of a problem, too many webcomics. Oh no. [laughs] Aaron: Can I ask what the topic of the comic is, or what's the theme? Kiran: It's a queer romance high fantasy dragons and badass princesses sort of tropy thing. We love it. Aaron: [laughs] Is there any other ones in the same kind of niche that people might recognize the names of, anybody you're drawing influence from or inspiration? Kiran: Gosh, I don't know. Honestly, I'd have to ask my wife. She's very much well-versed in comics things, and I'm not so much. I do the writing part, and my wife and I collaborate on writing as well. She does the illustrations. I cannot illustrate my way out of a paper bag. 
Aaron: [laughs] That's the other thing I was going to ask of who's doing the illustrations? And the other one you've got, and this is something close to my heart as well, is next in your five-year goal: to build something that helps neurodivergent folks. Kiran: That is correct. Aaron: For anybody who doesn't know who's listening, I have ADHD type C, which is the combined type. I've talked about this a lot at different conferences. So it's always something that's very, very close to my heart. So I'd be very interested in it too. So have you started building on this? Have you got any plans for it? Kiran: That's awesome. I actually have ADD, the inattentive type. [laughs] And I'm also autistic, and I have dyspraxia and dyscalculia. I am multiply neurodivergent, as I like to say, which is super fun. I've kind of started working with that. And I think I've given a lot of talks about that as well. So I guess I've not necessarily built something as I've gotten the message out there. In terms of building things, I have a resource out there on GitLab, which is about breaking down the barriers to open source for neurodivergent contributors. So I guess in theory, yes, I have built something. Aaron: Yeah, definitely. Kiran: I have. Aaron: Where can people find that? Kiran: That is on [GitLab](https://gitlab.com/), I believe. Aaron: I think these kinds of things will be linked from I think it's [ckoliver.com](https://ckoliver.com/). Kiran: It is. It is. Yes, yes. That is correct. Aaron: Folks listening at home right now looking for any of your talks or links. Kiran: They can find them on [my website](https://ckoliver.com/). Yes, indeed. They sure can. Aaron: And I'll get that again. It is [ckoliver.com](https://ckoliver.com/). Kiran: Yes, indeed. Yes, indeed. Aaron: Next on your list is something that I would love to aspire to as well, but I cannot run. But you're running three races for charity. [laughs] Kiran: I have done two of those. Aaron: Congrats. That's amazing. 
Kiran: I have done a few. It's actually something that I started...I started doing a lot more walking over the last couple of years. I actually broke my leg in 2018. So something I did once I was able to walk again was do some walks for charity. Aaron: My partner is an avid runner. I am not. Kiran: I don't do running just because I have so much new hardware now. It was a really bad break. I had to have surgery, so I got pins and screws and stuff. Aaron: Oh, ouch. Kiran: So I try to avoid running. Walking I can do now. The cool thing about the races I do is it's for a charity called [Random Tuesdays](https://rantue.org/), and it's all fandom-related. And there's a Doctor Who Running Club for example. Aaron: [laughs] Kiran: It exists. It's a thing. There's a Harry Potter Running Club. There's a bunch of them. And then there are some other fandoms I don't remember off the top of my head. Aaron: Do the Harry Potter ones run with a broom between your legs? Kiran: No. No, no. But we do have really cool swag. We have fun shirts. Aaron: So it's only for people playing Quidditch then. Kiran: There are actually events every couple of weeks where there is Quidditch, and they get house teams. And you have to run a certain amount of miles, and you get a fun t-shirt. And it's actually pretty cool. It's really collaborative. And it's a whole bunch of people who have just come together to play Quidditch essentially in terms of running a lot of miles and trying to beat the other houses by walking more or running more. [laughs] Aaron: Nice. Kiran: The profits from their shirts and stuff go to charity. Aaron: And the last one of your five-year goal list, how is the learn two new instruments coming along? Kiran: That's actually coming out pretty okay. Well, I haven't necessarily picked up a new instrument. I did get a violin. I learned to play some scales. So I'm going to say playing is achieved. Aaron: Yeah, 100%. Kiran: Because playing counts as not letting it not die. 
Aaron: [laughs] Yes. Kiran: So I have completed a singular scale, and I can tune it. So I will call that a win. And I can play some basic notes, so I'm good. And I am picking up another instrument. I am relearning to play the flute. I got one from a friend of mine. So I have my flute, my violin, and I have a guitar. And I'm hoping in the next couple of months to maybe get an ocarina. But my dogs hate the tin whistle I got, so I'm not thinking an ocarina is going to be any better, but we can hope it might be. Maybe they'll like the ocarina better. Maybe they're just [Zelda](https://www.zelda.com/) dogs. That would be nice. Aaron: From a fellow ADHD person, if I can keep a hobby long enough that I'm still interested in it, but the time the gear I've ordered online arrives, I count that as a win. Kiran: Accurate. Yeah. Aaron: My interests and hobbies change so frequently. [laughs] I'm not very musical myself, but I have some colleagues on the team who stream sometimes about [Sonic Pi](https://sonic-pi.net/), which is making music with programming, and that's a lot of fun. Kiran: Ooh, that's exciting. I would love that link. Please send it to me because I'm actually going to get...I actually did a talk with a friend of mine in New Zealand, and they're sending me a [Raspberry Pi](https://www.raspberrypi.org/) with a keyboard, and I don't know what to do with it. So if I can make music with it, that's going to be wild. And you'll have made my day. [laughs] Aaron: It's like a live coding environment. I think the syntax that they use is very similar to Ruby. Kiran: Oh, that's cool. Aaron: And as you're typing, it is being compiled in real-time and changes the music that's being played. I should point out it's [Sonic Pi](https://sonic-pi.net/), P-I like [Raspberry Pi](https://www.raspberrypi.org/), not py as in P-Y, which is what I'm normally talking about, which would be Python. I think there is a similar Python music generation package. 
But this particular one, the [Sonic Pi](https://sonic-pi.net/), is the P-I [Raspberry Pi](https://www.raspberrypi.org/) if people are looking for it online. Kiran: That is awesome. Aaron: I have a colleague who was streaming about that pretty regularly. We have [Sam Aaron](https://twitter.com/samaaron), I think, might be correct. I might be getting it wrong, the creator of [Sonic Pi](https://sonic-pi.net/). If it's not [Sam Aaron](https://twitter.com/samaaron), I'm very sorry for getting the wrong name if they're listening. [laughs] But they've been on the [New Relic](https://newrelic.com/) stream a couple of times, and I played with it. And it's always such an interesting thing to see because it's such a great combination of two interests, one that I would like to think I know a little bit about and one I have no idea. I just can't generate music. I can't play an instrument. I can't hold a tune. [laughter] Kiran: Well, I highly recommend that you give it a shot. I think that you could surprise yourself. I really do. And it's one of those things where you think I can't do that but give it a try. There are a lot of instruments out there. One of them might resonate. Aaron: Very true. Very true. But yeah, if you've got a [Raspberry Pi](https://www.raspberrypi.org/) on the way and you obviously have an interest in music already, I would definitely give it a go. So we are coming up on time, unfortunately. I do like to leave a little bit of time at the end just for guests to go through kind of...we've talked a lot about where people can find you with your personal site and things. But are there any other projects or socials or things you're working on that you'd like people to know about that you want to give a shout out in these last few minutes? Kiran: I would like to give a shout-out actually to my team. My team at [Camunda](https://camunda.com/) is pretty great. I love the developer relations team there. I love everyone I work with, wonderful people. 
Aaron: That's lovely. Kiran: And I am super stoked to be a part of such an amazing group of people. And I get to come to work every day and do awesome stuff and work on great things and talk to amazing people. I'd like to shout out all of our community extension maintainers. You're all wonderful. All of our contributors, fantastic. I love you all. And I think that overall, I'd just like to thank everybody that's been supporting me over the years. It's been a long haul, started from the bottom, and now we're here. Aaron: [laughs] I think that's probably the sweetest shout-out I've had at the end of one of these shows, if I'm honest. [laughs] Normally, it would be like, "Yeah, this is a project I'm working on, and here's where you can find me on Twitter, and this is my Instagram." But that was really sweet. Kiran: We did that in the beginning. We did that already. [laughter] Aaron: Yeah, bears repeating sometimes. Kiran: Nah, I'm good. You can find me on Twitter. I'm [@kiran_oliver](https://twitter.com/kiran_oliver). And I'm on GitHub at C-E-L-A-N-T-H-E, [celanthe](https://github.com/celanthe). And yeah, that's me in a nutshell. And I look forward to talking to everyone in the community. Aaron: Well, thank you so much for being on the show. I've really enjoyed chatting with you. It's been a very eclectic episode. That's always a lot of fun for me. Kiran: I'm sorry it's super random. [laughs] Aaron: No. Hey, that's interesting. But yeah, thanks again for joining us. Kiran: Thank you. Aaron: Hopefully, everybody at home subscribes to our latest episodes and all the usual things. And I will see you all next time. Thanks again. Goodbye. Jonan: Thank you so much for joining us. We really appreciate it. You can find the show notes for this episode along with all of the rest of The Relicans podcasts on [therelicans.com](https://therelicans.com). In fact, most anything The Relicans get up to online will be on that site. We'll see you next week. Take care.
therubyrep
845,460
Learning Videos for Test Driven Development
Links to YouTube videos that help you learn Test Driven Development and get better at testing.
0
2021-09-29T14:30:31
https://dev.to/jesterxl/learning-videos-for-test-driven-development-1ip5
testing, tdd, javascript
--- title: Learning Videos for Test Driven Development published: true description: Links to YouTube videos that help you learn Test Driven Development and get better at testing. tags: testing,tdd,javascript //cover_image: https://direct_url_to_image.jpg --- It took me 21 years to grok Test Driven Development. I had at least 4 false starts that I can remember. I'm _still_ learning & getting better. It just "clicked" once I kept at it. I feel it's helped my designs. I couldn't have done it w/o a coding style I feel comfortable in. If you too are struggling to understand it, or don't get why it's helpful, check out Dave Farley's Continuous Delivery YouTube channel. He covers a lot more than TDD, but he'll give you another perspective about TDD you'll dig. It doesn't matter if you're Object Oriented, Functional, or imperative, he speaks to us all. {% youtube llaUBH5oayw %} Another one of his takes on when your spidey sense says "something is wrong here": {% youtube -4Ybn0Cz2oU %} Tons more [programming content on his channel](https://www.youtube.com/channel/UCCfqyGl3nq_V0bo64CjZh8g) that's worth your time. _Another_ perspective is one of the cats from the TestDouble crew, Justin Searls. He has a super set of videos covering the strategy (why), tactics (how), and everything in between of good/bad. This is one of my faves about over mocking: {% youtube x8sKpJwq6lY %} He's also got many other videos that are SUPER comprehensive about real-world scenarios. {% youtube VD51AkG8EZw %} Finally, plugging mine. These should give you another perspective in the best and worst programming languages there are, with actual coding. TDD using Functional Programming in Elm: {% youtube QMb-z2L_OYA %} ...and TDD using Object Oriented Programming in JavaScript: {% youtube lbFP4iFqk00 %}
jesterxl
845,475
Async await like syntax, without ppx in Rescript !!!
Prerequisite: Basic understanding of functional programming. Basic knowledge on...
0
2021-09-29T15:01:13
https://blog.techrsr.com/posts/rescript-async
rescript, reason, functional, promise
### Prerequisite: - Basic understanding of functional programming. - Basic knowledge of Rescript/ReasonML. The code and ideas that I will be discussing in this article are my own opinions. It doesn't mean this _is_ the way to do it, but it just means that this is _also_ a way to do it. Just my own way. In Rescript, currently (at the time of writing this article) there is no support for async/await style syntax for using promises. You can read about it [here](https://rescript-lang.org/docs/manual/latest/promise). Even though Rescript's pipe syntax makes things cleaner and more readable, when it comes to working with promises there are still readability issues due to the lack of async/await syntax. There are PPXs available to overcome this issue. But what if we could overcome this issue without using any PPX? Let's first look at how existing promise chaining looks in Rescript. ```reasonml fetchAuthorById(1) |> Js.Promise.then_(author => { fetchBooksByAuthor(author) |> Js.Promise.then_(books => (author, books)) }) |> Js.Promise.then_(((author, books)) => doSomethingWithBothAuthorAndBooks(author, books)) ``` In the above code, to access both author and books, I am creating a tuple to be passed into the next promise chain and using it there. This can easily grow and become more cumbersome when we chain three or more levels. The idea is: > _Create a function that takes multiple promise functions as labelled arguments, executes them sequentially and stores each result as a value in an object, with labels as keys of the object_ This idea is inspired by Haskell's `do` notation. Let's see how this function looks. ```reasonml type promiseFn<'a, +'b> = 'a => Js.Promise.t<'b> let asyncSequence = (~a: promiseFn<unit, 'a>, ~b: promiseFn<{"a": 'a}, 'b>) => a() |> Js.Promise.then_(ar => {"a": ar}->Js.Promise.resolve) |> Js.Promise.then_(ar => ar ->b |> Js.Promise.then_(br => { "a": ar["a"], "b": br, }->Js.Promise.resolve ) ) ``` Let's understand what this function is doing. 1. 
A `type` called `promiseFn` is defined that takes some polymorphic type `'a` and returns a promise of type `'b`. 1. The `asyncSequence` function takes two labelled arguments `a` and `b` which are of type `promiseFn`. 1. Argument `a` is a function that takes nothing, but returns a promise of `'a`. 1. Argument `b` is a function that takes an `Object` of type `{"a": 'a}` where the key `a` corresponds to the label `a` and the value `'a` corresponds to the response of the function `a`. 1. `a` is first invoked and from its response an `Object` of type `{"a": 'a}` is created and passed into function `b`. The response of function `b` is taken and an object of type `{"a": 'a, "b": 'b}` is created. The above function chains only 2 promise functions. But using this method, we can create functions that chain multiple promise functions. ```reasonml // Takes 3 functions let asyncSequence3 = ( ~a: promiseFn<unit, 'a>, ~b: promiseFn<{"a": 'a}, 'b>, ~c: promiseFn<{"a": 'a, "b": 'b}, 'c>, ) => asyncSequence(~a, ~b) |> Js.Promise.then_(abr => abr->c |> Js.Promise.then_(cr => { "a": abr["a"], "b": abr["b"], "c": cr, }->Js.Promise.resolve ) ) // Takes 4 functions let asyncSequence4 = ( ~a: promiseFn<unit, 'a>, ~b: promiseFn<{"a": 'a}, 'b>, ~c: promiseFn<{"a": 'a, "b": 'b}, 'c>, ~d: promiseFn<{"a": 'a, "b": 'b, "c": 'c}, 'd>, ) => asyncSequence3(~a, ~b, ~c) |> Js.Promise.then_(abcr => abcr->d |> Js.Promise.then_(dr => { "a": abcr["a"], "b": abcr["b"], "c": abcr["c"], "d": dr, }->Js.Promise.resolve ) ) // .... Any level ``` See, we are using the previous `asyncSequence3` to define the next level, `asyncSequence4`. To understand this function better, let's see how it is used. Let's rewrite our previous example using this `asyncSequence4`. 
```reasonml asyncSequence4( ~a=() => fetchAuthorById(1), ~b=arg => fetchBooksByAuthor(arg["a"]), ~c=arg => doSomethingWithBothAuthorAndBooks(arg["a"], arg["b"]), ~d=arg => Js.log(arg)->Js.Promise.resolve ) // Response of asyncSequence4 will be a promise of type // { // "a": <Author>, // "b": <BooksArray>, // "c": <Response of doSomethingWithBothAuthorAndBooks> // "d": <unit, since Js.log returns unit> // } ``` What is happening is, the response of `fetchAuthorById` is taken and an object of type `{"a": <Author>}` is created. This object is passed to function `b` as `arg`, and hence function `b` has access to the previous function `a`'s result. Now the response of `b` is merged together with the response of `a` into a single object as `{"a": <Author>, "b": <BooksArray>}` and passed to function `c` as argument `arg`. Now function `c` has access to both the response of `a` as well as the response of `b` in the object that is received as argument. This is continued down the path to function `d`. With this approach the chaining is easy, and multiple `asyncSequence` calls can be chained like below, which provides access to all the previous values. ```reasonml let promiseResp = asyncSequence4( ~a=() => fetchAuthorById(1), ~b=arg => fetchBooksByAuthor(arg["a"]), ~c=arg => doSomethingWithBothAuthorAndBooks(arg["a"], arg["b"]), ~d=arg => Js.log(arg)->Js.Promise.resolve ) asyncSequence( ~a=() => promiseResp, ~b=arg => doSomethingWithAllThePreviousResponse(arg["a"]) ) ``` Before we jump into the pros and cons of this approach, let's see one common mistake that can happen. ```reasonml asyncSequence3( ~a=() => firstExecution(), ~c=_ => thirdExecution(), ~b=_ => secondExecution(), ) ``` The above code will compile fine. It is easy to think that `c` will be executed after `a`, but that's not true. The execution will always happen from `a` to `z` even though the order is changed. Now, let's see the pros and cons of this approach. ## Pros: 1. Far more readable than the raw Promise chaining. 
1. Somewhat similar to the JS async/await syntax. 1. Each function down the line has access to all the previous responses. 1. No PPX and no additional dependencies needed. 1. Completely type safe. The compiler will raise errors on any wrong usage. 1. One `asyncSequence` can be chained to the next `asyncSequence` easily. ## Cons: 1. Multiple overloaded functions required. 1. Order must not be changed. 1. Keys of the object cannot be changed ("a", "b" ... will always be the keys). You can check the refactored, full code [here](https://github.com/praveen-kumar-rr/rescript-async/blob/master/src/Promise.res). **Hope you enjoyed! Happy Hacking!**
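As a closing aside for readers more comfortable in JavaScript (which Rescript compiles to), the result-accumulation idea behind `asyncSequence3` can be sketched roughly like this — a hand-written analogue for illustration, not the compiled output, with made-up stand-ins for the fetch functions:

```javascript
// Rough JavaScript analogue of asyncSequence3: run the labelled steps in
// order, storing each result under its key so later steps can read all
// earlier results from the accumulator object.
async function asyncSequence3(a, b, c) {
  const acc = {};
  acc.a = await a();    // a takes nothing
  acc.b = await b(acc); // b sees { a }
  acc.c = await c(acc); // c sees { a, b }
  return acc;           // resolves to { a, b, c }
}

// Example usage with in-memory stand-ins for the article's fetch functions.
const demo = () =>
  asyncSequence3(
    () => Promise.resolve({ id: 1, name: "Ada" }),
    (r) => Promise.resolve([`Book by ${r.a.name}`]),
    (r) => Promise.resolve(`${r.a.name} wrote ${r.b.length} book(s)`)
  );
```

As in the Rescript version, each step only ever receives the accumulated results of the steps before it, so the execution order is fixed regardless of the order in which the arguments are written.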
praveenkumarrr
845,514
Building a Joke guesser game in React
Hello fellow humanoids. In this post we will build a basic Joke guesser game in react.
0
2021-09-29T16:32:52
https://dev.to/vigneshiyergithub/building-a-joke-guesser-game-in-react-5f14
react, webdev
--- title: Building a Joke guesser game in React published: true description: Hello fellow humanoids. In this post we will build a basic Joke guesser game in react. tags: React, webdev cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3brorpqtx7l89gc0f89u.jpg --- ## What is this post about ? Hello fellow humanoids. Today we will build a Joke guesser game and cover some basic concepts of making API calls and other React concepts. Check out the game here : [Joke Guesser Game](https://vigneshiyergithub.github.io/joke-guesser-game/) Github repo for reference : [Repo Link](https://github.com/vigneshiyergithub/joke-guesser-game) ## Content * How to create a game ? * How to use Joke API for game ? * How to do scoring ? Let's dive deep into each one and explore how it was implemented. ## How to create a game The game we are creating today will be a joke guesser game, built around two-part jokes. The first part sets the premise for the joke; the gamer enters a probable second part and is scored according to its string similarity with the real punchline. The complete game comprises 10 rounds. ![Game UI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8x0xa9hm391e93gd3o5g.png) {% gist https://gist.github.com/vigneshiyergithub/3a5ef0f745e6b99af0af10d8034443b0 %} ## How to use Joke API for game ? For the game we query a Joke API endpoint to fetch a bi-parted joke for each round. The first part is used to form the question, and the second part is kept for the text similarity score. {% gist https://gist.github.com/vigneshiyergithub/70be6110d9d7e0c2afdd3ede8b914fd3 %} ## How to do scoring ? Once the gamer has entered their guess, it is scored against the original joke's second part using text similarity. For text similarity we use the ["string-similarity"](https://www.npmjs.com/package/string-similarity) npm package. 
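To get a feel for what the scoring does: `string-similarity`'s comparison is based on the Sørensen–Dice coefficient over character bigrams. Here is a simplified, self-contained sketch of that idea — an illustration of the underlying metric, not the package's actual implementation:

```javascript
// Simplified Sørensen–Dice similarity over character bigrams:
// 1 means the bigram multisets are identical, 0 means nothing in common.
function diceSimilarity(first, second) {
  // Count the two-character substrings of a lowercased, space-normalized string.
  const bigrams = (s) => {
    const counts = new Map();
    const clean = s.replace(/\s+/g, " ").toLowerCase();
    for (let i = 0; i < clean.length - 1; i++) {
      const pair = clean.substring(i, i + 2);
      counts.set(pair, (counts.get(pair) || 0) + 1);
    }
    return counts;
  };
  const a = bigrams(first);
  const b = bigrams(second);
  let intersection = 0;
  let total = 0;
  for (const [pair, count] of a) {
    intersection += Math.min(count, b.get(pair) || 0);
    total += count;
  }
  for (const count of b.values()) total += count;
  // Strings too short to have bigrams: fall back to exact equality.
  if (total === 0) return first === second ? 1 : 0;
  return (2 * intersection) / total;
}
```

A perfect guess scores 1, a guess sharing some phrasing scores somewhere in between, and a completely unrelated guess scores near 0 — which is what makes it usable as a per-round game score.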
## Conclusion This game is not developed to its entirety and has room for improvement. Please feel free to fork the repo and make changes as you please. Do let me know if I made any grave blunders in coding this. Thanks for reading this post. Stay safe and lend a hand to another :)
vigneshiyergithub
845,774
What is Hacktoberfest 2021?
Hacktoberfest 2021 is the 8th edition of Hacktoberfest hosted by DigitalOcean. It is an open source...
0
2021-09-29T18:33:38
https://dev.to/rakeshsantwani/what-is-hacktoberfest-2021-4ab
hacktoberfest, opensource, github, contributorswanted
**Hacktoberfest** 2021 is the 8th edition of Hacktoberfest hosted by DigitalOcean. It is an open source festival celebrated during October every year, encouraging people worldwide to actively participate and contribute to participating open source projects hosted across GitHub and GitLab. In fact, Hacktoberfest 2020 attracted 169,886 participants and 116,361 participating open source repositories, representing 135 countries. You can simply [register yourself here](https://hacktoberfest.digitalocean.com/) and start contributing to any participating open source project from Oct 01 - Oct 31. And if you meet the [contribution criteria](https://hacktoberfest.digitalocean.com/resources/participation) set by DigitalOcean, you’ll receive a Hacktoberfest t-shirt🤩 from DigitalOcean! Additionally, if you make successful contributions to [LoginRadius open source projects](https://github.com/LoginRadius), you’ll separately receive a LoginRadius branded Hacktoberfest t-shirt from us, recognizing and thanking you for your valuable contributions. # How to Contribute? The exciting part about being involved in the open source community is that no matter how small or big your contributions are, the community will welcome your efforts and collaborate with you positively, sharing feedback and expressing gratitude. Especially with LoginRadius open source projects, your contributions can make a big difference! We also try making your collaboration with us more enjoyable. Please note that only contributions that add significant value to our projects will be eligible for swag. This will be at our sole discretion. But you may go ahead and contribute in any way you would like. ## Prerequisites - Git - GitHub - Forking a repository - Creating a pull request ## Win LoginRadius Branded Swag By actively participating in Hacktoberfest, you make the open source community more sustainable, and, in turn, this makes you feel at home. 
Empowering one another is what best depicts the open source philosophy and is a reward in itself. However, we want to make it more fun by sending cool t-shirts to all the accepted/eligible contributors. Just make sure to fill this form after you raise a pull request. Don’t forget that your contributions to our projects also count towards your overall Hacktoberfest contributions calculated by DigitalOcean — and if you’re eligible, they’ll send you another t-shirt as well. Let’s have fun with Hacktoberfest 2021! ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1godkvgae6nss8pu2q46.gif) I am also a mentor in Hacktoberfest, so feel free to connect with me. Thanks for reading....
rakeshsantwani
845,787
Java: Convert String to a Number
When it comes to converting a string of numbers, like: "1234" into a number the first question that...
0
2021-09-29T19:21:50
https://dev.to/haytamkh7/java-convert-string-to-a-number-28jn
java, programming, coding, oop
When it comes to converting a string of numbers, like `"1234"`, into a number, the first question that jumps to your mind is: "They look like a number! Why do I need to convert a number to a number?!"

Well, the computer doesn't see things the way we see them; a **_string_** consisting of numbers is not a number! Or at least, not according to the computer. A string of digits is a bunch of characters, and those characters do not represent a numeric value.

So in Java we have two ways to convert a string of 'numbers' to a real number, and here is how this could be done:

> In this post I'm going to talk about converting a string to an int.

### Using Integer.parseInt()

This method will return the primitive numeric value of a string that contains **only** numbers; otherwise it will throw an error (`NumberFormatException`). For example:

```java
String testStr = "150";
try {
    System.out.println(Integer.parseInt(testStr));
} catch (NumberFormatException e) {
    System.out.print("Error: String doesn't contain a valid integer. " + e.getMessage());
}
```

### Using Integer.valueOf()

This method will return an `Integer` object for the passed parameter; if the passed parameter isn't valid it will throw an error. For example:

```java
String testStr = "200";
try {
    System.out.println(Integer.valueOf(testStr));
} catch (NumberFormatException e) {
    System.out.print("Error: String doesn't contain a valid integer. " + e.getMessage());
}
```
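One practical difference worth calling out: `parseInt` returns a primitive `int`, while `valueOf` returns a boxed `Integer` object. Thanks to auto-unboxing you can do arithmetic with either. The try/catch pattern shown above is also commonly wrapped in a small helper; the class and method names below are illustrative, not from this article:

```java
public class ParseDemo {
    // Returns the parsed int, or the given fallback when the string
    // is not a valid integer (instead of letting the exception escape).
    static int parseOrDefault(String s, int fallback) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        // parseInt yields a primitive int; valueOf yields an Integer object.
        int primitive = Integer.parseInt("150");
        Integer boxed = Integer.valueOf("150");
        System.out.println(primitive + boxed); // prints 300 (boxed is auto-unboxed)

        // Invalid input falls back to the default instead of crashing.
        System.out.println(parseOrDefault("12a4", -1)); // prints -1
    }
}
```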
haytamkh7
845,813
Welcome Thread - v144
Welcome to DEV! A thread of hellos and intros.
0
2021-09-29T19:49:08
https://dev.to/thepracticaldev/welcome-thread-v144-287
welcome
---
title: Welcome Thread - v144
published: true
description: Welcome to DEV! A thread of hellos and intros.
tags: welcome
canonical_url:
---

![Willy Wonka Welcome](https://media.giphy.com/media/OJqimXwqG7CQE/giphy.gif?cid=ecf05e47phjd4n8s48dw6z2to68w2n5km6609s75lh8pdrer&rid=giphy.gif&ct=g)

### Welcome to DEV!

1. Leave a comment below to introduce yourself! You can talk about what brought you here, what you're learning, or just a fun fact about yourself.

2. Reply to someone's comment, either with a question or just a hello. 👋

**Great to have you in the community!**
thepracticaldev
846,118
Deploying Spring Boot MVC with JSP project to AWS Elastic Beanstalk
Disclaimer: Try this at your own risk. AWS resources required for DemoApplication should be covered...
0
2021-10-01T00:20:23
https://towardsaws.com/deploying-spring-boot-mvc-with-jsp-project-to-aws-elastic-beanstalk-dc665b6b8849
java, jsp, aws, beanstalk
_Disclaimer: Try this at your own risk. AWS resources required for DemoApplication should be covered under the free tier. If you’re not on a free tier, make sure to clean up the resources provisioned soon after trying this out. Check if there are any leftovers remaining inside an Amazon S3 bucket after deleting the application._

There are multiple options available when it comes to the deployment of a Spring Boot project. In this story, we’ll be focusing on how to deploy a project with JSP pages using a service available in Amazon Web Services.

___

Throughout the last couple of Medium stories, I have discussed how to create a Spring Boot project and add different features to those projects. In order to proceed with this story you need to have a Spring Boot MVC project. If you don’t have one, check out my Medium story [Implementing Spring Boot MVC CRUD operations with JPA and JSP](https://mmafrar.medium.com/implementing-spring-boot-mvc-crud-operations-with-jpa-and-jsp-4dfa1882b4a3) or feel free to clone the below repository.

{% github mmafrar/spring-mvc-crud-example %}

Once you have a project to work on, open the **build.gradle** file and, under plugins, change `id 'java'` to `id 'war'`, then add the below dependencies to the project.

```
implementation 'org.apache.tomcat.embed:tomcat-embed-jasper'
implementation 'org.springframework.boot:spring-boot-starter-tomcat'
```

If you’re using Maven as the build tool, update the ***pom.xml*** file instead. Then open the main Spring Boot application class and make the changes below.

{% gist https://gist.github.com/mmafrar/74afc82cf59de871d8b60870dd2012f7 %}

Next, log in to the AWS Management Console. From the search bar, look for Elastic Beanstalk and navigate to the page. Click on the ***Create Application*** button to open the "Create a web app" form. Make sure to provide a suitable name for the application. Select the values shown in the image for the dropdowns. Upload the *.war file and click on Create application.

![Create a web app form](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fmr34sq2zu4lo9s7goa.png)

You’ll be shown the steps and progress of provisioning the resources needed to deploy the application. This will take some time based on the size and resources required by the project. If the deployment was successful, you’ll see the screen below.

![Environment page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/shejxy56j38s2hvu1ir5.png)

Click on the domain name displayed on the screen to open the application. Use the **Upload and deploy** button to publish a different version of the code.

___

Happy Coding! Below is a DEV Community video I have published. Also, you might be interested in checking out my Medium story [Setting up continuous integration in Spring Boot with GitHub and CircleCI](https://medium.com/geekculture/setting-up-continuous-integration-in-spring-boot-with-github-and-circleci-f2e68690e138).

{% link mmafrar/how-to-add-an-icon-for-an-ios-app-312c %}

Cover Image: Photo by [Austin Distel](https://unsplash.com/@austindistel) on [Unsplash](https://unsplash.com)
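For reference, the build-script changes the article describes (switching the `java` plugin to `war` and adding the Tomcat/Jasper dependencies) amount to a **build.gradle** fragment along these lines. This is a sketch, not a complete build file; keep your project's existing plugin versions and other dependencies as they are:

```groovy
plugins {
    // ... your existing Spring Boot plugins stay as they are ...
    id 'war' // changed from id 'java' so the build produces a deployable WAR
}

dependencies {
    // ... your existing dependencies ...
    implementation 'org.apache.tomcat.embed:tomcat-embed-jasper'         // compiles JSP pages
    implementation 'org.springframework.boot:spring-boot-starter-tomcat' // Tomcat support for the WAR
}
```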
mmafrar
846,131
Animation In Flutter: Animation Class, Tween & CurvedAnimation
In the previous article, we saw how to use AnimationController to control our animation. We will...
0
2021-09-30T05:18:35
https://dhruvnakum.xyz/animation-in-flutter-animation-class-tween-and-curvedanimation
flutter, dart, animation, programming
* In the previous article, we saw how to use AnimationController to control our animation. We will further customize our basketball animation in this article.
* The following is an example of our earlier animation:

![animation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rox2gnwfokbdugbyelwc.gif)

* Because of the lack of a smooth bouncing effect, the above animation appears strange. Let's make this animation better.
* But first, let's have a look at the basic Animation library that comes with the Flutter SDK.

----------

## Animation:

* Animation is a core class of Flutter's animation library. It is an abstract class, which means we won't be able to instantiate it directly.

![animtioninstantiate](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebkwqmymnoy0ccaen9i0.png)

* We can track the animation's completion and dismissal using an instance of the Animation class. In addition, we can check the status of the currently running animation.
* Let's first create an Animation instance in our app.

```dart
class _MyHomePageState extends State<MyHomePage> with TickerProviderStateMixin {
  late Animation _animation;
  //.....
}
```

* A value of type T is assigned to an animation. It means that we can make an animation for almost any data type. For example, we can make:

![animationType](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4tmwdj6xs22zgpgksmw.png)

* In our case, we want an animation of type double, because we want to translate the ball's position, which is of type double. So let's update the Animation type.

```dart
class _MyHomePageState extends State<MyHomePage> with TickerProviderStateMixin {
  late Animation<double> _animation;
  //.....
}
```

* As previously stated, Animation is an abstract class. It has no idea what is going on on-screen. It only understands the values that are passed to it and the state of that specific animation.
* The Animation class gives us more control over defining the upperBound and lowerBound values of the animation, i.e. the begin and end values.
* When the controller plays this animation, it generates interpolated values between the begin and end points. We use those interpolated values to animate our widget.
* But now comes the question of how to make an animation. To do so, let us first define Tween.

---------

Read the rest on [my website](https://dhruvnakum.xyz/animation-in-flutter-animation-class-tween-and-curvedanimation)
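The interpolation step mentioned above is plain math and easy to see outside Flutter. Below is a small framework-free Java sketch of linear interpolation between begin and end values; it is an illustration only (not Flutter/Dart code), showing the kind of mapping a `Tween<double>` applies to each tick of the controller:

```java
public class LerpDemo {
    // Linearly interpolates between begin and end for t in [0.0, 1.0],
    // analogous to how a tween maps the controller's progress to a value.
    static double lerp(double begin, double end, double t) {
        return begin + (end - begin) * t;
    }

    public static void main(String[] args) {
        // Simulate a controller ticking from 0.0 to 1.0 in 5 steps,
        // animating a ball's y-position from 0 to 300 logical pixels.
        for (int i = 0; i <= 4; i++) {
            double t = i / 4.0;
            System.out.println(lerp(0, 300, t));
        }
    }
}
```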
redstar25
846,177
[Java] Converting PDF to SVG
SVG (Scalable Vector Graphics) is a vector image format that can be searched, indexed, scripted,...
0
2021-09-30T06:40:10
https://dev.to/codesharing/java-converting-pdf-to-svg-5408
java, pdf, svg, api
SVG (Scalable Vector Graphics) is a vector image format that can be searched, indexed, scripted, compressed, and can be scaled in size without loss of quality. In this article, I will share the following two ways to convert a PDF file to SVG format using Free Spire.PDF for Java.

● Converting each page of the PDF file into a single SVG file.
● Converting multiple pages of the PDF file into one SVG file.

**Import jar dependency**

**Method 1:** Download the [free library](https://www.e-iceblue.com/Download/pdf-for-java-free.html) and unzip it. Then add the Spire.Pdf.jar file to your project as a dependency.

**Method 2:** Directly add the jar dependency to a Maven project by adding the following configurations to the pom.xml.

```xml
<repositories>
    <repository>
        <id>com.e-iceblue</id>
        <name>e-iceblue</name>
        <url>http://repo.e-iceblue.com/nexus/content/groups/public/</url>
    </repository>
</repositories>
<dependencies>
    <dependency>
        <groupId>e-iceblue</groupId>
        <artifactId>spire.pdf.free</artifactId>
        <version>4.4.1</version>
    </dependency>
</dependencies>
```

**Sample 1: Converting a 3-page PDF file to 3 SVG files**

```java
import com.spire.pdf.*;

public class ToSVG {
    public static void main(String[] args) {
        // Load the PDF file
        PdfDocument pdf = new PdfDocument();
        pdf.loadFromFile("Island.pdf");

        // Save to SVG image
        pdf.saveToFile("ToSVG.svg", FileFormat.SVG);
        pdf.close();
    }
}
```

![ToSVG1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qplrpe4mm0gjqtep3472.jpg)

**Sample 2: Converting a 3-page PDF file to 1 SVG file**

```java
import com.spire.pdf.*;

public class PDFtoSVG {
    public static void main(String[] args) throws Exception {
        String inputPath = "Island.pdf";
        PdfDocument document = new PdfDocument();
        document.loadFromFile(inputPath);
        document.getConvertOptions().setOutputToOneSvg(true);
        document.saveToFile("output.svg", FileFormat.SVG);
        document.close();
    }
}
```

![ToSVG2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pdal6wkx91jijmq1x0d.jpg)
codesharing