Query in Apache CouchDB: Views
2021-10-30T17:05:18
https://dev.to/yenyih/query-in-apache-couchdb-views-4hlh
database, node, beginners
In this article, I will talk about how to query documents in Apache CouchDB via views.

## What is Apache CouchDB?

First, a short introduction to CouchDB for those who don't know it. Apache CouchDB is an open-source document-oriented NoSQL database implemented in Erlang. It is very easy to use, as CouchDB builds on the ubiquitous HTTP protocol and JSON data format. Do check out the [official website](https://couchdb.apache.org/) for more detail. 😉

---

Alright, back to our main topic today.✌ Before we talk about what a view is, I need to introduce two important things in CouchDB.

## Query Server

The first thing to introduce is the CouchDB query server. What is a query server? From the official documentation:

> The Query server is an external process that communicates with CouchDB by JSON protocol through stdio interface and processes all design functions calls, such as JavaScript views.

By default, CouchDB has a built-in JavaScript query server running on [Mozilla SpiderMonkey](https://spidermonkey.dev/). That means we can define a JavaScript function to tell CouchDB which documents we want to query.

> Note: If you are not comfortable with JavaScript, you can use a query server for another programming language, such as Python, Ruby, or Clojure. You can find the query server configuration [here](https://docs.couchdb.org/en/latest/config/query-servers.html).

Okay, so where do we define that JavaScript function? 🤔 That brings us to the second thing.

## Design Document

A design document is a special document within a CouchDB database. You can use design documents to build indexes, validate document updates, format query results, and filter replications. Below is an example of the design document structure.
```
{
  "_id": "_design/example",
  "views": {
    "view-number-one": {
      "map": "function (doc) {/* function code here */}"
    },
    "view-number-two": {
      "map": "function (doc) {/* function code here */}",
      "reduce": "function (keys, values, rereduce) {/* function code here */}"
    }
  },
  "updates": {
    "updatefun1": "function(doc, req) {/* function code here */}",
    "updatefun2": "function(doc, req) {/* function code here */}"
  },
  "filters": {
    "filterfunction1": "function(doc, req) {/* function code here */}"
  },
  "validate_doc_update": "function(newDoc, oldDoc, userCtx, secObj) {/* function code here */}",
  "language": "javascript"
}
```

Let's break it down chunk by chunk.

### 1. CouchDB's document ID

_**Underscore id**_ (`_id`) is a reserved property key representing the ID of the JSON document you save in the database. If the ID starts with `_design/`, the document is a design document.

```
"_id": "_design/example",
```

### 2. View functions

We define our views' query logic here, mostly as JavaScript functions, since JavaScript is the default query server language. Later we will go into more detail on view functions.

```
"views": {
  "view-number-one": {
    "map": "function (doc) {/* function code here */}"
  },
  "view-number-two": {
    "map": "function (doc) {/* function code here */}",
    "reduce": "function (keys, values, rereduce) {/* function code here */}"
  }
},
```

### 3. Update functions

Update functions are server-side functions saved in CouchDB that we can invoke via a request to create or update a document.

```
"updates": {
  "updatefun1": "function(doc, req) {/* function code here */}",
  "updatefun2": "function(doc, req) {/* function code here */}"
},
```

### 4. Filter functions

Filter functions are used to filter the database changes feed.

```
"filters": {
  "filterfunction1": "function(doc, req) {/* function code here */}"
},
```

### 5. Validate document update function

As the name suggests, you can define validation rules in this function to validate documents when they are posted into CouchDB.
```
"validate_doc_update": "function(newDoc, oldDoc, userCtx, secObj) {/* function code here */}",
```

### 6. Language

The language property tells CouchDB which programming language's query server this design document belongs to.

```
"language": "javascript"
```

I won't dive deep into _update functions_, _filter functions_, and the _validate document update function_, as our focus today is view functions. If you are interested, leave a message below to let me know 😉 and I can share a post about how to use update functions too.

---

✈ Back to views 🛬

## What are views?

A view in Apache CouchDB is actually a little similar to a view in a normal SQL database.

> A database view is a subset of a database and is based on a query that runs on one or more database tables.

The difference is that a CouchDB view is based on map/reduce. As the example design document above shows, a view consists of two property keys: a _**map function**_ and a _**reduce function**_ (the reduce function is optional).

### 1. Map function 🔍

Map functions accept a single document as the argument and (optionally) `emit()` key/value pairs that are stored in a view. Let's say we have a list of blog post documents saved in our CouchDB database.

```
[
  {
    _id: "c2ec3b79-d9ac-45a8-8c68-0f05cb3adfac",
    title: "Post One Title",
    content: "Post one content.",
    author: "John Doe",
    status: "submitted",
    date: "2021-10-30T14:57:05.547Z",
    type: "post"
  },
  {
    _id: "ea885d7d-7af2-4858-b7bf-6fd01bcd4544",
    title: "Post Two Title",
    content: "Post two content.",
    author: "Jane Doe",
    status: "draft",
    date: "2021-09-29T08:37:05.547Z",
    type: "post"
  },
  {
    _id: "4a2348ca-f27c-427f-a490-e29f2a64fdf2",
    title: "Post Three Title",
    content: "Post three content.",
    author: "John Doe",
    status: "submitted",
    date: "2021-08-02T05:31:05.547Z",
    type: "post"
  },
  ...
]
```

If we want to query posts by status, we can create a JavaScript _map_ function as below:

```
function (document) {
  emit(document.status, document);
}
```

![Key Value Show](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhbsadr1s8eqjtd3v7tm.JPG)

The whole design document will look like this:

```
{
  "_id": "_design/posts",
  "views": {
    "byStatus": {
      "map": "function (document) { emit(document.status, document); }"
    }
  },
  "language": "javascript"
}
```

After we save this design document into CouchDB, CouchDB will start building the view. That's it, we have created a CouchDB view successfully.🎉🥳 To use the view, just send an HTTP GET request to the URL below:

```
http://{YOUR_COUCHDB_HOST}:5984/{YOUR_DATABASE_NAME}/_design/posts/_view/byStatus
```

Result:

![Result One](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x1re6zreifk1b6swuocv.JPG)

If we want only the posts with status "draft", we call the request with the parameter key="draft", and it will return only those posts.

```
http://{YOUR_COUCHDB_HOST}:5984/{YOUR_DATABASE_NAME}/_design/posts/_view/byStatus?key="draft"
```

Result:

![Result Two](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iwrv2eom0bani2g9rrm9.JPG)

Say another map function emits documents by date:

```
function (document) {
  emit(document.date, document);
}
```

Then we can query blog posts by date range.

```
http://{YOUR_COUCHDB_HOST}:5984/{YOUR_DATABASE_NAME}/_design/posts/_view/byDate?startkey=""&endkey="2021-09-29\uffff"
```

Result:

![Result Date Range](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/osm3wtmnhym1avx9pi26.JPG)

In the query above, I defined a start date via _startkey_ and an end date via _endkey_, and CouchDB returns the posts between them. My startkey is an empty string, however, meaning that I don't care about the start date: just give me everything from the first post document up to the date of the endkey.
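As a side note, the map step is plain JavaScript, so you can sketch what CouchDB does with a tiny emulation. This is not CouchDB's actual view engine, just an illustration of the `emit` model; in a real map function `emit` is a global, while here it is passed in explicitly, and the sample posts are shortened from the example above:

```javascript
// Toy emulation of a CouchDB map view (illustration only, not CouchDB itself).
const posts = [
  { _id: "1", title: "Post One Title", author: "John Doe", status: "submitted" },
  { _id: "2", title: "Post Two Title", author: "Jane Doe", status: "draft" },
  { _id: "3", title: "Post Three Title", author: "John Doe", status: "submitted" },
];

// Build the view: run the map function against every document,
// collecting each emitted key/value pair as a view row.
function buildView(docs, mapFn) {
  const rows = [];
  const emit = (key, value) => rows.push({ key, value });
  docs.forEach((doc) => mapFn(doc, emit));
  // CouchDB keeps view rows sorted by key.
  return rows.sort((a, b) => (a.key < b.key ? -1 : a.key > b.key ? 1 : 0));
}

// The same map logic as in the byStatus view above.
const byStatus = (doc, emit) => emit(doc.status, doc);

const view = buildView(posts, byStatus);

// ?key="draft" is conceptually just a lookup over the sorted rows.
const drafts = view.filter((row) => row.key === "draft");
console.log(drafts.map((row) => row.value.title)); // [ 'Post Two Title' ]
```

The takeaway is that a view is a precomputed, key-sorted list of emitted rows, which is why key and range lookups against it are fast.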
> Tip: If you want to reverse the returned results, just add the parameter "descending=true".

---

### 2. Reduce/Rereduce ✂

The reduce function is optional in a view. It operates on the map function's result, so you can perform SUM, COUNT, or custom logic to filter or derive any result you desire. Let's say we have a map function that emits (month, expenses):

```
function (document) {
  emit(document.month, document.expenses);
}
```

Example result:

![Result Month](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sqrchmdb40fbs792lh62.JPG)

If we want February's expenses only, we pass the parameter _**key="february"**_ and only those rows are returned. Based on the map result, we can add a reduce function to sum the February expense amounts:

```
function (keys, values, rereduce) {
  return sum(values);
}
```

Result for _**key="february"**_ after reduce:

![Result Reduce](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p2uihbjq0o3naih9wuvv.JPG)

That's it. We get the sum instantly, no matter how many documents are in the database. This is the power of map/reduce. You can even rereduce, that is, run a second round of reduce logic over the first reduce's results. For more detail, you may check out the official documentation [here](https://docs.couchdb.org/en/latest/ddocs/views/intro.html).

---

## In conclusion

CouchDB views are powerful, flexible, and very fast at producing query results, much like Hadoop. However, CouchDB only supports one layer of map/reduce derivation. If you do not understand what MapReduce is, you may check out this [YouTube video](https://www.youtube.com/watch?v=43fqzaSH0CQ&ab_channel=internet-class).

Thank you for reading. 😊
yenyih
---
HTML - Let's Talk About Semantics
2021-10-30T20:08:30
https://dev.to/alserembani/html-lets-talk-about-semantics-4jo4
webdev, html, beginners
Salam and hello, folks! Today, let's discuss semantics. So, what's it about?

---

## Semantics

So, what is the dictionary meaning of semantics?

> Semantics is the study of meaning, reference, or truth.

Then, how about in the computer science field?

> Semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages.

To put it simply, semantics is a way to easily understand what programming syntax means, so your code will be easily read, or the right word is "more verbose". Is it just about you being able to read the code? Let's proceed with the discussion.

---

## The screen readers

Technology in general, and the web specifically, is not just about building products; it goes beyond that. Delivering a quality web means that more and more audiences can benefit from it, whether they are visually challenged or face anything else that makes the experience of visiting the web difficult. At least, that is one of the definitions I have come up with throughout my years (though they are still few compared to other "lions" in the web development industry). To assist with these kinds of difficulties, semantics is one aspect that should be highlighted, so screen readers can recognise each element inside your HTML, and people who rely on accessibility tools can benefit from it.

---

## Crawlers

Other than that, there are entities that will crawl through your page and see what it features. Beyond all your Meta tags (oooops.. I mean, meta tags), crawlers will walk through your DOM, look at your content, and try to classify the page based on it. That is why semantics is important if you really prioritise your Search Engine Optimisation (SEO), so crawlers can keep their eyes on your pages and serve them to the deserving audience.

---

## Verbosity

And of course, for developer experience, verbosity is one of the scoring points, so your page can be easily maintained.
Provided the elements give contextual meaning, readers, whether technical or non-technical, can distinguish which part of your code needs attention when the time comes. Okay, so my point is that semantics is quite important for these aspects:

- Accessibility
- Verbosity
- SEO

Maybe there are a lot more reasons for semantics, so put in the comments below what you think about semantics 👇 So..... onto the HTML!

---

## Semantics in HTML

So, what about semantics in HTML? All the elements in HTML give meaning to what you put inside them, whether they contain navigation, or information, or maybe emphasise the content. It's up to you, of course, but improving your HTML elements does give a lot of benefit to the audience 🧑 and the robots 🤖. Let's dive in.

### [Text](https://developer.mozilla.org/en-US/docs/Learn/HTML/Introduction_to_HTML/HTML_text_fundamentals)

```html
<p>Hello DEVto!</p>
```

As simple as the paragraph element `<p>`, displaying text to the user. To mark that a piece of text is a title, you could use a class or style to indicate it, sure. Let's do just that.

```html
<p class="title">Hello DEVto!</p>
```

Is this good practice? Well, it is subjective whether it is right or wrong, but to help screen readers and crawlers understand your content, there are several other tags that can assist you, and you can improve the styling using those tags.

```html
Headings (title) => <h1>, <h2>, <h3>, <h4>, <h5>, <h6>
Text highlights => <strong>, <em>, <small>, <sub>, <sup>, <abbr>, <code>
Text formatting only => <b>, <i>
```

You can actually wrap your text without any of these tags, but I would not recommend that practice, because screen readers won't know what those pieces of text refer to.
```html
Not recommended:
<div>Hello DEVto</div>

Instead:
<div>
  <p>Hello DEVto</p>
</div>
```

### [Containers](https://developer.mozilla.org/en-US/docs/Glossary/Semantics)

```html
<div>
  <p>Hello DEVto</p>
</div>
```

I have seen a lot of pages that use `div` elements, and that is okay. Of course, `div` is a way to containerise your content, but oftentimes I see the `div` element abused, especially when used for buttons 💀. Not exactly abused, but it is better to give it meaningful context. For example, for navigation, you can use the `nav` element. You can put all your navigation content inside `nav`, so crawlers understand that the division represents your page navigation, or whatever navigation context you intend.

```html
Navigation => <nav>
Layouts => <main>, <section>, <article>, <header>, <footer>
Contexts => <mark>, <figure>, <figcaption>, <summary>, <details>, <aside>, <address>
```

### Forms

Most websites nowadays require input from users, whether for newsletters, registration, login, or the exact thing I did so this article could be read by you! Usually, for a simple single input, I can just put `<input>`. But for more than two inputs, I will use `<form>` for the purpose.

```html
<form>
For simple input => <input>
For more diverse options => <select><option>, <textarea>, <meter>, <datalist>
For labeling => <label>, <legend>, <progress>
For actions => <button>
</form>
```

But heed my advice: **NEVER USE DIV FOR BUTTONS**, for screen readers won't detect it for keyboard users. This is what I am talking about regarding people abusing `<div>` for buttons.

### Other existing semantic elements

Well, of course, there are other elements that already carry context.

```html
<table> - <thead>:<th>, <tbody>:<tr>, <td>
<ul> and <ol> for lists, <dl> for more advanced lists
```

Well, there are actually [a rough total of 100 semantic elements](https://developer.mozilla.org/en-US/docs/Web/HTML/Element) you can use to give more meaning, of course!
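Putting several of these elements together, a minimal semantic page skeleton could look like the invented example below (the text and addresses are placeholders, not from any real site):

```html
<body>
  <header>
    <h1>My Blog</h1>
    <nav>
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/about">About</a></li>
      </ul>
    </nav>
  </header>
  <main>
    <article>
      <h2>Let's Talk About Semantics</h2>
      <p>Semantics give meaning to your markup.</p>
      <aside>Related reading goes here.</aside>
    </article>
  </main>
  <footer>
    <address>contact@example.com</address>
  </footer>
</body>
```

With this structure, screen readers and crawlers can tell the navigation, the main content, and the supporting material apart without a single class name.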
---

## Is semantic HTML a must?

Well, I wouldn't say you have to, but try to improve your content so others can benefit from it, and it might as well help you in future with your page's discoverability! The reason behind my writing today is to encourage web developers to pay attention to semantics. You might be thinking, "There are a lot of tags I have to know, then." Well, HTML elements are interchangeable, which means that changing from `div` to `main` doesn't break any of your page content, as long as you know which context it belongs to, so improvements can even come later, given you still have the motivation to go back to your code and replace it 🤨.

---

## My personal approach

Oftentimes, you want to make your design system first before creating your content. You can standardise your content globally using these semantic tags. For example, in my CSS sheet:

```css
h1 {
  font-size: 3rem;
}

h2 {
  font-size: 2.75rem;
}

h1, h2, h3 {
  font-weight: 700;
  padding: 1rem 0px 0.5rem;
}

section {
  padding: 1rem 0.5rem;
}
```

Later, when you want to develop more complex components, you can utilise classes.

```html
<section class="blog-card">
```

```css
.blog-card {
  border-radius: 4px;
}
```

Or with Tailwind utilities:

```css
.blog-card {
  @apply bg-blue-500;
}
```

With these, your design system will be organised, standardised, and easy to debug later on. Well, it is up to developers which approach they prefer.

---

## What next after semantics?

If you are into semantics, you might be interested in the [ARIA standards in HTML](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA), which really boost the screen reading and accessibility experience. Well, what is your opinion on semantics? Write what you think below, and keep the discussion healthy. A meme or two should be okay for a little fun, but keep it in context. That's it for this week. And with that, have some rest, and peace be upon ya!
alserembani
---
Demystifying Principal Component Analysis: Handling the Curse of Dimensionality
2021-11-02T13:33:54
https://dev.to/charfaouiyounes/demystifying-principal-component-analysis-handling-the-curse-of-dimensionality-2hm6
datascience, datascienceformi, dimentionalityreduction
Generally, in machine learning problems, we often encounter too many variables and features in the dataset on which the machine learning algorithm is trained. The higher the number of features in a dataset, the harder it gets to visualize and interpret the results of model training and the model's performance. Moreover, when dealing with such a massive dataset in terms of features, the computational costs should be considered. It would be great if there were a method to eliminate or exclude some features or dimensions to obtain a smaller feature space. This is where dimensionality reduction techniques come into play. In this article, we will see what this concept is, along with the details of a powerful algorithm called principal component analysis that reduces the dimensionality of a dataset. That way, you get a better understanding of these types of problems and how you can solve them.

# Dimensionality Reduction

Dimensionality reduction is a set of machine learning techniques for reducing the number of features by obtaining a set of principal variables. This process can be carried out using several methods that simplify the modeling of complex problems, eliminate redundancy, and reduce the possibility of overfitting. Many techniques and algorithms can reduce the dimensionality of data, such as feature selection processes, which apply both statistical measures and search methods to obtain a smaller feature space. There are also linear methods (like PCA), principal curves, kernel methods, local dimensionality reduction, nonlinear auto-associators, vector quantization methods, and multidimensional scaling methods. There are also deep learning solutions classified as unsupervised learning; this subset has its own neural network architecture, called the autoencoder. Today, we're going to talk about one of the foundational algorithms in dimensionality reduction: principal component analysis.
## Advantages of Dimensionality Reduction

Whenever there are massive datasets, dimensionality reduction is what comes to mind for machine learning engineers and AI developers. It helps them perform data visualization and complex data analysis. Here are some other advantages of using these techniques:

- Reduces both the space and time complexity required for machine learning algorithms.
- Improves the interpretation of machine learning model parameters.
- Makes it easier to visualize data when reduced to very low dimensions, such as 2D or 3D.
- Avoids the curse of dimensionality.

**Example**

Say we're doing a simple classification problem that relies on both humidity and rainfall. In such a case, these features may overlap. But oftentimes these highly-correlated features can be collapsed into just one feature, which helps shrink the overall feature space.

# Principal Component Analysis

Principal component analysis (PCA) is an algorithm that uses a statistical procedure to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. It is one of the primary methods for performing dimensionality reduction; the reduction ensures that the new dimensions maintain the original variance in the data as well as they can. That way we can visualize high-dimensional data in 2D or 3D, or use it in a machine learning algorithm for faster training and inference.

## PCA steps

PCA works with the following steps:

- Standardize (or normalize) the data. By default, this is the machine learning engineer's responsibility.
- Calculate the covariance matrix from this standardized data (with dimension 𝑑).
- Obtain the eigenvectors and eigenvalues from the newly-calculated covariance matrix.
- Sort the eigenvalues in descending order, and choose the 𝑘 eigenvectors that correspond to the 𝑘 largest eigenvalues, where 𝑘 is the number of dimensions of the new feature subspace (𝑘 ≤ 𝑑).
- Construct the projection matrix 𝑊 from the 𝑘 selected eigenvectors.
- Transform the original dataset 𝑋 by simple multiplication with 𝑊 to obtain the 𝑘-dimensional feature subspace 𝑌.

Next, we dive deeper into the details of each step to more fully understand the concepts.

## PCA in-depth

We'll be refreshing some foundational linear algebra and statistics concepts in this section, alongside the details of the previous steps. The math is important here because it's the core of the principal component analysis algorithm.

### Data Standardization

Standardizing (or normalizing) the data before applying PCA is critical. This is because, while identifying the best principal components to retain, PCA gives more importance to features with higher variances than to features with extremely low variances. Say you have a dataset with features in different units: one feature is in kilometers and another is in centimeters, and both change by the same amount. PCA will see the kilometer feature as reflecting minor changes, while the centimeter feature reflects bigger changes. In this case, standardization is required to tackle the issue; otherwise, PCA will give higher preference to the centimeter variable.

### Covariance Matrix

Variance in statistics is a measure of the variability or spread in a set of data. Mathematically, it's the average squared deviation from the mean. Covariance is a measure of the amount or degree to which corresponding elements from two sets of ordered data move in the same direction. A covariance matrix generalizes the notion of variance to multiple dimensions (or multiple features); it describes the covariance for each pair of features (or random variables).
A covariance matrix can be calculated as follows:

![Covariance Matrix Formula](https://miro.medium.com/max/594/1*wEhH-e511VQ6CMH_5TFcow.png)

What the covariance matrix does for PCA is simply describe the relationships between the variables, so the algorithm knows how to reduce their dimensionality without losing a lot of information.

### Computing Eigenvectors and Eigenvalues

An eigenvector of a matrix (linear transformation) is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it, and the eigenvalue is the factor by which the eigenvector is scaled. These concepts are at the core of PCA. We're going to calculate these elements for the covariance matrix from the previous step. The eigenvectors (or principal components) determine the directions of the new feature space, and the eigenvalues determine their magnitude. In other words, the eigenvalues explain the variance of the data along the new feature axes. The eigenvalues and eigenvectors of a matrix can be calculated as follows:

![Eigenvalue Equation](https://miro.medium.com/max/422/1*rNCM_2BNF_ZcS7MAkE4quQ.png)

where 𝜆 is the eigenvalue and 𝑋 is the eigenvector. To solve the previous equation, we need a zero determinant:

![Characteristic Equation](https://miro.medium.com/max/454/1*AgjlWvX0QbKxYSzOJoRS3Q.png)

With that, we find the required eigenvalues and eigenvectors.

### Sort Eigenvalues

The typical goal of PCA is to reduce the dimensionality of the original feature space by projecting it onto a smaller subspace, where the eigenvectors will form the axes. To decide which eigenvectors can be dropped without losing too much information for the construction of the lower-dimensional subspace, we need to inspect their corresponding eigenvalues:

- The eigenvectors with the lowest eigenvalues carry the least information about the distribution of the data; those are the ones that can be dropped.
- The eigenvectors with the highest eigenvalues carry the most information about the distribution of the data; those are the ones that need to be preserved.

To do so, a common approach is to rank or sort the eigenvalues from highest to lowest, and then choose the top 𝑘 eigenvectors.

### Projection matrix

This is the easiest part: the construction of the projection matrix that will be used to transform the dataset into the new feature subspace. The way to construct this matrix is to concatenate the top 𝑘 eigenvectors from the previous step.

### Transform the original dataset

We've finally arrived at the last step. Here, we'll use the projection matrix 𝑊 from the previous step to transform our samples into the new subspace. We can achieve that with simple matrix multiplication, via the equation 𝑌 = 𝑋 × 𝑊, where 𝑌 is the matrix of our transformed samples and 𝑋 is our original dataset matrix.

# Coding

Now that we've covered the steps and the math behind PCA, we'll look at two ways to implement it: first by coding it ourselves, and second through the use of an external library.

## Hard coding

To implement the algorithm ourselves, we only need NumPy. Then, following the aforementioned steps, we get principal component analysis.
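Before the full implementation, the six steps can be sanity-checked numerically on a tiny two-feature dataset. The numbers below are invented for illustration, and for simplicity the "standardize" step is reduced to mean-centering:

```python
import numpy as np

# Tiny made-up dataset: 10 samples, 2 correlated features.
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
              [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])

# Step 1: center the data (full standardization also divides by the std).
Xc = X - X.mean(axis=0)

# Step 2: covariance matrix of the centered data.
cov = np.cov(Xc.T)

# Step 3: eigenvalues/eigenvectors (eigh is for symmetric matrices).
eig_vals, eig_vecs = np.linalg.eigh(cov)

# Step 4: sort by eigenvalue, descending.
order = np.argsort(eig_vals)[::-1]
eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]

# Step 5: projection matrix W from the top k = 1 eigenvector.
W = eig_vecs[:, :1]

# Step 6: transform the centered data, Y = X x W, giving a 1-D subspace.
Y = Xc @ W

print(Y.shape)                    # (10, 1)
print(eig_vals / eig_vals.sum())  # fraction of variance per component
```

On this data the first component captures roughly 96% of the variance, which is why dropping the second dimension loses very little information.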
Here's an implementation:

```python
import numpy as np
import matplotlib.pyplot as plt

def pca(number_components, data):
    if not 0 <= number_components <= data.shape[1]:
        raise ValueError('The number of components must not exceed the number of features')

    # calculate the covariance matrix
    cov_matrix = np.cov(data.T)

    # calculate the eigenvalues and eigenvectors
    eig_vals, eig_vecs = np.linalg.eigh(cov_matrix)

    # pair each eigenvalue with its eigenvector
    eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:, i]) for i in range(len(eig_vals))]

    # sort the pairs by eigenvalue, descending
    eig_pairs.sort(key=lambda x: x[0], reverse=True)

    # collect the selected eigenvectors as column matrices
    final = [eig_pairs[i][1].reshape(data.shape[1], 1) for i in range(number_components)]

    # build the projection matrix
    projection_matrix = np.hstack(final)

    # transform the data
    Y = data.dot(projection_matrix)
    return Y

# Give the dataset and the number of components
new = pca(2, x_train)

plt.scatter(new[:, 0], new[:, 1], c=y_train, s=100, alpha=0.5)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.show()
```

## Using an external library

Even if you can implement PCA yourself, you'll probably end up like all other machine learning engineers and data scientists: using libraries to accomplish dimensionality reduction. Here is a code sample that uses scikit-learn:

```python
# Get PCA from the sklearn library
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# create the object with your desired number of dimensions
pca = PCA(n_components=2)

# project the data into the new dimension space
new = pca.fit_transform(x_train)

# plot the sklearn result
plt.scatter(new[:, 0], new[:, 1], c=y_train, s=100, alpha=0.5)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.show()
```

# Resources

There are more examples and lessons out there that will help you start using PCA and dimensionality reduction in general.
Here are a few I particularly like:

- This [answer](https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues) describes PCA in an easy-to-understand, non-technical way.
- An introduction to [LDA](https://towardsdatascience.com/linear-discriminant-analysis-in-python-76b8b17817c2), another dimensionality reduction method.
- PCA in sklearn: [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html).
- Another way to calculate principal components, using the [SVD](https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca) algorithm.

# Conclusion

In this blog post, we saw that the curse of dimensionality, having too many overlapping data dimensions, can be a serious obstacle to the efficiency of machine learning models. As a solution, we explored the PCA algorithm, which helps you reduce the dimensionality of data so that machine learning algorithms can operate faster and more effectively. I hope I succeeded in conveying the concept and the details of PCA to you. Understanding how these and other machine learning algorithms work is quite delightful, because it helps you debug problems in your data, and eventually build better models.
charfaouiyounes
---
tsParticles 1.37.1 Released
2021-10-30T23:41:00
https://dev.to/tsparticles/tsparticles-1371-released-2gl6
showdev, javascript, webdev, html
# tsParticles 1.37.1 Changelog

## Bug Fixes

- Fixed issue with dynamic imports and async loading
- Added browserslist to fix some issues with older browsers (should fix #2428)

---

{% github matteobruni/tsparticles %}
matteobruni
---
Build a Budget App using Flutter and Appwrite
2021-11-02T02:21:12
https://dev.to/1995yogeshsharma/build-a-budget-app-using-flutter-and-appwrite-1hoi
appwrite, flutter, mobileapp
# Overview

There are multiple components to think about when you are building an app. Broadly, they can be categorised as:

1. The app code: how data is consumed and how components are displayed.
2. The backend: where the data is stored and how the APIs are implemented.

For 1, we have multiple solutions like React, Ionic, Flutter, the Android SDK, etc. For 2, there are a lot of things to consider: how you handle authentication/authorisation, which database you are using, and so on. To make things simpler, there are frameworks like Firebase and Appwrite. In this blog we will learn about **Flutter** for building the app and **Appwrite** for its backend. Both Flutter and Appwrite are open-source projects, and you can find them at:

- Appwrite - https://github.com/appwrite/appwrite
- Flutter - https://github.com/flutter/flutter

The best way to learn any new technology is to build something with it! So, in this blog we will make an app using Flutter and Appwrite. We will create a simple **budget management** app which can do the following things:

- Allow authentication for multiple users, so different users can manage their own budgets
- Show the current balance and an option to add an expense/income
- Show the historical transactions made by the user

Alright, with these requirements in mind, we are ready to start building!

_Prerequisites_: A little knowledge of Flutter would be helpful, but I will point to different resources as and when needed.

# Building the Budget App

You can find the final version of the app in this repository: https://github.com/1995YogeshSharma/budget-app

## Step 1: Dev Environment Setup

Setting up both Flutter and Appwrite for local development is very easy. Appwrite comes as a dockerized container and is very easy to use.

- Instructions to set up Flutter - https://flutter.dev/docs/get-started/install
- Instructions to set up Appwrite - https://github.com/appwrite/appwrite#installation

I am using **VS Code** as my IDE, but you can choose your preferred one.
## Step 2: Build the layout using Flutter In this section, we build the layout of our app and use mock data. ### 2.1 Create a blank project Run the following command to create a blank Flutter project ``` flutter create budget-app ``` Strip down the code in `lib/main.dart` to just have the outline code: ``` import 'package:flutter/material.dart'; void main() { // runApp(BudgetApp()); runApp(new MaterialApp( home: new BudgetApp(), )); } // Our main widget class BudgetApp extends StatefulWidget { @override _State createState() => new _State(); } class _State extends State<BudgetApp> { @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text('Budget App'), ), body: Container( child: Center( child: Text('Hello World'), ), ), ); } } ``` When you run the app, it should look like: ![Step 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22v57iv0123ro1ru2nj7.png) ### 2.2 Build the layout You can find the updated code at - https://github.com/1995YogeshSharma/budget-app/blob/main/blog_steps/step3_lib/main.dart On compiling, the app should look like: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qq5od4g79jza3w51joq.png) ### 2.3 Add the functionality Now, let's add the functionality to add an expense / income and show the transaction history. We also add the bottom sheet view for the login / signup. You can find the updated code at - https://github.com/1995YogeshSharma/budget-app/blob/main/blog_steps/step3_lib/main.dart On compiling, the app should look like: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5mms9vg96fga14kzxgc9.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d62icsnndm48niigvlw9.png) ## Step 3: Add the Appwrite authentication and database ### 3.1: Add the Flutter project to Appwrite Follow the instructions at https://appwrite.io/docs/getting-started-for-flutter to set up your app connection with Appwrite. 
### 3.2: Create required collections in Appwrite We will need two collections for the functionality - balances : This collection will be used to store the current balance for each user. Each document has 'email' and 'balance' fields. Click on the left bar options in the Appwrite UI to create your collection with the following settings ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5gxgpfnmbs8sruuq63g6.png) - transactions : This collection will be used to store the transactional history for a user. Each document will have an 'email' field and 'transactions', which is an array of transactions ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xw7xpn6izki290dr6id3.png) ### 3.3: Add Appwrite authentication to the app We need to create 'login', 'logout' and 'signup' functions in our Appwrite client code and use them in the main file Code for the Appwrite client - https://github.com/1995YogeshSharma/budget-app/blob/main/blog_steps/step6_lib/appwrite_client.dart Updated main file code - https://github.com/1995YogeshSharma/budget-app/blob/main/blog_steps/step6_lib/main.dart We should be able to log in and sign up in our app now! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dlnedo9j9u44hwb9xylj.png) ### 3.4: Add Appwrite database connections For transactions, we need to create a model which will be used to store the Transaction object, convert it to JSON to store it in the Appwrite database, and convert it back to an object when fetched from Appwrite The updated code is present at - https://github.com/1995YogeshSharma/budget-app/tree/main/budget_app/lib We have built our app now! **The final App** ![App gif](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9vu806xshoyeh1i3rc1k.gif)
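As an aside, the model round-trip described in step 3.4 (object to JSON for storage, JSON back to an object when fetched) boils down to something like the sketch below. It is written in JavaScript here just for brevity (the app itself uses Dart), and all names (`Transaction`, its fields, `fromJSON`) are illustrative, not taken from the actual repository:

```javascript
// Illustrative Transaction model round-trip -- names are assumptions,
// not the actual classes from the budget-app repository.
class Transaction {
  constructor(amount, description, date) {
    this.amount = amount;           // positive = income, negative = expense
    this.description = description;
    this.date = date;               // ISO-8601 date string
  }
  // JSON.stringify picks this up automatically.
  toJSON() {
    return { amount: this.amount, description: this.description, date: this.date };
  }
  static fromJSON(obj) {
    return new Transaction(obj.amount, obj.description, obj.date);
  }
}

const t = new Transaction(-25, "groceries", "2021-10-31");
const stored = JSON.stringify(t);                        // what goes into the database
const back = Transaction.fromJSON(JSON.parse(stored));   // what comes back out
```

The same shape works in Dart with `toJson`/`fromJson` methods, which is the conventional pattern in Flutter apps.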
1995yogeshsharma
882,551
Query in Apache CouchDB: Mango Query
In previous articles, we talked about design documents and how to use views to query in CouchDB....
15,248
2021-10-31T03:38:48
https://dev.to/yenyih/query-in-apache-couchdb-mango-query-lfd
database, node, beginners
In previous articles, we talked about design documents and how to use views to query in CouchDB. Besides the Javascript query server, CouchDB also has a built-in Mango query server for us to query documents. Therefore in this article, I will talk about what Mango Query is, and when to use it. --- ##What is Mango 🥭? Mango is a MongoDB-inspired query language interface for Apache CouchDB. > Mango provides a single HTTP API endpoint that accepts JSON bodies via HTTP POST. These bodies provide a set of instructions that will be handled with the results being returned to the client in the same order as they were specified. The general principle of this API is to be simple to implement on the client side while providing users a more natural conversion to Apache CouchDB than would otherwise exist using the standard RESTful HTTP interface that already exists. ##How to use Mango Query in CouchDB? We will reuse the use case example from the previous articles (a list of blog posts): ``` [ { _id: "c2ec3b79-d9ac-45a8-8c68-0f05cb3adfac", title: "Post One Title", content: "Post one content.", author: "John Doe", status: "submitted", date: "2021-10-30T14:57:05.547Z", type: "post" }, { _id: "ea885d7d-7af2-4858-b7bf-6fd01bcd4544", title: "Post Two Title", content: "Post two content.", author: "Jane Doe", status: "draft", date: "2021-09-29T08:37:05.547Z", type: "post" }, { _id: "4a2348ca-f27c-427f-a490-e29f2a64fdf2", title: "Post Three Title", content: "Post three content.", author: "John Doe", status: "submitted", date: "2021-08-02T05:31:05.547Z", type: "post" }, ... ] ``` If we want to query the posts with status "draft", we can define the Mango Query as below: ``` { "selector": { "status": { "$eq": "draft" } }, "fields": ["_id", "_rev", "title", "content", "date", "author"], "sort": [], "limit": 10, "skip": 0, "execution_stats": true } ``` --- Let's break it down line by line before we submit our Mango Query. ###1. 
Selector 🔎 This is where you define your query condition: you give it the document property key you want to match and the expected value. In the example above we want to query documents with status "draft", so we can make use of the equality operator _$eq_ . ``` "selector": { "status": { "$eq": "draft" } } // For a simple equality match, we can also write it as below "selector": { "status": "draft" } ``` ###2. Fields 🎁 Sometimes you might only need a few property values, your document might be a big JSON document, or you are working on a mobile client and want to optimize the query result download size. Therefore, fields is handy for telling CouchDB which property fields to return. Just like GraphQL, get only what you need. ``` "fields": ["_id", "_rev", "title", "content", "date", "author"], ``` > Tips: Fields is optional; if you don't define fields, CouchDB will just return the whole document to you. ###3. Sort, Limit, Skip These are the usual features you would find in other databases. They are optional too. ``` "sort": [], "limit": 10, "skip": 0, ``` > *Note: The default limit is 25; however, there is an internal maximum limit of around 250 documents per Mango Query request. Therefore, even if you don't define the limit or set the limit to 1k, it will still return around 250 documents. To solve this issue, either use CouchDB Views for this particular query or use Bookmark (We will talk about bookmark later). ###4. Execution Statistics This is a nice feature for developers to know the basic execution statistics for the specific Mango Query request. It is optional too. 
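Putting the four request fields above together in code, here is a small sketch of assembling the JSON body before sending it. The helper name `buildFindBody` is my own for illustration, not part of any official CouchDB client API:

```javascript
// Hypothetical helper -- assembles the JSON body for a Mango Query.
// Only "selector" is mandatory; the other fields mirror the optional
// ones described above (fields, sort, limit, skip, execution_stats).
function buildFindBody(selector, options = {}) {
  const { fields, sort = [], limit = 25, skip = 0, executionStats = false } = options;
  const body = { selector, sort, limit, skip, execution_stats: executionStats };
  if (fields) body.fields = fields; // omitted => CouchDB returns whole documents
  return body;
}

// Reproduces the "draft" query shown at the start of the article.
const draftPostsQuery = buildFindBody(
  { status: { $eq: "draft" } },
  {
    fields: ["_id", "_rev", "title", "content", "date", "author"],
    limit: 10,
    executionStats: true,
  }
);
```

`JSON.stringify(draftPostsQuery)` gives exactly the kind of body the next section POSTs to the database.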
And then we can post our Mango Query to ``` POST /{YOUR_DATABASE_NAME}/_find ``` Result: ``` { "docs": [ { "_id": "ea885d7d-7af2-4858-b7bf-6fd01bcd4544", "_rev": "1-f9397a0bc5b6150270b5309db35ec4b9", "title": "Post Two Title", "content": "Post two content.", "date":"2021-09-29T08:37:05.547Z", "author":"Jane Doe" } ], "bookmark": "g1AAAAB4eJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqrpCZaWJimmKfomiemGemaWJha6CaZJ6XpmqWlGBgmJaeYmJqYgPRywPQSrSsLAKuSIMM", "execution_stats": { "total_keys_examined":0, "total_docs_examined":1, "total_quorum_docs_examined":0, "results_returned":1, "execution_time_ms":2.253 }, "warning": "no matching index found, create an index to optimize query time" } ``` --- Let's take a look at the result. ###1. Docs 📃 Here is the result we got from the Mango Query. ``` "docs": [ { "_id": "ea885d7d-7af2-4858-b7bf-6fd01bcd4544", "_rev": "1-f9397a0bc5b6150270b5309db35ec4b9", "title": "Post Two Title", "content": "Post two content.", "date":"2021-09-29T08:37:05.547Z", "author":"Jane Doe" } ], ``` ###2. Bookmark 🔖 This is the bookmark we mentioned earlier. The official documentation describes the bookmark as: > A string that enables us to specify which page of results we require. Used for paging through result sets. Every query returns an opaque string under the bookmark key that can then be passed back in a query to get the next page of results. If any part of the selector query changes between requests, the results are undefined. ``` "bookmark": "g1AAAAB4eJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqrpCZaWJimmKfomiemGemaWJha6CaZJ6XpmqWlGBgmJaeYmJqYgPRywPQSrSsLAKuSIMM" ``` As mentioned earlier, there is a maximum number of documents a CouchDB Mango Query can return per request. So if you have more than 250 results and you want the next page to start from 251, you can take the current bookmark and put it into your next Mango Query. 
For example: ``` { "selector": { "status": { "$eq": "draft" } }, "bookmark": "g1AAAAB4eJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqrpCZaWJimmKfomiemGemaWJha6CaZJ6XpmqWlGBgmJaeYmJqYgPRywPQSrSsLAKuSIMM" } ``` ###3. Execution Statistics Result As _"execution_stats"_ is set to true in our Mango Query above, CouchDB returns the execution statistics report of this request. ``` "execution_stats": { "total_keys_examined":0, "total_docs_examined":1, "total_quorum_docs_examined":0, "results_returned":1, "execution_time_ms":2.253 }, ``` ###4. Warning This is a friendly reminder from CouchDB that we didn't create an index for this Mango Query. Just like with any other database, it is always recommended to create an appropriate index before deploying to production. ``` "warning": "no matching index found, create an index to optimize query time" ``` --- ##How to create a Mango Index? Since we got the reminder in the example above, let's create a Mango Index to optimize that query. This is what a Mango Index looks like: ``` { "index": { "fields": ["status"] }, "ddoc" : "posts-by-status", "type" : "json" } ``` ``` POST /{YOUR_DATABASE_NAME}/_index ``` After creating our index, just set the design document name of the Mango Index in our Mango Query. ``` { "selector": { "status": { "$eq": "draft" } }, "use_index": "posts-by-status" } ``` Then you will no longer see the "warning" message in the returned result. > Tips: To check or debug whether your Mango Index has been created/used properly, use the /{YOUR_DATABASE_NAME}/_explain endpoint for your Mango Query. > Another Tip: If you wish to index all fields of your document, you can define fields as an empty array when creating the Mango Index. ##When to use Mango Query or CouchDB Views? In my opinion, Mango Query is useful for ad-hoc search / sort / filtering, while CouchDB Views are useful for reporting/statistics involving sum, count, or median, or for fixed recurring queries. 
Therefore, pick whichever is most suitable for your requirements. Most of the time you will end up using both of them within a project. --- ##In Conclusion This is a simple guide on using Mango Query in Apache CouchDB. I hope you find it useful. There is [more](https://docs.couchdb.org/en/latest/api/database/find.html) you can do with Mango Query, so check it out. Thank you for reading. 😊
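As a closing sketch tying the bookmark section together, here is a tiny pure helper for deriving the next-page request from the previous one. The helper name `nextPageBody` is mine for illustration, not part of CouchDB's API; remember that the selector must stay identical between pages, or the results are undefined:

```javascript
// Hypothetical helper -- given the previous request body and the opaque
// bookmark string CouchDB returned with it, build the next-page body.
// The selector (and other options) are kept unchanged on purpose.
function nextPageBody(prevBody, bookmark) {
  return { ...prevBody, bookmark };
}

const firstPage = { selector: { status: { $eq: "draft" } }, limit: 10 };
// Pretend CouchDB answered the first page with this bookmark:
const secondPage = nextPageBody(firstPage, "g1AAAAB4eJzL...");
```

Repeating this until a page comes back with fewer than `limit` docs walks the full result set, even past the ~250-document per-request ceiling discussed above.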
yenyih
882,571
Why You Should Write Pure Functions
Originally posted @ CatStache.io - Check it out for more posts and project updates! Pure functions...
0
2021-10-31T05:01:05
https://dev.to/bamartindev/why-you-should-write-pure-functions-4ea2
javascript, programming, functional, beginners
--- title: 'Why You Should Write Pure Functions' tags: 'javascript, programming, functional, beginner' --- *Originally posted @ [CatStache.io](https://www.catstache.io/blog/js-pure-functions) - Check it out for more posts and project updates!* Pure functions are a cornerstone of functional programming, but even if you are writing code that isn't purely functional it's a great idea to prefer them! ## Defining Pure Function The two properties of a pure function: * Given the same set of arguments, the function will always produce the same result. * Invoking the function produces no side effects. A side effect can be thought of as any observable effect *besides* returning a value to the invoker. A simple example of a pure function: ```javascript const add = (a, b) => a + b; ``` For any input into this function, it will always produce the same value. That is to say, invoking the function like `add(5,2)` will **always** produce 7. It is also possible to see that nothing else happens, such as modifying state or interacting with other systems, so this function is pure! Technically, if we were to rewrite the previous function to call `console.log` to output some info, that would make the function *impure* because it would have an observable effect besides just returning a value. Another example of an impure function would be `Math.random()` as it modifies the internal state of the Math object (breaking point 2) and you get different results each time the function is invoked (breaking point 1). ## Side Effects Cause Complexity Functions that are pure are easier to reason about - you can create a mapping of inputs to outputs, and that mapping will always hold true. It doesn't depend on external state or effects to produce a result! 
Let's look at a function that might be written to determine the number of days since the UNIX epoch (January 1, 1970 00:00:00 UTC) to now (don't use this, and prefer a library if you are working with time, this is just an example 😉) ```javascript const daysSinceUnixEpoch = () => { const currentDate = new Date(); const epochDate = new Date('1/1/1970'); return Math.floor((currentDate - epochDate) / (24 * 60 * 60 * 1000)); } ``` This function will produce the value `18930`, and every time I run it it will produce that value. Well, it will produce that value every time I run it **today**. Depending on when you read this, if you were to copy this function and invoke it, I have no idea what value it will produce! This makes it difficult to reason about, because I need to know the external state, namely the current day, to try and figure out what value should be produced. This function would also be incredibly difficult to test, and any test that might be written would be very brittle. We can see that the issue is that we are making use of an impure value produced by `new Date()` to determine the current date. We could refactor this to make a function that is pure and testable by doing the following: ```javascript const daysSinceUnixEpoch = (dateString) => { const currentDate = new Date(dateString); const epochDate = new Date('1/1/1970'); return Math.floor((currentDate - epochDate) / (24 * 60 * 60 * 1000)); } ``` A simple swap to require a date string for computing the difference makes this a pure function since we will **always** get the same result for a given input, and we are not making use of any effectful code. Now, if I were to call this with `daysSinceUnixEpoch('10/31/2021')` I get the same result, but now if you were to call it you should also get `18930`, neat! ## Side Effects Are Unavoidable Now, while pure functions are awesome, we can't really build an app that does anything of note without side effects. 
If the user can't see output, or interact with the app in any way, they probably won't have much reason to stick around! Therefore, the idea of preferring pure functions isn't to get rid of side effects, but to reduce the surface area where effectful code is executed and extract pure functionality into reusable and testable functions. Let's look at another example of some code that might be written server side with the Express web framework. A common thing that is done server side is ensuring that the data sent in a request contains all the expected values. Imagine writing a handler for a POST request to an endpoint `/api/comment` that expected a request body with keys for `postId`, `userId`, `comment` to indicate what post the comment was on, who posted the comment, and what the comment was. Let's take a first stab at this: ```javascript router.post('/api/comment', async (req, res) => { const {postId, userId, comment} = req.body try { if (postId != null && userId != null && comment != null) { const result = await Comment.create({postId, userId, comment}) return res.send(result) } else { return res.status(400).json({message: 'Expected keys for postId, userId, and comment'}) } } catch (e) { return res.status(500).json({error: e}) } }) ``` This would work: we pull the keys out of the request body, then we check that they all exist. If they do, we create the comment; otherwise we send back a 400 with a message saying we expected certain keys. If we want to test that our logic for rejecting the request based on the payload is correct, we would need to do a lot of mocking and faking a request with different payloads. That's a huge pain! What if we instead extracted the pure code from this effectful function? 
```javascript const expectedReqBody = (body, keys) => { return keys.every(key => key in body) } router.post('/api/comment', async (req, res) => { const expectedKeys = ['postId', 'userId', 'comment'] if(!expectedReqBody(req.body, expectedKeys)) { return res.status(400).json({message: `Body of request needs to contain the following keys: ${expectedKeys}`}) } const {postId, userId, comment} = req.body try { const result = await Comment.create({postId, userId, comment}) return res.send(result) } catch (e) { return res.status(500).json({error: e}) } }) ``` Now, we have extracted out the pure functionality of checking if values exist. If we are given an array of expected keys and the request body, we can ensure they all exist. Now we can test the functionality by testing the pure function `expectedReqBody` and feel safe when we are using this function as part of validation. As a bonus, if you want to validate the body on other requests you have an already tested solution! ## Extra Bonuses I have previously written briefly about [function composition](https://www.catstache.io/blog/js-hof-composition) and this works really well with pure functions! If you compose a handful of pure functions it is really easy to reason about what will happen throughout the 'data pipeline'. If you have effectful code sprinkled in, it can cause a massive headache! Pure functions can also be memoized! If you have functionality that takes a lot of CPU power to compute, but is pure, you can cache the results! I may write a bit more about memoization later, but some libraries to use include ramda's [memoizeWith](https://ramdajs.com/docs/#memoizeWith) and lodash's [memoize](https://lodash.com/docs/4.17.15#memoize) ## Conclusion Thanks for taking the time to read about pure functions! I will leave you with a tldr bullet point list on the topic: * Pure functions always map the same input to output, and contain no side effects. 
* We can reason about and test pure functions easily, and pure functions are easier to reuse and compose with. * Side effects add extra complexity, but they are unavoidable if we want to write meaningful apps. * Writing pure functions allows us to reduce the surface area of effectful code.
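As a final aside on the memoization bonus mentioned earlier, here is a minimal hand-rolled `memoize` for single-argument functions. It is just a sketch; prefer the ramda/lodash versions linked above for real use:

```javascript
// Minimal memoize sketch for single-argument functions.
// This caching is only safe because pure functions always map the
// same input to the same output -- it would be wrong for impure ones.
const memoize = (fn) => {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
};

let calls = 0;
const slowSquare = (n) => { calls += 1; return n * n; };
const fastSquare = memoize(slowSquare);

fastSquare(4); // computed: calls is now 1
fastSquare(4); // cache hit: calls stays 1
```

Note the `Map`-based cache distinguishes arguments by identity, which is fine for primitives like numbers and strings; the library versions let you customize the cache key for more complex inputs.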
bamartindev
882,607
airth06_01.java
// ~Arithmetic Operators // problem: // airth06_01.java{write a java program showing the working...
0
2021-10-31T07:31:12
https://dev.to/ukantjadia/airth0601java-1a4o
java, beginners, programming, tutorial
```java // ~Arithmetic Operators // problem: // airth06_01.java{write a java program showing the working of all Arithmetic Operators.} class airth06_01 { public static void main(String args[]) { int x = 10, y = 10; System.out.println("Value of x = "+x+" and value of y = "+y); System.out.println("SUM of "+x+" and "+y+" is "+(x+y)); System.out.println("SUBTRACTION of "+x+" and "+y+" is "+(x-y)); System.out.println("MULTIPLICATION of "+x+" and "+y+" is "+(x*y)); System.out.println("DIVIDE of "+x+" and "+y+" is "+(x/y)); System.out.println("MODULUS of "+x+" and "+y+" is "+(x%y)); System.out.println("SUM and EQUAL of "+x+" and "+y+" is "+(x += y)); System.out.println("SUBTRACTION and EQUAL of "+x+" and "+y+" is "+(x -= y)); System.out.println("MULTIPLICATION and EQUAL of "+x+" and "+y+" is "+(x *= y)); System.out.println("DIVIDE and EQUAL of "+x+" and "+y+" is "+(x /= y)); System.out.println("MODULUS and EQUAL of "+x+" and "+y+" is "+(x %= y)); System.out.println("ENDING THE PROGRAM...\n:-) THANK YOU (-:"); } } ```
ukantjadia
882,609
Hacktoberfest 2021 Experience
This was my first Hacktoberfest experience, and it has been an interesting one so far. I got the opportunity to...
0
2021-10-31T07:38:13
https://dev.to/devcreed/hacktoberfest-2021-experience-8n3
hacktoberfest, opensource, github
This was my first Hacktoberfest experience, and it has been an interesting one so far. I got the opportunity to be involved with my local tech community during this period, working more with Git and GitHub. I will be finishing this edition with 6 PRs. 😊 I'm hoping to participate in upcoming Hacktoberfest editions. 💪
devcreed
882,786
CV's Absurd
One of the responsibilities I have as an Engineering Manager at YouGov is also being a hiring manager...
0
2021-10-31T10:44:42
https://rogowski.page/posts/cvs-absurd/
recruitment, cv, career, news
One of the responsibilities I have as an Engineering Manager at YouGov is also being a hiring manager for my team. I'm responsible for the process because I care about finding the correct balance between skills & the need for more engineers ([Open Positions](https://jobs.yougov.com)). {% twitter 1454293034179317764 %} When I read a Reddit [thread](https://www.reddit.com/r/recruitinghell/comments/qhg5jo/this_resume_got_me_an_interview/) about a software engineer who got tired of getting rejected by automated screeners, filled their CV with absurd information, and then got a 90% callback rate from companies like Reddit, AirTable, Dropbox, Bolt, Robinhood, and others, I was like ![WTF](https://i.giphy.com/media/J34ARJKZmxZACE7DvJ/giphy.webp) ### &nbsp; ### Reality check I've heard about these tools before, but I wasn't aware of how strict the rules are to get a good score. I liked my CV because I got positive feedback about it from different recruiters who actually read it, so I put my CV into the free evaluation tool available [online](https://resumeworded.com/), and my CV's initial score was 23/100 😱! ![score](https://www.rogowski.page/images/Screenshot%202021-10-31%20at%2010.43.48.png) So I assume I wouldn't receive a call from any company that is using the screening tool - which sucks. After spending 15 minutes and making some quick fixes (mostly structure), I re-tested my CV and my score was 67. ![score](https://www.rogowski.page/images/Screenshot%202021-10-31%20at%2011.03.13.png) Now I have to focus on the impact section and I can probably go much higher without lying - but I liked the CV more before the changes. ### Transparency We all want to reduce bias in our recruitment process - me too. Automating stuff seems like a really good idea because algorithms are not biased, except the process still is: there are people who know about the ATS (Applicant Tracking System) vs people who don't and are not aware of how relevant it is. 
What would be great is to be more transparent with candidates: if they were rejected because their CV didn't make it through the automation, we should just tell them that. We can even provide automated feedback to them, the same as some companies do with automated tasks & assignments. But transparency should be a two-way street, so if you plan to lie in your CV, please don't - the company will find out anyway through an assignment or interview, and by then everyone will have wasted their time. So if you didn't hear back from the big company you would love to work for, don't worry - maybe just tweak your CV :).
michalrogowski
882,807
Spooky Dev Stories
Halloween dev stories
0
2021-10-31T13:38:49
https://dev.to/crisarji/spooky-dev-stories-5345
halloween, spooky
--- published: true title: Spooky Dev Stories description: Halloween dev stories tags: halloween, spooky cover_image: https://raw.githubusercontent.com/crisarji/blogs-dev.to/master/blog-posts/2021-10-spooky-dev-stories/assets/spooky-dev-stories.png series: canonical_url: --- Hello developer pal! Glad to see you here. What about sharing one spooky story from your life as a dev? Me first! When I started to work, it was as a Jr BE dev, using `C#` and `Visual Source Safe`. There was this delivery we needed to accomplish in one week, so I started coding. On Monday and Tuesday I made a couple of initial commits; the CI/CD we had at the time took the changes, created a minimized and optimized artifact, deployed it to QA, and that was all - everything was running smoothly. For the next 3 days, I was so focused on my work that I completely forgot to commit - 3 days in a row without committing. On Friday afternoon, I finally did, and guess what? Panic. Something happened and I lost all my changes; I tried to recover them in every possible way, unsuccessfully. After spending a couple of hours, I went to my boss, sure that I was gonna get fired, and explained the whole situation. Instead of asking me for a cardboard box, he sat at my desk, helped me track down the latest deploy (from 3 days ago), disassembled some minimized files (function names like `i`, return values like `x`, etc.), stood up from my desk, just said `Good luck during the weekend`, and left. After spending the whole weekend decoding and remembering what a function with signature `(a,b,c)` and a return `z` could be, and not sleeping at all, I was able to deliver by Monday. Moral: always commit! Thanks for reading!
crisarji
882,820
What are the things that scare you as a Developer? 🎃
Happy Halloween, everyone! I think the question fits the theme, so I'd like to ask everyone: What...
0
2021-10-31T12:47:18
https://dev.to/rammina/what-are-the-things-that-scare-you-as-a-developer-3gkd
discuss, beginners, programming, career
Happy Halloween, everyone! I think the question fits the theme, so I'd like to ask everyone: **What are your fears as a developer?** Here are the things I'm scared of (reasonable or not aside): - not being able to find my next freelance client. - production bugs. - performance anxiety when showing interviewers and clients my work output. - Imposter Syndrome (fear of being a fake or being too dumb) - that my Imposter Syndrome isn't even real, and that I actually suck. - running into a bug that has no StackOverflow, Github issues, and/or online solutions (which means I need to figure it out on my own). - posting a question online (especially on SO) where others might judge me. - the feeling that I'm not improving enough (a lack of visible progress). Almost forgot to add: - not understanding what I'm reading I'd like to know what you all are afraid of as fellow developers! <hr /> **Note: I'm learning a lot from what everyone's sharing. Thank you, everyone!** ### Other Media Feel free to reach out to me in other media! <span><a target="_blank" href="https://www.rammina.com/"><img src="https://res.cloudinary.com/rammina/image/upload/v1638444046/rammina-button-128_x9ginu.png" alt="Rammina Logo" width="128" height="50"/></a></span> <span><a target="_blank" href="https://twitter.com/RamminaR"><img src="https://res.cloudinary.com/rammina/image/upload/v1636792959/twitter-logo_laoyfu_pdbagm.png" alt="Twitter logo" width="128" height="50"/></a></span> <span><a target="_blank" href="https://github.com/Rammina"><img src="https://res.cloudinary.com/rammina/image/upload/v1636795051/GitHub-Emblem2_epcp8r.png" alt="Github logo" width="128" height="50"/></a></span>
rammina
883,061
My First Week in BootCamp (TaskForce 4.0)
I am very excited to share my first-week experience in the bootcamp, which was prepared by Awesomity in...
0
2021-10-31T16:52:03
https://dev.to/kdany25/my-first-week-in-bootcamptaskforce-40-540e
I am very excited to share my first-week experience in the bootcamp, which was prepared by Awesomity in partnership with Code of Africa. The bootcamp started on 25th October, and today marks the end of the first week of the 6 weeks we have to complete. I don't know where to start because I can't put it all in words - it might take 100 pages - but let me try. We started on Monday 25th October with a fun game of trying to lift an egg, carry it with a ring, and put it on a bottle (it was hard), but we learnt how a team works, and we needed that. We practised some more soft skills as well as some coding challenges on Codewars. We learnt how to write user stories and their acceptance criteria, which I liked; it helped me, and we used it at the end of the week when we presented our new project (ShowApp). In conclusion, we really had a good week and I am so ready for the 5 more weeks to come. I will be back here next week to tell you how week 2 treated me. Thank you
kdany25
883,084
My Journey
This is my first post detailing my journey to a career in Software Development! My most proficient...
0
2021-10-31T17:51:07
https://dev.to/bixxith/my-journey-473d
python, programming, career, beginners
This is my first post detailing my journey to a career in Software Development! My most proficient language by far is Python. I love doing everything in Python. I don't know if I am interested in a data science career but I would like to do more with it. I learned the Django framework to expand on my Python knowledge. I hold two technical certificates in website development and I am able to use HTML, CSS, and Javascript. I am currently taking a class on Javascript to expand my knowledge and am considering learning either the MERN or MEAN stack. When I start my junior year at Western Governors University I will be learning C# and .NET development. I've already taken a class on C#/Visual Basic so I'm somewhat aware of the basics. I will try to do frequent updates as I learn new things. I'm currently working on my portfolio website using HTML/CSS/Bootstrap/Javascript/Python/Django/PostgreSQL and I'm really happy with how it's turning out.
bixxith
883,308
Example of a Project Using Sagemaker MultiModel
This is a post written by many hands about a project inside Conta Azul together with AWS. Read the article here
0
2021-11-09T21:52:50
http://gabubellon.me//blog/exemplo-de-projeto-utilizando-sagemaker-multimodel
aws, python, sagemaker, datascience
--- title: Example of a Project Using Sagemaker MultiModel published: true date: 2021-04-30 00:00:00 UTC tags: aws,python,sagemaker,datascience canonical_url: http://gabubellon.me//blog/exemplo-de-projeto-utilizando-sagemaker-multimodel --- This is a post written by many hands about a project inside Conta Azul together with AWS. Read the article [here](https://aws.amazon.com/pt/blogs/aws-brasil/como-a-conta-azul-criou-um-sistema-de-gerenciamento-e-inferencia-de-modelos-utilizando-o-amazon-sagemaker-multi-model-endpoints/)
gabubellon
883,212
SPO600 - Update
I'm currently quite behind in my SPO600 course (software portability and optimization). I've admittedly been slacking...
0
2021-10-31T19:02:15
https://dev.to/hyporos/spo600-update-469j
I'm currently quite behind in my SPO600 course (software portability and optimization). I've admittedly been slacking a bit, which I know is not good, and as a result the lab content is confusing to me. I hope to get labs 3 and 4 done in the next blogging period so that I can keep up with further content and learn new things. In the meantime, I will be going over the lectures that I have missed to prepare myself for these labs. I will be blogging about my reflections on those lectures shortly.
hyporos
883,217
Hacktoberfest: Small changes make a big impact
For my first issue to tackle for release 0.2 I decided to go with something I was more comfortable...
0
2021-10-31T19:09:30
https://dev.to/amasianalbandian/hacktoberfest-small-changes-make-a-big-impact-29aj
For my first issue to tackle for release 0.2, I decided to go with something I was more comfortable with, but in a different environment. I picked up an issue from Telescope that was front-end and a bug. This was perfect because it wasn't too much pressure for me, and something I could start without feeling overwhelmed. ## The issue: Essentially, the header for "what is telescope" had really bad wrapping. Another issue I noticed was that the headers in the readme were not consistent. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8h2to0zp8abla3pe0sg0.png) The challenge I faced was that this was in the readme file, and would involve page responsiveness. I'm not too familiar with markdown, and a few months ago I would have said I had no idea how to use it. After creating our SSGs I felt as if I should know more about markdown. ## The solution I realized that I could take one of two approaches: 1. Change the wrapping of the heading 2. Change the size of the image for the logo. After playing around with both of these, I realized the wrapping would not give a consistent solution. I therefore went with the second option, to change the size of the logo. It was funny to me how a small change to the height and width made a significant difference to how the readme looked - and it was noticeable. ## Reflection After making my changes, and having my PR merged, I noticed that the changes were actually reverted in another PR. It's funny how open source works. Sometimes the changes we make are significant at that point in time, and then have to be adjusted once more as new content is added. When I first saw the PR that reverted this, I was upset. I decided to read further into why it was reverted, and realized it made sense. 
This is my [PR where it changes the height and width from 150 to 100](https://github.com/Seneca-CDOT/telescope/pull/2338/files) This is Diana's [PR where it changed the height and width from 100 to 130](https://github.com/Seneca-CDOT/telescope/pull/2356/files) So in summary: although we make contributions and they work, they won't always be the solution for every case, especially when new features are added.
amasianalbandian
883,242
Setting up a MySQL database using Prisma
What is Prisma? Prisma is an open-source Node.js and Typescript ORM (Object Relational...
0
2021-10-31T20:20:25
https://www.ineza.codes/blog/setting-up-a-mysql-database-with-prisma
prisma, serverless, nextjs
## What is Prisma? Prisma is an open-source Node.js and TypeScript ORM (Object Relational Mapper). It acts as a sort of middleware between your application and the database, helping you to manage and work with your database. It currently supports PostgreSQL, MySQL, SQL Server, and SQLite, and some of its features also support MongoDB. I was recently tasked with setting up a MySQL database with a Next.js application. My goal was to connect the database using Next.js’ serverless capabilities. Below are the steps I followed to achieve that. ## Install and invoke Prisma In order to use Prisma in a project, first install its CLI as a dev dependency. ```bash npm install prisma --save-dev ``` The next step is to initialize Prisma. You only need to do this once during the setup process. ```bash npx prisma init ``` At this point, the Prisma CLI has created some files in your root directory. The `schema.prisma` file in the `prisma` folder is where we define our datasource provider and the schema of the tables in our database. However, if you already have an existing database, you don’t need to create a schema from scratch, because Prisma handles that for you 😉. I’ll show you how shortly. ## Connect to the database The first step is to modify the datasource provider in the `schema.prisma` file to look like this. ``` datasource db { provider = "mysql" url = env("DATABASE_URL") } ``` I set `provider` to the type of database I’m using. In my case it’s `mysql`. The `url` property takes the value of the connection URL, which is defined in the `.env` file created by Prisma. ``` DATABASE_URL="mysql://USER:PASSWORD@HOST:PORT/DATABASE" ``` Above is the format of the connection URL. ## Generate data models The next step is to generate the data models/schema. Here's how this happens: Prisma uses the connection URL you provided to connect to the database. 
Prisma and the database have a short chat, then Prisma comes back with models of your database’s structure, i.e. data types, relationships and whatever else it needs to know about your database 😄 To do this we run the command ```bash npx prisma db pull ``` If the command runs successfully, Prisma will generate models from your MySQL tables into the Prisma data model, which is saved in the `schema.prisma` file. If the Prisma schema is new to you, have a look at their [documentation](https://www.prisma.io/docs/concepts/components/prisma-schema/data-model) ## Read data from the database In order to perform CRUD (Create, Read, Update, Delete) operations with Prisma we need to install the `@prisma/client` package. ```bash npm install @prisma/client ``` ### Create a Prisma instance After installing the package, the next important step is to create a single Prisma instance that will be imported wherever we need to use it. The reason we need a single instance is that every time we initialize Prisma Client in a file, it creates a new connection to the database; if initialized in multiple files, it could exhaust the database connection limit. ```js // utils/prisma.js import { PrismaClient } from "@prisma/client"; let prisma; if (process.env.NODE_ENV === "production") { prisma = new PrismaClient(); } // `stg` or `dev` else { if (!global.prisma) { global.prisma = new PrismaClient(); } prisma = global.prisma; } export default prisma; ``` ### Fetch data from the database **N.B.** Prisma Client works from the backend, so we have to call it from a serverless function or a Node.js application. ```js // /pages/api/fetchUsers.js import prisma from "../../utils/prisma"; export default async function handler(req, res) { try { const results = await prisma.users.findMany(); return res.status(200).json(results); } catch (error) { return res.status(500).json({ message: error.message }); } } ``` Above is a simple example of fetching all records from the users table. 
I start by calling the Prisma instance, followed by the table name, and finally the function I’d like to run. Super easy and clean compared to writing raw SQL queries. You can learn more Prisma functions by looking at their docs, which I have included in the reference section below. I tested the above with Next.js API routes; however, the same can be applied to Gatsby serverless functions or a Node.js application. Thank you for reading ❤️ ## Reference [CRUD operations with prisma client](https://www.prisma.io/docs/concepts/components/prisma-client/crud) [Invoking a single prisma instance](https://github.com/prisma/prisma/issues/5007#issuecomment-618433162) [Add Prisma to an existing project](https://www.prisma.io/docs/getting-started/setup-prisma/add-to-existing-project)
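The try/catch pattern from the API route above can also be written framework-agnostically, which makes it easy to exercise both the success and error paths. A minimal sketch, assuming a hypothetical `users` model; the stubbed client below only stands in for the real PrismaClient and is not part of the article's setup:

```javascript
// Any client exposing Prisma's `users.findMany` API can be passed in,
// so the handler logic can be tested without a database connection.
async function listUsers(prismaClient) {
  try {
    const results = await prismaClient.users.findMany();
    return { status: 200, body: results };
  } catch (error) {
    return { status: 500, body: { message: error.message } };
  }
}

// Example: a stubbed client standing in for the real PrismaClient.
const stubClient = { users: { findMany: async () => [{ id: 1, name: "Ada" }] } };
listUsers(stubClient).then((response) => console.log(response.status)); // logs 200
```

In a real route handler you would pass the shared instance from `utils/prisma.js` instead of a stub.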
inezabonte
883,261
My first week in Taskforce 4.0 at CodeofAfrica and Awesomity lab.
Introduction I know this time you are asking yourselves the same question I asked myself...
0
2021-10-31T21:22:55
https://dev.to/cyimanafaisal/my-first-week-in-taskforce-40-at-codeofafrica-and-awesomity-lab-48o6
webdev, programming
# Introduction I know right now you are asking yourselves the same question I asked myself the first time I heard about the Taskforce Bootcamp. To learn what **Taskforce** is, I suggest you go through the **Awesomity** website: [Awesomity.rw](https://awesomity.rw/our-projects). But in Awesomity's own words: "It’s more like a movement, an evolution born out of the need to create talent where it was previously thought not to exist. The Task Force is Awesomity’s effort to develop a solid and talented pool of software developers and product designers right here in the heart of Africa." (Awesomity, N.A). In this post we will look at what it is like to be a Taskforce intern, what you get from the Taskforce, and the team at Awesomity as well as the CodeofAfrica team. ## What is it like to be a Taskforce intern? ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nscaf31iryitqmoqsatp.PNG) So far my first week has been good in terms of skills development, team collaboration, and hanging out with the teams from CodeofAfrica and Awesomity. As a Taskforce participant, the first thing to learn is to know your team; the second is to be responsible. So far I can tell that as a Taskforce participant you are treated as a human, not as a tool: they understand your weaknesses, and you get to work with the team to turn those weaknesses into valuable strengths that can help your team grow and achieve its goals. I can't lie: I have been on different teams playing the role of software developer, but I haven't seen an environment as enjoyable and as conducive to productivity as the one at Taskforce. Stephen Covey once said, "Most of us spend too much time on what is urgent and not enough time on what is important." For me, being in the Taskforce Bootcamp is important, and I wish everyone who is willing to learn and develop could attend the program. ## What do you get from attending Taskforce? 
I know getting a job is the first thing that comes to the minds of developers attending boot camps, and that is really fine. At Taskforce, you don't just get considered as a potential developer when it comes to hiring; on top of that, you learn the soft skills that every developer must have on top of their CV. My first-week experience has proved to me that if you can't integrate with your team, you are of no value to the team, because you can't help the team move forward. So being able to communicate proactively and efficiently is key to the team. Not only do you get to learn these soft skills, but you also get to learn the best practices for remote work as a software engineer. I haven't had much experience so far, but keep checking my posts; I will be documenting my journey all along the way. ## CodeofAfrica and Awesomity team ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lr40c7on63mge1s4coqq.PNG) You must be thinking that I am crazy, but it is worth sharing with you the experience I had on my first Friday hanging out with both teams. It was amazing, fun, and human-friendly. The people there are flexible, open to questions, and the fact that they put away all the stress from work and play games is amazing. I would like to conclude by saying that my first week was awesome and enjoyable. I can't wait to share the weeks to come with you all. Stay tuned.
cyimanafaisal
883,267
Javascript and code documentation
JavaScript is a programming language that conforms to the ECMAScript specification. JavaScript is...
0
2021-10-31T21:43:30
https://dev.to/mcube25/javascript-and-code-documentation-3hgo
JavaScript is a programming language that conforms to the ECMAScript specification. JavaScript is high-level, often just-in-time compiled, and multi-paradigm. It has dynamic typing, prototype-based object-orientation and first-class functions. #### Why code documentation? Say you wrote a couple of functions for making an HTML table with JavaScript. You could use those functions right now, or pass them to another developer. Everything is clear in your mind the moment you write the code, yet a month later you don't remember how to use function A or function B anymore. How is function A supposed to be called? What parameters does it take? And what shape should those parameters have? Code documentation helps you and other developers understand how to use the software you wrote. There are many ways of documenting a piece of code. For example you can write: * how-to guides for using your code * a nice README for the repo * code documentation in the source But on top of how-tos and READMEs, code documentation in the source holds the most value. It sits right there with the code and helps you avoid mistakes as you write JavaScript in the editor. Consider the following function: ```javascript function generateTableHead(table, data) { const thead = table.createTHead(); const row = thead.insertRow(); for (const i of data) { const th = document.createElement("th"); const text = document.createTextNode(i); th.appendChild(text); row.appendChild(th); } } ``` The name generateTableHead kind of speaks for itself, but the parameters don't: nothing specifies what shape data should have. If we look at the function's body, it becomes evident that data must be an array. table, on the other hand, does not make it clear whether it is supposed to be a string or an actual HTML element. 
We simply add a comment before the function: ```javascript /** * Generates a table head */ function generateTableHead(table, data) { const thead = table.createTHead(); const row = thead.insertRow(); for (const i of data) { const th = document.createElement("th"); const text = document.createTextNode(i); th.appendChild(text); row.appendChild(th); } } ``` This is the standard code documentation. The function generateTableHead should take an HTMLTableElement and an array as parameters. We can add annotations for both, like this: ```javascript /** * Generates a table head * @param {HTMLTableElement} table - The target HTML table * @param {Array} data - The array of cell header names */ function generateTableHead(table, data) { const thead = table.createTHead(); const row = thead.insertRow(); for (const i of data) { const th = document.createElement("th"); const text = document.createTextNode(i); th.appendChild(text); row.appendChild(th); } } ``` Code documentation is often overlooked and considered more or less a waste of time. Great code should speak for itself. And that's true to some extent. Code should be clear, understandable and plain. Writing documentation for code can help your future self and your colleagues.
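The same JSDoc annotations also cover return values via `@returns`. A minimal, self-contained sketch (the `sum` function here is a made-up example, not from the article):

```javascript
/**
 * Sums the numbers in an array.
 * @param {number[]} numbers - The values to add together
 * @returns {number} The total of the values
 */
function sum(numbers) {
  // reduce starts at 0 so an empty array sums to 0
  return numbers.reduce((total, n) => total + n, 0);
}

console.log(sum([1, 2, 3])); // 6
```

With the `@param` and `@returns` types in place, editors like VS Code can flag a call such as `sum("abc")` without any extra tooling.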
mcube25
883,396
Oracle SQL - Ways to Work on it
Introduction SQL, yes and it's not MYSQL. SQL is a Structured Query Language, it is a language that...
0
2021-11-01T03:26:01
https://dev.to/iamanshuldev/oracle-sql-ways-to-work-on-it-5foc
sql, oracle
Introduction SQL, yes, and it's not MySQL. SQL is Structured Query Language, a language that is used to communicate with a database, whereas MySQL is an open-source database product (RDBMS) that allows users to keep data organized in a database. I will write another article on this topic specifically. Let's move on. Oracle has different products, and one of the most famous ones, other than Java, is SQL. Some call it 'S' 'Q' 'L' and some call it 'sequel'. I am the first kind of person. When I started practicing SQL, I discussed it with my friends/seniors and they recommended w3schools.com to me. I moved to MySQL Workbench to get some hands-on experience, but it is slightly different from SQL. Let's Move Directly to the Main Thing So after quite a bit of research, and with help from the Oracle Twitter team, I explored many ways to work on it. The best part is that all of them are free. They are listed below. 1. Online Yes, we can practice online using the Oracle APEX tool. It is an online tool where you can do almost everything possible related to SQL. https://livesql.oracle.com/apex 2. Application As I mentioned, there are a few things that you cannot do in the online tool, but do not restrict yourself: try another great application where you can become a master of SQL. For that, you have to download the tool called Oracle Database Express Edition (XE). In order to get the latest database, you can download Database 19c or 21c from the official Oracle website. The link is as follows. https://www.oracle.com/ca-en/database/technologies/ 3. Virtualization Another great tool, and the one I like the most (why? I will tell you at the end). First of all, I hope you have heard of another free Oracle application, VirtualBox; if not, no problem. Now that you have read about it, go search for it online. It is really cool. https://www.oracle.com/database/technologies/databaseappdev-vm.html Note: it takes up a lot of space on your SSD/HDD, so prefer not to save it on your system drive (C drive). 
Final Note I prefer Virtualization as it has many good features and is safer. Database XE was heavy on memory and occupied a lot of space on the C drive, so I had to remove it. A sad note for Mac users: Database XE will not work for you, and the M1 chip from 2020 also has some issues with virtualization (please try it once on your system; it may work for you). For reference, please follow this Twitter thread: https://twitter.com/iamanshultweet/status/1445055709419290633 Please support the community and my Twitter account (https://twitter.com/iamanshultweet) for more awesome articles on SQL/PL-SQL.
iamanshuldev
883,620
REST APIs and Designing Best Practices
APIs (Application Programming Interfaces) are necessary for applications or devices to connect and...
0
2021-11-01T07:45:09
https://dev.to/shoki/rest-apis-and-designing-best-practices-10k
architecture, beginners, database
APIs (Application Programming Interfaces) are necessary for applications or devices to connect and communicate with each other. In this article, I will explain what a REST API is, which is a very popular kind, and best practices for designing one. ## What is REST? ### History of REST 1. REST was born in 2000, after SOAP. 2. Amazon started to use REST in 2003, and REST gradually started to be used instead of SOAP. 3. REST is superior to SOAP in some ways, such as the ability to use caching, better performance due to statelessness, support for many formats such as JSON and XML, and the ability to check operations by simply entering a URI, but its security is inferior to SOAP in some cases. ### REST API and Architectural Style REST is an abbreviation for Representational State Transfer, which was first introduced by Roy Fielding in 2000. It is one of the architectural styles. #### What are architectural styles? 1. An architectural style is a design guideline; just as there are Japanese and European architectural styles in building, there are various architectural styles in software. 2. There are many other architectural styles, such as the object-oriented architectural style, the microservice architectural style, etc. REST is a combination of multiple architectural styles, such as the client-server architectural style, the tiered architectural style, the cache architectural style, etc. ### Features of REST REST has the following features: 1. Published with an addressable URI. - Accessible by URI, such as `https://○○○○/example/rest/01` - Responds in JSON or XML 2. The interface must be unified in the use of HTTP methods - Data can be used via methods such as GET, POST, PUT, DELETE, HEAD, etc. 3. Statelessness - All information is sent in the request, and the server does not retain session information. - This makes it possible to conserve server resources and change servers flexibly according to the request load 4. 
Ability to connect to other information if the resource URI is known - The information being requested can contain hyperlinks. It is possible to go from one link to another, and RESTful systems can smoothly link information to each other. 5. Return processing results as HTTP status codes - Return results in status codes such as 200 (OK), 403 (Forbidden), 404 (Not Found), etc. Web services that implement this REST architectural style are called RESTful services. The HTTP calling interface of a web system built according to such REST principles is called a REST API. ## Best Practices for Developing a REST API The detailed rules may vary depending on your development team, but these are the basic rules: 1. Give names that are easily understood by humans. 2. Add a noun to the end of the URI, not a verb. 3. Make nouns in the URI plural. 4. Return the processing result as an HTTP status code. 5. Include the API version in the URI. So let's talk about each of these with examples. ### Step1. Give names that are easily understood by humans #### 1-1. Make it clear that it's the URI of an API, and don't duplicate words ex) ❎ `http://*api*.example.com/services/*api*/` ✅ `http://api.example.com/services/` ✅ `http://example.com/services/api` #### 1-2. Unify the rules If the URI to obtain friend information uses a query parameter, `http://api.example.com/friends?id=100`, the URI to post a message to that friend should also use a query parameter: ✅ `http://api.example.com/friends/messages?id=100`, not ❎ `http://api.example.com/friends/100/messages`, and vice versa. #### 1-3. Avoid creating deeply nested URIs ex) ❎ `https://api.example.com/sports/1/players/2/friends` ✅ `https://api.example.com/players/2/friends` The further to the right a URI segment is, the more it represents a subset of the resources. ### Step2. 
Add a noun to the end of the URI, not a verb GET method: ❎ `https://api.example.com/v1/getItems` ✅ `https://api.example.com/v1/items` POST method: ❎ `https://api.example.com/v1/news/createArticles` ✅ `https://api.example.com/v1/news/articles` As in these examples, avoid using verbs at the end of URIs as much as possible, since the meaning is already conveyed by the HTTP method used at request time. ### Step3. Make nouns in the URI plural ❎ `https://api.example.com/v1/item/01` ✅ `https://api.example.com/v1/items/01` Generally, the resources in a path are not a single item but a set, like items (= multiple), so always use the plural form consistently. ### Step4. Return the results with an HTTP status code When an error occurs, return the processing result as an HTTP status code so that the API user can understand the cause of the error. For example, the famous `404 Not Found` is returned when the endpoint is valid but the resource itself does not exist. For more information about HTTP status codes, you can check them out [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status#server_error_responses) ### Step5. Include the API version in the URI The version of the API should be included in the URI. This will prevent users from being affected when the API changes over time. Version management allows you to change the API structure without compromising compatibility. ## Summary That's it for REST and the best practices for designing a REST API. The rules may differ slightly depending on the team, but I hope this helps you.
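To make the steps above concrete, here is a small, framework-agnostic sketch combining a versioned URI, a plural-noun resource, and HTTP status codes in the result. The in-memory `items` data and the handler name are hypothetical, used only for illustration:

```javascript
// A tiny route handler for GET /v1/items/{id}: version first, plural noun,
// then the resource id, with the outcome expressed as an HTTP status code.
const items = new Map([["01", { id: "01", name: "Widget" }]]);

function handleGetItem(path) {
  // Only paths shaped like /v1/items/{id} are valid for this resource.
  const match = path.match(/^\/v1\/items\/([^/]+)$/);
  if (!match) return { status: 404, body: { error: "Not Found" } };
  const item = items.get(match[1]);
  if (!item) return { status: 404, body: { error: "Not Found" } };
  return { status: 200, body: item };
}

console.log(handleGetItem("/v1/items/01").status); // 200
console.log(handleGetItem("/v1/getItems").status); // 404: verb-style URIs don't match
```

In a real service the same mapping would be done by your framework's router, but the conventions (noun, plural, version, status code) stay the same.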
shoki
883,814
Strapi Tutorial: Build a Blog with Next.js
This article was originally posted on my personal blog If you want to start your own blog, or just...
0
2021-11-01T17:28:15
https://blog.shahednasser.com/strapi-tutorial-build-a-blog-with-next-js/
javascript, beginners, strapi, node
--- title: Strapi Tutorial: Build a Blog with Next.js published: true date: 2021-11-01 10:13:21 UTC tags: javascript,beginners,strapi,nodejs canonical_url: https://blog.shahednasser.com/strapi-tutorial-build-a-blog-with-next-js/ cover_image: https://blog.shahednasser.com/static/7ba78ca821fb59c20235977dcf45de59/bram-naus-n8Qb1ZAkK88-unsplash-2.jpg --- _This article was originally posted on [my personal blog](https://blog.shahednasser.com/strapi-tutorial-build-a-blog-with-next-js/)_ If you want to start your own blog, or just want to learn a cool CMS platform, then you should check out [Strapi](https://strapi.io). Strapi is an open-source Node.js headless CMS. This means that you set up Strapi and plug it into any frontend or system you have. In this tutorial we'll first look at why you should use Strapi and how to set it up from scratch; then we'll use one of Strapi's starters to easily create a blog with Next.js. ## Why Strapi Headless APIs provide you with a lot of flexibility. When you want to develop a system with different components, you don't have to worry about finding one framework or programming language that you can use to implement all the components. Strapi allows you to integrate CMS into your projects regardless of what they are. Whether you want to add CMS to your e-commerce store, build a blog, or any other use case that requires CMS, you can easily use Strapi to build the CMS part, then use its APIs to integrate it into your system. What sets Strapi apart is that it's fully customizable. You are not bound to a database schema or data structure. Once you set up Strapi, you are free to create your own models and collections to fit your needs. This makes setting up your CMS much easier and allows you to focus on creating the front-end. ## Set Up Strapi In this section, you'll learn how to set up Strapi from scratch. This allows you to better understand how it works and what its different elements are. 
In the next section, you'll be using a Strapi starter blog that does all the heavy lifting for you. ### Install Strapi The first step is to install Strapi. You can do that with this command: ``` npx create-strapi-app strapi-blog --quickstart ``` ### Register as Admin Once the installation is complete, a tab will open in your default browser showing a registration form. You will need to fill out your information as an admin user. ![Strapi Tutorial: Build a Blog with Next.js](https://backend.shahednasser.com/content/images/2021/10/Screen-Shot-2021-10-28-at-10.40.36-AM.png) Once you're done, you'll be logged into your dashboard. ### Create a Content-Type Let's say you're creating the blog's database yourself. You'll need to create a `posts` table that stores all the posts you'll create. In Strapi, you create Content-Types. In these Content-Types, you can add any kind of field you would add to a table. On your dashboard, you should see "Create Your First Content-Type". Click on it. ![Strapi Tutorial: Build a Blog with Next.js](https://backend.shahednasser.com/content/images/2021/10/Screen-Shot-2021-10-28-at-10.42.06-AM.png) Then, a pop-up will appear asking you to name the Content-Type. Content-Types are named in the singular form in Strapi. So, enter `post` in the Display Name field then click Continue. After that, you'll need to add some fields to the Content-Type. You'll see that there are many to choose from. Add the following fields to the Post Content-Type: 1. `title` of type Text. You can set it to required by clicking on the Advanced Settings tab and checking the required checkbox. 2. `content` of type Rich text. You should also set it to required. 3. `admin_user` this will be a Relation type. You'll link it to the User Content-Type. 4. `date_created` this will be a Date field of type Datetime. You can also set it to required. 5. `file` this will be a Relation type as well, linked to the File Content-Type. 
We can use it to add an image to the post. Once done, the Post Content-Type should look like this: ![Strapi Tutorial: Build a Blog with Next.js](https://backend.shahednasser.com/content/images/2021/10/Screen-Shot-2021-10-28-at-11.00.42-AM.png) Click _Save,_ and the new Content-Type will be added successfully. ### Set Permissions Next, you'll set permissions to allow users to access the posts. To do that, in the sidebar go to Settings, then go to Roles under Users & Permissions. ![Strapi Tutorial: Build a Blog with Next.js](https://backend.shahednasser.com/content/images/2021/10/Screen-Shot-2021-10-28-at-11.01.31-AM.png) There, choose Public, then scroll down to Permissions and select all permissions. ### Making Requests If you now try sending a GET request to `localhost:1337/posts`, you'll see an empty array. In Strapi, once you create a Content-Type, you'll have the following API requests ready for use: 1. GET `/posts`: Get the list of items in the Content-Type. 2. GET `/posts/{id}`: Get the item having id `{id}`. 3. GET `/posts/count`: Get the number of items in the Content-Type. 4. POST `/posts`: Create a new post. 5. DELETE `/posts/{id}`: Delete a post of id `{id}`. 6. PUT `/posts/{id}`: Update a post of id `{id}`. Note that we use the plural form of the Content-Type in the requests. As we can see, Strapi makes it easy to create Content-Types on the fly and, once you do, you can start accessing them with the REST API right away. ## Using Strapi Starters There are [many starters](https://strapi.io/starters) for Strapi for different languages and frameworks. Starters let you begin from a template with a ready front-end and a configured Strapi instance containing the Content-Types the template requires. This saves you time rebuilding or reconfiguring the same project ideas. In this section, you'll create a blog using Strapi starters. We'll use Next.js for the front-end. 
### Set Up Next.js Starter To create a Strapi blog with Next.js, you can use [strapi-starter-next-blog](https://github.com/strapi/strapi-starter-next-blog). It comes with a Strapi installation that includes the necessary Content-Types, Article and Category. In your terminal, run the following command to install it: ``` npx create-strapi-starter strapi-next-blog next-blog ``` This will create a directory called `strapi-next-blog` containing 2 directories: one called `backend`, which includes the Strapi installation, and one called `frontend`, which includes the Next.js installation. Once the installation is done, change to the `frontend` directory then run both Strapi and Next.js with one command: ``` npm run develop ``` This will run Strapi on `localhost:1337` and Next.js on `localhost:3000`. If the browser did not open to the Strapi dashboard, go to `localhost:1337/admin/auth/register-admin` and register as a new user just like you did before. When you are redirected to the dashboard, you'll see that the Content-Types and Collections for these types are already there. ![Strapi Tutorial: Build a Blog with Next.js](https://backend.shahednasser.com/content/images/2021/10/Screen-Shot-2021-10-28-at-1.50.01-PM-1.png) If you go to each of them you'll see that there is already demo data available. Now, to check the frontend, go to `localhost:3000`. You'll see a blog with some blog posts ready. ![Strapi Tutorial: Build a Blog with Next.js](https://backend.shahednasser.com/content/images/2021/10/Screen-Shot-2021-10-28-at-2.11.04-PM.png) And that's it! You can now post stories on the Strapi dashboard and see them on your Next.js frontend. With one command, you were able to create a blog. ## Conclusion Strapi is a fully customizable CMS that makes it easier for you to integrate CMS into your systems or websites, as well as use it to create CMS platforms. 
After following along with this tutorial, you should check out more of Strapi's [Content API documentation](https://strapi.io/documentation/developer-docs/latest/developer-resources/content-api/content-api.html#api-endpoints) to learn more about how you can access the content types and more.
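As a quick recap of the Making Requests section earlier, the REST routes Strapi generates for a Content-Type follow a predictable pattern. A small sketch; the helper name is made up, and the base URL assumes Strapi's default local port:

```javascript
// Builds URLs for the routes Strapi generates for the `post` Content-Type.
// Content-Types are named in the singular, but the routes use the plural.
const BASE_URL = "http://localhost:1337";

function postsEndpoint(id) {
  return id === undefined ? `${BASE_URL}/posts` : `${BASE_URL}/posts/${id}`;
}

console.log(postsEndpoint());  // http://localhost:1337/posts
console.log(postsEndpoint(1)); // http://localhost:1337/posts/1
```

With public permissions enabled as shown above, `fetch(postsEndpoint())` from the frontend would return the list of posts.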
shahednasser
883,959
Wetware of writing and doing
Originally presented at Tools for Thought Rocks on October 29, 2021 (with video and timestamps)....
0
2021-11-01T13:18:21
https://dev.to/rosano/wetware-of-writing-and-doing-4jib
productivity, writing, tutorial, discuss
_Originally presented at [Tools for Thought Rocks](https://lu.ma/tftrocks-oct) on October 29, 2021 (with [video and timestamps](https://cafe.rosano.ca/t/presenting-wetware-of-writing-and-doing-at-tools-for-thought-rocks/148/2)). Below is an expanded text version of my presentation for anyone who prefers reading._ * * * I talk often about my apps and their features, but rarely about how I use them day-to-day, partially to leave space for people to imagine their own workflows, but also because I didn't think it would be of interest to share mine. This changed after a conversation with [pvh](https://www.pvh.ca/), who remarked that after reading the website for [Launchlet](https://rosano.hmm.garden/01f1ghk7crrk2g4b3e37j8vpgx) and trying to play with the [compose interface](https://launchlet.dev/compose), it wasn't clear how all the parts came together until watching my [tutorial videos](https://rosano.hmm.garden/01f1ghgrgxq5adk0sdck3csghh). I found that interesting coming from someone who has plenty of experience with computer programming and its paradigms. It made me realize 1) that interfaces clearly communicating 'features' doesn't mean people appropriate them, and 2) the importance of good affordances to help people go beyond merely 'using the app' to extending themselves in the process. The larger question to address here is: how can the environment better transmit what is possible so that those within it can take fuller advantage? It will likely take some time for me to find my own answers and implement them in projects, so for now, I feel motivated to do what is knowable and share more about how I use my apps to illuminate the wetware. 
What I find myself 'doing' most of the time involves: making [apps and websites](https://github.com/rosano); writing [texts about personal experiences and interests](https://cafe.rosano.ca/c/writing/8); recording [screencasts about programming](https://vimeo.com/rosano/videos); organizing [online events](https://rosano.hmm.garden/01ew1g0nvabn71z3xwpj93bbqg); and generally working on [personal projects](https://rosano.hmm.garden/01etsqssqjv29ykfphkxq01042). It all adds up, and to keep things from overwhelming me I practice a [productivity trinity](https://rosano.hmm.garden/01ett0ax73nhv89tyd5wpn145z) which can be summarized as: > 1. _Capture everything_: get ideas out of your head as soon as possible. > 2. _Organize if needed_: move it where you are likely to encounter it. > 3. _Purge_: do it or delete it as soon as possible. The mix of details below might seem chaotic, but they all relate to these three points in some way. One objective of [Capture everything](https://rosano.hmm.garden/01ett072dk3kyevtrraez4ctgf) is to keep going: I avoid interruptions like checking out links people send me and do everything later; it helps to maintain focus on whatever has my attention. Making time to read articles or watch videos can be a challenge and often gets neglected, but in my experience it usually happens eventually, and delaying consumption has the benefit of obsoleting some things before you get to them. When there's a lot of collecting from streams or timelines, it helps to have a place to put things; queues batch the process of reading, watching, listening, and writing.
Most of my queues are digital now (although at one point I did write and organize my life with small pieces of paper): [Pocket](https://getpocket.com/) is for reading because it syncs with my e-reader (to read without internet access, with something closer to paper than a screen, and without the distractions of my computer or phone), and for checking out websites because I like to close all my browser tabs as soon as possible; [1Feed](https://1feed.app/) is for newsletters (as it interrupts my flow to read long text while checking e-mail), and for following Internet things with a timeline presentation; [Joybox](https://rosano.hmm.garden/01f3t6hb8645evfj9k0yjvpsy9) is for audiovisual media [segmented with tags for listening, watching, and passive consumption](https://www.youtube.com/watch?v=McKXW-bP2HQ&t=13m18s); [Kommit](https://rosano.hmm.garden/01f1qb660m91xyn050bn79dhnz) is for words and phrases that I want to learn from foreign languages; Launchlet is for [shortcuts and removing friction from workflows](https://www.youtube.com/watch?v=McKXW-bP2HQ&t=21m45s); [Emoji Log](https://rosano.hmm.garden/01f1km6a1g3ph2jd3j7nx0qd02) is for [personal tracking and time-bound journaling](https://www.youtube.com/watch?v=McKXW-bP2HQ&t=23m40s), like books I read or recipes I cook, or more personal thoughts and monitoring emotions.
For everything else, there's [Hyperdraft](https://rosano.hmm.garden/01etj3kw7w4zyz1f5ktnnagn7n), which is mostly _[reference](https://rosano.hmm.garden/01etae8r35m5yfj4eby16vswfy)_\-oriented and not time-bound—it functions as: dashboards of to-dos for dozens of projects; [space to mix private and public writing](https://www.youtube.com/watch?v=McKXW-bP2HQ&t=36m32s); an environment that spans the entire arc of 'capture, brainstorm, organize, outline, draft, write, publish' that is on all my devices and [local-first](https://www.inkandswitch.com/local-first.html), thus minimizing discontinuities from needing to be in a specific place or not having internet access; my writing queues for various newsletters and a [templating system](https://www.youtube.com/watch?v=McKXW-bP2HQ&t=39m56s) for [Ephemerata](https://rosano.hmm.garden/01f58x4bdpm6530ba58wxjm30w); quick jot-pads for when I'm not sure where to put something; and a convenient place for ideas, since [Ideas increment automatically when they are captured](https://rosano.hmm.garden/01et7vrq0dzezj2aj0vkr4t2zy). All these queues provide, on the one hand, a sense of space that I find relaxing because there is a place to put things, and on the other hand, an uneasiness about being overwhelmed as they are easy to neglect and intentionally out of sight; the serenity is stronger when you trust yourself to attend to them. How does one maintain balance and create healthy rhythms for processing these queues? Many of my strategies help me avoid being 'completist' and find reasons to purge things when there's a backlog: if I read until the halfway point and haven't found anything interesting, if the video doesn't hold my attention, if I haven't moved on it in weeks, if it's expired or irrelevant now, into the void it goes. It took me a while to realize that 'delete' can mean "I don't want to be reminded of this"; we have to train digital systems to not show things 'forever'.
I try to prune my lists frequently in addition to actually doing things, but it's hard for me to repeat at specific intervals as life tends to get in the way: I've found it useful to observe how I feel and find the cadence that works for me—we are not machines. One rhythm I frequently engage in with enthusiasm is \[\[work digress cycle\]\]. I've been surprised at how this idea of queues helps me 'write without magic'. It feels like writing happens without great pain or earnestness, and I think of it reductively as "mostly just moving things around". Let's say there's 3% which is creative personal expression (that everyone has but in their own way), and 97% which is stuff that requires no talent, such as: capturing ideas as they occur, allowing details to passively collect over time, periodically perusing through the old to find potential connections to the new… Here the queues function like buckets collecting drips of water: some have zero drops, some have one, some have a few; eventually some have 'enough' or are overflowing and can be [marked as prompts for finalizing](https://www.youtube.com/watch?v=McKXW-bP2HQ&t=42m59s), which for me implies taking a queue or list of items to sort, group, massage, tidy, and publish. It's easier than confronting a blank screen, or twiddling thumbs to figure out how to start, and showcases the power of [Writing creates a space for 'the answer to go'](https://rosano.hmm.garden/01et5a1fy7zy4pvqe8nywg471m): with little effort, I find myself having lots to write about, unintimidated by the process of finishing. I think everyone has the necessary pieces to do this, but most people get stuck in their 97%, which is a tractable problem that can be encroached upon by finding tools and workflows that fit, making things simpler or perhaps effortless, and cultivating calm spaces to write and reason that are free from judgement. 
Understanding the wetware is not always obvious and I'm still not sure of how it should be presented, be it in words or an interface. I hope that with plenty of examples of how I use my apps, it helps unveil how they can be leveraged to do more. In the future, I hope to integrate an understanding of my own processes into the onboarding of my software so that it doesn't require more than the experience of using the app to feel empowered by all its possibilities. I might summarize this first exploration as 1) collect, organize, purge with lots of queues, 2) let time work in your favour, and 3) spend time on what motivates you. * * * P.S. Thanks to [Jess Martin](https://jessmart.in/) and the [Tools for Thought Rocks](https://toolsforthought.rocks/) community for the invitation to present, and the prompt—this wouldn't exist if it wasn't for your concept of 'Workflow Walkthroughs' 🙏🏽. * * * P.P.S. For anyone who made it this far, please enjoy this short video of my old-time [analog to-do dashboard](https://youtu.be/sctotQrchsk). --- Follow my journey on [Twitter](https://twitter.com/rosano) (or via the [mailing list](https://rosano.ca/list))
rosano
884,263
OpenTelemetry Distributed Tracing with ZIO
A tutorial on implementing distributed tracing for Scala applications using OpenTelemetry and various libraries from the ZIO ecosystem.
0
2021-11-01T16:29:00
https://tuleism.github.io/blog/2021/opentelemetry-distributed-tracing-zio/
distributedtracing, observability, scala, tutorial
--- title: OpenTelemetry Distributed Tracing with ZIO published: true date: 2021-11-01 00:00:00 UTC tags: distributedtracing,observability,scala,tutorial canonical_url: https://tuleism.github.io/blog/2021/opentelemetry-distributed-tracing-zio/ description: A tutorial on implementing distributed tracing for Scala applications using OpenTelemetry and various libraries from the ZIO ecosystem. --- ## Introduction This post is some quick notes on using [ZIO](https://zio.dev/) and [zio-telemetry](https://github.com/zio/zio-telemetry) to implement [OpenTelemetry](https://opentelemetry.io/) distributed tracing for Scala applications. The source code is available [here](https://github.com/tuleism/opentelemetry-distributed-tracing-zio). This is not an introduction to any of these technologies, but here are a few good reads: - NewRelic's introduction to [Distributed Tracing](https://newrelic.com/resources/ebooks/quick-introduction-distributed-tracing). - Lightstep's concise summary for [OpenTelemetry](https://opentelemetry.lightstep.com/). ## Initial implementation For demonstration purposes, we will perform [manual instrumentation](https://opentelemetry.io/docs/concepts/instrumenting/#manual-instrumentation) on a [modified](https://github.com/tuleism/opentelemetry-distributed-tracing-zio/commit/d9ba391dc9725e3e4ddae9c05cfe2641d8a435cf) version of [zio-grpc](https://scalapb.github.io/zio-grpc/)'s [helloworld example](https://github.com/scalapb/zio-grpc/tree/master/examples/helloworld), in which we incorporate both [gRPC](https://grpc.io/) and HTTP communications: - **Original**: `hello-client` sends a `HelloRequest` with `name` *x* and `hello-server` returns a `HelloReply` with `message` *Hello, x*. - **Modified**: in addition to the original behavior, the client sends an optional integer field `guess` and the server performs an HTTP request to [HTTPBin](https://httpbin.org/) based on its value.
![Initial Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnqh607w1uvszevnsdz4.png) ### Add the new flag `guess`: ```protobuf // The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. message HelloRequest { string name = 1; google.protobuf.Int32Value guess = 2; } // The response message containing the greetings message HelloReply { string message = 1; } ``` ### Add zio-grpc dependency ```scala resolvers += Resolver.sonatypeRepo("snapshots") addSbtPlugin("org.scalameta" % "sbt-scalafmt" % "2.4.3") addSbtPlugin("com.thesamet" % "sbt-protoc" % "1.0.4") val zioGrpcVersion = "0.5.1+12-93cdbe22-SNAPSHOT" libraryDependencies ++= Seq( "com.thesamet.scalapb.zio-grpc" %% "zio-grpc-codegen" % zioGrpcVersion, "com.thesamet.scalapb" %% "compilerplugin" % "0.11.5" ) ``` ### Set up `build.sbt` - Generate Scala code from `helloworld.proto`. - Depend on [sttp](https://sttp.softwaremill.com) for HTTP client. ```scala val grpcVersion = "1.41.0" val sttpVersion = "3.3.15" val scalaPBRuntime = "com.thesamet.scalapb" %% "scalapb-runtime-grpc" % scalapb.compiler.Version.scalapbVersion val grpcRuntimeDeps = Seq( "io.grpc" % "grpc-netty" % grpcVersion, scalaPBRuntime, scalaPBRuntime % "protobuf" ) val sttpZioDeps = Seq( "com.softwaremill.sttp.client3" %% "async-http-client-backend-zio" % sttpVersion ) lazy val root = Project("opentelemetry-distributed-tracing-zio", file(".")).aggregate(zio) lazy val zio = commonProject("zio").settings( Compile / PB.targets := Seq( scalapb.gen(grpc = true) -> (Compile / sourceManaged).value, scalapb.zio_grpc.ZioCodeGenerator -> (Compile / sourceManaged).value ), libraryDependencies ++= grpcRuntimeDeps ++ sttpZioDeps ) ``` ### Client implementation - Create a gRPC client pointing to localhost:9000. - Send a single `HelloRequest`. - Send 5 `HelloRequest`s in parallel. - Send a single `HelloRequest` with an invalid guess. 
- Print "Done" and exit. ```scala object ZClient extends zio.App { private val clientLayer = GreeterClient.live( ZManagedChannel( ManagedChannelBuilder.forAddress("localhost", 9000).usePlaintext() ) ) private val singleHello = GreeterClient.sayHello(HelloRequest("World")) private val multipleHellos = ZIO.collectAllParN(5)( List( GreeterClient.sayHello(HelloRequest("1", Some(1))), GreeterClient.sayHello(HelloRequest("2", Some(2))), GreeterClient.sayHello(HelloRequest("3", Some(3))), GreeterClient.sayHello(HelloRequest("4", Some(4))), GreeterClient.sayHello(HelloRequest("5", Some(5))) ) ) private val invalidHello = GreeterClient.sayHello(HelloRequest("Invalid", Some(-1))).ignore private def myAppLogic = singleHello *> multipleHellos *> invalidHello *> putStrLn("Done") def run(args: List[String]): URIO[ZEnv, ExitCode] = myAppLogic.provideCustomLayer(clientLayer).exitCode } ``` ### Server implementation - Fail the request if `guess` is less than 0. - Based on the value of `guess`, delay for some time and then send a request to HTTPBin. ```scala type ZGreeterEnv = Clock with Random with SttpClient ``` ```scala object ZGreeterImpl extends RGreeter[ZGreeterEnv] { def sayHello(request: HelloRequest): ZIO[ZGreeterEnv, Status, HelloReply] = { val guess = request.guess.getOrElse(0) for { _ <- ZIO.fail(Status.INVALID_ARGUMENT).when(guess < 0) code <- ??? delayMs = ??? _ <- httpRequest(code) .delay(delayMs.millis) .mapError(ex => Status.INTERNAL.withCause(ex)) } yield HelloReply(s"Hello, ${request.name}") } def httpRequest(code: Int): RIO[SttpClient, Unit] = send(basicRequest.get(uri"https://httpbin.org/status/$code")).unit } ``` ### Run it - To run the server: ```bash $ sbt "zio/runMain com.github.tuleism.ZServer" [info] running (fork) com.github.tuleism.ZServer [info] Server is running. Press Ctrl-C to stop. 
``` - To run the client: ```bash $ sbt "zio/runMain com.github.tuleism.ZClient" [info] running (fork) com.github.tuleism.ZClient [info] Done [success] Total time: 12 s ``` At this point, we only know that it takes roughly 12 seconds for the client to initialize and finish its work. Let's add distributed tracing to gain more insights into this. ## Common tracing requirements - For both client and server, we need to acquire a `Tracer`, an object responsible for creating and managing `Span`s. - Tracing data is sent to [Jaeger](https://www.jaegertracing.io/), which acts as a standalone [collector](https://opentelemetry.lightstep.com/the-collector-and-exporters/). ![Instrumented Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oug7wvdrcos2luchiq3e.png) ### Add new dependencies - [zio-telemetry's OpenTelemetry module](https://zio.github.io/zio-telemetry/docs/overview/overview_opentelemetry). - We also depend on [zio-config](https://zio.github.io/zio-config/) to read tracing config from a file and [zio-magic](https://github.com/kitlangton/zio-magic) to ease [ZLayer](https://zio.dev/1.x/datatypes/contextual/zlayer/) wiring.
```scala val openTelemetryVersion = "1.6.0" val zioConfigVersion = "1.0.10" val zioMagicVersion = "0.3.9" val zioTelemetryVersion = "0.8.2" val openTelemetryDeps = Seq( "io.opentelemetry" % "opentelemetry-exporter-jaeger" % openTelemetryVersion, "io.opentelemetry" % "opentelemetry-sdk" % openTelemetryVersion, "io.opentelemetry" % "opentelemetry-extension-noop-api" % s"$openTelemetryVersion-alpha" ) val zioConfigDeps = Seq( "dev.zio" %% "zio-config" % zioConfigVersion, "dev.zio" %% "zio-config-magnolia" % zioConfigVersion, "dev.zio" %% "zio-config-typesafe" % zioConfigVersion ) val zioMagicDeps = Seq( "io.github.kitlangton" %% "zio-magic" % zioMagicVersion ) val zioTelemetryDeps = Seq( "dev.zio" %% "zio-opentelemetry" % zioTelemetryVersion, "com.softwaremill.sttp.client3" %% "zio-telemetry-opentelemetry-backend" % sttpVersion ) ``` ### Add a config layer ``` tracing { enable = false enable = ${?TRACING_ENABLE} endpoint = "http://127.0.0.1:14250" endpoint = ${?JAEGER_ENDPOINT} } ``` ```scala case class AppConfig(tracing: TracingConfig) case class TracingConfig(enable: Boolean, endpoint: String) object AppConfig { private val configDescriptor = descriptor[AppConfig] val live: Layer[ReadError[String], Has[AppConfig]] = TypesafeConfig.fromDefaultLoader(configDescriptor) } ``` ### Add a `Tracer` layer - Depending on the configuration, we either create a *noop* `Tracer` or one that sends data to Jaeger. - Once we have it, we can construct a `Tracing` layer, which gives us access to many [useful operations in zio-telemetry](https://www.javadoc.io/static/dev.zio/zio-opentelemetry_2.13/0.8.2/zio/telemetry/opentelemetry/Tracing$.html).
```scala object ZTracer { private val InstrumentationName = "com.github.tuleism" private def managed(serviceName: String, endpoint: String) = { val resource = Resource.builder().put(ResourceAttributes.SERVICE_NAME, serviceName).build() for { spanExporter <- ZManaged.fromAutoCloseable( Task(JaegerGrpcSpanExporter.builder().setEndpoint(endpoint).build()) ) spanProcessor <- ZManaged.fromAutoCloseable(UIO(SimpleSpanProcessor.create(spanExporter))) tracerProvider <- UIO( SdkTracerProvider.builder().addSpanProcessor(spanProcessor).setResource(resource).build() ).toManaged_ openTelemetry <- UIO(OpenTelemetrySdk.builder().setTracerProvider(tracerProvider).build()).toManaged_ tracer <- UIO(openTelemetry.getTracer(InstrumentationName)).toManaged_ } yield tracer } def live(serviceName: String): RLayer[Has[TracingConfig], Has[Tracer]] = ( for { config <- ZIO.service[TracingConfig].toManaged_ tracer <- if (!config.enable) { Task(NoopOpenTelemetry.getInstance().getTracer(InstrumentationName)).toManaged_ } else { managed(serviceName, config.endpoint) } } yield tracer ).toLayer } ``` ## New server ### Instrument the HTTP client - Use out-of-the-box [sttp backend](https://sttp.softwaremill.com/en/latest/backends/wrappers/zio-opentelemetry.html). - We also add additional HTTP specific attributes according to the OpenTelemetry's [semantic convention](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/http.md#name). 
```scala object SttpTracing { private val wrapper = new ZioTelemetryOpenTelemetryTracer { def before[T](request: Request[T, Nothing]): RIO[Tracing, Unit] = Tracing.setAttribute(SemanticAttributes.HTTP_METHOD.getKey, request.method.method) *> Tracing.setAttribute(SemanticAttributes.HTTP_URL.getKey, request.uri.toString()) *> ZIO.unit def after[T](response: Response[T]): RIO[Tracing, Unit] = Tracing.setAttribute(SemanticAttributes.HTTP_STATUS_CODE.getKey, response.code.code) *> ZIO.unit } val live = AsyncHttpClientZioBackend.layer().flatMap { hasBackend => ZIO .service[Tracing.Service] .map { tracing => ZioTelemetryOpenTelemetryBackend(hasBackend.get, tracing, wrapper) } .toLayer } } ``` ### Instrument the gRPC server We can add `Tracing` without changing our server implementation with a [ZTransform](https://scalapb.github.io/zio-grpc/docs/decorating/). For each request: - We use [zio-telemetry's spanFrom](https://www.javadoc.io/static/dev.zio/zio-opentelemetry_2.13/0.8.2/zio/telemetry/opentelemetry/TracingSyntax$$OpenTelemetryZioOps.html#spanFrom[C](propagator:io.opentelemetry.context.propagation.TextMapPropagator,carrier:C,getter:io.opentelemetry.context.propagation.TextMapGetter[C],spanName:String,spanKind:io.opentelemetry.api.trace.SpanKind,toErrorStatus:PartialFunction[E,io.opentelemetry.api.trace.StatusCode]):zio.ZIO[Rwithzio.telemetry.opentelemetry.Tracing,E,A]), which extracts the propagated context (through [gRPC Metadata](https://grpc.io/docs/what-is-grpc/core-concepts/#metadata), using [W3C Trace Context format](https://w3c.github.io/trace-context/)) and starts a new child `Span` right after. - We have access to a [RequestContext](https://scalapb.github.io/zio-grpc/docs/context) and thus the full method name used for `Span`'s name. 
- We also add additional gRPC specific attributes according to the OpenTelemetry's [semantic convention](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/rpc.md#grpc-attributes). ```scala object GrpcTracing { private val propagator: TextMapPropagator = W3CTraceContextPropagator.getInstance() private val metadataGetter: TextMapGetter[Metadata] = new TextMapGetter[Metadata] { override def keys(carrier: Metadata): java.lang.Iterable[String] = carrier.keys() override def get(carrier: Metadata, key: String): String = carrier.get( Metadata.Key.of(key, Metadata.ASCII_STRING_MARSHALLER) ) } private def withSemanticAttributes[R, A](effect: ZIO[R, Status, A]): ZIO[Tracing with R, Status, A] = Tracing.setAttribute(SemanticAttributes.RPC_SYSTEM.getKey, "grpc") *> effect .tapBoth( status => Tracing.setAttribute( SemanticAttributes.RPC_GRPC_STATUS_CODE.getKey, status.getCode.value() ), _ => Tracing.setAttribute(SemanticAttributes.RPC_GRPC_STATUS_CODE.getKey, Status.OK.getCode.value()) ) def serverTracingTransform[R]: ZTransform[R, Status, R with Tracing with Has[RequestContext]] = new ZTransform[R, Status, R with Tracing with Has[RequestContext]] { def effect[A](io: ZIO[R, Status, A]): ZIO[R with Tracing with Has[RequestContext], Status, A] = for { rc <- ZIO.service[RequestContext] metadata <- rc.metadata.wrap(identity) result <- withSemanticAttributes(io) .spanFrom( propagator, metadata, metadataGetter, rc.methodDescriptor.getFullMethodName, SpanKind.SERVER, { case _ => StatusCode.ERROR } ) } yield result def stream[A](io: ZStream[R, Status, A]): ZStream[R with Tracing with Has[RequestContext], Status, A] = ??? } } ``` ### Update Server Main - Add required layers for Tracing. - Transform the original `ZGreeterImpl`. 
```scala import zio.magic._ object ZServer extends ServerMain { private val requirements = ZLayer .wire[ZEnv with ZGreeterEnv]( ZEnv.live, AppConfig.live.narrow(_.tracing), ZTracer.live("hello-server"), Tracing.live, SttpTracing.live ) .orDie def services: ServiceList[Any] = ServiceList .add(ZGreeterImpl.transform[ZGreeterEnv, Has[RequestContext]](GrpcTracing.serverTracingTransform)) .provideLayer(requirements) } ``` ## New client ### Inject current context into [gRPC Metadata](https://grpc.io/docs/what-is-grpc/core-concepts/#metadata) for [context propagation](https://opentelemetry.lightstep.com/core-concepts/context-propagation/) ```scala object GrpcTracing { ... private val metadataSetter: TextMapSetter[Metadata] = (carrier, key, value) => carrier.put(Metadata.Key.of(key, Metadata.ASCII_STRING_MARSHALLER), value) val contextPropagationClientInterceptor: ZClientInterceptor[Tracing] = ZClientInterceptor.headersUpdater { (_, _, metadata) => metadata.wrapM(Tracing.inject(propagator, _, metadataSetter)) } ... } ``` ```scala object ZClient extends zio.App { private val clientLayer = GreeterClient.live( ZManagedChannel( ManagedChannelBuilder.forAddress("localhost", 9000).usePlaintext(), Seq(GrpcTracing.contextPropagationClientInterceptor) ) ) ... } ``` ### Start a `Span` for each request - Use `ZTransform` to record the relevant gRPC attributes. ```scala object GrpcTracing { ... def clientTracingTransform[R]: ZTransform[R, Status, R with Tracing] = new ZTransform[R, Status, R with Tracing] { def effect[A](io: ZIO[R, Status, A]): ZIO[R with Tracing, Status, A] = withSemanticAttributes(io) def stream[A](io: ZStream[R, Status, A]): ZStream[R with Tracing, Status, A] = ??? } } ``` - Unlike the server, we don't have access to a `RequestContext` object, so we have to set the method name manually. - We also start additional `Span`s. ```scala object ZClient extends zio.App { ... 
private def errorToStatusCode[E]: PartialFunction[E, StatusCode] = { case _ => StatusCode.ERROR } private def sayHello(request: HelloRequest) = GreeterClient .sayHello(request) .span( GreeterGrpc.METHOD_SAY_HELLO.getFullMethodName, SpanKind.CLIENT, errorToStatusCode ) private val singleHello = sayHello(HelloRequest("World")) .span("singleHello", toErrorStatus = errorToStatusCode) private val multipleHellos = ZIO .collectAllParN(5)( List( sayHello(HelloRequest("1", Some(1))), sayHello(HelloRequest("2", Some(2))), sayHello(HelloRequest("3", Some(3))), sayHello(HelloRequest("4", Some(4))), sayHello(HelloRequest("5", Some(5))) ) ) .span("multipleHellos", toErrorStatus = errorToStatusCode) private val invalidHello = sayHello(HelloRequest("Invalid", Some(-1))).ignore .span("invalidHello", toErrorStatus = errorToStatusCode) } ``` ### Add required layers ```scala object ZClient extends zio.App { ... private val requirements = ZLayer .wire[ZEnv with Tracing]( ZEnv.live, AppConfig.live.narrow(_.tracing), ZTracer.live("hello-client"), Tracing.live ) >+> clientLayer def run(args: List[String]): URIO[ZEnv, ExitCode] = myAppLogic.provideCustomLayer(requirements).exitCode } ``` ## Showtime ### Run Jaeger through Docker - Tracing data can be sent to port 14250. - We can view [Jaeger UI](https://github.com/jaegertracing/jaeger-ui) at http://localhost:16686. ```bash $ docker run --rm --name jaeger \ -p 16686:16686 \ -p 14250:14250 \ jaegertracing/all-in-one:1.25 ``` ### Start the server ```bash $ TRACING_ENABLE=true sbt "zio/runMain com.github.tuleism.ZServer" [info] running (fork) com.github.tuleism.ZServer [info] Server is running. Press Ctrl-C to stop. ``` ### Start the client ```bash $ TRACING_ENABLE=true sbt "zio/runMain com.github.tuleism.ZClient" [info] running (fork) com.github.tuleism.ZClient [info] Done [success] Total time: 12 s ``` ### Distributed Tracing in action! 
- Now we can see the details for `multipleHellos`: ![multipleHellos Span](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j54n9vrfngkjb6idaddh.png) - And which `guess` is causing the longest delay. ![bad guess](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojdo73cv5o525j3inn2b.png) ## Integration with Logging - Let's add tracing context into log messages following the [specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/overview.md#trace-context-in-legacy-formats). - We're going to use [izumi logstage](https://izumi.7mind.io/logstage/index.html), our favorite logging library. - See the [diffs](https://github.com/tuleism/opentelemetry-distributed-tracing-zio/commit/d9ba391dc9725e3e4ddae9c05cfe2641d8a435cf). ### Add logging dependency ```scala val izumiVersion = "1.0.8" val loggingDeps = Seq( "io.7mind.izumi" %% "logstage-core" % izumiVersion, "io.7mind.izumi" %% "logstage-adapter-slf4j" % izumiVersion ) ``` ### Setup logging - Add `trace_id`, `span_id` to logging context if current trace context is valid. ```scala object Logging { private def baseLogger = IzLogger() val live: ZLayer[Has[Tracing.Service], Nothing, Has[LogZIO.Service]] = ( for { tracing <- ZIO.service[Tracing.Service] } yield LogZIO.withDynamicContext(baseLogger)( Tracing.getCurrentSpanContext .map(spanContext => if (spanContext.isValid) CustomContext( "trace_id" -> spanContext.getTraceId, "span_id" -> spanContext.getSpanId, "trace_flags" -> spanContext.getTraceFlags.asHex() ) else CustomContext.empty ) .provide(Has(tracing)) ) ).toLayer } ``` ### Add a few log messages - E.g for `singleHello`. ```scala object ZClient extends zio.App { ... 
private val singleHello = ( for { _ <- log.info("singleHello") _ <- sayHello(HelloRequest("World")) } yield () ).span("singleHello", toErrorStatus = errorToStatusCode) } ``` ### Sample Logs ```bash [info] running (fork) com.github.tuleism.ZClient [info] I 2021-11-01T22:59:10.881 (ZClient.scala:37) …tuleism.ZClient.singleHello [24:zio-default-async-11] trace_id=9c8a7ebb87381293bc8937a5f7673cb9, span_id=cb7c9a440472e1be, trace_flags=01 singleHello [info] I 2021-11-01T22:59:14.064 (ZClient.scala:44) …eism.ZClient.multipleHellos [21:zio-default-async-8 ] trace_id=fe405246fbaa5f876c19f14fa649a99f, span_id=bef19494bef4106e, trace_flags=01 multipleHellos [info] I 2021-11-01T22:59:18.171 (ZClient.scala:60) …uleism.ZClient.invalidHello [26:zio-default-async-13] trace_id=be5ccd425e0cfb01fd97274abd0c4d72, span_id=ea6499fb9a7c8d28, trace_flags=01 invalidHello [info] I 2021-11-01T22:59:18.272 (ZClient.scala:66) ….tuleism.ZClient.myAppLogic [15:zio-default-async-2 ] Done [success] Total time: 12 s ``` ## Extra notes - If we receive an HTTP 5xx response, we should set the `Span` status to error according to the [semantic convention](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/http.md#status). However, it is currently [not possible](https://github.com/zio/zio-telemetry/issues/444) with `zio-telemetry`. - We need [a better way](https://github.com/scalapb/zio-grpc/issues/309) to implement tracing for `zio-grpc` client.
tuleism
884,856
Simple Python Projects for Beginners With Source Code
I believe that the best way to master any programming language is to create real-life projects using...
0
2021-11-02T04:41:42
https://dev.to/visheshdvivedi/simple-python-projects-for-beginners-with-source-code-4nb8
python, programming, beginners, tutorial
I believe that the best way to master any programming language is to create real-life projects using that language. The same is true for Python. If you are a beginner Python programmer and you want to master the language, then it's really important that you start creating real-life projects with it. Now it can be a bit difficult for beginner programmers to find projects that are easy for them to build, along with the source code for research and analysis as to how the project works. So I have created a list of the top 10 easy Python project ideas for beginners, along with their source code. As you progress through this post, the project difficulty will keep increasing. As a beginner programmer, you should be able to create all these projects by yourself as you go through your programming journey. If you liked my post, don’t forget to check out other posts on my blog site. You can also check out my YouTube channel, where I already have a video on this topic. So let’s start with our list. ###1. Calculator [![simple python projects for beginners with source code - calculator.jpg](https://1.bp.blogspot.com/-z90ZW7j9mNU/YRqt9jaOhmI/AAAAAAAAAcs/x1gInDZTlcgY4Oh4PSpU5sJK0Vi_DCRCgCLcBGAsYHQ/w640-h378/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bcalculator.jpg "simple python projects for beginners with source code - calculator.jpg")](https://1.bp.blogspot.com/-z90ZW7j9mNU/YRqt9jaOhmI/AAAAAAAAAcs/x1gInDZTlcgY4Oh4PSpU5sJK0Vi_DCRCgCLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bcalculator.jpg) The calculator is probably one of the most common project ideas that you will see in many YouTube videos. The reason for this is that it is very easy to make, and it will clear up your basic concepts regarding if-else statements and while loops. The calculator will start by asking what the user wants to do: 1. Addition 2. Subtraction 3. Multiplication 4. Division, and 5. Exit Based on the option, the script will either add, subtract, multiply or divide the numbers that the user entered. Once the operation is complete, the script will again ask the user for their choice. If the user chooses to exit, the program will stop. You can find this project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/calculator.py). ###2. Dice Rolling Simulator [![simple python projects for beginners with source code - dice rolling simulator](https://1.bp.blogspot.com/-lwn5gSZUl9E/YRquR_cv9WI/AAAAAAAAAc0/8hakSZ0M-X45CkXWF6kvzqZmj34KWI_wgCLcBGAsYHQ/w640-h336/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bdice%2Brolling%2Bsimulator.jpg "simple python projects for beginners with source code - dice rolling simulator")](https://1.bp.blogspot.com/-lwn5gSZUl9E/YRquR_cv9WI/AAAAAAAAAc0/8hakSZ0M-X45CkXWF6kvzqZmj34KWI_wgCLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bdice%2Brolling%2Bsimulator.jpg) This project is, again, an easy one, but at the same time a very crucial one, as it will introduce beginner Python programmers to the random library. The random library is used to generate pseudo-random results. We call its output pseudo-random because it is not truly random, but it is close enough for most purposes. This script will ask the user whether to roll the dice or exit the game. If the user chooses to roll the dice, the script will use the random module to get a random number between 1 and 6 and display it to the user. If the user chooses to exit, the program will simply exit. You can find this project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/dice_rolling.py) ###3.
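As a rough sketch of the dice-rolling idea described above (the function names here are my own and may differ from the linked source), the core is just `random.randint` inside a loop:

```python
import random

def roll_dice():
    """Return a pseudo-random die face from 1 to 6 (randint bounds are inclusive)."""
    return random.randint(1, 6)

def play():
    # Keep rolling until the user chooses to exit.
    while input("Roll the dice? (y/n): ").strip().lower() != "n":
        print(f"You rolled a {roll_dice()}")
```

Calling `play()` keeps prompting and rolling until the user types `n`.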
Number Guessing Game [![simple python projects for beginners with source code - number guessing game](https://1.bp.blogspot.com/-yE8mjMltBsM/YRqudbnP1YI/AAAAAAAAAc4/UoCDKKte87ce88g3Jtw_apLnft0TA8AFgCLcBGAsYHQ/w640-h480/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bnumber%2Bguessing%2Bgame.jpg "simple python projects for beginners with source code - number guessing game")](https://1.bp.blogspot.com/-yE8mjMltBsM/YRqudbnP1YI/AAAAAAAAAc4/UoCDKKte87ce88g3Jtw_apLnft0TA8AFgCLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bnumber%2Bguessing%2Bgame.jpg) This project is similar to the dice rolling simulator in that it also uses the random module to generate random numbers. However, this project adds some extra logic that turns it into a fun game to play. In this game, the script randomly picks a number between 1 and 10 and the user has to guess which number it is. After each guess, the script tells the user whether the secret number is greater or less than their guess, and at the end it reports how many tries the user needed to guess the number. You can find this project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/number_guessing.py). ###4.
Value Converter [![simple python projects for beginners with source code - value converter](https://1.bp.blogspot.com/-_tEasoZ3VpA/YRquoXUZsvI/AAAAAAAAAdA/B6Mt9jpnCbMEl48bktHBZHdyO_61arYYwCLcBGAsYHQ/w640-h426/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bvalue%2Bconverter.jpg "simple python projects for beginners with source code - value converter")](https://1.bp.blogspot.com/-_tEasoZ3VpA/YRquoXUZsvI/AAAAAAAAAdA/B6Mt9jpnCbMEl48bktHBZHdyO_61arYYwCLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bvalue%2Bconverter.jpg) Its name can be a bit confusing, but the project itself is very easy to make. The script we will build in this project performs some common conversions between measurement units, like Celsius to Fahrenheit, rupee to pound, centimeter to foot, etc.  Initially, this script will only be able to perform some basic conversions that require very basic mathematics, but as you improve your programming skills and get introduced to the NumPy library, you can add more complex conversions, like calculating the area and circumference of a circle with a given radius. (You would need the value of pi for this conversion, which you can get from the NumPy library.) You can find this project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/value_converter.py). ###5.
Countdown Timer [![simple python projects for beginners with source code - countdown timer](https://1.bp.blogspot.com/-D8vtbvu1sU4/YRquxFT6rAI/AAAAAAAAAdI/B-J2XnEPCGU0nCCfcr5B8QW3tj6WTB0IQCLcBGAsYHQ/w640-h590/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bcountdown%2Btimer.jpg "simple python projects for beginners with source code - countdown timer")](https://1.bp.blogspot.com/-D8vtbvu1sU4/YRquxFT6rAI/AAAAAAAAAdI/B-J2XnEPCGU0nCCfcr5B8QW3tj6WTB0IQCLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bcountdown%2Btimer.jpg) Its name is pretty self-explanatory. In this project, we will create a script that can start a countdown timer for a certain amount of time. The user first enters the number of seconds the countdown should run; once the amount is entered, the script starts the countdown and waits for the entered number of seconds to complete. It works just like the countdown timer on your watch or smartphone. You can find the project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/countdown_timer.py). ###6.
Password Strength Checker [![simple python projects for beginners with source code - password strength checker](https://1.bp.blogspot.com/-y29Kk-XqLVg/YRqu45F5ycI/AAAAAAAAAdQ/Q05g8UicG4EYCb0PXL3vgNyznBquDzfHgCLcBGAsYHQ/w640-h418/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bpassword%2Bstrength%2Bchecker.jpg "simple python projects for beginners with source code - password strength checker")](https://1.bp.blogspot.com/-y29Kk-XqLVg/YRqu45F5ycI/AAAAAAAAAdQ/Q05g8UicG4EYCb0PXL3vgNyznBquDzfHgCLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bpassword%2Bstrength%2Bchecker.jpg) Whenever you create an account on a site by entering your username and password, the site checks that the password you are trying to save is strong, so that hackers cannot guess it easily. In this project, we are going to mimic that functionality in a console-based Python script. The script will check the strength of the password the user enters: it will show the number of letters, digits, special characters, and whitespace characters present in the password and mark the password's strength accordingly. The strongest password will be one containing at least one uppercase letter, one lowercase letter, one digit, one special character, and one whitespace character. So it's time you changed your Gmail password. You can find this project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/password_strength_checker.py). ###7.
Hangman Game [![simple python projects for beginners with source code - hangman](https://1.bp.blogspot.com/-XwlML13RWlM/YRqvArTo_cI/AAAAAAAAAdU/q3UrQKvVCTM3iwNCHT_EPZLuZmNX9WozwCLcBGAsYHQ/w640-h480/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bhangman.jpg "simple python projects for beginners with source code - hangman")](https://1.bp.blogspot.com/-XwlML13RWlM/YRqvArTo_cI/AAAAAAAAAdU/q3UrQKvVCTM3iwNCHT_EPZLuZmNX9WozwCLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Bhangman.jpg) Do you remember the game you used to play, where one player thinks of a word and writes its letters as underscores, and the other player has to guess the word letter by letter in a few chances? This project is exactly that game, made in Python. Hangman is another very common project for beginner programmers. In this game, the script randomly picks a word from a list of words and shows the user the number of letters in the word, and the user has to guess the word by guessing each letter that may appear in it. The user has 7 chances to guess a letter that is present in the word; if he or she is unable to guess the word within 7 chances, the user loses the game. You can create this project and have fun with your friends while showing off your programming skills (definitely gonna impress some girls :D). You can find this project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/hangman.py). ###8.
Tic Tac Toe [![simple python projects for beginners with source code - tic tac toe](https://1.bp.blogspot.com/-OJe3MjV2mRI/YRqvIHBij2I/AAAAAAAAAdc/Q4A9DCAxHt8RlLnMVPz6BkIdNyBTahkVACLcBGAsYHQ/w640-h562/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Btic%2Btac%2Btoe.jpg "simple python projects for beginners with source code - tic tac toe")](https://1.bp.blogspot.com/-OJe3MjV2mRI/YRqvIHBij2I/AAAAAAAAAdc/Q4A9DCAxHt8RlLnMVPz6BkIdNyBTahkVACLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Btic%2Btac%2Btoe.jpg) If you haven't heard of this game or have never played it, you must have had a rough childhood, just kidding :D. Tic tac toe is a popular two-player game played on a 3-by-3 board, where each player tries to get three of their marks in a row. Whoever does so first wins; otherwise the game is a tie. In this project, we will make a tic tac toe game in Python as a console application, which requires an understanding of 2D lists in Python. Not only can you create this project, you can also create an undefeatable version of the tic tac toe game in Python. If you want to know how to make it, you can check out this blog [here](https://itsallaboutpython.blogspot.com/2021/05/create-undefeatable-tic-tac-toe-in.html). You can find the project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/tic_tac_toe.py). ###9.
Rock, Paper, Scissors [![simple python projects for beginners with source code - rock paper scissor](https://1.bp.blogspot.com/-_nxoo75e5zY/YRqvQItMFlI/AAAAAAAAAdo/gv8p6Wpv0hcHmx6AzwpXj756o5MdwmEjwCLcBGAsYHQ/w640-h426/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Brock%2Bpaper%2Bscissor.jpg "simple python projects for beginners with source code - rock paper scissor")](https://1.bp.blogspot.com/-_nxoo75e5zY/YRqvQItMFlI/AAAAAAAAAdo/gv8p6Wpv0hcHmx6AzwpXj756o5MdwmEjwCLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Brock%2Bpaper%2Bscissor.jpg) This project mimics the rock paper scissors game, which I am sure you already know. Here the user acts as one player and the computer as the other: the user chooses one of rock, paper, and scissors, and the script also chooses one. Based on both choices, the script determines who won and shows the result to the user. You can find the project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/rock_paper_scissor.py). ###10. User Record Management System [![simple python projects for beginners with source code - user record management system](https://1.bp.blogspot.com/--4tA5Kz7c90/YRqvXiVYNTI/AAAAAAAAAdw/bfRq-8X0WBQ-3HFjtUVzbCJATItBDRgUACLcBGAsYHQ/w640-h372/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Buser%2Brecord%2Bmanagement%2Bsystem.jpg "simple python projects for beginners with source code - user record management system")](https://1.bp.blogspot.com/--4tA5Kz7c90/YRqvXiVYNTI/AAAAAAAAAdw/bfRq-8X0WBQ-3HFjtUVzbCJATItBDRgUACLcBGAsYHQ/s1920/simple%2Bpython%2Bprojects%2Bfor%2Bbeginners%2Bwith%2Bsource%2Bcode%2B-%2Buser%2Brecord%2Bmanagement%2Bsystem.jpg) Have you ever wondered how the software that manages the data of millions of users works? Maybe you can make a smaller version of that software using Python.
This project focuses on clearly understanding CRUD operations: create, read, update, and delete. The script will maintain a database in a Python dictionary and give users the option to create an entry, read a single entry or all entries, update any existing entry, or delete any entry. Those who are familiar with OOP can also attempt to build this project following the OOP paradigm.  You can find the project's source code [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners/blob/main/user_management_system.py). ###Conclusion Those were the top 10 easy Python projects for beginners. I have created all 10 of these projects and uploaded them to GitHub; you can find the link [here](https://github.com/visheshdvivedi/Top-10-Easy-Python-Project-Ideas-For-Beginners). But honestly, I would like you all to try building these projects by yourself, as that will add far more value to your programming skills. Of course, you can contact me anytime on Instagram or by joining my Discord server, and I will try my best to respond to your queries as accurately as possible.  If you liked this post, don't forget to check out my other posts. You can also check out my YouTube channel [here](https://youtube.com/allaboutpython?sub_confirmation=1). Thanks for reading, and take care.
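As a quick taste of how small these projects can start out, here is a minimal sketch of the dice rolling simulator from #2 (an illustrative version; the code in the repository linked above may differ):

```python
import random

def roll_dice():
    """Return a pseudo-random roll of a six-sided die."""
    return random.randint(1, 6)

def play():
    # Keep asking until the user chooses to exit.
    while True:
        choice = input("Roll the dice? (y/n): ").strip().lower()
        if choice != "y":
            print("Thanks for playing!")
            break
        print(f"You rolled a {roll_dice()}")
```

Call `play()` to start an interactive session in the terminal.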
visheshdvivedi
885,082
Why you should never use random module for generating passwords.
Why Random Numbers Are Not Random? The Random Numbers come from a particular seed number...
15,405
2021-11-04T09:43:19
https://dev.to/vaarun_sinha/why-you-should-never-use-random-module-for-generating-passwords-38nl
python, beginners, programming, security
## Why Random Numbers Are Not Random? Random numbers come from a particular seed, which is usually based on the system clock. Run the program below to understand the security risk. {% replit @MRINDIA1/Random-Numbers-Are-Not-Random %} The Python documentation also has a warning about this: "The **pseudo-random** generators of this module **should not be used for security purposes**." So all the password generators you have built using the random module are not secure!? How, then, do we generate cryptographically secure numbers and passwords? There is another line after that warning: "*For security or cryptographic uses, see the secrets module.*" ## What is this secrets module? The secrets module is used for generating cryptographically strong random numbers suitable for managing data such as passwords, account authentication, security tokens, and related secrets. ### How is it different from the random module? I found a really good post on Reddit that explains the difference between these two modules. The post says: >"with random your numbers come from some seed number, usually based on the system clock, which generates pseudo-random numbers. That means that if you get guess the seed, you can generate the same sequence of numbers. If you used pseudorandomly generated numbers as salts for all your passwords, then brute forcing the keys would become trivial. true random numbers come from "high entropy" seeds, meaning it's not just some number you can guess, it's things that are impossible to reproduce algorithmically. Imagine things like keyboard inputs, time between keystrokes, mouse movements, cpu usage, number of programs running, etc. It might not use those exactly, but you can see how the numbers it generates from those sources are literally impossible to reproduce which is why you want to use those ones as your encryption keys and salts." And another post says: >"It's more secure because it's less predictable.
The random module uses an algorithm that's fast but it's possible to calculate what the next random number will be. That's fine for randomly placing things on the screen or something but for generating passwords it's important that the number is not predictable." So basically, the secrets module makes the seed really hard to guess (less predictable). Reddit Post 1: https://www.reddit.com/r/learnpython/comments/7w8w6y/comment/dtyg7wd/?utm_source=share&utm_medium=web2x&context=3 Reddit Post 2: https://www.reddit.com/r/learnpython/comments/7w8w6y/comment/dtyi6z8/?utm_source=share&utm_medium=web2x&context=3 **Stay tuned for the next blog, where we build a password generator that produces cryptographically strong passwords.** ***Happy Coding***
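To make the difference concrete, here is a minimal sketch of a password generator built on the secrets module (the generator in the upcoming post may look different, and the character set here is just an example):

```python
import secrets
import string

def generate_password(length=16):
    """Build a password from a CSPRNG instead of a guessable seed."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from the operating system's entropy pool,
    # so the output cannot be reproduced by guessing a seed value.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())        # different on every run
print(secrets.token_urlsafe(16))  # ready-made helper for URL-safe tokens
```

The secrets module also provides `token_hex` and `token_bytes` for generating raw security tokens.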
vaarun_sinha
885,191
Top 10 articles about JavaScript of the week💚.
Most popular articles about JavaScript published on the dev.to in this week
14,901
2021-11-02T11:32:01
https://dev.to/ksengine/top-10-articles-about-javascript-of-the-week-590l
javascript, webdev, beginners, react
--- title: Top 10 articles about JavaScript of the week💚. published: true description: Most popular articles about JavaScript published on the dev.to in this week cover_image: https://source.unsplash.com/featured/?javascript tags: javascript,webdev,beginners,react series: Weekly JS top 10 --- DEV is a community of software developers getting together to help one another out. The software industry relies on collaboration and networked learning. They provide a place for that to happen. Once relegated to the browser as one of the 3 core technologies of the web, JavaScript can now be found almost anywhere you find code. JavaScript developers move fast and push software development forward; they can be as opinionated as the frameworks they use, so let's keep it clean here and make it a place to learn from each other! {% tag javascript %} Here are the most popular articles published on this platform this week. ## #1 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--uSUz8bE4--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ubno1i6mkcco3xkhps1.jpg)](https://dev.to/freakcdev297/creating-a-blockchain-in-60-lines-of-javascript-5fka){% post https://dev.to/freakcdev297/creating-a-blockchain-in-60-lines-of-javascript-5fka %} ## #2 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--Un_Faevb--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a133hjbjfi9wwcexub0m.png)](https://dev.to/nehal_mahida/filter-map-and-reduce-in-js-when-and-where-to-use-281c){% post https://dev.to/nehal_mahida/filter-map-and-reduce-in-js-when-and-where-to-use-281c %} ## #3 [![Image of 
post](https://res.cloudinary.com/practicaldev/image/fetch/s--1hvstQpL--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t68umi2xkf5fr8g0uol5.png)](https://dev.to/0shuvo0/easiest-way-to-add-multilanguage-in-your-website-4n7){% post https://dev.to/0shuvo0/easiest-way-to-add-multilanguage-in-your-website-4n7 %} ## #4 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--98Gihlfk--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0gppai9tavc2hpe7ajsj.png)](https://dev.to/franciscomendes10866/create-a-responsive-navbar-using-react-and-tailwind-3768){% post https://dev.to/franciscomendes10866/create-a-responsive-navbar-using-react-and-tailwind-3768 %} ## #5 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--FJ0xik80--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zibj168e2c9xsdlojexh.png)](https://dev.to/sgoulas/i-created-an-e-commerce-site-from-scratch-and-kept-a-development-diary-over-the-cource-of-5-months-12mm){% post https://dev.to/sgoulas/i-created-an-e-commerce-site-from-scratch-and-kept-a-development-diary-over-the-cource-of-5-months-12mm %} ## #6 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--XcuP3uHc--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7fvl52qsqiilcey0n1i.jpeg)](https://dev.to/codeoz/7-nice-api-for-your-projects--3ap){% post https://dev.to/codeoz/7-nice-api-for-your-projects--3ap %} ## #7 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--g9-63U1h--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zs01nq18gjt385xkshgm.png)](https://dev.to/aviyel/building-a-music-player-application-in-react-js-3ngd){% post 
https://dev.to/aviyel/building-a-music-player-application-in-react-js-3ngd %} ## #8 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--VK9_gxza--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/52pjqkmjblnuqkj30si7.png)](https://dev.to/j471n/how-to-share-anything-from-your-website-by-web-share-api-1h5g){% post https://dev.to/j471n/how-to-share-anything-from-your-website-by-web-share-api-1h5g %} ## #9 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--hFhMEYhY--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/azhvn8vuqrofsidykaas.jpg)](https://dev.to/mustapha/a-deep-dive-into-es6-classes-2h52){% post https://dev.to/mustapha/a-deep-dive-into-es6-classes-2h52 %} ## #10 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--R92mhD9R--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19yus2bb4xdofh1vmnh3.png)](https://dev.to/javascriptacademy/create-a-simple-calculator-using-html-css-and-javascript-4o7k){% post https://dev.to/javascriptacademy/create-a-simple-calculator-using-html-css-and-javascript-4o7k %} > Original authors of these articles are @freakcdev297, @nehal_mahida, @0shuvo0, @franciscomendes10866, @sgoulas, @codeoz, @pramit_armpit, @j471n, @mustapha, @javascriptacademy. Enjoy these articles, and follow me for more. Thanks 💖💖💖
ksengine
885,413
Build an email news digest app with Nix, Python and Celery
In this tutorial, we'll build an application that sends regular emails to its users. Users will be...
0
2021-11-02T13:36:55
https://docs.replit.com/tutorials/31-build-news-digest-app-with-nix
In this tutorial, we'll build an application that sends regular emails to its users. Users will be able to subscribe to [RSS](https://en.wikipedia.org/wiki/RSS) and [Atom](https://en.wikipedia.org/wiki/Atom_(Web_standard)) feeds, and will receive a daily email with links to the newest stories in each one, at a specified time. As this application will require a number of different components, we're going to build it using the power of Nix repls. By the end of this tutorial, you'll be able to: * Use Nix on Replit to set up a database, webserver, message broker and background task handlers. * Use Python Celery to schedule and run tasks in the background. * Use Mailgun to send automated emails. * Build a dynamic Python application with multiple discrete parts. ## Getting started To get started, sign in to [Replit](https://replit.com) or [create an account](https://replit.com/signup) if you haven't already. Once logged in, create a Nix repl. ![Create a nix repl](https://docs.replit.com/images/tutorials/31-news-digest-app/create-nix-repl.png) ## Installing dependencies We'll start by using Nix to install the packages and libraries we'll need to build our application. These are: 1. **Python 3.9**, the programming language we'll write our application in. 2. **Flask**, Python's most popular micro web framework, which we'll use to power our web application. 3. **MongoDB**, the NoSQL database we'll use to store persistent data for our application. 4. **PyMongo**, a library for working with MongoDB in Python. 5. **Celery**, a Python task queuing system. We'll use this to send regular emails to users. 6. **Redis**, a data store and message broker used by Celery to track tasks. 7. Python's Redis library. 8. Python's Requests library, which we'll use to interact with an external API to send emails. 9. Python's feedparser library, which we'll use to parse news feeds. 10. Python's dateutil library, which we'll use to parse timestamps in news feeds. 
To install these dependencies, open `replit.nix` and edit it to include the following: ```nix { pkgs }: { deps = [ pkgs.cowsay pkgs.python39 pkgs.python39Packages.flask pkgs.mongodb pkgs.python39Packages.pymongo pkgs.python39Packages.celery pkgs.redis pkgs.python39Packages.redis pkgs.python39Packages.requests pkgs.python39Packages.feedparser pkgs.python39Packages.dateutil ]; } ``` Run your repl now to install all the packages. Once the Nix environment is finished loading, you should see a welcome message from `cowsay`. Now edit your repl's `.replit` file to run a script called `start.sh`: ```bash run = "sh start.sh" ``` Next we need to create `start.sh` in the repl's files tab: <img src="https://docs.replit.com/images/tutorials/31-news-digest-app/new-file.png"/> And add the following bash code to `start.sh`: ```bash #!/bin/sh # Clean up pkill mongo pkill redis pkill python pkill start.sh rm data/mongod.lock mongod --dbpath data --repair # Run Mongo with local paths mongod --fork --bind_ip="127.0.0.1" --dbpath=./data --logpath=./log/mongod.log # Run redis redis-server --daemonize yes --bind 127.0.0.1 ``` The first section of this script will kill all the running processes so they can be restarted. While it may not be strictly necessary to stop and restart MongoDB or Redis every time you run your repl, doing so means we can reconfigure them should we need to, and prevents us from having to check whether they're stopped or started, independent of our other code. The second section of the script starts MongoDB with the following configuration options: * `--fork`: This runs MongoDB in a background process, allowing the script to continue executing without shutting it down. * `--bind_ip="127.0.0.1"`: Listen on the local loopback address only, preventing external access to our database. * `--dbpath=./data` and `--logpath=./log/mongod.log`: Use local directories for storage. 
This is important for getting programs to work in Nix repls, as we discussed in [our previous tutorial on building with Nix](https://docs.replit.com/tutorials/30-build-with-nix). The third section starts Redis. We use the `--bind` flag to listen on the local loopback address only, similar to how we used it for MongoDB, and `--daemonize yes` runs it as a background process (similar to MongoDB's `--fork`). Before we run our repl, we'll need to create our MongoDB data and logging directories, `data` and `log`. Create these directories now in your repl's filepane. <img src="https://docs.replit.com/images/tutorials/31-news-digest-app/mongodirs.png"/> Once that's done, you can run your repl, and it will start MongoDB and Redis. You can interact with MongoDB by running `mongo` in your repl's shell, and with Redis by running `redis-cli`. If you're interested, you can find an introduction to these clients at the links below: * [Working with the `mongo` Shell](https://docs.mongodb.com/v4.4/mongo/#working-with-the-mongo-shell) * [`redis-cli`, the Redis command line interface](https://redis.io/topics/rediscli) ![Running mongo and redis cli](https://docs.replit.com/images/tutorials/31-news-digest-app/mongo-and-redis-cli.png) These datastores will be empty for now. **Important note**: Sometimes, when stopping and starting your repl, you may see the following error message: ```bash ERROR: child process failed, exited with error number 100 ``` This means that MongoDB has failed to start. If you see this, restart your repl, and MongoDB should start up successfully. ## Scraping RSS and Atom feeds We're going to build the feed scraper first. If you've completed any of our previous web-scraping tutorials, you might expect to do this by parsing raw XML with [Beautiful Soup](https://beautiful-soup-4.readthedocs.io/en/latest/). While this would be possible, we would need to account for a large number of differences in feed formats and other gotchas specific to parsing RSS and Atom feeds. 
Instead, we'll use the [feedparser](https://pypi.org/project/feedparser/) library, which has already solved most of these problems. Create a directory named `lib`, and inside that directory, a Python file named `scraper.py`. Add the following code to it: ```python import feedparser, pytz, time from datetime import datetime, timedelta from dateutil import parser def get_title(feed_url): pass def get_items(feed_url, since=timedelta(days=1)): pass ``` Here we import the libraries we'll need for web scraping, XML parsing, and time handling. We also define two functions: * `get_title`: This will return the name of the website for a given feed URL (e.g. "Hacker News" for https://news.ycombinator.com/rss). * `get_items`: This will return the feed's items – depending on the feed, these can be articles, videos, podcast episodes, or other content. The `since` parameter will allow us to only fetch recent content, and we'll use one day as the default cutoff. Edit the `get_title` function with the following: ```python def get_title(feed_url): feed = feedparser.parse(feed_url) return feed["feed"]["title"] ``` Add the following line to the bottom of `scraper.py` to test it out: ```python print(get_title("https://news.ycombinator.com/rss")) ``` Instead of rewriting our `start.sh` script to run this Python file, we can just run `python lib/scraper.py` in our repl's shell tab, as shown below. If it's working correctly, we should see "Hacker News" as the script's output. <img src="https://docs.replit.com/images/tutorials/31-news-digest-app/script-test.png"/> Now we need to write the second function. 
Add the following code to the `get_items` function definition: ```python def get_items(feed_url, since=timedelta(days=1)): feed = feedparser.parse(feed_url) items = [] for entry in feed.entries: title = entry.title link = entry.link if "published" in entry: published = parser.parse(entry.published) elif "pubDate" in entry: published = parser.parse(entry.pubDate) ``` Here we extract each item's title, link, and publishing timestamp. Atom feeds use the `published` element and RSS feeds use the `pubDate` element, so we look for both. We use [`parser`](https://dateutil.readthedocs.io/en/stable/parser.html) to convert the timestamp from a string to a `datetime` object. The `parse` function is able to convert a large number of different formats, which saves us from writing a lot of extra code. We need to evaluate the age of the content and package it in a dictionary so we can return it from our function. Add the following code to the bottom of the `get_items` function: ```python # evaluating content age if (since and published > (pytz.utc.localize(datetime.today()) - since)) or not since: item = { "title": title, "link": link, "published": published } items.append(item) return items ``` We get the current time with `datetime.today()`, convert it to the UTC timezone, and then subtract our `since` `timedelta` object. Because of the way we've constructed this `if` statement, if we pass in `since=None` when calling `get_items`, we'll get all feed items irrespective of their publish date. Finally, we construct a dictionary of our item's data and add it to the `items` list, which we return at the bottom of the function, outside the `for` loop. Add the following lines to the bottom of `scraper.py` and run the script in your repl's shell again. We use [`time.sleep`](https://docs.python.org/3/library/time.html#time.sleep) to avoid being rate-limited for fetching the same file twice in quick succession. 
```python time.sleep(1) print(get_items("https://news.ycombinator.com/rss")) ``` You should see a large number of results in your terminal. Play around with values of `since` and see what difference it makes. Once you're done, remove the `print` statements from the bottom of the file. We've now built our feed scraper, which we'll use as a library in our main application. ## Setting up Mailgun Now that we can retrieve content for our email digests, we need a way of sending emails. To avoid having to set up our own email server, we'll use the [Mailgun](https://www.mailgun.com/) API to actually send emails. Sign up for a free account now, and verify your email and phone number. Once your account is created and verified, you'll need an API key and domain from Mailgun. To find your domain, navigate to **Sending → Domains**. You should see a single domain name, starting with "sandbox". Click on that and copy the full domain name (it looks like: `sandboxlongstringoflettersandnumbers.mailgun.org`). ![Mailgun domain](https://docs.replit.com/images/tutorials/31-news-digest-app/mailgun-domain.gif) To find your API key, navigate to **Settings → API Keys**. Click on the view icon next to **Private API key** and copy the revealed string somewhere safe. ![Mailgun api key](https://docs.replit.com/images/tutorials/31-news-digest-app/mailgun-apikey.png) Back in your repl, create two environment variables, `MAILGUN_DOMAIN` and `MAILGUN_APIKEY`, and provide the strings you copied from Mailgun as values for each. <img src="https://docs.replit.com/images/tutorials/31-news-digest-app/add-env-var.png"/> Run your repl now to set these environment variables. Then create a file named `lib/tasks.py`, and populate it with the code below. 
```python import requests, os # Mailgun config MAILGUN_APIKEY = os.environ["MAILGUN_APIKEY"] MAILGUN_DOMAIN = os.environ["MAILGUN_DOMAIN"] def send_test_email(to_address): res = requests.post( f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages", auth=("api", MAILGUN_APIKEY), data={"from": f"News Digest <digest@{MAILGUN_DOMAIN}>", "to": [to_address], "subject": "Testing Mailgun", "text": "Hello world!"}) print(res) send_test_email("YOUR-EMAIL-ADDRESS-HERE") ``` Here we use Python Requests to interact with the [Mailgun API](https://documentation.mailgun.com/en/latest/api-sending.html). Note the inclusion of our domain and API key. To test that Mailgun is working, replace `YOUR-EMAIL-ADDRESS-HERE` with your email address, and then run `python lib/tasks.py` in your repl's shell. You should receive a test mail within a few minutes, but as we're using a free sandbox domain, it may end up in your spam folder. Without further verification on Mailgun, we can only send up to 100 emails per hour, and a free account limits us to 5,000 emails per month. Additionally, Mailgun's sandbox domains can only be used to send emails to specific, whitelisted addresses. The address you created your account with will work, but if you want to send emails to other addresses, you'll have to add them to the domain's authorized recipients, which can be done from the page you got the full domain name from. Keep these limitations in mind as you build and test this application. ![Recipients](https://docs.replit.com/images/tutorials/31-news-digest-app/recipients.png) After you've received your test email, you can delete or comment out the function call in the final line of `lib/tasks.py`. 
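 
Before moving on, it helps to sketch how the items returned by `get_items` could eventually be rendered into an email body. The helper below is purely illustrative (it is not one of the tutorial's files), but it shows the shape of the data flowing from the scraper toward Mailgun's `text` field:

```python
def format_digest(feed_title, items):
    """Render scraped feed items as a plain-text email section."""
    lines = [feed_title, "=" * len(feed_title)]
    for item in items:
        # Each item is a dict like those produced by lib/scraper.get_items.
        lines.append(f"- {item['title']}\n  {item['link']}")
    return "\n".join(lines)

sample = [{"title": "Show HN: My project", "link": "https://example.com/post"}]
print(format_digest("Hacker News", sample))
```

A string like this can be passed directly as the `text` value in the Mailgun request data.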
## Interacting with MongoDB As we will have two different components of our application interacting with our Mongo database – our email-sending code in `lib/tasks.py` and the web application code we will put in `main.py` – we're going to put our database connection code in another file, which can be imported by both. Create `lib/db.py` now and add the following code to it: ```python import pymongo def connect_to_db(): client = pymongo.MongoClient() return client.digest ``` We will call `connect_to_db()` whenever we need to interact with the database. Because of how MongoDB works, a new database called "digest" will be created the first time we connect. Much of the benefit MongoDB provides over traditional SQL databases is that you don't have to define schemas before storing data. Mongo databases are made up of *collections*, which contain *documents*. You can think of the collections as lists and the documents as dictionaries. When we read and write data to and from MongoDB, we will be working with lists of dictionaries. ## Creating the web application Now that we've got a working webscraper, email sender and database interface, it's time to start building our web application. Create a file named `main.py` in your repl's filepane and add the following import code to it: ```python from flask import Flask, request, render_template, session, flash, redirect, url_for from functools import wraps import os, pymongo, time import lib.scraper as scraper import lib.tasks as tasks from lib.db import connect_to_db ``` We've imported everything we'll need from Flask and other Python modules, as well as our three local files from `lib`: `scraper.py`, `tasks.py` and `db.py`. Next, add the following code to initialize the application and connect to the database: ```python app = Flask(__name__) app.config['SECRET_KEY'] = os.environ['SECRET_KEY'] db = connect_to_db() ``` Our secret key will be a long, random string, stored in an environment variable. 
You can generate one in your repl's Python console with the following two lines of code: ```python import random, string ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(20)) ``` ![Random string](https://docs.replit.com/images/tutorials/31-news-digest-app/randomstring.png) In your repl's "Secrets" tab, add a new key named `SECRET_KEY` and enter the random string you just generated as its value. ![Repl secret key](https://docs.replit.com/images/tutorials/31-news-digest-app/repl-secrets.png) Next, we will create the `context` helper function. This function will provide the current user's data from our database to our application frontend. Add the following code to the bottom of `main.py`: ```python def context(): email = session["email"] if "email" in session else None cursor = db.subscriptions.find({ "email": email }) subscriptions = [subscription for subscription in cursor] return { "user_email": email, "user_subscriptions": subscriptions } ``` When we build our user login, we will store the current user's email address in Flask's [`session` object](https://flask.palletsprojects.com/en/2.0.x/quickstart/#sessions), which corresponds to a [cookie](https://en.wikipedia.org/wiki/HTTP_cookie) that will be cryptographically signed with the secret key we defined above. Without this, users would be able to impersonate each other by changing their cookie data. We query our MongoDB database by calling [`db.<name of collection>.find()`](https://docs.mongodb.com/manual/reference/method/db.collection.find/). If we call `find()` without any arguments, all items in our collection will be returned. If we call `find()` with an argument, as we've done above, it will return results with keys and values that match our argument. The `find()` method returns a [`Cursor`](https://pymongo.readthedocs.io/en/stable/api/pymongo/cursor.html) object, which we can extract the results of our query from. 
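Since Mongo collections behave like lists of dictionaries, the matching behaviour of `find()` can be sketched in plain Python. This is an illustration of the semantics only, not pymongo itself; the `find` helper and sample data below are made up for the example:

```python
# Plain-Python sketch of how MongoDB's find() filters a collection:
# a document matches when every key/value pair in the query matches.
def find(collection, query=None):
    if not query:
        return list(collection)  # no argument: return all documents
    return [doc for doc in collection
            if all(doc.get(key) == value for key, value in query.items())]

subscriptions = [
    {"email": "a@example.com", "url": "https://news.ycombinator.com/rss"},
    {"email": "b@example.com", "url": "https://www.reddit.com/r/replit.rss"},
]

print(find(subscriptions, {"email": "a@example.com"}))
```

Real pymongo returns a `Cursor` rather than a list, but the key/value matching works the same way.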
Next, we need to create an authentication [function decorator](https://realpython.com/primer-on-python-decorators/), which will restrict parts of our application to logged-in users. Add the following code below the definition of the `context` function: ```python # Authentication decorator def authenticated(f): @wraps(f) def decorated_function(*args, **kwargs): if "email" not in session: flash("Permission denied.", "warning") return redirect(url_for("index")) return f(*args, **kwargs) return decorated_function ``` The code in the second function may look a bit strange if you haven't written your own decorators before. Here's how it works: `authenticated` is the name of our decorator. You can think of decorators as functions that take other functions as arguments. (The two code snippets below are for illustration and not part of our program.) Therefore, if we write the following: ```python @authenticated def authenticated_function(): return f"Hello logged-in user!" authenticated_function() ``` It will be roughly equivalent to: ```python def authenticated_function(): return f"Hello logged-in user!" authenticated(authenticated_function) ``` So whenever `authenticated_function` gets called, the code we've defined in `decorated_function` will execute before anything we define in `authenticated_function`. This means we don't have to include the same authentication checking code in every piece of authenticated functionality. As per the code, if a non-logged-in user attempts to access restricted functionality, our app will flash a warning message and redirect them to the home page. Next, we'll add code to serve our home page and start our application: ```python # Routes @app.route("/") def index(): return render_template("index.html", **context()) app.run(host='0.0.0.0', port=8080) ``` This code will serve a [Jinja](https://jinja.palletsprojects.com/en/3.0.x/templates/) template, which we will create now in a separate file. 
In your repl's filepane, create a directory named `templates`, and inside that directory, a file named `index.html`. Add the following code to `index.html`: ```html <!DOCTYPE html> <html> <head> <title>News Digest</title> </head> <body> {% with messages = get_flashed_messages() %} {% if messages %} <ul class=flashes> {% for message in messages %} <li>{{ message }}</li> {% endfor %} </ul> {% endif %} {% endwith %} {% if user_email == None %} <p>Please enter your email to sign up/log in:</p> <form action="/login" method="post"> <input type="text" name="email"> <input type="submit" value="Login"> </form> {% else %} <p>Logged in as {{ user_email }}.</p> <h1>Subscriptions</h1> <ul> {% for subscription in user_subscriptions %} <li> <a href="{{ subscription.url }}">{{ subscription.title }}</a> <form action="/unsubscribe" method="post" style="display: inline"> <input type="hidden" name="feed_url" value="{{subscription.url}}"> <input type="submit" value="Unsubscribe"> </form> </li> {% endfor %} </ul> <p>Add a new subscription:</p> <form action="/subscribe" method="post"> <input type="text" name="feed_url"> <input type="submit" value="Subscribe"> </form> <p>Send digest to your email now:</p> <form action="/send-digest" method="post"> <input type="submit" value="Send digest"> </form> <p>Choose a time to send your daily digest (must be UTC):</p> <form action="/schedule-digest" method="post"> <input type="time" name="digest_time"> <input type="submit" value="Schedule digest"> </form> {% endif %} </body> </html> ``` As this will be our application's only page, it contains a lot of functionality. From top to bottom: * We've included code to display [flashed messages](https://flask.palletsprojects.com/en/2.0.x/patterns/flashing/) at the top of the page. This allows us to show users the results of their actions without creating additional pages. * If the current user is not logged in, we display a login form. 
* If the current user is logged in, we display: * A list of their current subscriptions, with an unsubscribe button next to each one. * A form for adding new subscriptions. * A button to send an email digest immediately. * A form for sending email digests at a specific time each day. To start our application when our repl runs, we must add an additional line to the bottom of `start.sh`: ```bash # Run Flask app python main.py ``` Once that's done, run your repl. You should see a login form. <img src="https://docs.replit.com/images/tutorials/31-news-digest-app/login-form.png"/> ## Adding user login We will implement user login by sending a single-use login link to the email address provided in the login form. This provides a number of benefits: * We can use the code we've already written for sending emails. * We don't need to implement user registration separately. * We can avoid worrying about user passwords. To send login emails asynchronously, we'll set up a Celery task. In `main.py`, add the following code for the `/login` route below the definition of `index`: ```python # Login @app.route("/login", methods=['POST']) def login(): email = request.form['email'] tasks.send_login_email.delay(email) flash("Check your email for a magic login link!") return redirect(url_for("index")) ``` In this function, we get the user's email, and pass it to a function we will define in `lib/tasks.py`. As this function will be a [Celery task](https://docs.celeryproject.org/en/stable/userguide/tasks.html) rather than a conventional function, we must call it with `.delay()`, a function in [Celery's task-calling API](https://docs.celeryproject.org/en/stable/userguide/calling.html). Let's implement this task now. 
Open `lib/tasks.py` and modify it as follows: ```python import requests, os import random, string # NEW IMPORTS from celery import Celery # NEW IMPORT from celery.schedules import crontab # NEW IMPORT from datetime import datetime # NEW IMPORT import lib.scraper as scraper # NEW IMPORT from lib.db import connect_to_db # NEW IMPORT # NEW LINE BELOW REPL_URL = f"https://{os.environ['REPL_SLUG']}.{os.environ['REPL_OWNER']}.repl.co" # NEW LINES BELOW # Celery configuration CELERY_BROKER_URL = "redis://127.0.0.1:6379/0" CELERY_BACKEND_URL = "redis://127.0.0.1:6379/0" celery = Celery("tasks", broker=CELERY_BROKER_URL, backend=CELERY_BACKEND_URL) celery.conf.enable_utc = True # Mailgun config MAILGUN_APIKEY = os.environ["MAILGUN_APIKEY"] MAILGUN_DOMAIN = os.environ["MAILGUN_DOMAIN"] # NEW FUNCTION DECORATOR @celery.task def send_test_email(to_address): res = requests.post( f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages", auth=("api", MAILGUN_APIKEY), data={"from": f"News Digest <digest@{MAILGUN_DOMAIN}>", "to": [to_address], "subject": "Testing Mailgun", "text": "Hello world!"}) print(res) # COMMENT OUT THE TESTING LINE # send_test_email("YOUR-EMAIL-ADDRESS-HERE") ``` We've added the following: * Additional imports for Celery and our other local files. * A `REPL_URL` variable containing our repl's URL, which we construct using environment variables defined in every repl. * Instantiation of a Celery object, configured to use Redis as a [message broker and data backend](https://docs.celeryproject.org/en/stable/getting-started/backends-and-brokers/index.html), and the UTC timezone. * A function decorator which converts our `send_test_email` function into a Celery task. Next, we'll define a function to generate unique IDs for our login links. 
Add the following code below the `send_test_email` function definition: ```python def generate_login_id(): return ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(30)) ``` This code is largely similar to the code we used to generate our secret key. Next, we'll create the task we called in `main.py`: `send_login_email`. Add the following code below the definition of `generate_login_id`: ```python @celery.task def send_login_email(to_address): # Generate ID login_id = generate_login_id() # Set up email login_url = f"{REPL_URL}/confirm-login/{login_id}" text = f""" Click this link to log in: {login_url} """ html = f""" <p>Click this link to log in:</p> <p><a href={login_url}>{login_url}</a></p> """ # Send email res = requests.post( f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages", auth=("api", MAILGUN_APIKEY), data={"from": f"News Digest <digest@{MAILGUN_DOMAIN}>", "to": [to_address], "subject": "News Digest Login Link", "text": text, "html": html }) # Add to user_sessions collection if email sent successfully if res.ok: db = connect_to_db() db.user_sessions.insert_one({"login_id": login_id, "email": to_address}) print(f"Sent login email to {to_address}") else: print("Failed to send login email.") ``` This code will generate a login ID, construct an email containing a `/confirm-login` link containing that ID, and then send the email. If the email is sent successfully, it will add a document to our MongoDB containing the email address and login ID. Now we can return to `main.py` and create the `/confirm-login` route. 
Add the following code below the `login` function definition: ```python @app.route("/confirm-login/<login_id>") def confirm_login(login_id): login = db.user_sessions.find_one({"login_id": login_id}) if login: session["email"] = login["email"] db.user_sessions.delete_one({"login_id": login_id}) # prevent reuse else: flash("Invalid or expired login link.") return redirect(url_for("index")) ``` When a user clicks the login link in their email, they will be directed to this route. If a matching login ID is found in the database, they will be logged in, and the login ID will be deleted so it can't be reused. We've implemented all of the code we need for user login. The last thing we need to do to get it working is to configure our repl to start a [Celery worker](https://docs.celeryproject.org/en/stable/userguide/workers.html). When we invoke a task with `.delay()`, this worker will execute the task. In `start.sh`, add the following between the line that starts Redis and the line that starts our web application: ```bash # Run Celery worker celery -A lib.tasks.celery worker -P processes --loglevel=info & ``` This will start a Celery worker, configured with the following flags: * `-A lib.tasks.celery`: This tells Celery to run tasks associated with the `celery` object in `tasks.py`. * `-P processes`: This tells Celery to start new processes for individual tasks. * `--loglevel=info`: This ensures we'll have detailed Celery logs to help us debug problems. We use `&` to run the worker in the background – this is a part of Bash's syntax rather than a program-specific backgrounding flag like we used for MongoDB and Redis. Run your repl now, and you should see the worker start up with the rest of our application's components. Once the web application is started, open it in a new tab. Then try logging in with your email address – remember to check your spam box for your login email. 
![Open in new window](https://docs.replit.com/images/tutorials/31-news-digest-app/open-new-window.png) If everything's working correctly, you should see a page like this after clicking your login link: ![Logged in view](https://docs.replit.com/images/tutorials/31-news-digest-app/logged-in.png) ## Adding and removing subscriptions Now that we can log in, let's add the routes that handle subscribing to and unsubscribing from news feeds. These routes will only be available to logged-in users, so we'll use our `authenticated` decorator on them. Note that `@app.route` must be the topmost (outermost) decorator, so that the function Flask registers for the route is the authentication-checked one. Add the following code below the `confirm_login` function definition in `main.py`: ```python # Subscriptions @app.route("/subscribe", methods=['POST']) @authenticated def subscribe(): # new feed feed_url = request.form["feed_url"] # Test feed try: items = scraper.get_items(feed_url, None) except Exception as e: print(e) flash("Invalid feed URL.") return redirect(url_for("index")) if items == []: flash("Invalid feed URL") return redirect(url_for("index")) # Get feed title time.sleep(1) feed_title = scraper.get_title(feed_url) ``` This code will validate feed URLs by attempting to fetch their contents. Note that we are passing `None` as the argument for `since` in `scraper.get_items` – this will fetch the whole feed, not just the last day's content. If it fails for any reason, or returns an empty list, an error message will be shown to the user and the subscription will not be added. Once we're sure that the feed is valid, we sleep for one second and then fetch the title. The sleep is necessary to prevent rate-limiting by some websites. Now that we've validated the feed and have its title, we can add it to our MongoDB. 
Add the following code to the bottom of the function: ```python # Add subscription to Mongodb try: db.subscriptions.insert_one({"email": session["email"], "url": feed_url, "title": feed_title}) except pymongo.errors.DuplicateKeyError: flash("You're already subscribed to that feed.") return redirect(url_for("index")) except Exception: flash("An unknown error occurred.") return redirect(url_for("index")) # Create unique index if it doesn't exist db.subscriptions.create_index([("email", 1), ("url", 1)], unique=True) flash("Feed added!") return redirect(url_for("index")) ``` Here, we populate a new document with our subscription details and insert it into our "subscriptions" collection. To prevent duplicate subscriptions, we use [`create_index`](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.create_index) to create a [unique compound index](https://docs.mongodb.com/manual/core/index-unique/) on the "email" and "url" fields. As `create_index` will only create an index that doesn't already exist, we can safely call it on every invocation of this function. Next, we'll create the code for unsubscribing from feeds. Add the following function definition below the one above: ```python @app.route("/unsubscribe", methods=['POST']) @authenticated def unsubscribe(): # remove feed feed_url = request.form["feed_url"] deleted = db.subscriptions.delete_one({"email": session["email"], "url": feed_url}) flash("Unsubscribed!") return redirect(url_for("index")) ``` Run your repl, and try subscribing and unsubscribing from some feeds. You can use the following URLs to test: * Hacker News feed: https://news.ycombinator.com/rss * /r/replit on Reddit feed: https://www.reddit.com/r/replit.rss ![Subscriptions](https://docs.replit.com/images/tutorials/31-news-digest-app/subscriptions.png) ## Sending digests Once you've added some subscriptions, we can implement the `/send-digest` route. 
Add the following code below the definition of `unsubscribe` in `main.py`: ```python # Digest @app.route("/send-digest", methods=['POST']) @authenticated def send_digest(): tasks.send_digest_email.delay(session["email"]) flash("Digest email sent! Check your inbox.") return redirect(url_for("index")) ``` Then, in `tasks.py`, add the following new Celery task: ```python @celery.task def send_digest_email(to_address): # Get subscriptions from Mongodb db = connect_to_db() cursor = db.subscriptions.find({"email": to_address}) subscriptions = [subscription for subscription in cursor] # Scrape RSS feeds items = {} for subscription in subscriptions: items[subscription["title"]] = scraper.get_items(subscription["url"]) ``` First, we connect to the MongoDB and find all subscriptions created by the user we're sending to. We then construct a dictionary of scraped items for each feed URL. Once that's done, it's time to create the email content. Add the following code to the bottom of the `send_digest_email` function: ```python # Build email digest today_date = datetime.today().strftime("%d %B %Y") html = f"<h1>Daily Digest for {today_date}</h1>" for site_title, feed_items in items.items(): if not feed_items: # empty list continue section = f"<h2>{site_title}</h2>" section += "<ul>" for item in feed_items: section += f"<li><a href={item['link']}>{item['title']}</a></li>" section += "</ul>" html += section ``` In this code, we construct an HTML email with a heading and bullet list of linked items for each feed. If any of our feeds have no items for the last day, we leave them out of the digest. We use [`strftime`](https://www.programiz.com/python-programming/datetime/strftime) to format today's date in a human-readable manner. After that, we can send the email. 
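To see what this loop produces before wiring up the send step, here it is run standalone on made-up feed items (the feed names and links below are illustrative, not real scraped data):

```python
from datetime import datetime

# Made-up scraped items, in the same shape send_digest_email builds:
# a dict mapping feed title to a list of items with "title" and "link".
items = {
    "Hacker News": [
        {"title": "Example story", "link": "https://example.com/story"},
    ],
    "Quiet Feed": [],  # feeds with nothing new are skipped entirely
}

today_date = datetime.today().strftime("%d %B %Y")
html = f"<h1>Daily Digest for {today_date}</h1>"
for site_title, feed_items in items.items():
    if not feed_items:
        continue
    section = f"<h2>{site_title}</h2>" + "<ul>"
    for item in feed_items:
        section += f"<li><a href={item['link']}>{item['title']}</a></li>"
    section += "</ul>"
    html += section

print(html)
```

Notice that "Quiet Feed" contributes nothing to the output, which is exactly the behaviour we want for subscriptions with no new content.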
Add the following code to the bottom of the function: ```python # Send email res = requests.post( f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages", auth=("api", MAILGUN_APIKEY), data={"from": f"News Digest <digest@{MAILGUN_DOMAIN}>", "to": [to_address], "subject": f"News Digest for {today_date}", "text": html, "html": html }) if res.ok: print(f"Sent digest email to {to_address}") else: print("Failed to send digest email.") ``` Run your repl, and click on the **Send digest** button. You should receive an email digest with today's items from each of your subscriptions within a few minutes. Remember to check your spam! ![Digest email](https://docs.replit.com/images/tutorials/31-news-digest-app/digest-email.png) ## Scheduling digests The last thing we need to implement is scheduled digests, to allow our application to send users a digest every day at a specified time. In `main.py`, add the following code below the `send_digest` function definition: ```python @app.route("/schedule-digest", methods=['POST']) @authenticated def schedule_digest(): # Get time from form hour, minute = request.form["digest_time"].split(":") tasks.schedule_digest(session["email"], int(hour), int(minute)) flash(f"Your digest will be sent daily at {hour}:{minute} UTC") return redirect(url_for("index")) ``` This function retrieves the requested digest time from the user and calls `tasks.schedule_digest`. As `schedule_digest` will be a regular function that schedules a task rather than a task itself, we can call it directly. Celery supports scheduling tasks through its [beat functionality](https://docs.celeryproject.org/en/stable/userguide/periodic-tasks.html#using-custom-scheduler-classes). This will require us to run an additional Celery process, which will be a beat rather than a worker. By default, Celery does not support dynamic addition and alteration of scheduled tasks, which we need in order to allow users to set and change their digest schedules arbitrarily. 
So we'll need a [custom scheduler](https://docs.celeryproject.org/en/stable/userguide/periodic-tasks.html#using-custom-scheduler-classes) that supports this. Many custom Celery scheduler packages are [available on PyPI](https://pypi.org/search/?q=celery+beat&o=), but as of October 2021, none of these packages have been added to Nixpkgs. Therefore, we'll need to create a custom derivation for the scheduler we choose. Let's do that in `replit.nix` now. Open the file, and add the `let ... in` block below: ```nix { pkgs }: let redisbeat = pkgs.python39Packages.buildPythonPackage rec { pname = "redisbeat"; version = "1.2.4"; src = pkgs.python39Packages.fetchPypi { inherit pname version; sha256 = "0b800c6c20168780442b575d583d82d83d7e9326831ffe35f763289ebcd8b4f6"; }; propagatedBuildInputs = with pkgs.python39Packages; [ jsonpickle celery redis ]; postPatch = '' sed -i "s/jsonpickle==1.2/jsonpickle/" setup.py ''; }; in { deps = [ pkgs.python39 pkgs.python39Packages.flask pkgs.python39Packages.celery pkgs.python39Packages.pymongo pkgs.python39Packages.requests pkgs.python39Packages.redis pkgs.python39Packages.feedparser pkgs.python39Packages.dateutil pkgs.mongodb pkgs.redis redisbeat # <-- ALSO ADD THIS LINE ]; } ``` We've chosen to use [`redisbeat`](https://github.com/liuliqiang/redisbeat), as it is small, simple and uses Redis as a backend. We construct a custom derivation for it using the `buildPythonPackage` function, to which we pass the following information: * The package's `name` and `version`. * `src`: Where to find the package's source code (in this case, from PyPI, but we could also use GitHub, or a generic URL). * `propagatedBuildInputs`: The package's dependencies (all of which are available from Nixpkgs). * `postPatch`: Actions to take before installing the package. For this package, we remove the version specification for dependency `jsonpickle` in `setup.py`. 
This will force `redisbeat` to use the latest version of `jsonpickle`, which is available from Nixpkgs and, as a bonus, does not contain [this critical vulnerability](https://nvd.nist.gov/vuln/detail/CVE-2020-22083). You can learn more about using Python with Nixpkgs in [this section of the official documentation](https://github.com/NixOS/nixpkgs/blob/master/doc/languages-frameworks/python.section.md). To actually install `redisbeat`, we must also add it to our `deps` list. Once you've done that, run your repl. Building custom Nix derivations like this one often takes some time, so you may have to wait a while before your repl finishes loading the Nix environment. While we wait, let's import `redisbeat` in `lib/tasks.py` and create our `schedule_digest` function. Add the following code to the bottom of `lib/tasks.py`: ```python from redisbeat.scheduler import RedisScheduler scheduler = RedisScheduler(app=celery) def schedule_digest(email, hour, minute): scheduler.add(**{ "name": "digest-" + email, "task": "lib.tasks.send_digest_email", "kwargs": {"to_address": email }, "schedule": crontab(minute=minute, hour=hour) }) ``` This code uses `redisbeat`'s `RedisScheduler` to schedule the execution of our `send_digest_email` task. Note that we've used the task's full path, with `lib` included: this is necessary when scheduling. We've used Celery's [crontab](https://docs.celeryproject.org/en/stable/userguide/periodic-tasks.html#crontab-schedules) schedule type, which is highly suited to managing tasks that run at a certain time each day. If a task with the same name already exists in the schedule, `scheduler.add` will update it rather than adding a new task. This means our users can change their digest time at will. Now that our code is in place, we can add a new Celery beat process to `start.sh`. 
Add the following line just after the line that starts the Celery worker: ```bash celery -A lib.tasks.celery beat -S redisbeat.RedisScheduler --loglevel=debug & ``` Now run your repl. You can test this functionality out now by scheduling your digest about ten minutes in the future. If you want to receive regular digests, you will need to enable [Always-on](https://docs.replit.com/hosting/enabling-always-on) in your repl. Also, remember that all times must be specified in the UTC timezone. ## Where next? We've built a useful multi-component application, but its functionality is fairly rudimentary. If you'd like to keep working on this project, here are some ideas for next steps: * Set up a custom domain with Mailgun to help keep your digest emails out of spam. * Feed scraper optimization. Currently, we fetch the whole feed twice when adding a new subscription and have to sleep to avoid rate-limiting. The scraper could be optimized to fetch feed contents only once. * Intelligent digest caching. If multiple users subscribe to the same feed and schedule their digests for the same time, we will unnecessarily fetch the same content for each one. * Multiple digests per user. Users could configure different digests with different contents at different times. * Allow users to schedule digests in their local timezones. * Styling of both website and email content with CSS. * A production WSGI and web server to improve the web application's performance, like we used in our [previous tutorial on building with Nix](https://docs.replit.com/tutorials/30-build-with-nix).
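For the local-timezone idea above, the standard library's `zoneinfo` module handles the conversion to the UTC hour and minute that `crontab()` expects. A minimal sketch (the timezone and times are just example values):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # in the standard library since Python 3.9

# A user wants their digest at 08:30 local time in New York.
local = datetime(2022, 1, 15, 8, 30, tzinfo=ZoneInfo("America/New_York"))

# Convert to UTC; the date matters because of daylight saving time.
utc = local.astimezone(ZoneInfo("UTC"))
print(utc.hour, utc.minute)  # 13 30 on this date (EST is UTC-5)
```

A production version would store the user's timezone rather than a fixed UTC offset, and recompute the schedule when daylight saving boundaries are crossed.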
ritzaco
885,444
Being a Software Engineer: A marathon and not a sprint
So I wrote two technical assessment tests yesterday to apply for a Fullstack role and a Backend role...
0
2021-11-02T14:52:47
https://dev.to/lekea4/being-a-softare-engineer-a-marathon-and-not-a-sprint-2o1d
programming, react, career, dotnet
So I wrote two technical assessment tests yesterday, applying for a Fullstack role and a Backend role at two different organizations, and I honestly feel I did not do well. In fact, I think I was terrible! The first assessment required building a simple full-stack application (Frontend: React; Backend: ASP.NET Core Web API) for simple bank transactions, and I had to do it in less than an hour! Sounds crazy, right? It is actually not as difficult as it sounds, and even though I was able to build a simple frontend user interface and create a database from the generated migration script using Entity Framework Core on the backend, I was unable to write a controller to perform these basic operations for the frontend to consume. I did feel bad, like really bad, but in retrospect I began to see a lot of areas I need to improve on, such as: 1. Critical and fast thinking: I used a monolithic clean architecture, forgetting that I didn't have the job yet (and even if I had, I would rather have used a microservice architecture). I just needed to get something working, but I was way too far ahead of myself, and that wasted my time and slowed me down. I also didn't come up with my database schema design in time. 2. Working under pressure: Everyone says they work well under pressure until the pressure comes in. The best way to handle pressure is to prevent the conditions that would create the pressure from ever arising in the first place. This is also tied to the first point: if I had thought more critically, I would have handled the pressure much better. The second assessment put my data structure and algorithm knowledge to the test in ways I had not prepared for yet. This made me realize that even though I understand those concepts, it is more important to understand how they are implemented. 
This reinforced my belief that the journey towards my goal of becoming one of the best Software Engineers is a marathon and not a sprint, which in turn gives me confidence that while I may have lost this battle, I can and will still win the war.
lekea4
886,244
dotnet swagger tofile : FileNotFoundException dotnet-swagger.xml
The Problem Trying to generate swagger from the compiled dll using this command with the...
0
2021-11-03T05:30:43
https://dev.to/wallism/dotnet-swagger-tofile-filenotfoundexception-dotnet-swaggerxml-4425
dotnet, swagger
## The Problem Trying to generate swagger from the compiled DLL using this command with the [swagger CLI](https://github.com/domaindrivendev/Swashbuckle.AspNetCore): ``` dotnet swagger tofile --output "swagger-output.json" "C:\projectpath\bin\debug\net5.0\project.dll" v1 ``` I encountered this error: ``` FileNotFoundException: Could not find file 'C:\projectpath\bin\debug\net5.0\dotnet-swagger.xml' ``` The suspicious thing here is the name of the XML file it is looking for: it should be looking for my project's XML file, not dotnet-swagger.xml! This [getting started](https://docs.microsoft.com/en-us/aspnet/core/tutorials/getting-started-with-swashbuckle?view=aspnetcore-5.0&tabs=visual-studio) tutorial has some code that causes this problem: when it loads the XML comments, it does this to get the assembly: ``` var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml"; ``` This works fine for most (all?) other use cases, but when generating the swagger using the CLI, the executing assembly is dotnet-swagger. ## The Fix Instead of: ``` Assembly.GetExecutingAssembly().GetName().Name ``` Use this: ``` Assembly.GetAssembly(typeof(ClassInTheCorrectProject)).GetName().Name ``` ## Other Possible Causes Make sure the XML file is being created in the same folder as the DLL and that your generate command is passing the correct path.
wallism
886,351
When Should You Use Type Aliases And Interfaces In Typescript?
A post by pariskrit
0
2021-11-03T07:50:18
https://dev.to/pariskrit/when-should-you-use-type-aliases-and-interfaces-in-typescript-1k20
typescript, react
pariskrit
886,433
A very first PicoLisp program
To conclude our "PicoLisp for Beginners" Series, we should talk about naming conventions, and finally...
14,882
2021-11-03T08:19:08
https://picolisp-blog.hashnode.dev/a-very-first-picolisp-program
picolisp, lisp, functional, tutorial
To conclude our "PicoLisp for Beginners" Series, we should talk about naming conventions, and finally create our very first little program! ----------------------------- ### Naming Conventions PicoLisp has a few naming conventions that should be followed, in order to avoid name conflicts and keep the code readable for other programmers. Here are some of the most important ones: - Global variables start with an asterisk "*" - Global constants may be written all-uppercase - Functions and other global symbols start with a lower case letter - Locally bound symbols start with an upper case letter (for example, arguments in a function declaration) **Note:** This list is not exhaustive, since some concepts were left out that we did not talk about yet (for example, classes). For the complete list of conventions refer to https://software-lab.de/doc/ref.html#conv. ---------------- ### What do you mean by "name conflicts"? For example, let's take the built-in PicoLisp function ``car``. Since data and functions are basically the same thing (see "Concepts and Data Types of PicoLisp"), a local variable "car" could overshadow the function definition of ``car``: ``` : (de max-speed (car) (.. (get car 'speeds) ..) ) -> max-speed ``` Inside the body of ``max-speed`` (and all other functions called during that execution) the kernel function ``car`` is redefined to some other value, and the program will surely crash if something like ``(car Lst)`` is executed. Instead, it is safe to write: ``` : (de max-speed (Car) # 'Car' with upper case first letter (.. (get Car 'speeds) ..) ) -> max-speed ``` Another example of a name conflict: The symbols T and NIL are global constants, so care should be taken not to bind them to some other value by mistake: ``` (de foo (R S T) ... ``` ----------------------- ### Scripting Obviously it is hard to only rely on the REPL if the programs get more complex. 
Let's now write a minimal program to illustrate what a basic PicoLisp program should look like: a little "greeting"-program that asks the user for a name and prints it. ---------------- **For best learning results, consider coding along while reading. If you want to skip the explanations, you can find the link to the final scripts at the end.** ---------------- As you might remember from the Input-Output section of the "60 basic functions" series, we can read in a line using the ``line`` function. Let's try to store it in a variable called "Name". We use the REPL to test our idea: ``` : (setq Name (line)) Mia -> ("M" "i" "a") ``` Let's check it: ``` : (prinl "Hello " Name) Hello Mia ``` Okay, this seems to work. So, as a second step, let's open a text file called **greeting.l** (.l is the extension for PicoLisp scripts) and paste the two lines inside. ``` # greeting.l (prinl "Hello! Who are you?") (setq Name (line)) (prinl "Hello " Name "!") ``` We can execute the script using the interpreter by typing ``pil greeting.l`` from the console. However, unfortunately our output looks kind of strange. ``` $ pil greeting.l Hello! Who are you? Hello ! ``` *[The $ symbolizes a shell command and doesn't need to be typed.]* It seems our ``(line)`` command was skipped! Why is that? According to the [documentation](https://software-lab.de/doc/index.html), ``line`` "*reads a line of characters from the current input channel*". It seems we need to specify that we want to read from the **console**, because during the execution of a file, the current "input channel" is the **file** itself. After searching the documentation, we find that we can define that by the function ``in NIL`` (as opposed to ``in <filename>``, to read in from a file). So we change it and the modified script looks like this: ``` # greeting.l (prinl "Hello! Who are you?") (setq Name (in NIL (line))) (prinl "Hello " Name "!") ``` After calling it: ``` Hello! Who are you? Mia Hello Mia! : ``` It works! 
After execution the REPL is still open, as we can see by the colon in the next line. Therefore we add ``(bye)`` as the last line of the script to close it. ---------------- Now, for the sake of demonstration, let's give the greeting its own function. We remember that a function is defined using the ``de`` function. ``` (de greeting (Name) (prinl "Hello " Name "!") ) ``` *The function argument "Name" is spelled with an uppercase letter according to the conventions that we saw above. Also, note that the parentheses in the second line have some space in between: ``) )``. This is to visualize that the first bracket is closing the bracket from the same line, but the second one comes from the line above.* Of course, the ``greeting`` function needs to be defined **before** it gets called, otherwise the interpreter will not know what to do with it. The complete program now looks like this: ``` # execute this script by calling "pil greeting.l" (de greeting (Name) (prinl "Hello " Name "!") ) (prinl "Hello! Who are you?") (setq Name (in NIL (line))) (greeting Name) (bye) ``` The complete example can be downloaded [here](https://gitlab.com/picolisp-blog/single-plage-scripts/-/blob/main/beginners-tutorial/greeting-script.l). ------------------------ ### Creating an executable program It can be a little bit inconvenient to use the interpreter for calling the program, especially if it gets more complex. Let's check the [tutorial](https://software-lab.de/doc/tut.html) for how to create an executable file: > It is better to write a single executable file using the mechanisms of "interpreter files". If the first two characters in an executable file are "#!", the operating system kernel will pass this file to an interpreter program whose pathname is given in the first line (optionally followed by a single argument). This is fast and efficient, because the overhead of a subshell is avoided. 
Step 1: **Find the path to the PicoLisp interpreter.** If PicoLisp is installed **globally**, for example using a package manager, it is most probably in the folder ``/usr/bin``. We could verify this using the shell command ``which``: ``` $ which picolisp /usr/bin/picolisp ``` If the program is installed only **locally**, the path to the "picolisp" interpreter will be in the ``bin/`` folder of the installation folder: ``<Path to Folder>/pil21/bin``. We can find the path using the shell command ``locate``: ``` $ locate "bin/picolisp" /home/user/pil21/bin/picolisp ``` Step 2: **Add the library file ``lib.l``** The interpreter alone is not enough; we also need the library called ``lib.l``. For **global installations**, we will usually find it in the ``/usr/lib`` folder. ``` $ locate "picolisp/lib.l" /usr/lib/picolisp/lib.l ``` For **local installations**, it is in the root of the installation folder. ``` $ locate "lib.l" /home/user/pil21/lib.l ``` Fine! Now back to our script! Step 3: **Adding the path to the interpreter and the library**: This should be the first line in your script. ``` #! /usr/bin/picolisp /usr/lib/picolisp/lib.l ``` Replace these folders with your local folders if PicoLisp is not installed globally. Alternatively, you can also set soft links to ``/usr/bin`` and ``/usr/lib``: ``` $ ln -s /home/foo/pil21 /usr/lib/picolisp $ ln -s /usr/lib/picolisp/bin/picolisp /usr/bin ``` Step 4: **Making the script executable** As a last step, let's make our script executable by ``chmod +x greeting.l``. Now we can execute the script by ``$ ./greeting.l``: ``` $ ./greeting.l Hello! Who are you? Mia Hello Mia! ``` That's it! The final script can be downloaded [here](https://gitlab.com/picolisp-blog/single-plage-scripts/-/blob/main/beginners-tutorial/greeting-executable.l). ----------------------------- ### Adding arguments Sometimes we want to add the arguments directly at the execution of the script, instead of typing it after it started. 
To do this, we modify our ``greeting`` function so that it prints a global variable called ``*Name`` (according to the convention that global variables should start with an asterisk "*"). ``` (de greeting () (prinl "Hello " *Name "!")) ``` Then we need to pass our command line argument to the global variable ``*Name`` by ``` (setq *Name (opt)) ``` where ``opt`` is a pre-defined function that retrieves the next command line option. The full program so far: ``` #! /usr/bin/picolisp /usr/lib/picolisp/lib.l (de greeting () (prinl "Hello " *Name "!")) (setq *Name (opt)) (greeting) (bye) ``` Now we can pass the name to our script by ``./greeting-with-args.l Mia``. ``` $ ./greeting-with-args.l Mia Hello Mia! ``` It works! But what if we don't pass any argument? Then the name just stays blank: ``` $ ./greeting-with-args.l Hello ! ``` It would be nice if the program would tell us about the name option. Let's write a little check to see if the global variable ``*Name`` is empty. If yes, let's print out a warning, otherwise the ``greeting`` function should be called. For this we can use a simple ``ifn`` ("if not") statement: ``` (ifn *Name (prinl "Please specify a name!") (greeting) ) ``` Now let's try again: ``` $ ./greeting-with-args.l Please specify a name! $ ./greeting-with-args.l Mia Hello Mia! ``` The full script can be downloaded [here](https://gitlab.com/picolisp-blog/single-plage-scripts/-/blob/main/beginners-tutorial/greeting-with-args.l). ----------------------------- In the last part of the beginner's series, we will talk about the documentation and debugging functions of PicoLisp. 
----------------------------- ### Sources - https://software-lab.de/doc/index.html - https://software-lab.de/doc/tut.html ### Gitlab links: - https://gitlab.com/picolisp-blog/single-plage-scripts/-/blob/main/beginners-tutorial/greeting-script.l - https://gitlab.com/picolisp-blog/single-plage-scripts/-/blob/main/beginners-tutorial/greeting-executable.l - https://gitlab.com/picolisp-blog/single-plage-scripts/-/blob/main/beginners-tutorial/greeting-with-args.l
miatemma
886,471
7 Useful JS Fiddles
Some useful snippets for you to use.
0
2021-11-03T18:40:01
https://dev.to/davinaleong/7-useful-js-fiddles-1mg0
tutorial, programming, javascript, css
--- title: 7 Useful JS Fiddles published: true description: Some useful snippets for you to use. tags: tutorial, programming, javascript, css cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cu0hj945fss8tbbufvw9.jpg --- Sharing some JSFiddles the rest of you may find useful. I often use JSFiddle as a playground to test out snippets of UI code before putting them into an actual project. I built all these fiddles myself, with some help from Google. Excuse the boring colour scheme; I'm not much of a designer... Anyways, hope you find these code snippets useful. #Custom Checkbox & Radio {% jsfiddle https://jsfiddle.net/davinaleong/c83576a2/49/ result,html,css,js %} Custom checkboxes and radio buttons. Includes hover effects. I had a project from my day job where I had to create custom checkboxes. I already had an idea on how to do it, but needed to test the idea. I got the code to render the checkmark [here](https://www.w3schools.com/howto/howto_css_custom_checkbox.asp). I also decided to include a snippet for radio buttons in case I needed it in the future. #Ribbon Label {% jsfiddle https://jsfiddle.net/davinaleong/km8fd29a/27/ result,html,css,js %} Product ribbon label. The image is generated from [placeholder.com](https://placeholder.com/). My most recent project required me to style product labels as ribbons. I tried to find solutions online, but many of them were too complicated. In the end, I came up with this solution. I couldn't get pseudo elements to work for the ribbon corner. So I ended up using an inner div to achieve the result. #Custom File Input Placeholder {% jsfiddle https://jsfiddle.net/davinaleong/th18by4d/24/ result,html,css,js %} Custom File Input Placeholder. This snippet uses [jQuery](https://jquery.com/). One of the projects I worked on recently at my day job needed a file input to upload the customer's profile picture. There were no input labels in the mockup. It used the `placeholder` attribute as the input's label. 
The problem is the file input type doesn't render the `placeholder` attribute. This fiddle is my solution after searching online for ideas. #Custom Select Field {% jsfiddle https://jsfiddle.net/davinaleong/pszruL8a/34/ result,html,css,js %} I often have designs that change the design of the select input arrow. After some searching, I found the code to render the arrow. Remember to make the input field's background **transparent**. #Button with Overlapping Shadow {% jsfiddle https://jsfiddle.net/davinaleong/q8me92nL/12/ result,html,css,js %} I had one project where the button had such a design. Here is the solution. To give a *transparent* appearance, make sure the `inset` `box-shadow` colour is the same as your `background colour`. #Grid Gallery {% jsfiddle https://jsfiddle.net/davinaleong/h7xfecu8/151/ result,html,css,js %} I had to build a grid gallery for one of my projects for my day job. Since it was company policy to support IE11, I had to find a solution that works for IE11. Here is the solution I came up with. I'm sure there's a better way to code a responsive grid, but this was what I could think of at that time. #Mega Menu Hover {% jsfiddle https://jsfiddle.net/davinaleong/zt38hn6k/177/ result,html,css,js %} This solution uses [jQuery](https://jquery.com). I had to build a mega menu for one of my projects. This was what I came up with.
davinaleong
886,493
Mistrymitra a Growing Firm
Mistrymitra, as India's local search engine, provides services in numerous ways, whether it be...
0
2021-11-03T11:21:57
https://mistrymitra.com/blog/about-us/mistrymitra-a-growing-firm
Mistrymitra, as India's local search engine, provides services in numerous ways, whether it is building homes, schools, offices, malls, roads, hospitals, or theme parks. It is vital to everyone's life. Construction is all around us. As India's population grows rapidly, there is a greater demand for new houses, offices, and infrastructure, as well as an endless need for repair and maintenance work. Mistrymitra is a growing firm in construction. We are waiting for contractors, construction designers, retailers, dealers, builders, and workers to join us and take advantage of the offers we have for them. In addition, we don't want to grow alone as a firm; we want our customers to grow with us and benefit from these offers. The offers which Mistrymitra provides to its valued customers include: Job Security There is an increasing need for new buildings and the rebuilding of old structures. No matter what season it is, there will always be a need for construction. As the baby boomer construction workers retire from the workforce, it is an optimal time for you to join us and secure your job. Good Salary Package As a growing firm in the field of construction and maintenance work, we offer a good salary for workers, and at a nearby location too. All it requires you to do is produce high-quality work by putting in your time and hard work. Diverse Co-workers At Mistrymitra, we have people from different backgrounds and all walks of life. This diversity brings unique experience, knowledge, and skills to the workplace, serving as a major benefit to construction projects. The construction industry is hands-on and no two days will be alike. You will learn new things every day and will be able to apply that knowledge in the field. 
There are not many jobs that offer that kind of work environment, so join us and take advantage of the offers we have for you. If you need any type of Mistry or contractor, visit www.mistrymitra.com, or contact us directly on +917565997701
glensmith088
886,518
5 Reading Nooks That Remind You of The Secret Garden
The outdoor retreat in The Secret Garden by Frances Hodgson Burnett is a place of daydreams and...
0
2021-11-03T12:16:43
https://dev.to/aaridwan16/5-ruang-baca-yang-mengingat-anda-pada-the-secret-garden-3mco
books, tutorial, webdev, beginners
The outdoor retreat in The Secret Garden by Frances Hodgson Burnett is a place of daydreams and healing, and it is hard not to imagine that it would be a perfect place to read as well. Picture it: settling into a cosy spot with a book in hand, surrounded by warm sunshine, a gentle breeze, the sweet scent of flowers, and the cheerful sound of birdsong. With spring arriving, we have put together a collection of secret gardens perfect for springtime reading. Now if only we could hire a landscaper to build one of these for us. 1. A Few Steps Removed A stone path and a wooden bridge give this garden gazebo a feeling of seclusion: just you, your book, and nature. 2. One with Nature The beauty of this space lies in its continuity with nature. The gazebo is built from fallen trees, and reaching it requires a walk through the woods. Lanterns make it possible to read even after sunset, when the crickets begin to chirp and the fireflies appear. 3. The Rose Room Imagine the fragrance as you sit down to read on a secluded bench among these roses. 4. A Sophisticated Spot Although not exactly "hidden", a chandelier hung from a tree makes for a classy outdoor reading spot. 5. A Central Nook A bench beneath leafy branches in a shady garden surrounded by colourful flowers makes a calm reading spot with little worry of sunburn. Source: https://www.penulisgarut.web.id/2021/10/membuat-taman-baca.html
aaridwan16
886,560
DevBox Showcase: CRON Explorer
Hi 👋, This is the fourth post in the series showcasing tools from DevBox 🎉, a desktop application or...
14,629
2021-11-03T12:57:14
https://dev.to/gdotdesign/devbox-showcase-cron-explorer-6df
productivity, tutorial, json, cron
Hi 👋, This is the fourth post in the series showcasing tools from [DevBox](https://www.dev-box.app) 🎉, a desktop application or browser extension packed with everything that every developer needs. ------- DevBox has a **CRON Explorer tool**, which is useful for checking [CRON Expressions](https://en.wikipedia.org/wiki/Cron#CRON_expression). Once DevBox is opened, click on its card: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2btc5opsdq4oe5gtiyob.png) It will show you the interface: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5knf0xa1cdl07ct7j0a5.png) - In the `Expression` section you can write your CRON expression and will show a readable version of it. - In the `Next 10 Occurences - Calendar` section you can see the dates of the next 10 occurrences of the expression. - In the `Next 10 Occurences` section you can see the next 10 exact occurrences of the expression. - In the `Examples` section you can see and click on some basic CRON expressions. ------ You can still buy DevBox [here](https://gdotdesign.gumroad.com/l/devbox/earlybird) for **a lifetime price of ~~$25~~ $20 💰**, get it while it's available! More DevBox showcases are coming, so follow me here if you want to be notified!
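As a quick refresher on what the tool parses (this is the classic five-field crontab form, not something specific to DevBox), a CRON expression reads like this:

```
┌───────────── minute (0–59)
│ ┌───────────── hour (0–23)
│ │ ┌───────────── day of month (1–31)
│ │ │ ┌───────────── month (1–12)
│ │ │ │ ┌───────────── day of week (0–6, Sunday = 0)
│ │ │ │ │
* * * * *
```

For example, `0 9 * * 1` means 09:00 every Monday, which you can verify by pasting it into the tool's `Expression` section.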
gdotdesign
886,647
Formatting numbers in JavaScript
In the last couple of posts, I showed you how you can format dates and relative dates using the...
0
2021-11-09T09:39:37
https://dberri.com/formatting-numbers-in-javascript
javascript, formatting
--- title: Formatting numbers in JavaScript published: true date: 2021-11-03 09:00:00 UTC tags: javascript, formatting canonical_url: https://dberri.com/formatting-numbers-in-javascript --- In the last couple of posts, I showed you how you can format dates and relative dates using the native Internationalization API. Today, we're going to format regular numbers using the same API. Yes, Intl is not only used for date formatting; there's a whole range of use cases including numbers, lists and plurals. So, when I say we are going to format numbers, what I mean is: numbers are not represented the same way in different cultures. For example, in Brazil the most common way to represent numbers is like so: 1.234.567,89. The comma is the decimal separator and the period is the thousands separator. In other cultures those are switched, or the thousands separator is not used. If your application is served in other locales, you can use the Intl API to resolve these for you and you don't even need to plug in another dependency. Let's see how we can use it: ```js const myNumber = 1234567.89; const formatter = new Intl.NumberFormat('pt-BR'); console.log(formatter.format(myNumber)); // 1.234.567,89 const formatterUS = new Intl.NumberFormat('en-US'); console.log(formatterUS.format(myNumber)); // 1,234,567.89 ``` If you've read the previous posts, you'll probably be familiar with the syntax. You first instantiate the formatter with the locale and then pass the parameters you want to format. Of course, you can also pass an options object to the formatter if you want to further customize the number format. 
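As an aside not covered in the post's original examples, the options object also supports `style: 'percent'`, which treats its input as a ratio and multiplies it by 100:

```javascript
// style: 'percent' expects a ratio (0.25 = 25%), not a whole number
const pctFormatter = new Intl.NumberFormat('en-US', {
  style: 'percent',
  maximumFractionDigits: 1, // the percent style defaults to 0 fraction digits
});

console.log(pctFormatter.format(0.256)); // 25.6%
console.log(pctFormatter.format(1.5));   // 150%
```

This is handy for conversion rates or progress indicators where the underlying value is stored as a fraction.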
Let's check out a few options: ```js const myNumber = 1234567.89; let formatter = new Intl.NumberFormat('pt-BR', { style: 'currency', currency: 'BRL' }); console.log(formatter.format(myNumber)); // R$ 1.234.567,89 formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' }); console.log(formatter.format(myNumber)); // $1,234,567.89 formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD', currencyDisplay: 'code' // default: 'symbol' }); console.log(formatter.format(myNumber)); // USD 1,234,567.89 formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD', currencyDisplay: 'name' }); console.log(formatter.format(myNumber)); // 1,234,567.89 US dollars ``` There are also options to customize the way it displays negative values, for example instead of using -$1.00 it can display ($1.00): ```js const myNumber = -1234567.89; // notice the negative sign const formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD', currencySign: 'accounting' }); console.log(formatter.format(myNumber)); // ($1,234,567.89) ``` And, if you are working with currencies or percentages, the number of fraction digits is already set using ISO 4217, but if you're working with regular numbers, you can set that option too: ```js const myNumber = 1234567.00; let formatter = new Intl.NumberFormat('en-US'); console.log(formatter.format(myNumber)); // 1,234,567 formatter = new Intl.NumberFormat('en-US', { minimumFractionDigits: 2 }); console.log(formatter.format(myNumber)); // 1,234,567.00 formatter = new Intl.NumberFormat('en-US', { maximumFractionDigits: 3 }); const myNumber2 = 1234567.891011 console.log(formatter.format(myNumber2)); // 1,234,567.891 ``` You can customize the notation as well: ```js const myNumber = 1234567.89; let formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD', notation: 'standard' // default }); console.log(formatter.format(myNumber)); // $1,234,567.89 formatter = new Intl.NumberFormat('en-US', { 
style: 'currency', currency: 'USD', notation: 'scientific' }); console.log(formatter.format(myNumber)); // $1.23E6 formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD', notation: 'engineering' }); console.log(formatter.format(myNumber)); // $1.23E6 formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD', notation: 'compact' }); console.log(formatter.format(myNumber)); // $1.2M ``` There are still plenty of options we did not cover, but I think the most popular ones are those in the examples; you can check out the full list [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat/NumberFormat#parameters). And this is it for this post. See you on the next one!
dberri
886,770
Answer: Does LESS have an "extend" feature?
answer re: Does LESS have an "extend"...
0
2021-11-03T13:38:42
https://dev.to/sudharshan24/answer-does-less-have-an-extend-feature-h26
{% stackoverflow 69825821 %}
sudharshan24
887,134
Root to Linux: BIOS
At Forem a few coworkers and I have formed a Linux Club🤓. Joe Doss is leading us on this adventure...
15,339
2021-11-04T20:11:57
https://dev.to/coffeecraftcode/root-to-linux-bios-16jm
linux, beginners, devops
At Forem a few coworkers and I have formed a Linux Club🤓. [Joe Doss](https://twitter.com/jdoss) is leading us on this adventure into Linux. The adventure starts by working through the [Gentoo Handbook](https://wiki.gentoo.org/wiki/Handbook:AMD64) and installing Gentoo (a distribution of Linux) on Lenovo ThinkPads. ![Lenovo ThinkPad specs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gtbxxs54rzy84imedtx0.jpg) <small> Our ThinkPad specs </small> During our meetings, I have been [taking notes](https://github.com/cmgorton/linuxclub_notes). These notes are useful for people in our club. I am writing this series to take what we have learned and share it with a broader audience outside of the club. One of the first things we did when we got our laptops was set up our BIOS. This post will give a brief description of what BIOS is, how to navigate to it, and some of the common settings you might change. ## What is BIOS? BIOS or Basic Input/Output System is a small piece of code that lives in a “read only” chip on your system board (motherboard). It controls low level/basic functions of a computer. **Note:** read-only means that a file can be opened or read but can't be deleted or renamed. The BIOS will identify, test, and configure your computer's hardware. It then hands control over to your OS. This is called the BOOT process. ## Navigating to the BIOS The first software that runs on your computer is the BIOS. As you start your computer you may see a prompt that says `f2 = Setup`. This prompt will give you access to the BIOS setup utility. This is sometimes referred to as the Setup System. Other keys that can invoke BIOS depending on your computer are F1, F10, F12, Del, or Esc. ## Setting up your BIOS In general you should leave your BIOS settings as the default settings. When you set up your own Linux distribution you may need to change a few settings. You can also change settings to have more control over your hardware. 
### Common Settings in BIOS **BOOT Priority Order** One common setting you can change is the BOOT Priority Order. This is a priority list that tells your computer which device to boot from. After the BIOS tests your hard drive it will check for a “bootable” drive. This is where you tell your computer to use a Linux distribution to boot. The default BOOT Priority Order looks for your computer's OS. On our ThinkPads this was Windows 10. Our goal was to boot Gentoo instead of Windows 10. First, we downloaded Gentoo to a USB drive. Next, we changed the BOOT Priority Order to use our USB as the top priority. This meant our computer would look for our USB to boot up Gentoo instead of the ThinkPad's default OS. ![BOOT Priority Order](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/198yhf7rayqgp4i1hbs6.png) **Secure Boot** > **Note:** Secure Boot is technically a feature of UEFI and not legacy BIOS. Look at the image at the top of this post that shows our ThinkPad specs. You can see it mentions the computer's UEFI BIOS version. Secure Boot can help a computer resist attacks and infection from malware. You can find the setting for Secure Boot in the `Security` tab. Typically, you want to keep this setting set to on. We disabled this setting because it prevents non-Microsoft OSes from working. **Date/Time** You can update your system's date and time in the `Main` tab. We updated the date and time in our BIOS to reflect our timezones and correct day, month, and year. There are many other settings in BIOS that you can change. These are the common ones we updated. ## Save and Exit BIOS To exit the BIOS, navigate to the Restart tab and choose `Exit Saving Changes`. Or you can exit and save by pressing `F10`. ## A Shift to UEFI BIOS has a long history but there has been a shift away from using it. Companies like Intel are phasing out BIOS in favor of the Unified Extensible Firmware Interface (UEFI) that I mentioned in the Secure Boot section. 
UEFI is a lightweight BIOS alternative and newer computers are shipped with it. It has several advantages over the traditional BIOS. This post won't cover everything about UEFI but some of the advantages of UEFI are: - it can boot from drives of 2TB or greater. - it has more access to [addressable space](https://searchstorage.techtarget.com/definition/address-space). The BIOS is limited to 1MB. - it has a user-friendly interface. Unlike the BIOS, you can navigate it with a mouse. - it ships with security features like Secure Boot.
coffeecraftcode
891,140
DOM (Document Object Model) is really easy to understand!!!
1. What is DOM? 2. What is “D” “O” “M” for? 3. Is DOM same as...
0
2021-11-07T16:30:03
https://dev.to/riyadmahmud2021/-what-is-dom-what-is-d-o-m-for-5g44
javascript, webdev, html, beginners
1. What is the DOM? ------------------------------------------------------------- 2. What do "D", "O", and "M" stand for? ------------------------------------------------------------- 3. Is the DOM the same as HTML? ------------------------------------------------------------- 4. What is the relation between the DOM and HTML? ------------------------------------------------------------- -------------------------------------------------------------- Today, we will answer these four questions. Let's learn... DOM stands for Document Object Model. D is for Document. It means the HTML document itself. O is for Object. It means that each HTML element (body, head, p, h1, etc.) is represented as a JavaScript object. We can say: HTML element = JavaScript object. M is for Model. It means the tree-shaped model built from those element objects. So the DOM is not the same as HTML: HTML is the source text, while the DOM is the in-memory model the browser builds from it, in which every element is exposed to JavaScript as an object. That is the relation between the two: scripts read and change the page through the DOM, not the HTML file. Best of luck.
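To make the "model of objects" idea concrete, here is a toy sketch in plain JavaScript (no browser needed; the object shape is a deliberately simplified stand-in for the real DOM, not the browser's actual API):

```javascript
// A toy model of <body><h1>Hi</h1><p>Hello</p></body> as plain objects.
// Real browsers expose a much richer version of this tree as the DOM.
const body = {
  tagName: 'BODY',
  children: [
    { tagName: 'H1', textContent: 'Hi', children: [] },
    { tagName: 'P', textContent: 'Hello', children: [] },
  ],
};

// Scripts walk and change the model, much like they would with the real DOM.
const firstParagraph = body.children.find((el) => el.tagName === 'P');
firstParagraph.textContent = 'Hello, DOM!';
console.log(firstParagraph.textContent); // Hello, DOM!
```

In a browser, the same idea is what lets `document.querySelector('p').textContent = '...'` update the page.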
riyadmahmud2021
892,001
30 Project Ideas!
To-do list app Note-taking app Calendar Application Chat System Weather application Portfolio...
0
2021-11-08T15:15:06
https://www.buymeacoffee.com/mrdanishsaleem/30-projects-ideas
programming, tutorial, beginners, codenewbie
1. To-do list app 2. Note-taking app 3. Calendar Application 4. Chat System 5. Weather application 6. Portfolio website 7. Image search 8. Chess game 9. Donation website 10. Budget tracker 11. Tic Tac Toe game 12. Form validator 13. Web Scraper 14. Simple FTP client 15. Port Scanner 16. MP3 Player 17. Tetris game 18. Netflix clone 19. Discord bot 20. Video chat system 21. Pacman game 22. Alarm clock 23. Stock trading app 24. Issue tracker 25. Music Store App 26. Twitter Bot 27. Spam Classifier 28. Content Aggregator 29. Snake game 30. File manager --- ## Let's connect! You can follow me on [Twitter](https://twitter.com/MrDanishSaleem), [LinkedIn](https://www.linkedin.com/in/mrdanishsaleem/) & [GitHub](https://github.com/mrdanishsaleem/) --- If you like this post. Kindly support me by [Buying Me a Coffee](https://www.buymeacoffee.com/mrdanishsaleem) ![Buy Me a Coffee](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v0ikih5nlsqs0oops11e.png)
mrdanishsaleem
894,101
Getter and Setter - Python
As the title says, I am presenting, Getters and Setters for First, let's review some OOP concepts,...
0
2021-11-10T13:51:04
https://dev.to/wsadev01/getter-setter-4gj6
python, oop, programming, tutorial
As the title says, I am presenting Getters and Setters for <img align="center" width=75% src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftgfunsb10xfxwa373p0.png"> First, let's review some OOP concepts, then we will see what those (if you thought of crocs, consider leaving a unicorn) getters and setters are. <br> ## OOP Concepts Method = A function from a class Property = A variable from a class A property starting with "_" = Private property (i.e. it can only be accessed from within the class) Apopathodiaphulatophobie = Exaggerated fear of constipation <br> ## Getter and Setter A **getter** is a method that is used to obtain a property of a class A **setter** is a method that is used to set a property of a class <br> ## Why do these two exist? Mostly for two reasons: encapsulation, and to maintain a consistent interface in case internal details change (Thank you [Danben](https://stackoverflow.com/users/217332/danben) for your [Answer](https://stackoverflow.com/questions/2649096/explain-to-me-what-is-a-setter-and-getter)) <br> ## How do I write it in Python? The syntax for doing that is as follows ```python
class MyClass:
    def __init__(self, A, B=None):
        self._A = A
        self._B = B

    @property
    def A(self):
        return self._A

    @A.setter
    def A(self, A):
        self._A = A

    @property
    def B(self):
        return self._B

    @B.setter
    def B(self, B):
        self._B = B

instance = MyClass("This is A")
print(instance.A)   # This is A
instance.A = "A Changed"
print(instance.A)   # A Changed
print(instance.B)   # None
instance.B = "This is B"
print(instance.B)   # This is B
```
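A common reason to prefer a setter over direct attribute access is validation. Here is a minimal sketch (the `Temperature` class and its threshold are hypothetical, not from the post above):

```python
class Temperature:
    """Sketch of a setter that validates before storing."""

    def __init__(self, celsius=0.0):
        self._celsius = celsius

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        # Reject physically impossible values instead of silently storing them
        if value < -273.15:
            raise ValueError("temperature below absolute zero")
        self._celsius = value

t = Temperature()
t.celsius = 25.0
print(t.celsius)  # 25.0
```

Code that does `t.celsius = -300.0` now raises `ValueError`, while the plain attribute syntax stays unchanged for callers.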
wsadev01
894,788
How to Optimize Website for Core Web Vitals--A Guide for Designers
You have created a unique website for your business with great visuals and text. But even that is no...
0
2021-11-11T05:06:03
https://dev.to/hennykel/how-to-optimize-website-for-core-web-vitals-a-guide-for-designers-4dhc
corewebvitals, webdev, websiteoptimization, seo
You have created a unique website for your business with great visuals and text. But even that is no guarantee of the website working well to drive traffic, which is crucial for business growth. What is most important now is that you take into account your users' experience. In other words, you should evaluate your website from the users' perspective, which is about keeping your Core Web Vitals healthy.

And you had better start thinking in terms of Core Web Vitals, since these are the new set of parameters Google uses to rank websites in search results. Since June 2021, Google has used Core Web Vitals in the ranking of websites. So, old-fashioned SEO alone is not enough; as a web designer, you should now pay attention to optimizing the Core Web Vitals.

<strong><h3>What are Core Web Vitals?</h3></strong>

Core Web Vitals from Google are the new measures to determine the level of user experience on the web. So, it is about the experience of a website's users rather than just how much time a website takes to load. The Core Web Vitals parameters judge a website on how quickly users can interact with its pages.

Google has laid down three Core Web Vitals: Largest Contentful Paint (loading performance), Cumulative Layout Shift (visual stability), and First Input Delay (interactivity). Google considers these the most critical metrics for user experience.

<strong><h3>Largest Contentful Paint (LCP)</h3></strong>

The LCP lets you know how much time your website's hero content or image takes to load from the users' perspective. For example, if users cannot see your most important image, including your custom logo, on your website, they get frustrated. If you do not have a custom logo or are looking to redesign your logo, consider using an <a href="https://www.designhill.com/tools/logo-maker" target="_blank" rel="noopener">online logo maker</a> tool to create and customize your logo or get a new one!
So, a good LCP is 2.5 seconds or less. If it is between 2.5 and 4.0 seconds, it needs improvement, and more than 4.0 seconds is a poor user experience.

<strong><h3>Cumulative Layout Shift (CLS)</h3></strong>

The Cumulative Layout Shift (CLS) is about a website's layout shifting, which results in a terrible user experience. When content is loading, some of the website's elements shift or move around. For example, you can't click on a CTA because it has moved somewhere else, or as your site loads ads, it keeps shifting the content around. According to Google, a CLS score of 0.1 or less is good and an excellent user experience. A score between 0.1 and 0.25 requires improvement, and more than 0.25 is a poor score. Note that CLS is a unitless score, not a time.

<strong><h3>First Input Delay (FID)</h3></strong>

The First Input Delay is the time between the user clicking on something on your website and the site responding to show that content. Since this aspect is heavily affected by JavaScript, it is a complicated measure. A good FID score is 100 ms or less. Between 100 ms and 300 ms requires improvement, and more than 300 ms is a poor user experience.

<strong><h1>Tips to optimize a website for Core Web Vitals</h1></strong>

<strong><h2>Here are some essential tips you should consider to improve your Core Web Vitals</h2></strong>

<strong><h3>Optimize your JavaScript execution</h3></strong>

Check your FID score, and if it is poor, it means you should optimize your JavaScript execution. This will help reduce the time between your browser executing JS code and your page responding. To reduce execution time, you should defer unused JS, which is easy to find. First, right-click while visiting your website and choose 'Inspect'. Then, go to 'Sources' and find the three dots on the bottom.
You can then add a tool called 'Coverage' and run the load function. It will show you the JavaScript that is unused on your website page. After you have found the unused JS, reduce it by code splitting: take one combined JS bundle and break it into smaller pieces.

<strong><h3>Compress images to improve the loading speed</h3></strong>

If your website design uses multiple images, consider compressing them to optimize your Core Web Vitals. By compressing the images, you will make your website pages much lighter and reduce page size, which improves page loading speed and user experience. This measure should improve your LCP score. Also, when you compress images, serve them through a Content Delivery Network (CDN). Note that a CDN is a network of servers for users across the world, so the server closest to a user can load images and pages fast.

<strong><h3>Use lazy loading</h3></strong>

If your site's LCP score is poor, you can improve it by using lazy loading, a technique in website design and development. Lazy loading maintains a website's loading speed by deferring content until the user scrolls down the page. In this way, your LCP scores stay excellent. It also helps limit your bandwidth usage and benefits your site's SEO. Consequently, it is a good measure to reduce your site's bounce rate. Lazy loading is especially beneficial for websites with animations, many images, or videos.

<strong><h3>Ensure a better server response time</h3></strong>

Your browser should receive content from the server immediately. But if it takes longer, it will delay showing heavy content such as infographics. As a result, your SEO and UX will suffer. So, if you want to improve your LCP score and page-load metric, pay attention to ways to make your server respond quickly.
To find out your server response time, you should use Time to First Byte (TTFB), which measures the time a user takes to receive the first byte of your page content. At the same time, first ensure that your web hosting service is fast and that you use a CDN for your website. You should also review your plugins, as they make your pages heavier; keep only the plugins that are essential to your business.

<strong><h3>Pay attention to image and embed sizes</h3></strong>

If your CLS score is poor, check your image and embed sizes and correct their dimensions. The score might be poor due to wrong dimensions of images, ads, or embeds in the CSS file. You should set their width and height properly so that your browser can allocate the right amount of space on your page while an element is loading.

Make sure also that embeds such as YouTube videos on your web page have the correct dimensions. The default size of a video might be too big for your site page, and you need to resize it. To resize the video, go to YouTube, open the video you want to use on your website, click the share button, and go to the <> Embed option. Then, copy the code you get into your site's back-end to change the video dimensions.

<strong><h3>Wrapping Up</h3></strong>

Google now ranks websites on the new Core Web Vitals: LCP, CLS, and FID. To optimize your website, you need to optimize your JavaScript execution, compress images on your site, use lazy loading, ensure a better server response time, and use the proper sizes of images and videos.
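The thresholds above can be summarized in a small helper. This is an illustrative sketch (the function names are made up; the values follow Google's published Core Web Vitals thresholds: LCP in seconds, CLS as a unitless score, FID in milliseconds):

```python
def rate_lcp(seconds):
    """Largest Contentful Paint: good <= 2.5 s, poor > 4.0 s."""
    if seconds <= 2.5:
        return "good"
    if seconds <= 4.0:
        return "needs improvement"
    return "poor"


def rate_cls(score):
    """Cumulative Layout Shift (unitless): good <= 0.1, poor > 0.25."""
    if score <= 0.1:
        return "good"
    if score <= 0.25:
        return "needs improvement"
    return "poor"


def rate_fid(ms):
    """First Input Delay: good <= 100 ms, poor > 300 ms."""
    if ms <= 100:
        return "good"
    if ms <= 300:
        return "needs improvement"
    return "poor"


print(rate_lcp(2.1), rate_cls(0.3), rate_fid(150))  # good poor needs improvement
```

A helper like this is useful when bucketing field data (for example, values collected from real users) into the same good / needs improvement / poor categories that Google's tools report.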
hennykel
894,891
Reactjs Overview - (EchLus Community - Part 1)
Halo kawan - kawan kali ini saya akan sharing materi tentang "Reactjs Overview". ini merupakan seri...
0
2021-11-11T07:17:39
https://dev.to/_satria_herman/reactjs-overview-echlus-community-a06
react, javascript, webdev, programming
Hello friends, this time I will share material about "Reactjs Overview". This is part of a sharing-session batch series about Reactjs.

## **Material Overview**

In this batch we will learn about:
- Install and Setup Reactjs
- Basic Props and State
- React Hooks
- React State Management
- Communicating With Server
- Esbuild

# 1. Install and Setup Reactjs

You can find the installation process [here](https://reactjs.org/docs/create-a-new-react-app.html). There are actually two ways to install: using a [CDN](https://www.niagahoster.co.id/blog/apa-itu-cdn/), or using Create React App (CRA). This time I will install React using CRA. Okay, straight to the tutorial.

- Open a Terminal or CMD, then run `npx create-react-app belajar_react`. npx itself stands for node package execute; when run, it executes a package whether or not we have installed it. By using npx we don't need to install CRA on our local computer.

```
D:\programming\belajar Javascript\react>npx create-react-app belajar_react

Creating a new React app in D:\programming\belajar Javascript\react\belajar_react.

Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts with cra-template...

[ ................] - fetchMetadata: sill resolveWithNewModule scheduler@0.20.2 checking installable stat
```

Wait for the process to finish.

- Once it is done, run `cd belajar_react`, then run `code .` to open our project in [Visual Studio Code](https://code.visualstudio.com/).
_satria_herman
894,909
HacktoberFest Badge
Now i have one more Hacktoberfest Badge
0
2021-11-11T08:02:33
https://dev.to/rishi098/hacktoberfest-badge-kd
hacktoberfest, devlive, programming, webdev
Now I have one more Hacktoberfest badge.
rishi098
894,918
Working MongoDB with Golang
Every tutorial has a story. In that tutorial you'll find out different contents that is related to...
0
2021-11-11T14:23:47
https://dev.to/burrock/working-mongodb-with-golang-2g76
go, mongodb, programming, database
Every tutorial has a story. In this tutorial you'll find different topics related to MongoDB, Golang, working with mock data, and deployment. Here is my content.

## Project structure

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/si76vvkkrnej4p7fnrm7.png)

PS: There is one different folder here, named dummy_api. That folder has its own main file. What does that mean? I run that main file first to add the mock data. If you haven't caught up with the [Working with the marshal and unmarshal](https://dev.to/bseyhan/hey-marshal-1134) tutorial, you should read it.

Another important topic is "context". [Essential Go](https://www.programming-books.io/essential/go/context.todo-vs.context.background-d5224e27ff724a33a79cb4e03a5eb333.html)

> context.TODO() **TODO returns a non-nil, empty Context.** Code should use context.TODO when it's unclear which Context to use or it is not yet available (because the surrounding function has not yet been extended to accept a Context parameter). TODO is recognized by static analysis tools that determine whether Contexts are propagated correctly in a program.

> context.Background() Background returns a non-nil, empty Context. **It is never canceled**, has no values, and has no deadline. It is typically used by the main function, initialization, and tests, and as the top-level Context for incoming requests.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g7vxyfskn1d4ubz6xvah.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4rj2xahd9pmue1bw7hwa.png)

Another package is about the MongoDB client; I'll talk about it below.
```
package main

import (
    "context"
    "encoding/json"
    "io/ioutil"
    "os"

    "github.com/bburaksseyhan/contact-api/src/pkg/client/mongodb"
    "github.com/bburaksseyhan/contact-api/src/pkg/model"
    "github.com/sirupsen/logrus"
)

func main() {
    contactsJson, err := os.Open("contacts.json")
    if err != nil {
        logrus.Error("contacts.json: an error occurred", err)
    }
    defer contactsJson.Close()

    var contacts []model.Contact

    byteValue, _ := ioutil.ReadAll(contactsJson)

    // unmarshal the data
    if err := json.Unmarshal(byteValue, &contacts); err != nil {
        logrus.Error("unmarshal: an error occurred", err)
    }

    logrus.Info("Data\n", len(contacts))

    // import the mongo client
    client := mongodb.ConnectMongoDb("mongodb://localhost:27017")
    logrus.Info(client)
    defer client.Disconnect(context.TODO())

    collection := client.Database("ContactDb").Collection("contacts")
    logrus.Warn("Total data count:", len(contacts))

    for _, item := range contacts {
        collection.InsertOne(context.TODO(), item)
    }

    logrus.Info("Data import finished...")
}
```

First, let's open the terminal and go to the dummy_api directory. Another important thing: is the database running? Let's have a look.
`docker compose up`

`docker container ls`

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d16ayf3uyojwwayoijzd.png)

## Working with mock data

I created the mock data with [Mockaroo](https://www.mockaroo.com/).

## Working with MongoDB queries

```
docker exec -it ad2d44477f28 mongo  // connect to the mongodb cli
help
show dbs                         // return all database names
use ContactDb
show collections                 // return collection names
db.contacts.find()               // return all documents
db.contacts.find({}).count()     // return the row count
db.contacts.find({}).pretty({})  // return rows with formatting
db.contacts.find({"email":""})
db.dropDatabase()                // remove the database
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hy6w8lu2ongg1yklzhzo.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymr8d14tf030y2nfgz7x.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78uwjsnba5st7xk4bd9s.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zm5rjo9g3puwdcszitjz.png)

## Using packages

* "go get -u go.mongodb.org/mongo-driver/bson"
* "go get -u go.mongodb.org/mongo-driver/mongo"
* "go get -u go.mongodb.org/mongo-driver/mongo/options"
* "go get -u github.com/gin-gonic/gin"
* "go get -u github.com/sirupsen/logrus"
* "go get -u github.com/spf13/viper"

## Implementation

### main.go file

This file reads the config.yml or .env file and then calls the Init function.
```
package main

import (
    "os"

    "github.com/bburaksseyhan/contact-api/src/cmd/utils"
    "github.com/bburaksseyhan/contact-api/src/pkg/server"
    log "github.com/sirupsen/logrus"
    "github.com/spf13/viper"
)

func main() {
    config := read()
    log.Info("Config.yml", config.Database.Url)

    mongoUri := os.Getenv("MONGODB_URL")
    if mongoUri != "" {
        config.Database.Url = mongoUri
    }

    log.Info("MONGODB_URL", mongoUri)

    server.Init(config.Database.Url)
}

func read() utils.Configuration {
    // Set the file name of the configurations file
    viper.SetConfigName("config")

    // Set the path to look for the configurations file
    viper.AddConfigPath(".")

    // Enable VIPER to read Environment Variables
    viper.AutomaticEnv()
    viper.SetConfigType("yml")

    var config utils.Configuration

    if err := viper.ReadInConfig(); err != nil {
        log.Error("Error reading config file, %s", err)
    }

    err := viper.Unmarshal(&config)
    if err != nil {
        log.Error("Unable to decode into struct, %v", err)
    }

    return config
}
```

The ConnectMongoDb function takes the MongoDB URL as a parameter, opens the connection, and checks its status. [Related Documentation](https://www.mongodb.com/blog/post/mongodb-go-driver-tutorial)

```
package mongodb

import (
    "context"

    log "github.com/sirupsen/logrus"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func ConnectMongoDb(url string) *mongo.Client {
    clientOptions := options.Client().ApplyURI(url)

    // Connect to MongoDB
    client, err := mongo.Connect(context.TODO(), clientOptions)
    if err != nil {
        log.Fatal(err)
    }

    // Check the connection
    err = client.Ping(context.TODO(), nil)
    if err != nil {
        log.Fatal(err)
    }

    log.Info("MongoClient connected")

    return client
}
```

Here is the unfinished contact_handler.go file. The HealthCheck function is more than a plain health check: it pings the MongoDB database with a timeout. If the database cannot be reached, the response will be unhealthy; otherwise the result will be pong.
```
package handler

import (
    "context"
    "net/http"
    "time"

    "github.com/gin-gonic/gin"
    "go.mongodb.org/mongo-driver/mongo"
)

type ContactHandler interface {
    GetAllContacts(*gin.Context)
    GetContactByCity(*gin.Context)
    HealthCheck(*gin.Context)
}

type contactHandler struct {
    client *mongo.Client
}

func NewContactHandler(client *mongo.Client) ContactHandler {
    return &contactHandler{client: client}
}

func (ch *contactHandler) GetAllContacts(c *gin.Context) {
    _, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
    defer cancel()

    // request on repository
    c.JSON(http.StatusOK, gin.H{"contacts": "pong"})
}

func (ch *contactHandler) GetContactByCity(c *gin.Context) {
    _, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
    defer cancel()

    // request on repository
    c.JSON(http.StatusOK, gin.H{"contacts": "pong"})
}

func (ch *contactHandler) HealthCheck(c *gin.Context) {
    ctx, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
    defer cancel()

    if err := ch.client.Ping(ctx, nil); err != nil {
        c.JSON(http.StatusOK, gin.H{"status": "unhealthy"})
        return
    }

    c.JSON(http.StatusOK, gin.H{"status": "pong"})
}
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhfryzgpyghgx4hiwng4.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tqtsmwnvkyp53mjxs5cl.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xv7rdnfwtfqxrm0kg9kz.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58j0jyc7dnz6bomhcblw.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8w0jdfb2lnwpmins1kil.png)

# Completed codes

## main

```
package main

import (
    "os"

    "github.com/bburaksseyhan/contact-api/src/cmd/utils"
    "github.com/bburaksseyhan/contact-api/src/pkg/server"
    log "github.com/sirupsen/logrus"
    "github.com/spf13/viper"
)

func main() {
    config := read()
    log.Info("Config.yml", config.Database.Url)

    mongoUri := os.Getenv("MONGODB_URL")
    serverPort := os.Getenv("SERVER_PORT")
    dbName := os.Getenv("DBNAME")
    collection := os.Getenv("COLLECTION")

    if mongoUri != "" {
        config.Database.Url = mongoUri
        config.Server.Port = serverPort
        config.Database.DbName = dbName
        config.Database.Collection = collection
    }

    log.Info("MONGODB_URL", mongoUri)

    server.Init(config)
}

func read() utils.Configuration {
    // Set the file name of the configurations file
    viper.SetConfigName("config")

    // Set the path to look for the configurations file
    viper.AddConfigPath(".")

    // Enable VIPER to read Environment Variables
    viper.AutomaticEnv()
    viper.SetConfigType("yml")

    var config utils.Configuration

    if err := viper.ReadInConfig(); err != nil {
        log.Error("Error reading config file, %s", err)
    }

    err := viper.Unmarshal(&config)
    if err != nil {
        log.Error("Unable to decode into struct, %v", err)
    }

    return config
}
```

## config

```
package utils

type Configuration struct {
    Database DatabaseSetting
    Server   ServerSettings
}

type DatabaseSetting struct {
    Url        string
    DbName     string
    Collection string
}

type ServerSettings struct {
    Port string
}
```

## server

```
package server

import (
    "github.com/bburaksseyhan/contact-api/src/cmd/utils"
    "github.com/bburaksseyhan/contact-api/src/pkg/client/mongodb"
    "github.com/bburaksseyhan/contact-api/src/pkg/handler"
    repository "github.com/bburaksseyhan/contact-api/src/pkg/repository/mongodb"
    "github.com/gin-gonic/gin"
    log "github.com/sirupsen/logrus"
)

func Init(config utils.Configuration) {
    // Creates a gin router with default middleware:
    // logger and recovery (crash-free) middleware
    router := gin.Default()

    client := mongodb.ConnectMongoDb(config.Database.Url)
    repo := repository.NewContactRepository(&config, client)
    handler := handler.NewContactHandler(client, repo)

    router.GET("/", handler.GetAllContacts)
    router.GET("/contacts/:email", handler.GetContactByEmail)
    router.POST("/contact/delete/:id", handler.DeleteContact)
    router.GET("/health", handler.HealthCheck)

    log.Info("port is :8080\n", config.Database.Url)

    // PORT environment variable was defined.
    router.Run(":" + config.Server.Port)
}
```

## handler

```
package handler

import (
    "context"
    "net/http"
    "strconv"
    "time"

    "github.com/bburaksseyhan/contact-api/src/pkg/model"
    db "github.com/bburaksseyhan/contact-api/src/pkg/repository/mongodb"
    "github.com/gin-gonic/gin"
    "github.com/sirupsen/logrus"
    "go.mongodb.org/mongo-driver/mongo"
)

type ContactHandler interface {
    GetAllContacts(*gin.Context)
    GetContactByEmail(*gin.Context)
    DeleteContact(*gin.Context)
    HealthCheck(*gin.Context)
}

type contactHandler struct {
    client     *mongo.Client
    repository db.ContactRepository
}

func NewContactHandler(client *mongo.Client, repo db.ContactRepository) ContactHandler {
    return &contactHandler{client: client, repository: repo}
}

func (ch *contactHandler) GetAllContacts(c *gin.Context) {
    ctx, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
    defer cancel()

    var contactList []*model.Contact

    // request on repository
    if result, err := ch.repository.Get(ctx); err != nil {
        logrus.Error(err)
    } else {
        contactList = result
    }

    c.JSON(http.StatusOK, gin.H{"contacts": &contactList})
}

func (ch *contactHandler) GetContactByEmail(c *gin.Context) {
    ctx, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
    defer cancel()

    var contact *model.Contact

    // get parameter
    email := c.Param("email")

    // request on repository
    if result, err := ch.repository.GetContactByEmail(email, ctx); err != nil {
        logrus.Error(err)
    } else {
        contact = result
    }

    c.JSON(http.StatusOK, gin.H{"contacts": contact})
}

func (ch *contactHandler) HealthCheck(c *gin.Context) {
    ctx, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
    defer cancel()

    if err := ch.client.Ping(ctx, nil); err != nil {
        c.JSON(http.StatusOK, gin.H{"status": "unhealthy"})
        return
    }

    c.JSON(http.StatusOK, gin.H{"status": "pong"})
}

func (ch *contactHandler) DeleteContact(c *gin.Context) {
    ctx, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
    defer cancel()

    // get parameter
    id, err := strconv.Atoi(c.Param("id"))
    if err != nil {
        logrus.Error("Can not convert to id", err)
    }

    // request on repository
    result, err := ch.repository.Delete(id, ctx)
    if err != nil {
        logrus.Error(err)
    }

    c.JSON(http.StatusOK, gin.H{"deleteResult.DeletedCount": result})
}
```

## repository

```
package repository

import (
    "context"

    "github.com/bburaksseyhan/contact-api/src/cmd/utils"
    "github.com/bburaksseyhan/contact-api/src/pkg/model"
    log "github.com/sirupsen/logrus"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

type ContactRepository interface {
    Get(ctx context.Context) ([]*model.Contact, error)
    GetContactByEmail(email string, ctx context.Context) (*model.Contact, error)
    Delete(id int, ctx context.Context) (int64, error)
}

type contactRepository struct {
    client *mongo.Client
    config *utils.Configuration
}

func NewContactRepository(config *utils.Configuration, client *mongo.Client) ContactRepository {
    return &contactRepository{config: config, client: client}
}

func (c *contactRepository) Get(ctx context.Context) ([]*model.Contact, error) {
    findOptions := options.Find()
    findOptions.SetLimit(100)

    var contacts []*model.Contact

    collection := c.client.Database(c.config.Database.DbName).Collection(c.config.Database.Collection)

    // Passing bson.D{{}} as the filter matches all documents in the collection
    cur, err := collection.Find(ctx, bson.D{{}}, findOptions)
    if err != nil {
        log.Fatal(err)
        return nil, err
    }

    // Finding multiple documents returns a cursor
    // Iterating through the cursor allows us to decode documents one at a time
    for cur.Next(ctx) {
        // create a value into which the single document can be decoded
        var elem model.Contact
        if err := cur.Decode(&elem); err != nil {
            log.Fatal(err)
            return nil, err
        }

        contacts = append(contacts, &elem)
    }

    cur.Close(ctx)

    return contacts, nil
}

func (c *contactRepository) GetContactByEmail(email string, ctx context.Context) (*model.Contact, error) {
    var contact *model.Contact

    collection := c.client.Database(c.config.Database.DbName).Collection(c.config.Database.Collection)

    filter := bson.D{primitive.E{Key: "email", Value: email}}
    log.Info("Filter", filter)

    collection.FindOne(ctx, filter).Decode(&contact)

    return contact, nil
}

func (c *contactRepository) Delete(id int, ctx context.Context) (int64, error) {
    collection := c.client.Database(c.config.Database.DbName).Collection(c.config.Database.Collection)

    filter := bson.D{primitive.E{Key: "id", Value: id}}

    deleteResult, err := collection.DeleteOne(ctx, filter)
    if err != nil {
        log.Fatal(err)
        return 0, err
    }

    return deleteResult.DeletedCount, nil
}
```

## Deployment

`docker compose up`

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhludq029z827nlr1z6f.png)

[Repository](https://github.com/bburaksseyhan/contact-api.git)

Thank you,
burrock
895,013
5 Challenges In Business Intelligence Mobile Application Development
In modern times, data being produced by organizations is in massive numbers. Every small to large...
0
2021-11-11T11:05:59
https://dev.to/rachael_ray018/5-challenges-in-business-intelligence-mobile-application-development-51ic
mobile, ai, android, programming
In modern times, the data produced by organizations comes in massive volumes. Every small to large business generates data consistently, and business intelligence (BI) takes its analysis to the next level. It also supports informed decision-making for the company, allowing you to focus on its core competencies. However, BI is not yet adopted on a universal level. There are numerous challenges that organizations face while integrating their methods of functioning with BI. These challenges include data management issues, different data infrastructures, tracking new BI capabilities as the market updates, and the varying levels of data literacy required in the workforce. Moreover, on the one hand, BI teams are required to ensure that the data is strongly secured and governed accurately to avoid any malware activity. On the other hand, the experts need to demonstrate how BI can benefit employees, including the less data-literate ones. Nonetheless, you can outsource a group of experts from the top data analytics companies to ensure you manage and analyze the data without any hassle and meet business needs accurately. **Challenges Faced While Developing Business Intelligence Mobile Application** Let's take a look at the common challenges in detail that companies come across while developing a business intelligence mobile application. * **Lack of Adoption Company-Wide** The biggest challenge in BI is failed universal adoption and mismanaged BI practices. Therefore, it is crucial to consult employees and stakeholders about the new methods of functioning to bring everyone together on the same page. You are required to curate a practical business plan with realistic goals to achieve. While developing the plan, you need to consult everyone in the firm, such as the IT team, finance management team, marketing team, and sales and operations team, to ensure they understand the new business objectives.
Predetermined key performance indicators and accurate goals will help your company adopt BI successfully. Moreover, for small businesses, it might be challenging to adopt BI for enterprise functionalities. Small teams will not be able to [understand the benefits of BI](https://dev.to/designcoders/benefits-of-business-intelligence-for-retail-stores-inetsoft-2ffi) at first. Organizations may be discouraged by a lack of time, resources, and data awareness, which reduces the employees' enthusiasm to adopt BI. Therefore, it is essential to make them understand that the advantages of BI outweigh the adoption costs. BI tools in action will help the team function with enhanced efficacy. These tools allow the enterprise to connect with users quickly, interact effectively, and visualize and communicate data without difficulty. However, you can connect with professionals from top business intelligence (BI) app development companies to empower your staff to adopt BI to manage, understand, and store complex data within the given time. * **Highly Expensive** Budgets are a significant concern for small to medium enterprises when adopting a new method of functioning like BI. Due to the higher prices, small and medium-sized enterprises hold back from investing in beneficial software. It also makes companies hold back from hiring experts for various functions, such as data science experts, IT professionals, and BI experts. These issues further discourage the enterprise from making the expensive infrastructure investments required to deploy BI software. Moreover, entrepreneurs are shifting to BI to enhance the company's operations and save the money otherwise spent on hiring an expert team of operational managers. If you find the right business intelligence tool, it becomes easier and quicker to earn ROI. A BI tool's primary benefit is to reduce implementation costs compared with other expenses like training, infrastructure, and IT support.
Robust dashboards can be quickly implemented, enabling you to save money and earn profits by accurately slicing the correct data. Without such tools, the data is still analyzed by experts, but through a much longer process. Therefore, to understand BI properly, you can get assistance from professional teams offering [mobile app development in India](https://www.goodfirms.co/directory/country/app-development/in). * **Difficulty in Delivering Mobile-Based BI** The changing market scenario shows increased use of mobile phones rather than computers for various activities, such as planning and sharing information, developing work, or holding an urgent collaboration meeting. The sudden shift into the smartphone era means there has never been higher demand for BI solutions in mobile applications. Nonetheless, various challenges are emerging as business-intelligence demands increase in the enterprise industry. Given the fast-paced work environment created by the cutthroat digital landscape, it is essential to produce data-driven information within 24 hours and deliver it to your audience within the timeline. When you develop mobile-based BI solutions, it may seem challenging to find the proper functionality to offer a smooth working platform. However, with the right interactive BI platform, you can quickly surface valuable information on smartphones without hampering the other features and functions of the mobile phone. * **Examining Data From Different Sources** It becomes challenging to keep track of every piece of information that comes your way when businesses must analyze and report across multiple systems. No matter the size of your company, it still collects data from different networks to extract the correct information. The challenge here is that the data is stored in different systems, such as CRMs, ERP systems, Excel spreadsheets, and databases. Thus, finding the data later becomes a difficult task for users.
Enterprises, therefore, are shifting towards innovative BI tools to reduce project time. These tools can efficiently merge datasets without having to restructure databases or set up a data warehouse, allowing SMEs to function effectively without depending on external sources to manage the data. Moreover, you can outsource from a list of top mobile app development companies to find the best solutions for data-driven sources. * **Managing the Issue of Poor Data Quality** With information multiplying consistently across the globe, it is essential to manage the data and find accurate information within the bulk. Every time you extract golden data from the load of data you hold, you come one step closer to success in the competitive marketplace. However, critical data is often buried deep in systems, platforms, and applications alongside the other data collected every day. You will also find various sources delivering inaccurate data, wasting time and money while slowing the process of data mining. Hence, you can outsource from the top [mobile app development companies](https://www.goodtal.com/app-developer) in India to strategize ways to enhance the organization's quality of functioning and fulfill business needs and objectives. **To Summarize!** BI is not an indulgence that only large enterprises can afford. It is an investment that today's businesses need in order to function effectively and sustain the brand value they have created. There are several challenges when you shift from old methods to the latest trends in BI. Still, it is crucial to adopt BI solutions to earn profits in the upcoming digital age without difficulty. Moreover, you can always outsource from the best mobile app development company in India to efficiently gain accurate business intelligence insights and fulfill your business goals.
rachael_ray018
895,190
The EyeDropper API: Pick colors from anywhere on your screen
With the new EyeDropper API in Chromium, websites can let visitors pick colors from anywhere on their...
0
2021-11-11T13:52:32
https://polypane.app/blog/the-eye-dropper-api-pick-colors-from-anywhere-on-your-screen/
javascript, webdev, ux
With the new EyeDropper API in Chromium, websites can let visitors pick colors from anywhere on their screen, adding another feature to the web that used to require hacky solutions and is now just a few lines of code. The API is clean, modern and easy to use. In this article we'll discuss how to set it up, how to handle edge cases, and additional features we hope will land in future updates. We've been following the EyeDropper API since it was first proposed and have been experimenting with it as different parts became available, as well as providing input while the feature was being developed. In [Polypane 7](/blog/polypane-7) we started using it extensively for the new color picker and new palettes. ## How to use the EyeDropper API The API adds a new global, `EyeDropper` (or `window.EyeDropper`), that you can use to set up a new eyedropper object: ```javascript const eyeDropper = new EyeDropper(); ``` This eyeDropper object has one function, `eyeDropper.open()`. This starts the color picker and changes the user's cursor into a color picker, complete with magnified area and a highlighted pixel. This function returns a promise, so you can use it either with `await` or as a promise. One gotcha is that it only works when called from **a user-initiated event**. This is part of the security model, to prevent websites from potentially scraping pixels without the user knowing. ### Detecting support for the EyeDropper API Because the API is only available in Chromium, you will need to check for support before using it. 
The most straightforward way to do that is to only offer your color picking UI when `window.EyeDropper` is not undefined: ```javascript if (window.EyeDropper) { // Okay to use EyeDropper } else { // Hide the UI } ``` ### `await` based version ```javascript // won't work const result = await eyeDropper.open(); // works document.querySelector('.colorbutton') .addEventListener('click', async () => { const result = await eyeDropper.open(); }); ``` The `eyeDropper.open()` call will settle in two situations: 1. The user clicks anywhere on the screen. 2. The user presses the <kbd>Esc</kbd> key. In the second situation the eyeDropper will throw an exception, but in the first situation you will get a `ColorSelectionResult` object, which has an `sRGBHex` property containing the picked color in hexadecimal format. In your code you can check if `result.sRGBHex` is defined and then do with it what you want. ```javascript document.querySelector('.colorbutton') .addEventListener('click', async () => { const result = await eyeDropper.open(); if (result.sRGBHex) { console.log(result.sRGBHex); } }); ``` You don't _have_ to handle the exception, but if you want to give the user feedback that they cancelled the eyedropper, you need to add a `try...catch` to the code: ```javascript document.querySelector('.colorbutton') .addEventListener('click', async () => { try { const result = await eyeDropper.open(); if (result.sRGBHex) { console.log(result.sRGBHex); } } catch (e) { console.log(e); // "DOMException: The user canceled the selection." } }); ``` ### Promise based version You don't have to use the `await` version. `eyeDropper.open()` returns a promise, so adding a `.then()` and `.catch()` also works: ```javascript document.querySelector('.colorbutton') .addEventListener('click', () => { eyeDropper .open() .then((result) => { console.log(result.sRGBHex); }) .catch((e) => { console.log(e); // "DOMException: The user canceled the selection." 
}); }); ``` ## Things to keep in mind when using the EyeDropper API There are two gotchas we've found with the API, at least as it's currently implemented in Chromium, that you should be aware of. ### Color picking does not use the live screen At least in the current implementation, the color picker gets the pixels as shown on the screen when you call `.open()`. This means that if you're playing a video, the color picker will show the pixels of the frame that was visible then, not the live video. This is dependent on the implementation and we hope a future update of Chromium will allow for live data. ### Color picking only works as the result of a user action As mentioned earlier, you need a user-initiated event to open the eye dropper. This is to prevent sites from opening the eyedropper UI to start scraping your screen right on load. Instead, the user needs to perform an action for the API to work, like a click or keypress. ## Features we want to see added The EyeDropper API is still very young and minimal. During our implementation we encountered a number of features that we would like to see added to the API in future updates. ### Live preview of the hovered color A major component of many eye droppers, like those in design tools, is that they also show a preview swatch of the hovered color. You can use this to compare it to another swatch or quickly check a HEX code. The current API does not offer this due to security concerns. We have filed an issue against the EyeDropper API on GitHub for this: [#6 Live feedback is needed](https://github.com/WICG/eyedropper-api/issues/6). ### A more extensive color model Currently, all colors are returned in the sRGB color model. This means the API won't accurately return colors outside the sRGB spectrum, for example those on Apple's P3 screens. How to deal with this is [an open issue](https://github.com/WICG/eyedropper-api/issues/3). Work is also happening on a [new Color API for the web](https://github.com/WICG/color-api). 
The EyeDropper API could use this Color API when it lands in future versions of browsers. ### A more natural way to select multiple colors Because of the current security model, each time a user picks a color they need to re-initiate a user action which can be tedious. For example if you want to create a palette of colors in one go, you want to start picking colors, click on all the colors you want to add and then close out of the eye dropper. We similarly filed an issue for this on Github: [#9 Do we expect multiselect to work?](https://github.com/WICG/eyedropper-api/issues/9) and this feature is currently being considered. For this it would be nice if we could designate a part of the page (like a button) as an area where the EyeDropper doesn't work, that instead functions as a "done" button. This way users can select multiple colors and then click that button when they're done. ## Other browsers For now, the API is only available in Chromium based browsers from version 95 on and there has not been a signal from Safari and Firefox yet. If you want those browsers to support the EyeDropper API as well, add your support to the open issues: [Issue #1728527 for Firefox](https://bugzilla.mozilla.org/show_bug.cgi?id=1728527) and [Issue #229755 for Safari](https://bugs.webkit.org/show_bug.cgi?id=229755). The EyeDropper API is a nice addition to the browser that we hope to see land in more browsers. We make good use of it in Polypane and would like to see it be developed further.
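If you need the picked color as numeric channels rather than a hex string, a small helper can convert the `sRGBHex` value. Note that this `hexToRgb` helper is our own illustrative sketch and not part of the EyeDropper API:

```javascript
// Illustrative helper (not part of the EyeDropper API): convert the
// sRGBHex string from a ColorSelectionResult, e.g. "#1a2b3c", into
// numeric red/green/blue channels.
function hexToRgb(sRGBHex) {
  const hex = sRGBHex.replace(/^#/, '');
  return {
    r: parseInt(hex.slice(0, 2), 16),
    g: parseInt(hex.slice(2, 4), 16),
    b: parseInt(hex.slice(4, 6), 16),
  };
}

// Example: hexToRgb('#ff8000') returns { r: 255, g: 128, b: 0 }
```

You could call this inside the `.then()` handler, or on the awaited result, before passing the color on to the rest of your app.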
kilianvalkhof
895,553
Advent of Code 2020 - Day 15
In the spirit of the holidays (and programming), I’ll be posting my solutions to the Advent of Code...
0
2021-11-28T02:23:23
https://ericburden.work/blog/2020/12/16/advent-of-code-2020-day-15/
--- title: Advent of Code 2020 - Day 15 published: true date: 2020-12-16 00:00:00 UTC tags: canonical_url: https://ericburden.work/blog/2020/12/16/advent-of-code-2020-day-15/ --- In the spirit of the holidays (and programming), I’ll be posting my solutions to the [Advent of Code 2020](https://adventofcode.com/) puzzles here, at least one day after they’re posted (no spoilers!). I’ll be implementing the solutions in R because, well, that’s what I like! What I won’t be doing is posting any of the actual answers, just the reasoning behind them. Also, as a general convention, whenever the puzzle has downloadable input, I’m saving it in a file named `input.txt`. # Day 15 - Rambunctious Recitation Find the problem description [HERE](https://adventofcode.com/2020/day/15). ## Part One - ~~Reindeer~~ Elf Games For part one, we have elves playing a memory game. Given a list of numbers, the elves say each number in turn. When they reach the end of the list, they begin saying the number of rounds it has been since the number spoken on the previous turn had been spoken prior to the previous turn. So, for the list `0, 3, 6`, you get `[0, 3, 6], 0, 3, 3, 1, 0, 4, 0` for the first 10 turns. The real question is: What will the number spoken on turn 2020 be? I’ll be honest, the implementation below isn’t my first crack at this one, but, for reasons that will become clear later (or sooner if you’re starting to get an idea of how these puzzles are structured), it was the fastest implementation I could come up with in R. ```r library(testthat) # For tests # Given a starting sequence `start_vec` and a number `n`, returns the `n`th # number of the elves' counting game, according to the rules in the puzzle # instructions. Note, the `rounds` vector below contains, at each index, the # last round (prior to the most recent round) that the number (index - 1) was # spoken. This is because R is 1-indexed, so the value for '0' is stored in # index '1' and so on. 
number_spoken <- function(start_vec, n) { rounds <- numeric(0) # Empty vector for the round a number was last spoken start_len <- length(start_vec) # Length of the `start_vec` last_number <- start_vec[start_len] # Last number spoken, starting value # Fill in the starting numbers into `rounds` for (i in 1:start_len) { rounds[[start_vec[i]+1]] <- i } # For each number after the starting vector... for (i in (start_len+1):n) { index <- last_number + 1 # Correction for 1-indexing # If the `index` either contains the number of the last round or an NA, # then the last round was the first time `last_number` was spoken and the # `next_number` should be '0'. Otherwise, the `next_number` should be the # number of the last round (i-1) minus the number of the previous round # in which the number was spoken (rounds[last_number+1]) next_number <- if (is.na(rounds[index]) || rounds[index] == i - 1) { 0 } else { (i - 1) - rounds[last_number+1] } rounds[last_number+1] <- i - 1 # Update the round number stored for this number last_number <- next_number # The new `last_number` is this `next_number` # Sanity Check if (i %% 10000 == 0) { cat('\r', paste('Simulating round:', i)) } } next_number # Return the `n`th number spoken } test_that("sample inputs return expected results", { expect_equal(number_spoken(c(0, 3, 6), 2020), 436) expect_equal(number_spoken(c(1, 3, 2), 2020), 1) expect_equal(number_spoken(c(1, 2, 3), 2020), 27) expect_equal(number_spoken(c(2, 3, 1), 2020), 78) expect_equal(number_spoken(c(3, 2, 1), 2020), 438) expect_equal(number_spoken(c(3, 1, 2), 2020), 1836) }) answer1 <- number_spoken(c(2, 0, 1, 7, 4, 14, 18), 2020) # Answer: 496 ``` For reasons that I can’t recall, I was inspired to use testing for this puzzle. I think it had something to do with the number of test inputs we were given, since the `testthat` package and functions give me an easy way to test them all at once. 
I could also make tweaks to my implementation while being sure I didn’t miss any known edge cases. ## Part Two - MORE! [![](https://ericburden.work/blog/2020-12-16-advent-of-code-2020-day-15/index.en_files/day_15_elf_meme.jpeg)](https://www.reddit.com/r/adventofcode/comments/kdiu1k/2020_day_15_when_elves_wanna_play_with_you/) ```r source('exercise_1.R') # Yep, that's it. answer2 <- number_spoken(c( 2, 0, 1, 7, 4, 14, 18), 30000000) ``` This is why the speed of the implementation mattered. ## Wrap-Up Ah, there are some days when I’m jealous of the folks using compiled languages, but then I realize that my day job makes MUCH better use of `shiny` dashboards and data analysis with `tidyverse` than it does blazing fast array lookups, and I snap out of it. That said, this was an excellent opportunity to learn some things about doing things quickly in R (which is something I hadn’t spent much time on up to now). My brief takeaway is: getting a value from a known index in a vector, even if the vector contains a lot of `NA`’s , is pretty snappy. It’s definitely faster than looking up values by key in an environment, and it’s _way_ faster than using a named list as a sort of pseudo-dictionary. Overall, a good day! If you found a different solution (or spotted a mistake in one of mine), please drop me a line!
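For readers who don't use R, here is a sketch of the same counting-game logic in Python (not part of the original post), using a dict to track the last turn each number was spoken instead of an index-shifted vector:

```python
def number_spoken(start, n):
    """Return the n-th number spoken in the elves' memory game."""
    # last_seen[x] = the most recent turn (before the current one) on
    # which x was spoken; the final starting number isn't recorded yet.
    last_seen = {value: turn + 1 for turn, value in enumerate(start[:-1])}
    last = start[-1]
    for turn in range(len(start), n):
        # If `last` was spoken before, speak the gap; otherwise speak 0.
        next_number = turn - last_seen[last] if last in last_seen else 0
        last_seen[last] = turn
        last = next_number
    return last

print(number_spoken([0, 3, 6], 2020))  # 436, matching the puzzle example
```

Because lookups are O(1) either way, this runs the 30,000,000-turn variant in a few seconds as well.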
ericwburden
895,571
What is the difference between JOIN and INNER JOIN in SQL?
Introduction If you've ever used SQL, you probably know that JOINs can be very confusing....
0
2021-11-11T18:52:38
https://devdojo.com/tutorial/what-is-the-difference-between-join-and-inner-join-in-sql
database, sql, 100daysofcode, webdev
# Introduction If you've ever used SQL, you probably know that `JOIN`s can be very confusing. In this quick post we are going to learn what the difference between `JOIN` and `INNER JOIN` is! # Difference between JOIN and INNER JOIN Actually, `INNER JOIN` and `JOIN` are functionally equivalent. You can think of this as: ``` INNER JOIN == JOIN ``` What you need to remember is that `INNER JOIN` is the default if you don't specify the type when you use the word JOIN. However, keep in mind that `INNER JOIN` can be a bit clearer to read, especially when your query contains other join types. Also, keep in mind that some database management systems, like Microsoft Access, don't allow just `JOIN` and require you to specify `INNER` as the join type. # What is an `INNER JOIN` Now that we know the functionality is equivalent, let's start by quickly mentioning what an `INNER JOIN` is. ![INNER JOIN](https://imgur.com/np7HEeX.png) The `INNER` join is used to join two tables. However, unlike the `CROSS` join, by convention, it is based on a condition. By using an `INNER` join, you can match the first table to the second one. As we have a one-to-many relationship, a best practice would be to use a primary key for the posts `id` column and a foreign key for the `user_id`; that way, we can 'link' or relate the users table to the posts table. However, this is beyond the scope of this SQL basics eBook, though I might extend it in the future and add more chapters. As an example and to make things a bit clearer, let's say that you wanted to get all of your users and the posts associated with each user. The query that we would use will look like this: ```sql SELECT * FROM users INNER JOIN posts ON users.id = posts.user_id; ``` Rundown of the query: * `SELECT * FROM users`: This is a standard select we've covered many times in the previous chapters. * `INNER JOIN posts`: Then, we specify the second table, which we want to join to the result set. 
* `ON users.id = posts.user_id`: Finally, we specify how we want the data in these two tables to be merged. `users.id` is the `id` column of the `users` table, which is also the primary key, and `posts.user_id` is the foreign key in the posts table referring to the `id` column in the users table. The output will be the following, associating each user with their post based on the `user_id` column: ``` +----+----------+----+---------+-----------------+ | id | username | id | user_id | title           | +----+----------+----+---------+-----------------+ |  1 | bobby    |  1 |       1 | Hello World!    | |  2 | devdojo  |  2 |       2 | Getting started | |  3 | tony     |  3 |       3 | SQL is awesome  | |  2 | devdojo  |  4 |       2 | MySQL is up!    | |  1 | bobby    |  5 |       1 | SQL             | +----+----------+----+---------+-----------------+ ``` Note that the INNER JOIN could (in MySQL) equivalently be written merely as JOIN, but that can vary for other SQL dialects: ```sql SELECT * FROM users JOIN posts ON users.id = posts.user_id; ``` The main things that you need to keep in mind here are the `INNER JOIN` and `ON` clauses. With the inner join, the `NULL` values are discarded. For example, if you have a user who does not have a post associated with it, that user will not be displayed when running the above `INNER` join query. To get the null values as well, you would need to use an outer join. # Conclusion This is pretty much it! Now you know what the difference between a JOIN and an INNER JOIN is! In case you are just getting started with SQL, I would suggest checking out this free eBook here: [💡 Introduction to SQL eBook](https://github.com/bobbyiliev/introduction-to-sql) In case you are already using SQL on a daily basis and are looking for a way to drastically reduce the latency of your data analytics, make sure to check out [Materialize](https://materialize.com/)! ![Materialize - a streaming database](https://imgur.com/52d9a6h.png) Materialize is a Streaming Database for Real-time Analytics. 
Materialize is a reactive database that delivers incremental view updates and it helps developers easily build with streaming data using standard SQL.
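If you'd like to check the `JOIN`/`INNER JOIN` equivalence yourself, here is a quick sketch (added for illustration, not from the original tutorial) using Python's built-in `sqlite3` module and a trimmed-down version of the users/posts example, where `tony` is given no posts so the `LEFT JOIN` difference is visible:

```python
import sqlite3

# Build an in-memory version of the users/posts example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'bobby'), (2, 'devdojo'), (3, 'tony');
    INSERT INTO posts VALUES (1, 1, 'Hello World!'), (2, 2, 'Getting started');
""")

inner = conn.execute(
    "SELECT username, title FROM users INNER JOIN posts ON users.id = posts.user_id"
).fetchall()
plain = conn.execute(
    "SELECT username, title FROM users JOIN posts ON users.id = posts.user_id"
).fetchall()
left = conn.execute(
    "SELECT username, title FROM users LEFT JOIN posts ON users.id = posts.user_id"
).fetchall()

print(inner == plain)  # True: JOIN and INNER JOIN return identical rows
print(left)            # 'tony' appears with a NULL (None) title via LEFT JOIN
```

The inner join returns two rows, while the left (outer) join returns three, keeping the user who has no posts.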
bobbyiliev
895,591
Lessons I learned during my first year of programming
I started programming in June of 2020, and have grown so much throughout this process. I wanted to...
0
2021-11-11T21:09:59
https://dev.to/codergirl1991/lessons-i-learned-during-my-first-year-of-programming-5891
beginners, webdev, programming, codenewbie
I started programming in June of 2020, and have grown so much throughout this process. I wanted to share the lessons I learned during my first year of programming. ## It's ok not to listen to everyone's opinions One of the things that I realized early on is that everyone has an opinion. The tech world has a lot of strong opinions about what you should do and the right way to do it. This creates a lot of conflicting advice for new developers. It's ok not to listen to everyone's opinion and to focus on choosing the advice that works best for your situation. Here is a good example of that. I remember a discussion on Reddit where a guy was conflicted because even though he loved his CS degree, people online were telling him that his degree would be useless for web development. He was seriously considering dropping out of his degree program to pursue the self-taught route even though he had one year left to go. I just remember thinking, "OMG no! Please don't make such an important life decision based on advice from complete strangers." At the end of the day, you should feel comfortable listening to advice and applying it to your situation if it makes sense to. But please don't blindly follow people's advice just because they speak so passionately about it. You have to remember it is just someone's opinion. Not fact. :smile: ## Fail forward When I first started, I was very conservative and did not want to take risks with my code. I was afraid to make mistakes and see error messages in the console. That is probably one of the reasons why I was stuck in tutorial hell for a few weeks in my learning journey. Tutorials created a safety net for me and walked me through each step of the process. Once I started to take more risks and try things, I felt like I was learning more. That approach forced me to read error messages, research more, read through documentation and ask more questions. Sometimes, you need to fail a few times before you arrive at the correct answer. 
Then you can look at your failed attempts and understand why it didn't work. Also, when someone else is struggling with a similar issue, you can share your tips for debugging it. ## Know your learning style Everyone has different learning styles and it is important to know what works best for you. During this learning process, I quickly realized that I don't do well with video formats. For some reason, I tend to zone out after 15 minutes of a video course. I started using articles and documentation as my main source of learning. I felt I could retain the information better through the written form. If I had to watch a video, I made sure to pause it every 15-20 minutes just to absorb what I was learning. Sometimes, I would also take breaks to practice the examples provided in the video. It is really important to identify your learning style so you can choose the appropriate resources. ## Some concepts take a while to understand and that's ok I think one of the hardest parts about learning is struggling to understand a concept. The first time I was introduced to recursion, it didn't make sense to me at all. People just kept saying it was a function that calls itself. My reaction was, "But how?" :laughing: I remember reading through many articles and kind of understanding it. But it wasn't until I watched a video by [CS 50's Doug Lloyd](https://www.youtube.com/watch?v=mz6tAJMVmfM) that it finally clicked for me. The way he was able to explain what was really happening "underneath the hood" made so much sense to me. While you are learning, it is completely normal to struggle with certain concepts. Sometimes it takes a few tries before the light bulb goes off. ## There is no benefit to learning to code quickly I have never been a fan of the learn to code quickly narrative. There are so many videos and articles talking about how you can learn to code in 3 months and land a job. 
Most of the time this narrative is pushed by those with an agenda to try and sell you something. I see no benefit to rush the learning process because you are going to end up creating serious holes in your education. Plus, your boss isn't going to pay you more money because you landed a job in 6 months versus someone who took longer. I found that it was better to take my time and build a healthy solid foundation instead of rushing through things. I think that has served me well in the junior developer role I have now. ## Have the courage to define your own success I think a lot of people are heavily influenced by cultural or societal expectations when it comes to careers. You shouldn't feel pressured to take a certain type of job just to impress your friends and family. Just because you are not interested in pursuing a FAANG job doesn't mean you won't have a successful career. There are so many options out there and you should feel comfortable defining what success means to you. I love software development where I get to contribute to great projects and tackle new challenges. But I also love to write technical articles. I want to create a career that incorporates both of those things. Maybe you want to launch your own startup. Or you want to be a successful content creator and educator. Or maybe your dream really is to work at FAANG. Whatever your ambitions are, you should be able to pursue a path that makes sense for you. I hope you enjoyed this article and love to hear your thoughts in the comments below.
codergirl1991
895,764
Herding elephants: Wrangling a 3,500-module Gradle project
Note: please do not attempt to herd any actual elephants Every day in Square's Seller organization,...
0
2021-11-12T01:33:53
https://developer.squareup.com/blog/herding-elephants/
android, gradle
_Note: please do not attempt to herd any actual elephants_ Every day in Square's Seller organization, dozens of Android engineers run close to 2,000 local builds, with a cumulative cost of nearly 3 days per day on those builds. Our CI system runs more than 11,000 builds per day, for a cumulative total of over 48 days per day. (All data pulled from [Gradle Enterprise](https://gradle.com/).) To our knowledge, Square's Android mega-repo is one of the largest such in the world, and holds the source code for all of our Point of Sale apps, and more. The repository is a mix of Android applications, Android libraries, and JVM libraries, all in a mix of Kotlin (2 million LOC) and Java (1 million LOC), spread across more than 3,500 Gradle modules. We also have a handful of tooling modules written in Kotlin, Java, and Groovy. Not to mention innumerable scripts written in bash, Ruby, and Python. It doesn't bear thinking about the thousands and thousands of lines of YAML. Suffice it to say, ensuring all of this remains buildable day-to-day is a full-time job. Several full-time jobs, in fact. (Obligatory: [we're hiring](https://jobs.smartrecruiters.com/Square/743999777949577-software-engineer-mobile-developer-experience).) Roughly seven months ago, in April, the Mobile Developer Experience Android (MDXA) team embarked on a journey to modernize the code responsible for building everything—that is, the build logic. Prior to that effort, we had a mix of code in `buildSrc`, the root build.gradle, and many script plugins. Some of the script plugins were applied to the entire build, some were applied to subtrees, and some were applied in an ad hoc fashion to one or more modules. In other words, there was no "single source of truth" for the build, responsibilities were muddled, and it required considerable obstinance to want to interact with Gradle directly. Here were some of our goals when we began the project: * Enhance maintainability of our build logic. 
* Make it easier to upgrade to newer versions of Gradle, as well as core ecosystem plugins like Android Gradle Plugin (AGP) and Kotlin Gradle Plugin (KGP). * Eliminate cross-project configuration (think: `allprojects` and `subprojects`) so that every module (or "project", to use Gradle terminology) could be configured independently from every other. This should improve the configuration phase of the build, since those APIs defeat [configuration on demand](https://docs.gradle.org/current/userguide/multi_project_configuration_and_execution.html#sec:configuration_on_demand); and it also sets us up to be compatible with a new experimental Gradle feature known as [project isolation](https://gradle.github.io/configuration-cache/), which holds the promise of making slow Android Studio syncs a thing of the past. * Regularize our build scripts to such an extent that they could be parsed with build-system-agnostic tools, including regex and AST parsers. This lets us, for certain narrow use-cases, build Very Fast Tools™ that don't have Gradle's configuration bottleneck. One of these tools will be a Gradle-to-Bazel translator, which we'll use to run Gradle vs Bazel experiments as one input to that perennial question: which build system is best for us? ## What to expect if you keep reading This is not a technical deep dive. Instead, it is a high-level overview of what we did, why we did it, and what we gained from the effort. ## All code is production code Gradle is famously very flexible. This flexibility is convenient for spiking code and experimentation, but all too often such experiments end up becoming the (very rickety) foundation for a critical piece of software—the software that builds the other software! The cure is to treat the build code as an important piece of software in its own right, and to apply the same rigor you would to your "production", consumer-facing code. 
It just so happens that the consumer of the build logic is other engineers, rather than the general public. What this looks like in practice is that we do our best to follow software design and development best practices: the single responsibility principle, intention-revealing interfaces, preference of composition over inheritance, an extensive suite of unit and integration tests, and the use of many common object-oriented patterns such as the use of facades and adaptors, which let us abstract over ecosystem plugins such as AGP. This last is particularly important, as it helps us write future-proof code that is guaranteed to work™ for upcoming releases. The build domain is its own domain. Treat it with rigor and respect, and it will pay dividends. ## Model your build Every build follows a model, even if it is an implicit model named "hairball." One of our top objectives was to dramatically reduce, and even eliminate, all Gradle scripts other than those that are strictly necessary. That is, we wanted only **settings.gradle** and **build.gradle** scripts (also referred to as settings scripts and build scripts). No more script plugins (`apply from: 'complicated_thing.gradle'`) with hundreds of lines of imperative build logic that had to be interpreted dynamically by Gradle at run time, and which consumed _nearly a gigabyte_ of additional heap. Critically, we wanted to eliminate all conditional logic, as well as any _injection_ of configuration external to the module itself (e.g., from `allprojects` or `subprojects` blocks). We sought a kind of [Platonic ideal](https://en.wikipedia.org/wiki/Theory_of_forms) where every build script had, at most, three blocks: a `plugins` block declaring what kind of module it defined, a `dependencies` block declaring its dependencies, and a custom extension block (named, straightforwardly enough, `square`) that configured various optional characteristics that corresponded with our model of how we build our software. 
You should be able to look at the build script for a module and know what that module is all about from just those three declarations. That last part is worth emphasizing: we created a model of how we build _our_ software. We then formalized that model with so-called _convention plugins_. Your model for how you build your software would likely be different, but the concept, or pattern, of convention plugins is powerful enough that we think you'll find it useful, even with a different model. ### Convention plugins Convention plugins apply _conventions_ to a build. They do this by applying _ecosystem_ plugins such as AGP and KGP, and then configuring those plugins according to the convention used by your build. A simple example may help illustrate the change from the client (feature engineer) perspective. Consider this un-conventional build script: ```groovy apply plugin: 'com.android.library' apply plugin: 'kotlin-android' android { // lots of boilerplate } tasks.withType(KotlinCompile).configureEach { kotlinOptions { jvmTarget = JavaVersion.VERSION_11 } } ``` Now consider the new, conventional version of this script: ```groovy plugins { id 'com.squareup.android.lib' } ``` In many cases, this is precisely what we've achieved. The boilerplate disappears, and the common conventions (such as `jvmTarget` for compilation) are entirely baked into the new plugin where, critically, _they can be tested_. We've also dramatically reduced the cognitive load on feature engineers. The new convention plugins are a kind of "headline", indicating at a glance the _kind of module_ this is. The above example is for a "Square-flavored Android library" module. We also have convention plugins for `com.squareup.android.app`, `com.squareup.jvm.lib`, and more. ### Cognitive load It's not just an idle claim to say we've reduced the cognitive load when it comes to interpreting our Gradle scripts—we have metrics to prove it! 
We used the [CodeNarc](https://codenarc.org/) CLI tool to measure the [cyclomatic complexity](https://en.wikipedia.org/wiki/Cyclomatic_complexity) of those scripts. According to this tool, we've reduced the complexity of our root build script from 41 to 16, a 61% reduction. Another very complex script, `square_gradle_module.gradle`, which embedded many of the rules for how all modules were configured, was reduced from 75 to 0 (we deleted it!). I don't want to minimize the fact that some of this complexity has clearly been pushed elsewhere, but the _elsewhere_ in question is a place clearly designated as part of the build domain, separated from the feature domain, with a different context _and many tests_. The prior Groovy-based Gradle scripts had no tests, other than the implicit integration test named "does my build still work?" ### Tests I've mentioned tests several times now. Our convention plugins have a thorough suite of integration tests that use Gradle [TestKit](https://docs.gradle.org/current/userguide/test_kit.html). The tests are all data-driven specifications written with the [Spock](https://spockframework.org/spock/docs/2.0/all_in_one.html) testing framework, which makes it very easy to run tests against a matrix of important third-party dependencies. For example, most of our tests run against a matrix of Gradle and AGP versions (including unreleased versions), which give us confidence that everything will "just work" when there's a new release. We also plan to add dimensions for Kotlin and various JDKs, but those are "nice to haves" at the moment. One of the advantages of integration tests is that we don't have to worry about the minutiae of implementation details for our suite of plugins. Instead, we just invoke normal Gradle tasks and verify the outputs from the full build are as expected. Nevertheless, we also have many unit tests for complex logic—most of these are written in Kotlin and use JUnit5. 
## From `buildSrc` to build-logic It is clear that having build logic in [`buildSrc`](https://docs.gradle.org/current/userguide/organizing_gradle_projects.html#sec:build_sources) is an improvement over imperative code directly embedded in a build script or script plugin. Nevertheless, it's not ideal for a few reasons. To start, `buildSrc` is on the hot path for every build: Gradle must compile it _and run its check task_ (which includes tests) on every single build. While it's true that these tasks are generally pulled from the cache, even that can impose a non-negligible cost. `buildSrc` is also a bit too magical and prone to becoming a hairball of code that Gradle conveniently places on the build classpath so you don't have to think about it. This is fine for smaller projects, but for large projects, we want to think about it very carefully. One of the costs of this magic is that any change to `buildSrc`, no matter how seemingly trivial, invalidates the configuration of every module in your build, because it changes the classpath for the entire build—resulting in an awful developer experience. By cleanly separating our build logic from the rest of the build via the [included build facility](https://docs.gradle.org/current/userguide/composite_builds.html), we break this chain of misery and achieve a separation of concerns that is simply impossible with `buildSrc`. As a side benefit, build authors can import the build-logic build into an IDE separately from the main project, dramatically improving their productivity. ### A wild performance regression appeared! From included builds to publishing our build logic When we initially migrated from `buildSrc` to an included build, it seemed an elegant solution to the problems outlined above. We were dropping `buildSrc`, with all its problems, and using the modern facility that Gradle considers the preferred, idiomatic replacement for `buildSrc`. 
(For a very elaborate example of this, see the [idiomatic-gradle repo](https://github.com/jjohannes/idiomatic-gradle) by former Gradle engineer Jendrik Johannes.) Included builds are indeed the best of both worlds in many ways: they can exist under version control in the same repository as the build that uses them; they can be easily opened in a separate IDE instance, improving build engineer productivity; they don't have the automatic test behavior of `buildSrc`, which improves feature engineer performance; and they're generally very well-behaved, without any of the magic of `buildSrc`. It turns out, however, that they come with a cost. When an included build provides plugins used by the main build, Gradle must do some dependency resolution very early in the configuration process, which is unfortunately single-threaded. Essentially, for every plugin request, Gradle needs to check the included build(s) for plugins that might be substituted in. In the case of our Very Large Build, this [added 30s](https://developer.squareup.com/blog/measure-measure-measure) (about 33%) to the configuration phase for every single build! That kind of regression is totally unacceptable, but so was a revert to `buildSrc`. We ended up making the decision to publish our plugins to our Artifactory instance and resolve them like any other third-party Gradle plugin. This eliminated the regression and even improved over the original situation, since resolving a binary plugin is measurably faster than compiling (or pulling from the build cache) a plugin that lives in `buildSrc`. We also maintained the ability to use build-logic (a collection of 32 Gradle modules at time of writing) as an included build for rapid prototyping by build engineers. Our feature engineers only use the binary plugins, however, which is a boon for their productivity. ## Faster tools Just because you build your app with Gradle doesn't mean you have to do everything with Gradle! 
One benefit of having a regularized set of build scripts that follow a strict convention is then you can do things like: write a tiny Kotlin app to parse project dependencies recursively to build a trimmed-down list of projects for use by a settings script. I alluded to that in [this post](https://dev.to/autonomousapps/tools-of-the-build-trade-the-making-of-a-tiny-kotlin-app-3eba) when I mentioned that we had replaced a Gradle task that took an average of 2 minutes to run with a Kotlin app that took about 300 ms, for an estimated savings in recovered developer productivity of over $100,000 per year. ## Herding elephants The observant reader will have noticed that this post is about migrating over 3,500 Gradle modules to a new structure, which sounds like a lot of work just updating build.gradle files. And they'd be right. While we did migrate several hundred modules manually during the earliest stages of the project, mainly as a proof of concept, this was incredibly tedious and error prone. Ultimately, we wrote a tool based on Groovy AST transforms to parse and rewrite our gradle scripts in place, translating the old way of doing things into the new way. We're now working to productionize this tool to make future migrations even easier. 🎉 ## What we've gained The above barely scratches the surface of what we've done over the past half of a year, and the effort involved to do it for a project as large as ours. 
The gains range from the abstract to the very particular: we're modeling our build and embracing software development best practices throughout our codebase; we've reduced complexity and cognitive overhead for our feature engineers; we have built a thorough test suite that protects against regressions and enables quicker adoption of new releases of third-party software; we've reduced memory pressure and brought build times down even in the face of a growing codebase; and we've made it possible (through our build model) to express our build in various ways not necessarily tied to Gradle. _Special thanks to Roger Hu, Corbin McNeely-Smith, and Pierre-Yves Ricau for reviewing early drafts, to Zac Sweers for bouncing ideas off of, and to everyone else for reviewing the many, many PRs it took to get to this point._
autonomousapps
895,843
Catch2 - Testing Framework
Introduction This week, I work on my Static Site Generator (SSG) - Potato Generator. I...
0
2021-11-12T02:51:28
https://dev.to/kiennguyenchi/catch2-testing-framework-2cdg
opensource
# Introduction

This week, I worked on my Static Site Generator (SSG) - [Potato Generator](https://github.com/kiennguyenchi/potato-generator). I added the Catch2 testing framework to my project. There are many testing tools for C++, such as MSTest, Google Test, and the built-in testing support in Visual Studio, and it can take time to set them up. I picked the Catch2 framework for testing because it is easy to set up and easy to use.

# Process

* I downloaded the *catch.hpp* file from the [Catch2 repo](https://github.com/catchorg/Catch2).
* I created a *test* folder to hold *catch.hpp* and the other testing files.
* I added testing files named *test#.cpp* (where # stands for a number).
* After completing a test, I executed the command *g++ ./test/test#.cpp -o test# --std=c++17* in the CMD and then checked the output to see whether the test succeeded or failed.

# Testing file structure

* First of all, every testing file has the following structure. I defined *CATCH_CONFIG_MAIN* and included the *catch.hpp* file at the beginning of each test file, then created each test case with the following format.

```
#define CATCH_CONFIG_MAIN
#include "catch.hpp"

TEST_CASE("This is the name of the test case") {
    //code
}
```

* Inside a *TEST_CASE*, I use the REQUIRE macro to ensure that my functions return the correct outputs.
* If you want to test functions from a specific file, you need to include that file in the testing file. In my first test file, *test1.cpp*, I included *pgprogram.cpp* to test some functions that validate the command line arguments.
* I checked the case where no arguments are provided. Does the function output the correct message?

```
TEST_CASE("No Arguments provided") {
    char* list[1] = {"./pgprogram"};
    REQUIRE(checkArguments(1, list) == "Failed arguments provided");
}
```

* I checked the case where a valid argument is provided. Does the function output the correct message?
There are many more tests relating to the version, help, input, output, and language arguments, but I'll give one example here.

```
TEST_CASE("Arguments Check") {
    char* list[2] = {"./pgprogram", "-v"};
    REQUIRE(checkArguments(2, list) == "Potato Generator - Version 0.1");
}
```

* In *test2.cpp*, I included "HTMLFile.h" and implemented some tests to check the HTML file handling.
* First, I checked that the program produces nothing when no file is provided.

```
TEST_CASE("Check empty files") {
    HTMLFile file;
    REQUIRE(file.getTitle() == "");
    REQUIRE(file.getURL() != "");
}
```

* Second, I checked that when a correct file is input, its URL and title are set correctly.

```
TEST_CASE("Check existing files") {
    HTMLFile file;
    file.openFile("../Sherlock-Holmes-Selected-Stories/Silver Blaze.txt", "fr");
    REQUIRE(file.getTitle() == "Silver Blaze");
    REQUIRE(file.getURL() != "Silver Blaze.html");
}
```

* Lastly, of course, I needed to check what happens when a non-existent file is input. Does the program process anything? In this case, nothing should be processed.

```
TEST_CASE("Check non-existing files") {
    HTMLFile file;
    file.openFile("../Sherlock-Holmes-Selected-Stories/notafile.txt", "fr");
    REQUIRE(file.getTitle() == "");
    REQUIRE(file.getURL() != "");
}
```

# Conclusion

You can take a closer look at all the testing files that I have created so far, and the folder structure, in this [commit](https://github.com/kiennguyenchi/potato-generator/commit/b2b5321ccf4ec40e7a14506bd15a348dfd27e8dc). Overall, the testing tool is really helpful for testing our code and is worth implementing. That said, it can be overwhelming to brainstorm all the possibilities that could happen and put them into tests.
kiennguyenchi
895,948
Learning Workflows: Four Ways to Invoke a Workflow
In this video you will learn how to invoke a workflow. There are four ways and they are: scheduled,...
0
2022-02-10T17:52:23
https://maxkatz.org/2021/11/11/learning-workflows-four-ways-to-invoke-a-workflow/
nocode, workflows
---
title: "Learning Workflows: Four Ways to Invoke a Workflow"
published: true
date: 2021-11-12 05:10:40 UTC
tags: NoCode,Workflows
canonical_url: https://maxkatz.org/2021/11/11/learning-workflows-four-ways-to-invoke-a-workflow/
---

In this video you will learn how to invoke a workflow. There are four ways: scheduled, app event, API, and helper flow.

[![Type column shows how a flow will be invoked](https://katzmax.files.wordpress.com/2021/11/learningworkflows_invoke_types2.png?w=1024)](https://katzmax.files.wordpress.com/2021/11/learningworkflows_invoke_types2.png)

{% youtube iMk_GNnGabc %}

Want to learn more? Watch other videos in the [Learning Workflows series](https://www.youtube.com/playlist?list=PLSAWywyhniCM6CKkp5vIiD0-pxSbdX71y).
maxkatz
896,078
How to get randomly sorted recordsets in Strapi
Lately I had to build page that shows the details of a recordset, and at the bottom a section...
0
2021-11-12T16:31:28
https://dev.to/drazik/how-to-get-randomly-sorted-recordsets-in-strapi-522d
strapi, headlesscms, javascript
---
title: How to get randomly sorted recordsets in Strapi
published: true
description:
tags: strapi,headlesscms,javascript
cover_image: https://images.unsplash.com/photo-1605870445919-838d190e8e1b?ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=1172&q=80
---

Lately I had to build a page that shows the details of a recordset, with an "Others" section at the bottom showing two randomly picked recordsets that the user can click on to see their details. Of course, the "others" recordsets should not include the recordset the user is currently viewing.

The project's stack is [Next.js](https://nextjs.org/) for the frontend and [Strapi](https://strapi.io/) for the backend. In this post, we will focus on the backend side and see how we can return random recordsets of a Strapi collection type.

You may think "wait, Strapi exposes an API with lots of parameters available, it should be possible to simply pass a param and this job is done". The thing is... there is no value that we can pass to the `_sort` parameter to sort randomly. So we will need to build a custom endpoint for our Partnerships collection type to get some randomly picked recordsets.

First, we need to add a route. Let's add it to `api/partnership/config/routes.json`:

```json
{
  "routes": [
    {
      "method": "GET",
      "path": "/partnerships/random",
      "handler": "partnership.random",
      "config": {
        "policies": []
      }
    }
  ]
}
```

Nice, now we can create the `random` method in the Partnership controller. Let's go into `api/partnership/controllers/partnership.js` and implement a dumb `random` method to see if we can reach it:

```js
"use strict";

module.exports = {
  async random() {
    return "Hello world"
  }
}
```

Then go to `http://localhost:1337/partnerships/random` in our browser... to see an HTTP 403 error. This is normal: by default, Strapi endpoints are not publicly reachable. We should go to Strapi's admin UI and check the `random` endpoint under the Partnership model in Settings > Role > Public.
![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/10lda88mztzpa7wvo6u7.png)

Save these settings and try to reach the random endpoint again. It now shows our Hello world :tada:. We can now implement the endpoint.

First, we need to get all recordsets randomly sorted. To achieve this, we will need to build a query. Strapi uses [Bookshelf](https://bookshelfjs.org) as an ORM, so we can start by getting our Partnership model and running a query on it. Inside the query, we get a [knex](https://knexjs.org/) query builder instance (knex is the query builder that Bookshelf uses under the hood). On this query builder instance, we can then ask to order recordsets randomly. Let's try this:

```js
async random() {
  const result = await strapi
    .query("partnership")
    .model.query((qb) => {
      qb.orderByRaw("RANDOM()")
    })
    .fetchAll()

  return result.toJSON()
}
```

Note that `RANDOM()` is the function provided by SQLite and PostgreSQL; if your database is MySQL, the equivalent is `RAND()`.

Try to reach the `/partnerships/random` endpoint and see that we get all partnerships randomly sorted. This can do the trick if you just want to get all the recordsets. But in my case, I wanted to have the possibility to exclude some recordsets by their ID, and to limit the number of recordsets returned. Here is how I did it:

```js
async random({ query }) {
  const DEFAULT_LIMIT = 10
  const limit = query._limit || DEFAULT_LIMIT
  const excludedIds = query.id_nin || []

  const result = await strapi
    .query("partnership")
    .model.query((qb) => {
      qb
        .whereNotIn("id", excludedIds)
        .orderByRaw("RANDOM()")
        .limit(limit)
    })
    .fetchAll()

  return result.toJSON()
}
```

This way I can get 2 random partnerships, never including the partnership with the ID `1` in the returned recordsets, by doing:

```js
const url = new URL("http://localhost:1337/partnerships/random")
url.search = new URLSearchParams({
  "id_nin[]": [1],
  _limit: 2
}).toString()

const response = await fetch(url)
const data = await response.json()
```

Hope it helps!
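If you ever need the same behavior without SQL (for instance in a unit test or an in-memory fallback), the endpoint's logic can be sketched in plain JavaScript, with a Fisher-Yates shuffle playing the role of `ORDER BY RANDOM()`. The `randomRecordsets` helper and its sample data below are illustrative, not part of Strapi's API:

```javascript
// In-memory equivalent of: WHERE id NOT IN (...) ORDER BY RANDOM() LIMIT n
function randomRecordsets(rows, excludedIds, limit) {
  // whereNotIn("id", excludedIds)
  const candidates = rows.filter((row) => !excludedIds.includes(row.id));

  // orderByRaw("RANDOM()"), approximated with a Fisher-Yates shuffle
  for (let i = candidates.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [candidates[i], candidates[j]] = [candidates[j], candidates[i]];
  }

  // limit(n)
  return candidates.slice(0, limit);
}

const partnerships = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }];
const picked = randomRecordsets(partnerships, [1], 2);
console.log(picked.length); // 2
console.log(picked.some((p) => p.id === 1)); // false
```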
Cover photo by <a href="https://unsplash.com/@edge2edgemedia?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Edge2Edge Media</a> on <a href="https://unsplash.com/s/photos/random?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
drazik
896,119
Discovering Scrum Artifacts and Their Commitments
Scrum is a technique that helps teams work together. Just as a sports team prepares for a decisive...
0
2021-11-12T11:13:04
https://dev.to/fireartd/discovering-scrum-artifacts-and-their-commitments-21en
devops, ux, webdev
<p class="wow fadeIn animated" data-wow-delay=".05s">Scrum is a technique that helps teams work together.</p> <blockquote class="wow fadeIn" data-wow-delay=".05s"> <p><em>Just as a sports team prepares for a decisive game, the team of company employees should learn from the experience gained, master the principles of self-organization, work on solving a problem and analyze their successes and failures to constantly improve.</em>&nbsp;(AZN Research)</p> </blockquote> <p class="wow fadeIn" data-wow-delay=".05s">Scrum contributes to this. It also defines scrum artifacts and their commitments. Let&rsquo;s review them.<span id="more-17072"></span></p> <figure id="attachment_17073" class="wp-caption aligncenter wow fadeIn" data-wow-delay=".05s"><img class="wp-image-17073 size-full" src="https://fireart.studio/wp-content/uploads/2021/11/scrum-1.jpg" sizes="(max-width: 864px) 100vw, 864px" srcset="https://fireart.studio/wp-content/uploads/2021/11/scrum-1.jpg 864w, https://fireart.studio/wp-content/uploads/2021/11/scrum-1-300x208.jpg 300w, https://fireart.studio/wp-content/uploads/2021/11/scrum-1-768x533.jpg 768w, https://fireart.studio/wp-content/uploads/2021/11/scrum-1-616x428.jpg 616w, https://fireart.studio/wp-content/uploads/2021/11/scrum-1-278x193.jpg 278w, https://fireart.studio/wp-content/uploads/2021/11/scrum-1-640x444.jpg 640w, https://fireart.studio/wp-content/uploads/2021/11/scrum-1-450x313.jpg 450w" alt="" width="864" height="600" /> <figcaption id="caption-attachment-17073" class="wp-caption-text">Taken from&nbsp;<a href="https://dribbble.com/Fireart-d" target="_blank" rel="nofollow">Dribbble</a></figcaption> </figure> <h2 class="wow fadeIn" data-wow-delay=".05s">What Are Scrum Artifacts?</h2> <p class="wow fadeIn" data-wow-delay=".05s">What do scrum artifacts mean?</p> <p class="wow fadeIn" data-wow-delay=".05s">Artifacts are the material representation of work or value. 
There are three main scrum artifacts: Product Backlog, Sprint Backlog, and Increment.</p> <p class="wow fadeIn" data-wow-delay=".05s">Each has its commitment to the overall result.</p> <figure id="attachment_17074" class="wp-caption aligncenter wow fadeIn" data-wow-delay=".05s"><img class="wp-image-17074 size-full" src="https://fireart.studio/wp-content/uploads/2021/11/team.jpg" sizes="(max-width: 858px) 100vw, 858px" srcset="https://fireart.studio/wp-content/uploads/2021/11/team.jpg 858w, https://fireart.studio/wp-content/uploads/2021/11/team-300x200.jpg 300w, https://fireart.studio/wp-content/uploads/2021/11/team-768x511.jpg 768w, https://fireart.studio/wp-content/uploads/2021/11/team-643x428.jpg 643w, https://fireart.studio/wp-content/uploads/2021/11/team-290x193.jpg 290w, https://fireart.studio/wp-content/uploads/2021/11/team-640x426.jpg 640w, https://fireart.studio/wp-content/uploads/2021/11/team-450x299.jpg 450w" alt="" width="858" height="571" /> <figcaption id="caption-attachment-17074" class="wp-caption-text"><a href="https://fireart.studio/about-us/" target="_blank" rel="noopener">Fireart Studio</a>&nbsp;agile team</figcaption> </figure> <h2 class="wow fadeIn" data-wow-delay=".05s">The Main Artifacts of Agile Scrum</h2> <p class="wow fadeIn" data-wow-delay=".05s">Thus, Product Backlog, Sprint Backlog, and Increment are the minimum required artifacts in a Scrum project. At the same time, the set of project artifacts is not limited to these, as related items such as the Product Vision or Sprint Goal are sometimes counted among them as well. Let&rsquo;s consider each of them in a bit of detail.</p> <h3 class="wow fadeIn" data-wow-delay=".05s">Product Vision</h3> <p class="wow fadeIn" data-wow-delay=".05s">Alongside the 3 scrum artifacts, you may come across the so-called Product Vision.</p> <p class="wow fadeIn" data-wow-delay=".05s">Product Vision is the starting point of a project. 
The mission and vision have global ambitions &ndash; they help in developing the right product, close in spirit to the creators.</p> <p class="wow fadeIn" data-wow-delay=".05s">In general, Product Vision reflects the team&rsquo;s and stakeholders&rsquo; internal understanding of the product and its purpose, and acts as a guideline for the development of the product.</p> <h3 class="wow fadeIn" data-wow-delay=".05s">Product Backlog</h3> <p class="wow fadeIn" data-wow-delay=".05s">A product backlog is a list of all product elements, requirements for them, and any product-related information. The product backlog is formed throughout the project (it can be supplemented, detailed, or reduced, etc). It is good practice to involve the entire Team in correcting the Product Backlog, although it&rsquo;s the&nbsp;<a href="https://searchsoftwarequality.techtarget.com/definition/product-owner#:~:text=A%20product%20owner%20is%20a,and%20optimizing%20the%20product%20backlog." target="_blank" rel="nofollow">Product Owner</a>&nbsp;who is mainly responsible for maintaining the Product Backlog.</p> <p class="wow fadeIn" data-wow-delay=".05s">There is a product backlog in almost any&nbsp;<a href="https://fireart.studio/blog/how-developers-may-approach-choosing-the-right-technology-stack-for-the-project/" target="_blank" rel="noopener">Agile project</a>: it is a convenient tool for recording requirements, tasks and product elements. To assess the quality of the Backlog, the&nbsp;<a href="https://www.productplan.com/glossary/deep-backlog/" target="_blank" rel="nofollow">DEEP tool</a>&nbsp;is used: a good Backlog should contain detailed, estimated, independent, and prioritized elements.</p> <p class="wow fadeIn" data-wow-delay=".05s">The highest priority elements should have the maximum granularity. Evaluating the Backlog items helps us plan what will go into the next sprint and what we won&rsquo;t be able to do in time. 
As the product is created, new requirements may be added to the Backlog.</p> <h3 class="wow fadeIn" data-wow-delay=".05s">Sprint Backlog</h3> <p class="wow fadeIn" data-wow-delay=".05s">This scrum artifact aims to determine the amount of work that the Team will take on in the next sprint. During Sprint Planning, the Team selects the highest priority elements (product requirements) from the Product Backlog, details them down to the task level, and estimates the effort and interrelationships between tasks.</p> <p class="wow fadeIn" data-wow-delay=".05s">At this stage, the Product Owners should remember that they can only help the team understand what needs to be done in the sprint but not specify or select the tasks that need to be included in the Sprint Backlog. The team, in turn, takes on only the amount of work that it can actually complete. Also, you need to understand that a team&rsquo;s decision does not guarantee that all planned development tasks will be completed. Unforeseen difficulties or new external factors may arise, naturally.</p> <h3 class="wow fadeIn" data-wow-delay=".05s">Product Increment</h3> <p class="wow fadeIn" data-wow-delay=".05s">The Product Increment is the sum of the Product Backlog Items completed during the sprint and all the increments from the previous sprints. That is, the current state of the product being developed, including the finalization of the last sprint.</p> <h3 class="wow fadeIn" data-wow-delay=".05s">Sprint Goal</h3> <p class="wow fadeIn" data-wow-delay=".05s">This is an open meeting to showcase the results of the&nbsp;<a href="https://fireart.studio/services/agile-team-of-js-programmers/" target="_blank" rel="noopener">Agile sprint.</a>&nbsp;This review should provide feedback from users on the reported results. If necessary, you can update the Product Backlog. 
In a nutshell, sprint goals clarify your purpose during a program increment.</p> <p class="wow fadeIn" data-wow-delay=".05s">This is more than a multi-stakeholder project status meeting. This meeting provides an opportunity for the team to communicate directly with users and stakeholders, hear their requirements and comments on the work performed. It is worth remembering that the goal of the Sprint Review is not to show the product increment in all its glory but to provide an opportunity for users to analyze the work that has been done and ensure transparency of the project&rsquo;s progress.</p> <h3 class="wow fadeIn" data-wow-delay=".05s">The Burndown Chart</h3> <p class="wow fadeIn" data-wow-delay=".05s">The Burndown Chart visually shows the Team&rsquo;s progress in&nbsp;<a href="https://www.visual-paradigm.com/scrum/what-is-story-point-in-agile/" target="_blank" rel="nofollow">Story Points</a>. It is a graphical representation of how much work has already been done and how much remains to be done. 
As the Sprint days pass and the tasks on the Sprint Board go to Done, the burndown chart goes down, showing the Team its progress towards the Sprint Goal.</p> <figure id="attachment_17075" class="wp-caption aligncenter wow fadeIn" data-wow-delay=".05s"><img class="wp-image-17075 size-full" src="https://fireart.studio/wp-content/uploads/2021/11/team2.jpg" sizes="(max-width: 781px) 100vw, 781px" srcset="https://fireart.studio/wp-content/uploads/2021/11/team2.jpg 781w, https://fireart.studio/wp-content/uploads/2021/11/team2-300x216.jpg 300w, https://fireart.studio/wp-content/uploads/2021/11/team2-768x554.jpg 768w, https://fireart.studio/wp-content/uploads/2021/11/team2-594x428.jpg 594w, https://fireart.studio/wp-content/uploads/2021/11/team2-268x193.jpg 268w, https://fireart.studio/wp-content/uploads/2021/11/team2-640x461.jpg 640w, https://fireart.studio/wp-content/uploads/2021/11/team2-450x324.jpg 450w" alt="" width="781" height="563" /> <figcaption id="caption-attachment-17075" class="wp-caption-text"><a href="https://dribbble.com/shots/11220196-Tips-to-Hire-a-Mobile-App-Development-Company" target="_blank" rel="nofollow">Tips to Hire a Mobile App Development Company</a></figcaption> </figure> <h2 class="wow fadeIn" data-wow-delay=".05s">Artifact Transparency</h2> <p class="wow fadeIn" data-wow-delay=".05s">For safe and practical work, decisions to optimize value and control risks are made based on the perceived state of the agile scrum artifacts. That&rsquo;s why it is essential to stick to them when you produce software with a dedicated agile team.</p> <p class="wow fadeIn" data-wow-delay=".05s">It&rsquo;s also vital to the team to preserve this transparency while communicating with the customer. 
So, the three scrum artifacts&rsquo; primary purpose is to make the work transparent for the Scrum Team, the Owner, and other stakeholders.</p> <h2 class="wow fadeIn" data-wow-delay=".05s">Conclusion</h2> <p class="wow fadeIn" data-wow-delay=".05s">The scrum artifacts list, described in the official guide, is part of a minimal set of recommendations for organizing teamwork, with three key roles, three artifacts, and five events.</p> <p class="wow fadeIn" data-wow-delay=".05s">Applying scrum artifacts in practice, along with exchanging expertise in the professional community, may also give rise to many extra techniques that can be added to the basics. So it&rsquo;s worth revisiting the Scrum framework artifacts regularly as you develop products using Agile methodologies.</p> <p class="wow fadeIn" data-wow-delay=".05s">Feel free to ask professionals about more detailed requirements for roles in Scrum, techniques for assessing the complexity of Backlog elements, and examples of successful Scrum events.</p>
fireartd
896,120
5 Underrated resources to learn Git and Github
Save you time and use these resources to perfect your Git and GitHub knowledge. This week, I finally...
0
2021-11-19T13:15:09
https://dev.to/toru/5-underrated-resources-to-learn-git-and-github-4edi
github, git, webdev, beginners
**Save yourself time and use these resources to perfect your Git and GitHub knowledge.** This week, I finally built up the courage to deep dive into learning Git and GitHub without relying on a GUI, using only the command line. We are fortunate enough to have unlimited resources available to us at the click of a button. However, this can quickly become overwhelming and result in spending more time browsing articles and tutorials instead of actually learning! I have curated this list in the hope of saving you time and energy. ## 1. [Github Minesweeper [Profy.dev - Free]](https://profy.dev/project/github-minesweeper) This is one of the best free courses for actively learning Git, through playing a game with a bot. ![screenshot of the website profy.dev](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/chhjbdn3pgttuy2qmfd0.png) It is a hands-on learning experience which helps prepare your Git skills for a job in a real-world team. It combines practice, theory and repetition to create the best learning environment and allow all the skills you learn to stick. ## 2. [Version Control with Git [Udacity - Free]](https://www.udacity.com/course/version-control-with-git--ud123) This is a Udacity course which covers the essentials of using the version control system Git, and takes approximately 4 weeks on average to complete. ![Screenshot of udacity Git course landing page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t5lboy0zuw89lwz7moij.png) This course teaches you up-to-date skills using self-paced learning and interactive quizzes and tasks. ## 3. [Hacktoberfest](https://hacktoberfest.digitalocean.com/) Every October, Hacktoberfest takes place: an event which encourages participation in the open source community, with prizes upon completion. 
![Screenshot of Hacktoberfest landing page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfduqoktc7nypjx51i4w.png) Highly recommended for beginners who wish to put their Git skills into practice in real-world projects. Notably, most beginner repositories have step-by-step instructions on how to contribute to their projects. ## 4. [Oh My Git!](https://ohmygit.org/) Oh My Git! is an open source game about learning Git, created for complete beginners. It is fun and interactive, which is a refreshing break from the usual theoretical courses found online. ![Screenshot of Oh My Git! Gameplay](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ku55efim76rhrgtzv2rc.png) It uses card game mechanics, tasking players with working on repositories with others and fixing mistakes. It uses a mix of arrows and command cards to illustrate how changes in repositories flow. ## 5. [The Odin Project](https://www.theodinproject.com/paths/foundations/courses/foundations/lessons/git-basics) The Odin Project collects the best resources to supplement your learning and presents them in a logical order. Their Git Basics course provides clear step-by-step instructions for completing assignments, as well as knowledge checks and cheatsheets throughout the course. ![Screenshot of The Odin Project page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ed304johfle98fcrv8s.png) The Odin Project also has a community via Discord chatrooms which provides additional help and motivation.
ifrah
896,143
Why You Need an Next-Gen AWS Managed Services Partner
The undoubted leader in all things cloud, AWS has over a million users in their active user base. But...
0
2021-11-12T12:14:13
https://dev.to/teleglobal/why-you-need-an-next-gen-aws-managed-services-partner-1pdd
aws, awsmanagedservices, awspartner
The undoubted leader in all things cloud, AWS has over a million active users. But the cloud is a dynamic, ever-evolving place, and navigating the AWS ecosystem, with its many, many services and features, to identify the best solution to your needs can be a complicated affair. To help customers leverage AWS to the maximum, Amazon launched the AWS Managed Services Partners program. The program aims to ensure that every customer has a dedicated expert, through Amazon's partner companies, to help them meet their needs by setting up and optimizing their cloud and guiding them towards the best-fit cloud solutions. For instance, an AWS cloud partner can help reduce downtime to mere minutes a year, i.e. a 99.99% SLA. Cloud partners can assist businesses at every step of the cloud journey: preparing for migration, executing the migration with minimal disruption, and, post-migration, optimizing infrastructure and cost.
teleglobal
896,477
Human-Computer Interaction: Why Studying It Matters
Reasons for studying human-computer interaction (HCI) and working models...
0
2021-11-12T17:14:42
https://blog.eleftheriabatsou.com/logoi-meletis-allilepidrasis-anthropou-upologisti
hci, design, ux
## Reasons for studying human-computer interaction (HCI) and human-centred design working models

The field of HCI relates to numerous aspects of modern life, which is why its study is considered essential. More specifically, it has a social dimension, since computers are used in every facet of everyday human life (work, entertainment, leisure, hobbies, etc.). In contrast to the past, when computers were used only by specialized users, they are now used by the majority of modern people, a fact that reveals the importance of the HCI field (Avouris, 2015).

Another very important reason for studying HCI is the ethical dimension and the responsibility it implies for the engineer and the designer. The engineer is responsible to the people who will use their creations. These must work correctly and must not harm the owner's health or property. When a problem occurs, its source is often traced back: it may lie in the incorrect operation of a machine, which in turn may stem from poor design. In such cases the question arises: who is responsible, the operator or the engineer? Examples of poor design have already been mentioned (such as Chernobyl or Three Mile Island) which had severe consequences for people and the environment. The answer to this question is not always clear. Formally, it is often the operator's responsibility, since they are in charge of operating the machine, but indirectly the problem may originate in the construction of the machine itself (Avouris, 2015). So at least shared responsibility with the designer arises.

Naturally, there are also less critical cases, in which poor handling may merely confuse or frustrate the user, or cases where the user becomes less efficient, productive, or fast. For all the above reasons, the engineer is considered responsible to society as a whole (Avouris, 2015).

Another important reason for studying HCI is its relation to accessibility. A system must be designed so that it can be used by the widest possible group of people. It is often observed that older people, people with vision problems, or people with motor impairments are not "accounted for". Through a series of questions (e.g. could a different material be used? a different height? is there another solution? is it economically viable? is this solution definitely suitable and safe for the broadest audience? how broad must the audience of this particular tool be?), the designer tries to cover a large segment of users. Most of the time, answering these questions is time-consuming, and it is not at all easy, since beyond the designer and the users, the answers are co-shaped by the engineer, the programmer, the entrepreneur, the manager, the financial stakeholders, etc. (Avouris, 2015).

Finally, the study of HCI plays an equally important role in relation to organizations, something especially common nowadays within small and large businesses (Avouris, 2015). Technology can play a particular role, and may even reach the point of affecting democracy itself (for example, electing the US president through an electronic voting system) or become a matter of social inequality (for example, remote education during the covid-19 period, which could only be attended by those who owned the necessary technological devices).

## Human-centred design working models

For the development of interactive systems, the stage of requirements analysis and design is considered crucial. Designing a system is regarded as much an art as a technique. Art is very hard to teach, which is why most schools, and this work as well, place emphasis mainly on the technique, that is, on the methods and tools that can be taught and used to develop hardware and software based on user-centred design (Avouris, 2015). This is a way of designing in which the user, and their needs regarding the tasks they want to perform with the system, are the top priority. A characteristic of this method is that every phase includes recording and measuring user reactions. Perhaps one of its most important elements is that the design "passes" through many iteration cycles, which allows its gradual improvement through feedback (Dr. Akoumianakis, 2010; Koutsabasis, 2011). Part of human-centred design is the analysis of the problem, of user requirements, and of the users' characteristics, tasks, and environment.

But how is design science itself defined? According to Herbert A. Simon, design is defined as "a body of specific, analytic, partly formalized, partly empirical, teachable guidelines for the design process" (Avouris, 2015). Design is considered not to be a regular science, since it examines things from the perspective of "how they ought to be" rather than "how they really are". There is no single solution to a design problem, and the designer's information and data may change while a system is being completed. Beyond that, the designer also has to deal with constraints, which may concern materials, time, technology, etc. (Dr. Akoumianakis, 2010).

### Basic characteristics of software system development models

Software engineering, which is also a discipline that studies the design of software products (something this work emphasizes as well), has developed methodologies and techniques for the structured study and action of designers. These methodologies and techniques describe the important phases of developing a system, and are often called "development models" or "system life cycles". It is therefore useful for people working in HCI to know the relevant methods and techniques so they can work on their basis (Avouris, 2015; Dr. Akoumianakis, 2010; Koutsabasis, 2011).

**Waterfall model:** It consists of a sequential series of steps or stages. The first step is the description of the application (Application Description) and of its proposed implementation. Next comes the analysis step, where the application's requirements are formally recorded; this is known as the "Requirements Specifications" and can also serve as a contract between those undertaking the project and the client. In the third stage, a detailed description of the final product is produced (Design Specifications), until we arrive at the implementation of the application, that is, the Final Product. The final stage includes the programming, the databases, the user manuals, etc. Each stage is considered complete once it has been checked against the requirements. A characteristic of the waterfall model is that each stage is distinct, while from one stage to the next we have increasing information and detail about the product. The advantage of this model is the clear description of the development phases, and it is generally considered an easily understood process for all parties involved. An important drawback, however, is the inability to describe the product in detail before its implementation, which often leads to modifications of the specification details produced in the third stage (Koutsabasis, 2011; Avouris, 2015).

**Spiral model:** Also known as the model of evolutionary development, it addresses the problems of the waterfall model. Its characteristic is that the product is developed through an iterative process which includes drafting specifications or requirements, design, prototype development and, finally, evaluation and usability measurement. In each iteration cycle, prototypes are created to which more and more detail is added. Each time, these prototypes are evaluated and form the basis for the next iteration. The spiral model is found in object-oriented analysis and design methodologies. It suits applications with intense user interaction, since prototypes can be handed to users, their reactions gathered, and the product evolved on that basis. According to the human-centred design model, the users of a system, their requirements, and their environment must be documented, and users must be involved in every phase of the system's design and evaluation (Koutsabasis, 2011; Avouris, 2015).

**Star model:** The basic characteristic of the star model is the continuous evaluation of the product and the participation of users in every phase. The phases are not strictly defined; however, after each one is completed (analysis, design, implementation), the product is evaluated with the participation of users or experts (Koutsabasis, 2011; Avouris, 2015).

**Human-centred design for interactive systems, per ISO 9241-210:2010:** It includes the phases of planning the human-centred process, specifying the context of use, specifying the user and organizational requirements, producing design solutions, and evaluating the design against the requirements. The last four phases are executed iteratively until the requirements are satisfied. To safeguard human-centred design, six basic principles are proposed: the design is based on an explicit understanding of the users, their tasks, and the environment of use; users are involved throughout design and development; the design is driven by evaluations; it addresses the whole user experience; the process is iterative; and the design team includes people from different disciplines (Koutsabasis, 2011; Avouris, 2015).

**Website development models:** A website is peculiar compared to other software. It is simultaneously an application the user interacts with and an information space, and emphasis is placed on usability and the overall experience, hence on how the content is structured, organized, and interconnected. Based on user-centred design, at least two system development techniques are found in the literature: a) the process map of human-centred design (human-centred design for interactive systems), which includes the process-planning phase, the analysis phase, the design and development phase, and the testing and improvement phase. The model follows an iterative process, and the development of the system is determined by the design and development team. b) the elements of user experience, a generalized model of user experience for websites and applications. Its particularity is that it approaches design on the premise that an information system is both a hypertext space and an interface. Five design planes are encountered: surface, skeleton, structure, scope, and strategy. It is an iterative process in which each plane feeds the next (Koutsabasis, 2011; Avouris, 2015).

*******************

And this section ends here! You can read the previous one [here](https://blog.eleftheriabatsou.com/eisagwgh-sthn-epikoinwnia-an8rwpoy-ypologisth), while in the next one we will talk about the user, their tasks, and the context of use.

## References

- Avouris, N., Katsanos, Chr., Tselios, N., Moustakas, K. (2015). Introduction to Human-Computer Interaction. Athens: SEAB
- Akoumianakis, Dr. D. (2010). Human-Computer Interface: A Modern Approach. Athens: Kleidarithmos
- Koutsabasis, P. (2011). Human-Computer Interaction: Principles, Methods and Examples. Athens: Kleidarithmos

### Note: The above article is part of the thesis "Human-Computer Interaction: Design, Implementation and Evaluation for Training Undergraduate Students in Tools and Techniques Concerning User Experience (User Experience Design) in Websites and Applications", School of Applied Arts and Sustainable Design, Department of Graphic Arts - Multimedia.

**************************************

👋 Hello, I'm Eleftheria, Community Manager at Hashnode, developer, public speaker, and chocolate lover. 🥰 If you liked this post please share. 🍩 Would you care about buying me a coffee?
You can do it [here](https://www.paypal.com/paypalme/eleftheriabatsou), but if you can't, that's ok too! ************************************** 🙏It would be nice to subscribe to my [Youtube](https://www.youtube.com/c/EleftheriaBatsou) channel. It’s free and it helps to create more content. 🌈[Youtube](https://www.youtube.com/c/EleftheriaBatsou) | [Codepen](https://codepen.io/EleftheriaBatsou) | [GitHub](https://github.com/EleftheriaBatsou) | [Twitter](https://twitter.com/BatsouElef) | [Site](http://eleftheriabatsou.com/) | [Instagram](https://www.instagram.com/elef_in_tech) | [LinkedIn](https://www.linkedin.com/in/eleftheriabatsou/)
eleftheriabatsou
896,845
Code testing with Jest
This week I looked at code testing for my Static Site Generator static-dodo. I picked Jest for this...
0
2021-11-13T03:10:33
https://dev.to/menghif/code-testing-with-jest-3b4h
This week I looked at code testing for my Static Site Generator [static-dodo](https://github.com/menghif/static-dodo). I picked [Jest](https://jestjs.io) for this since it is a very popular unit testing tool used by many open source projects. ### How to set up Jest I started by installing Jest using npm ```console npm install --save-dev jest ``` then I added `"test": "jest"` to the `"scripts"` section of my `package.json` to be able to run tests with the simple command: ```console npm test ``` Before getting into writing tests, I had to tackle a problem that I have been putting to the side: having all of my code in one long file. I spent a lot of time refactoring to move some logic into different files and export those as modules. Here is an [example](https://github.com/menghif/static-dodo/blob/main/lib/writeHTML.test.js) of a test I wrote to make sure that the correct HTML is returned by the [`writeHTML`](https://github.com/menghif/static-dodo/blob/main/lib/writeHTML.js) function: ```js test("writeHTML with Title, Body and Stylesheet should return correct html", () => { expect(writeHTML("Title", "Body", "Stylesheet")).toBe( `<!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Title</title> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href="Stylesheet"> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/highlightjs@9.16.2/styles/github.css"> </head> <body> <h1>Title</h1> Body </body> </html>` ); }); ``` ### Thoughts I only scratched the surface of unit testing, but I do find the process of writing tests pretty complicated, especially when it comes to creating Mock functions. I will need to get more familiar with Jest in the future and I have a lot more tests to add to the project.
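Since mock functions were the part I found most confusing, here is a toy sketch of the idea behind `jest.fn()` (note: this is not Jest's actual implementation, and `makeMockFn` and `greet` are made-up names for illustration):

```javascript
// Toy version of the idea behind jest.fn(): a function that records
// every call made to it, so a test can assert on how it was used.
function makeMockFn() {
  const mock = (...args) => {
    mock.calls.push(args);
  };
  mock.calls = []; // one entry per call, holding that call's arguments
  return mock;
}

// Code under test: takes a logger dependency we want to observe.
function greet(name, logger) {
  const message = `Hello, ${name}!`;
  logger(message);
  return message;
}

const mockLogger = makeMockFn();
console.assert(greet("Dodo", mockLogger) === "Hello, Dodo!");
console.assert(mockLogger.calls.length === 1);
console.assert(mockLogger.calls[0][0] === "Hello, Dodo!");
// With real Jest the same checks read roughly:
//   const mockLogger = jest.fn();
//   expect(mockLogger).toHaveBeenCalledWith("Hello, Dodo!");
```

As far as I understand, the real `jest.fn()` adds return-value stubbing, `mockImplementation`, and richer matchers on top, but the call-recording core is the same idea.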
menghif
896,856
JavaScript Tips: Using Array.filter(Boolean)
What does .filter(Boolean) do on Arrays? This is a pattern I've been coming across quite a...
15,530
2021-11-13T12:26:14
https://mikebifulco.com/posts/javascript-filter-boolean
javascript, react, programming
--- title: JavaScript Tips: Using Array.filter(Boolean) published: true date: 2021-11-12 00:00:00 UTC tags: [javascript, react, programming] cover_image: https://res.cloudinary.com/mikebifulco-com/image/upload/f_auto,q_auto/v1/posts/javascript-filter-boolean/cover.webp canonical_url: https://mikebifulco.com/posts/javascript-filter-boolean series: JavaScript Tips --- ## What does .filter(Boolean) do on Arrays? This is a pattern I've been coming across quite a bit lately in JavaScript code, and it can be extremely helpful once you understand what's going on. In short, it's a bit of [functional programming](https://mikebifulco.com/tags/functional-programming) which is used to remove `null` and `undefined` values from an array. ```js const values = [1, 2, 3, 4, null, 5, 6, 7, undefined]; console.log(values.length); // Output: 9 console.log(values.filter(Boolean).length); // Output: 7 // note that this does not mutate the original array console.log(values.length); // Output: 9 ``` ## How does the Boolean part of .filter(Boolean) work? We're using a function built into arrays in JavaScript, called [Array.prototype.filter](https://developer.mozilla.org/en-US/docs/web/javascript/reference/global_objects/array/filter), which _creates a new array_ containing all elements that pass the check within the function it takes as an argument. In this case, we're using the JavaScript `Boolean` object wrapper's constructor as that testing function. `Boolean` is a helper class in JavaScript which can be used to test whether a given value or expression evaluates to `true` or `false`. There's a subtle, but **really important point** here - `Boolean()` follows the JavaScript rules of _truthiness_. That means that the output of `Boolean()` might not always be what you imagine.
In this context, passing `Boolean` to `.filter` is effectively shorthand for doing this: ```js array.filter((item) => { return Boolean(item); }); ``` which is also approximately the same as ```js array.filter((item) => { return !!item; // evaluate whether item is truthy }); ``` or, simplified ```js array.filter(item => !!item) ``` I suspect that you may have seen at least one of these variations before. In the end, `array.filter(Boolean)` is just shorthand for any of the other options above. It's the kind of thing that can cause even seasoned programmers to recoil in horror the first time they see it. Near as I can tell, though, it's a perfectly fine replacement. ### Examples of Boolean evaluating for _truthiness_ ```js // straightforward boolean Boolean(true) // true Boolean(false) // false // null/undefined Boolean(null) // false Boolean(undefined) // false // hmm... Boolean(NaN) // false Boolean(0) // false Boolean(-0) // false Boolean(-1) // true // empty strings vs blank strings Boolean("") // false Boolean(" ") // true // empty objects Boolean([]) // true Boolean({}) // true // Date is just an object Boolean(new Date()) // true // oh god Boolean("false") // true Boolean("Or any string, really") // true Boolean('The blog of Mike Bifulco') // true ``` ## Warning: Be careful with the truth(y) So - `someArray.filter(Boolean)` is really helpful for removing `null` and `undefined` values, but it's important to bear in mind that there are quite a few confusing cases above... this trick will remove items with a value of `0` from your array! That can be a significant difference for interfaces where displaying a `0` is perfectly fine. **EDIT:** Hi, Mike from The Future™️ here - I've edited the next paragraph to reflect the _actual_ truth... I had confused `-1` with `false` from my days as a BASIC programmer, where we'd sometimes create infinite loops with `while (-1)`... but even that means "while `true`"! 
I also want to call some attention to cases that evaluate to `-1`. The `-1` case can also be unintuitive if you're not expecting it, but true to form, in JavaScript, `-1` is a truthy value! ## Array.filter(Boolean) For React Developers I tend to come across this pattern being used fairly often for iterating over collections in React, to clean up an input array which may have had results removed from it upstream for some reason. This protects you from scary errors like `Can't read property foo of undefined` or `Can't read property bar of null`. ```js const people = [ { name: 'Mike Bifulco', email: 'hello@mikebifulco.com', }, null, null, null, { name: "Jimi Hendrix", email: 'jimi@heyjimihimi@guitarsolo', } ] // display a list of people const PeopleList = ({people}) => { return ( <ul> {people.map((person) => { // this will crash if there's a null/undefined in the list! return ( <li>{person.name}: {person.email}</li> ); })} </ul> ); } // a safer implementation const SaferPeopleList = ({people}) => { return ( <ul> {people .filter(Boolean) // this _one weird trick!_ .map((person) => { return ( <li>{person.name}: {person.email}</li> ); })} </ul> ); } ``` ## Functional Programming reminder Like I mentioned above, this is a handy bit of functional programming -- as is the case with nearly all clever bits of functional programming, it's important to remember that we're not _mutating_ any arrays here - we are creating new ones. Let's show what that means in a quick example: ```js const myPets = [ 'Leo', 'Hamilton', null, 'Jet', 'Pepper', 'Otis', undefined, 'Iona', ]; console.log(myPets.length); // 8 myPets .filter(Boolean) // filter null and undefined .forEach((pet) => { console.log(pet); // prints all pet names once, no null or undefined present }); console.log(myPets.length); // still 8! filter _does not mutate the original array_ ``` ## Wrapping up Hopefully this has helped to demystify this little code pattern a bit. What do you think?
Is this something you'll use in your projects? Are there dangers/tricks/cases I didn't consider here? Tell me all about it on twitter [@irreverentmike](https://twitter.com/irreverentmike). If you _really_ like what I've got to say, I'd love it if you [subscribed to my newsletter](https://mikebifulco.com/newsletter) as well. Occasional useful stuff, no spam, and I promise it doesn't suck. Thanks for reading! 🎉 _note: Cover photo for this article is from [Pawel Czerwinski](https://unsplash.com/@pawel_czerwinski?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/array?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)_
irreverentmike
897,108
The Best VPN Apps
Rather than getting confused checking and double-checking to find the best VPN app, it's better for you, tech friends...
0
2021-11-13T12:36:14
https://dev.to/genemil/aplikasi-vpn-terbaik-289m
Rather than getting confused checking and double-checking to find the best VPN app, tech friends, you'd do better to just read the following rundown of the best VPN apps according to this Indonesian tech site. Read to the end!

1. VPN Master
This free VPN is tiny, weighing in at only 6.7 MB. Even so, it packs plenty of features. One feature VPN users look for most is Unblock Site, which lets you reach websites blocked by "internet positif" filtering. Even better, VPN Master runs smoothly over 3G, 4G, WiFi, and other data connections. Most importantly, this app is highly recommended for anyone with slow internet, because it can also boost your connection speed. At the time of writing, VPN Master has been downloaded around 50 million times with a 4.7 rating. Impressive numbers!

2. Turbo VPN
Turbo VPN is 100% free to use. A high-speed VPN claimed to be the most powerful, made specifically for Android users. This VPN, which connects as fast as its rabbit mascot, can easily open blocked websites, secure WiFi hotspots, and protect your privacy. At the time this article was posted, the app had been downloaded around 100 million times with a 4.6 rating, making it one of the most widely used VPN apps in the world.

3. Hi VPN
Next up is Hi VPN, an app able to unblock any locked website. It is very fast, free forever, and can secure your internet connection, whether WiFi or mobile data. A unique touch is its Fake GPS capability, which lets you pick a spoofed location anywhere in the world. At the time of writing, Hi VPN has been downloaded around 10 million times with a 4.4 rating.

4. Freedome VPN Unlimited
A VPN app trusted by experts worldwide, offered by a company with a solid reputation for respecting its users' privacy. Best of all, the app has no bandwidth cap whatsoever, so it can be used for free indefinitely. It has been downloaded around 1 million times with a 4.4 rating, a decent score for a VPN app.

5. VPN Free - Betternet
The last free-and-best VPN app is VPN Free - Betternet. It has many dependable features; beyond unblocking sites, the standouts are hiding your IP address and speeding up your internet while the app is in use. It is certainly a safe choice to rely on while browsing. Unfortunately, the app is not on the Play Store, but you can download it directly from its official website.

A while back there were rumors about personal data security when using free VPNs. I think that could well happen, since no app is completely secure. But Google, as the operator of the app marketplace (Google Play Store), has not sat idle on this security issue. That is why there is a feature called Google Play Protect, an Android security system covering app verification, browser protection, anti-theft measures, and more. In short, everything on the Play Store is closely monitored by Google; if an app misbehaves, Google bans it immediately. To minimize data theft, you should not download apps from knock-off websites, only from official sites or providers such as the Google Play Store. Clear so far? So you no longer need to worry about downloading VPN apps on Android, because the apps described above are strictly monitored by Google.

That is the information <a href="https://nirvanaharapan.com">https://nirvanaharapan.com</a> can share with all of you loyal readers of this Indonesian tech site. Hopefully it is useful, and please use these VPN apps wisely for worthwhile purposes. Thank you!
genemil
897,124
Create a simple editable table with search and pagination in React JS in 2 min | React JS…
Create a simple editable table with search and pagination in React JS in 2 min | React JS...
0
2021-11-13T15:39:53
https://gyanendraknojiya.medium.com/create-a-simple-editable-table-with-search-and-pagination-in-react-js-in-2-min-react-js-56328e067733
editabletable, frontend, reactjsdevelopment, reactjstutorials
--- title: Create a simple editable table with search and pagination in React JS in 2 min | React JS… published: true date: 2021-11-01 18:22:03 UTC tags: editabletable,frontenddevelopment,reactjsdevelopment,reactjstutorials canonical_url: https://gyanendraknojiya.medium.com/create-a-simple-editable-table-with-search-and-pagination-in-react-js-in-2-min-react-js-56328e067733 --- ### Create a simple editable table with search and pagination in React JS in 2 min | React JS development ![image](https://cdn-images-1.medium.com/max/1024/0*eqK_VyWoJwpg7aIA.jpg) A tabular view with pagination is the best way to show data. If we need to list large data sets like posts, users, etc. in a dashboard, we can create a simple table view. But creating a custom table takes a long time. So here we are going to see how we can create a best-practice table view in just 2 min. ## **Installing-** We are going to use the **material-table** package. We can install it using npm or yarn. ```bash npm install material-table @material-ui/core # or yarn add material-table @material-ui/core ``` **Optionally, you can also install Material icons-** To install @material-ui/icons with npm: ```bash npm install @material-ui/icons ``` To install @material-ui/icons with yarn: ```bash yarn add @material-ui/icons ``` ## **Configuration-** After installing it, we can import it directly into the relevant component; it just needs some required data. It needs an array of columns- ```js const columns = [ { title: 'First Name', field: 'firstName' }, { title: 'Last Name', field: 'lastName', initialEditValue: 'initial value', }, { title: 'Mobile Number', field: 'mobileNumber', type: 'numeric' }, { title: 'Email', field: 'email', editable: 'never' }, ] ``` Now we need an array of data for the columns. Make sure each field name in `columns` matches a key of the objects in `data`.
```js const data = [ { firstName: 'Gyanendra', lastName: 'Knojiya', mobileNumber: 8802879231, email: 'gyanendrak064@gmail.com', }, { firstName: 'Virat', lastName: 'Kohli', mobileNumber: 9876543210, email: 'virat@gmail.com', }, { firstName: 'Rohit', lastName: 'Sherma', mobileNumber: 9984572157, email: 'rohit@gmail.com', }, ] ``` ###### **Action Icons-** You can also add Material icons. First we need to import all the icons, and after that add an icon ref for every action- **// import-** ```js import AddBox from '@material-ui/icons/AddBox' import ArrowDownward from '@material-ui/icons/ArrowDownward' import Check from '@material-ui/icons/Check' import ChevronLeft from '@material-ui/icons/ChevronLeft' import ChevronRight from '@material-ui/icons/ChevronRight' import Clear from '@material-ui/icons/Clear' import DeleteOutline from '@material-ui/icons/DeleteOutline' import Edit from '@material-ui/icons/Edit' import FilterList from '@material-ui/icons/FilterList' import FirstPage from '@material-ui/icons/FirstPage' import LastPage from '@material-ui/icons/LastPage' import Remove from '@material-ui/icons/Remove' import SaveAlt from '@material-ui/icons/SaveAlt' import Search from '@material-ui/icons/Search' import ViewColumn from '@material-ui/icons/ViewColumn' ``` **// Add-** ```js const tableIcons = { Add: forwardRef((props, ref) => <AddBox {...props} ref={ref} />), Check: forwardRef((props, ref) => <Check {...props} ref={ref} />), Clear: forwardRef((props, ref) => <Clear {...props} ref={ref} />), Delete: forwardRef((props, ref) => <DeleteOutline {...props} ref={ref} />), DetailPanel: forwardRef((props, ref) => <ChevronRight {...props} ref={ref} />), Edit: forwardRef((props, ref) => <Edit {...props} ref={ref} />), Export: forwardRef((props, ref) => <SaveAlt {...props} ref={ref} />), Filter: forwardRef((props, ref) => <FilterList {...props} ref={ref} />), FirstPage: forwardRef((props, ref) => <FirstPage {...props} ref={ref} />), LastPage: forwardRef((props, ref)
=> <LastPage {...props} ref={ref} />), NextPage: forwardRef((props, ref) => <ChevronRight {...props} ref={ref} />), PreviousPage: forwardRef((props, ref) => <ChevronLeft {...props} ref={ref} />), ResetSearch: forwardRef((props, ref) => <Clear {...props} ref={ref} />), Search: forwardRef((props, ref) => <Search {...props} ref={ref} />), SortArrow: forwardRef((props, ref) => <ArrowDownward {...props} ref={ref} />), ThirdStateCheck: forwardRef((props, ref) => <Remove {...props} ref={ref} />), ViewColumn: forwardRef((props, ref) => <ViewColumn {...props} ref={ref} />), } ``` # **Creating table-** All done. Our table is ready. Now we can show the data ```js import React, { useState, forwardRef } from 'react' import MaterialTable from 'material-table' import AddBox from '@material-ui/icons/AddBox' import ArrowDownward from '@material-ui/icons/ArrowDownward' import Check from '@material-ui/icons/Check' import ChevronLeft from '@material-ui/icons/ChevronLeft' import ChevronRight from '@material-ui/icons/ChevronRight' import Clear from '@material-ui/icons/Clear' import DeleteOutline from '@material-ui/icons/DeleteOutline' import Edit from '@material-ui/icons/Edit' import FilterList from '@material-ui/icons/FilterList' import FirstPage from '@material-ui/icons/FirstPage' import LastPage from '@material-ui/icons/LastPage' import Remove from '@material-ui/icons/Remove' import SaveAlt from '@material-ui/icons/SaveAlt' import Search from '@material-ui/icons/Search' import ViewColumn from '@material-ui/icons/ViewColumn' const tableIcons = { Add: forwardRef((props, ref) => <AddBox {...props} ref={ref} />), Check: forwardRef((props, ref) => <Check {...props} ref={ref} />), Clear: forwardRef((props, ref) => <Clear {...props} ref={ref} />), Delete: forwardRef((props, ref) => <DeleteOutline {...props} ref={ref} />), DetailPanel: forwardRef((props, ref) => <ChevronRight {...props} ref={ref} />), Edit: forwardRef((props, ref) => <Edit {...props} ref={ref} />), Export: forwardRef((props,
ref) => <SaveAlt {...props} ref={ref} />), Filter: forwardRef((props, ref) => <FilterList {...props} ref={ref} />), FirstPage: forwardRef((props, ref) => <FirstPage {...props} ref={ref} />), LastPage: forwardRef((props, ref) => <LastPage {...props} ref={ref} />), NextPage: forwardRef((props, ref) => <ChevronRight {...props} ref={ref} />), PreviousPage: forwardRef((props, ref) => <ChevronLeft {...props} ref={ref} />), ResetSearch: forwardRef((props, ref) => <Clear {...props} ref={ref} />), Search: forwardRef((props, ref) => <Search {...props} ref={ref} />), SortArrow: forwardRef((props, ref) => <ArrowDownward {...props} ref={ref} />), ThirdStateCheck: forwardRef((props, ref) => <Remove {...props} ref={ref} />), ViewColumn: forwardRef((props, ref) => <ViewColumn {...props} ref={ref} />), } const App = () => { const columns = [ { title: 'First Name', field: 'firstName' }, { title: 'Last Name', field: 'lastName', initialEditValue: 'initial value', }, { title: 'Mobile Number', field: 'mobileNumber', type: 'numeric' }, { title: 'Email', field: 'email', editable: 'never' }, ] const [data, setData] = useState([ { firstName: 'Gyanendra', lastName: 'Knojiya', mobileNumber: 8802879231, email: 'gyanendrak064@gmail.com', }, { firstName: 'Virat', lastName: 'Kohli', mobileNumber: 9876543210, email: 'virat@gmail.com', }, { firstName: 'Rohit', lastName: 'Sherma', mobileNumber: 9984572157, email: 'rohit@gmail.com', }, ]) return ( <> <h1>Editable table example</h1> <MaterialTable title="Editable Table" icons={tableIcons} columns={columns} data={data} editable={{ onRowAdd: (newData) => new Promise((resolve, reject) => { setTimeout(() => { setData([...data, newData]) resolve() }, 1000) }), onRowUpdate: (newData, oldData) => new Promise((resolve, reject) => { setTimeout(() => { const dataUpdate = [...data] const index = oldData.tableData.id dataUpdate[index] = newData setData([...dataUpdate]) resolve() }, 1000) }), onRowDelete: (oldData) => new Promise((resolve, reject) => { 
setTimeout(() => { const dataDelete = [...data] const index = oldData.tableData.id dataDelete.splice(index, 1) setData([...dataDelete]) resolve() }, 1000) }), }} /> </> ) } export default App ``` ### Preview- ![image](https://cdn-images-1.medium.com/max/1024/0*PAb4C6WqBMjAvZgn.png) buy a coffee for me [https://www.buymeacoffee.com/gyanknojiya](https://www.buymeacoffee.com/gyanknojiya) Thanks for reading this article. You can play with this sandbox [https://codesandbox.io/s/editable-example-0wctb](https://codesandbox.io/s/editable-example-0wctb) to explore more. If you have any queries, feel free to contact me: [https://gyanendra.tech/#contact](https://gyanendra.tech/#contact) _Originally published at_ [_https://codingcafe.co.in_](https://codingcafe.co.in/post/create-simple-editable-table-with-search-and-pagination-in-react-js-in-2-min-%7C-react-js-development)_._
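The promise-based `onRowUpdate` / `onRowDelete` handlers above boil down to two immutable array operations on the `data` state. As a small illustrative sketch (`updateRow` and `deleteRow` are hypothetical helper names of mine, not part of material-table's API):

```js
// Illustrative helpers mirroring the row-edit logic above.
// updateRow/deleteRow are hypothetical names, not material-table API.
const updateRow = (rows, index, newRow) =>
  rows.map((row, i) => (i === index ? newRow : row)); // replace one row, keep the rest

const deleteRow = (rows, index) =>
  rows.filter((_, i) => i !== index); // drop one row by index

const rows = [{ firstName: 'A' }, { firstName: 'B' }];
console.log(updateRow(rows, 1, { firstName: 'C' })); // row 1 replaced
console.log(deleteRow(rows, 0)); // row 0 removed
```

Both helpers return new arrays, which is what lets `setData([...dataUpdate])` trigger a re-render.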
gyanendraknojiya
897,251
How do i connect with www.mywifiext.net login setup page for Netgear Extender setup
To get to mywifiext Setup, plug in your Netgear extender and turn it on. Using your web browser, go...
0
2021-11-13T14:51:22
https://dev.to/mywifiext_login_setup/how-do-i-connect-with-wwwmywifiextnet-login-setup-page-for-netgear-extender-setup-22b4
mywifiextsetup, mywifiextlocal, mywifiextnet, mywifiextnetsetup
To get to the mywifiext setup page, plug in your Netgear extender and turn it on. Then open your web browser and go to the mywifiext configuration page. Note that mywifiext.net is a local address and cannot be reached from the wider internet. On the configuration page you should see a New Extender Setup button; click it, follow the steps, and then click the Finish button to complete the setup. [Mywifiext net login](https://www.mywifiextsetuphelp.com/mywifiext-net-setup) [Mywifiext.local](https://www.mywifiextsetuphelp.com/mywifiext-local/) A mywifiext setup can be a great starting point for connecting your wireless and wired devices to the internet. It is hard to imagine life today without simple internet access wherever we are, and with the help of such WiFi boosters you can take advantage of reliable coverage throughout your house.
mywifiext_login_setup
897,257
Top 33 JavaScript Projects on GitHub (November 2021)
2021 is coming to its end, and we may do another snapshot of 33 most starred open-sourced JavaScript...
0
2021-11-13T15:23:08
https://dev.to/trekhleb/top-33-javascript-projects-on-github-november-2021-41d4
javascript, webdev, opensource, github
2021 is coming to its end, and we may do another snapshot of **33 most starred open-sourced JavaScript repositories** on GitHub **as of November 13th, 2021**.

> Previous snapshots: [2018](https://trekhleb.dev/blog/2018/top-33-javascript-projects-on-github-august-2018/), [2020](https://trekhleb.dev/blog/2020/top-33-javascript-projects-on-github-december/), [2021](https://trekhleb.dev/blog/2021/top-33-javascript-projects-on-github/).

> You may also [query the GitHub](https://github.com/search?l=&o=desc&q=stars%3A%3E1+language%3AJavaScript&s=stars&type=Repositories) to fetch the latest results.

## #1 [freeCodeCamp/freeCodeCamp](https://github.com/freeCodeCamp/freeCodeCamp)

freeCodeCamp.org's open-source codebase and curriculum. Learn to code for free.

*★ 335k* *(+18k)*

## #2 [vuejs/vue](https://github.com/vuejs/vue)

Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

*★ 190k* *(+14k)*

## #3 [facebook/react](https://github.com/facebook/react)

A declarative, efficient, and flexible JavaScript library for building user interfaces.

*★ 177k* *(+17k)*

## #4 [twbs/bootstrap](https://github.com/twbs/bootstrap)

The most popular HTML, CSS, and JavaScript framework for developing responsive, mobile first projects on the web.

*★ 154k* *(+8k)*

## #5 [trekhleb/javascript-algorithms](https://github.com/trekhleb/javascript-algorithms)

Algorithms and data structures implemented in JavaScript with explanations and links to further readings

*★ 126k* *(+37k)*

## #6 [airbnb/javascript](https://github.com/airbnb/javascript)

JavaScript Style Guide

*★ 116k* *(+13k)*

## #7 [facebook/react-native](https://github.com/facebook/react-native)

A framework for building native applications using React

*★ 99k* *(+7k)*

## #8 [d3/d3](https://github.com/d3/d3)

Bring data to life with SVG, Canvas and HTML.

*★ 99k* *(+4k)*

## #9 [facebook/create-react-app](https://github.com/facebook/create-react-app)

Set up a modern web app by running one command.

*★ 91k* *(+7k)*

## #10 [axios/axios](https://github.com/axios/axios)

Promise based HTTP client for the browser and node.js

*★ 89k* *(+9k)*

## #11 [30-seconds/30-seconds-of-code](https://github.com/30-seconds/30-seconds-of-code)

Short JavaScript code snippets for all your development needs

*★ 88k* *(+22k)*

## #12 [nodejs/node](https://github.com/nodejs/node)

Node.js JavaScript runtime

*★ 83k* *(+7k)*

## #13 [vercel/next.js](https://github.com/vercel/next.js)

The React Framework

*★ 76k* *(+18k)*

## #14 [mrdoob/three.js](https://github.com/mrdoob/three.js)

JavaScript 3D Library.

*★ 75k* *(+10k)*

## #15 [mui-org/material-ui](https://github.com/mui-org/material-ui)

MUI (formerly Material-UI) is the React UI library you always wanted. Follow your own design system, or start with Material Design.

*★ 72k* *(+9k)*

## #16 [goldbergyoni/nodebestpractices](https://github.com/goldbergyoni/nodebestpractices)

The Node.js best practices list (October 2021)

*★ 72k* *(+15k)*

## #17 [awesome-selfhosted/awesome-selfhosted](https://github.com/awesome-selfhosted/awesome-selfhosted)

A list of Free Software network services and web applications which can be hosted on your own servers

*★ 67k* *(+16k)*

## #18 [FortAwesome/Font-Awesome](https://github.com/FortAwesome/Font-Awesome)

The iconic SVG, font, and CSS toolkit

*★ 66k* *(+1k)*

## #19 [yangshun/tech-interview-handbook](https://github.com/yangshun/tech-interview-handbook)

Curated interview preparation materials for busy engineers

*★ 60k* *(+60k)*

## #20 [ryanmcdermott/clean-code-javascript](https://github.com/ryanmcdermott/clean-code-javascript)

Clean Code concepts adapted for JavaScript

*★ 59k* *(+59k)*

## #21 [webpack/webpack](https://github.com/webpack/webpack)

A bundler for javascript and friends. Packs many modules into a few bundled assets. Code Splitting allows for loading parts of the application on demand. Through "loaders", modules can be CommonJs, AMD, ES6 modules, CSS, Images, JSON, Coffeescript, LESS, ... and your custom stuff.

*★ 59k* *(+3k)*

## #22 [angular/angular.js](https://github.com/angular/angular.js)

AngularJS - HTML enhanced for web apps!

*★ 59k* *(+0k)*

## #23 [hakimel/reveal.js](https://github.com/hakimel/reveal.js)

The HTML Presentation Framework

*★ 57k* *(+2k)*

## #24 [typicode/json-server](https://github.com/typicode/json-server)

Get a full fake REST API with zero coding in less than 30 seconds (seriously)

*★ 57k* *(+6k)*

## #25 [atom/atom](https://github.com/atom/atom)

The hackable text editor

*★ 56k* *(+2k)*

## #26 [jquery/jquery](https://github.com/jquery/jquery)

jQuery JavaScript Library

*★ 55k* *(+1k)*

## #27 [chartjs/Chart.js](https://github.com/chartjs/Chart.js)

Simple HTML5 Charts using the canvas tag

*★ 55k* *(+4k)*

## #28 [expressjs/express](https://github.com/expressjs/express)

Fast, unopinionated, minimalist web framework for node.

*★ 54k* *(+3k)*

## #29 [adam-p/markdown-here](https://github.com/adam-p/markdown-here)

Google Chrome, Firefox, and Thunderbird extension that lets you write email in Markdown and render it before sending.

*★ 53k* *(+4k)*

## #30 [h5bp/html5-boilerplate](https://github.com/h5bp/html5-boilerplate)

A professional front-end template for building fast, robust, and adaptable web apps or sites.

*★ 51k* *(+51k)*

## #31 [gatsbyjs/gatsby](https://github.com/gatsbyjs/gatsby)

Build blazing fast, modern apps and websites with React

*★ 51k* *(+3k)*

## #32 [lodash/lodash](https://github.com/lodash/lodash)

A modern JavaScript utility library delivering modularity, performance, and extras.

*★ 51k* *(+3k)*

## #33 [resume/resume.github.com](https://github.com/resume/resume.github.com)

Resumes generated using the GitHub informations

*★ 50k* *(+3k)*
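The search link above can also be queried programmatically against the public GitHub REST search API. A sketch — `buildSearchUrl` and `rankRepos` are illustrative helpers of mine, not library functions:

```js
// Build the same "most starred JavaScript repos" query the post links to.
const buildSearchUrl = (language, perPage = 33) => {
  const q = encodeURIComponent(`stars:>1 language:${language}`);
  return `https://api.github.com/search/repositories?q=${q}&sort=stars&order=desc&per_page=${perPage}`;
};

// Turn API result items into a ranked list like the one above.
const rankRepos = (items) =>
  [...items]
    .sort((a, b) => b.stargazers_count - a.stargazers_count)
    .map((r, i) => `#${i + 1} ${r.full_name} (★ ${Math.round(r.stargazers_count / 1000)}k)`);

console.log(buildSearchUrl('JavaScript'));
console.log(rankRepos([
  { full_name: 'vuejs/vue', stargazers_count: 190000 },
  { full_name: 'freeCodeCamp/freeCodeCamp', stargazers_count: 335000 },
]));
```

Fetching that URL (e.g. with `fetch`) returns a JSON body whose `items` array carries the `full_name` and `stargazers_count` fields used here.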
trekhleb
897,432
Fast and easy way to setup web developer certificates
Modern days having that cookies auth etc depends on https we need to have https local web...
0
2021-11-13T21:00:22
https://dev.to/istarkov/fast-and-easy-way-to-setup-web-developer-certificates-450e
devops, webdev
These days, since cookies, auth, and so on depend on HTTPS, we need a local HTTPS web environment. Previously, to generate local certificates, I used [minica](https://github.com/jsha/minica). The main issue is that you need a big README for macOS, Linux, and Windows users: how to regenerate keys, how to add the minica certificate to the Keychain, how to change the hosts file. Since we use VS Code Remote for development, it was twice the work to register all those keys on both local and remote machines. The solution below doesn't need any setup from developers.

## Solution in short

At your DNS provider, register A records for development like:

`A blabla.devdomain.com 127.0.0.1`

Then, using the Let's Encrypt [certbot](https://certbot.eff.org/) plugin for your provider, generate the needed certificates. They are already trusted, and the only issue is the 3-month expiration period, which can easily be fixed with cron.

## Full solution

In our case we use Cloudflare as DNS. Generating certificates for a few domains on Cloudflare looks like this. First, create a Cloudflare API token: https://support.cloudflare.com/hc/en-us/articles/200167836-Managing-API-Tokens-and-Keys#12345680

```bash
TF_VAR_CLOUDFLARE_API_KEY={YOURAPITOKEN}

mkdir -p /tmp/certbot/
mkdir -p /tmp/letsencrypt/

cat > /tmp/certbot/cloudflare.ini <<-DOCKERFILE
dns_cloudflare_api_token = ${TF_VAR_CLOUDFLARE_API_KEY}
DOCKERFILE

docker run -it --rm --name certbot \
  -v "/tmp/letsencrypt/data:/etc/letsencrypt" \
  -v "/tmp/certbot:/local/certbot" \
  certbot/dns-cloudflare:v1.15.0 certonly \
  -m istarkov@gmail.com \
  --dns-cloudflare \
  --dns-cloudflare-credentials /local/certbot/cloudflare.ini \
  --agree-tos \
  --noninteractive \
  -d subdomain.mydomain.com \
  -d other.mydomain.com \
  -d blabla.hello.com

# subdomain.mydomain.com, other.mydomain.com, blabla.hello.com must have
# A records on Cloudflare pointing to 127.0.0.1

cp /tmp/letsencrypt/data/live/subdomain.mydomain.com/* ./
cat ./fullchain.pem ./privkey.pem > ./haproxy.pem
```

That's all. Now, for Node.js apps, use the following HTTPS options:

```js
key: fs.readFileSync('./privkey.pem'),
cert: fs.readFileSync('./fullchain.pem'),
```

For HAProxy, use `haproxy.pem` as in the simple config below:

```ini
# haproxy -f ./playground/haproxy-http-2.cfg -db
frontend rgw-https
  bind *:3009 ssl crt /root/realadvisor/https-dev-keys/haproxy.pem alpn h2,http/1.1
  default_backend rgw

backend rgw
  balance roundrobin
  mode http
  server rgw1 127.0.0.1:3000 check
```

This is the fast and simple way I now prefer to get development certificates; it doesn't need any additional documentation for developers.
istarkov
897,463
React Tic Tac Toe
GitHub Repo // index.js import React from 'react'; import ReactDOM from 'react-dom'; import...
0
2021-11-13T22:10:16
https://dev.to/sagordondev/react-tic-tac-toe-2gab
react, programming, computerscience, gamedev
[GitHub Repo](https://github.com/sagordon-dev/my-app) ![Tic Tac Toe Game](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8syrunm2i3hthb6dvg0r.png) ```react // index.js import React from 'react'; import ReactDOM from 'react-dom'; import './index.css'; function Square(props) { return ( <button className="square" onClick={props.onClick}> {props.value} </button> ); } class Board extends React.Component { renderSquare(i) { return ( <Square value={this.props.squares[i]} onClick={() => this.props.onClick(i)} /> ); } render() { return ( <div> <div className="board-row"> {this.renderSquare(0)}{this.renderSquare(1)}{this.renderSquare(2)} </div> <div className="board-row"> {this.renderSquare(3)}{this.renderSquare(4)}{this.renderSquare(5)} </div> <div className="board-row"> {this.renderSquare(6)}{this.renderSquare(7)}{this.renderSquare(8)} </div> </div> ); } } class Game extends React.Component { constructor(props) { super(props); this.state = { history: [{ squares: Array(9).fill(null), }], stepNumber: 0, xIsNext: true, }; } handleClick(i) { const history = this.state.history.slice(0, this.state.stepNumber + 1); const current = history[history.length - 1]; const squares = current.squares.slice(); if (calculateWinner(squares) || squares[i]) { return; } squares[i] = this.state.xIsNext ? 'X' : 'O'; this.setState({ history: history.concat([{ squares: squares, }]), stepNumber: history.length, xIsNext: !this.state.xIsNext, }); } jumpTo(step) { this.setState({ stepNumber: step, xIsNext: (step % 2) === 0, }); } render() { const history = this.state.history; const current = history[this.state.stepNumber]; const winner = calculateWinner(current.squares); const moves = history.map((step, move) => { const desc = move ? 'Go to move #' + move : 'Go to game start'; return ( <li key={move}> <button onClick={() => this.jumpTo(move)}> {desc} </button> </li> ); }); let status; if (winner) { status = 'Winner: ' + winner; } else { status = 'Next player: ' + (this.state.xIsNext ? 
'X' : 'O'); } return ( <div className="game"> <div className="game-board"> <Board squares={current.squares} onClick={(i) => this.handleClick(i)} /> </div> <div className="game-info"> <div>{status}</div> <ol>{moves}</ol> </div> </div> ); } } // ======================================== ReactDOM.render( <Game />, document.getElementById('root') ); function calculateWinner(squares) { const lines = [ [0, 1, 2], [3, 4, 5], [6, 7, 8], [0, 3, 6], [1, 4, 7], [2, 5, 8], [0, 4, 8], [2, 4, 6], ]; for (let i = 0; i < lines.length; i++) { const [a, b, c] = lines[i]; if (squares[a] && squares[a] === squares[b] && squares[a] === squares[c]) { return squares[a] } } return null; } ``` ```css /* index.css */ body { font: 14px "Century Gothic", Futura, sans-serif; margin: 20px; } ol, ul { padding-left: 30px; } .board-row:after { clear: both; content: ""; display: table; } .status { margin-bottom: 10px; } .square { background: #fff; border: 1px solid #999; float: left; font-size: 24px; font-weight: bold; line-height: 34px; height: 34px; margin-right: -1px; margin-top: -1px; padding: 0; text-align: center; width: 34px; } .square:focus { outline: none; } .kbd-navigation .square:focus { background: #ddd; } .game { display: flex; flex-direction: row; } .game-info { margin-left: 20px; } ``` Photo by <a href="https://unsplash.com/@treatzone?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Matthew Davis</a> on <a href="https://unsplash.com/s/photos/tic-tac-toe?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
sagordondev
897,575
Implementing Dark Mode (Part 1)
I'd like to share the story behind one of my favorite contributions to Open Sauced so far, which is...
15,537
2021-11-16T15:34:47
https://dev.to/opensauced/implementing-dark-mode-part-1-3ono
I'd like to share the story behind one of my favorite contributions to Open Sauced so far, which is the addition of "Dark Mode", [PR #1020](https://github.com/open-sauced/open-sauced/pull/1020). This PR touched 25 files and was pretty substantial in scope, so I'm going to break this up into 3 parts. Part 1, here, is just the background - the what and the why, as @bdougieyo sometimes says. If there's one takeaway though, it's the value of opening a PR early in your process - you can share your progress and your roadblocks, and maintainers can help you get unblocked!

With that, the original idea for the dark mode feature came from @filiptronicek in [Issue #607](https://github.com/open-sauced/open-sauced/issues/607), and I decided to try to take this one on for a few reasons:

- 1) I'd read an article once about `prefers-color-scheme`, and that was my level of knowledge about dark mode.
- 2) I wanted to learn the ins and outs of how the project handled styles.
- 3) My Macbook was set for the color preference following the time of day, so based on #1, I knew I was going to test the feature with (literally) endless delight!

In this project I picked up a habit of talking about the PR relatively early in the cycle. So as I got started on things, I had inferred that the intention was to fall back to the system preference, but that the user could override it to either light or dark mode and have that setting persisted to local storage. The more I read articles, the more I saw a pattern that they either just simply followed the system preference, or they would start with a default and toggle back and forth. So there were some cases using a single state for user preference, and some with two states, but none with three states.

Talking with other contributors about it, @0vortex pointed out a [post on the StackOverflow blog](https://stackoverflow.blog/2020/03/31/building-dark-mode-on-stack-overflow/), and it talked in passing about a three-state solution. I was sold! I felt compelled to see this thing through with a three-state solution, and I'm glad I did, because since then I've seen a handful of blog posts and examples of it in the wild, which I really like.

Stay tuned for Part 2, where I'll cover some implementation details and learnings, and Part 3, where I'll cover some of the unexpected things about this PR!
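A three-state preference like the one described can be sketched roughly as below. This is my own illustration of the idea, not Open Sauced's actual implementation, and the storage key name is hypothetical:

```js
// Three states: 'system' | 'light' | 'dark', persisted to localStorage.
const THEME_KEY = 'theme-preference'; // hypothetical storage key

// Pure resolution: 'system' defers to the OS, the other two override it.
const resolveTheme = (preference, systemPrefersDark) =>
  preference === 'system' ? (systemPrefersDark ? 'dark' : 'light') : preference;

// Browser-side glue: read the stored preference, consult the media query,
// and stamp the resolved theme onto the document for CSS to pick up.
function applyTheme() {
  const preference = localStorage.getItem(THEME_KEY) || 'system';
  const systemPrefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
  document.documentElement.dataset.theme = resolveTheme(preference, systemPrefersDark);
}
```

The key point is the separation: the stored *preference* has three values, while the *resolved* theme the CSS sees only ever has two.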
mtfoley
897,580
Integrating Live Chat to Your WordPress, Shopify or Webflow Site Has Never Been this Easy!
Full post originally published at Aviyel, do check it out. As a business owner who has to interact...
0
2021-11-15T18:35:00
https://aviyel.com/post/1340/integrating-live-chat-to-your-wordpress-shopify-or-webflow-site-has-never-been-this-easy
chatwoot, livechat, wordpress, shopify
<a href="https://aviyel.com/post/1340/integrating-live-chat-to-your-wordpress-shopify-or-webflow-site-has-never-been-this-easy">Full post originally published at Aviyel, do check it out.</a> As a business owner who has to interact with customers quite often, live chat support is the best plausible option. However, the results amplify if you integrate your support methods with applications frequently used by the customers- in this case, I am talking about Shopify, WordPress, and Webflow. But before we begin, it is important to know what is live chat and why is it important for businesses? <a href="https://www.chatwoot.com/features/live-chat/">Live chat</a> is a customer relations tool businesses use in interacting with their customers. It is a type of technical support usually added to a website or social account that enables companies and customers to interact in real-time. ## Importance of Integrating a Live Chat To Your Website Live chat offers so many benefits if utilized properly. Here are a few advantages of integrating a live chat into your website. ### Low Cost for Support Live chat offers so much value with little or no extra cost to a business. <a href="https://aviyel.com/post/1328/why-chatwoot-is-my-favorite-open-source-project-integrations-integrations-and-integrations">Integrating a live chat</a> in your business is cheaper than setting up traditional call centers that require call agents to answer emails and calls. <img src="https://aviyel.com/assets/uploads/files/1636548775735-img3.jpg"> ### Keeps the User Engaged on Your Website While customers await replies to their messages, they can use this time to go through your website and see what you’re working on in your business. Make sure to put up interesting blogs, testimonials, or case studies on your website. This is helpful because the more visitors spend time on your website, the more Google Analytics recommends you. 
A very wonderful <a href="https://aviyel.com/post/799/writing-an-engaging-seo-friendly-technical-content-tips-from-fellow-creators">SEO benefit</a>. <img src="https://aviyel.com/assets/uploads/files/1636548836201-img4.jpg"> ### Better Customer Service With the live chat feature on your website, support agents don’t need to worry about the language difference of customers. Answering multiple customers while keeping track of all their requests becomes a breeze here. This helps support agents stay on their game, and in situations customers experience similar issues, support agents can log these issues and refer to the solution easily to fix them even faster in the future. Now you completely understand how much value live chat adds to your business, let’s learn about the tools you can use to add this live chat to your WordPress, Shopify, or Webflow websites. ## Best Tools for Integrating Live Chat <img src="https://aviyel.com/assets/uploads/files/1636548991759-6.jpg"> If you Google live chat, you’ll notice that there are quite a large number of CRM tools you can use to integrate live chat into your website. CRM stands for <a href="https://aviyel.com/post/802/how-to-integrate-facebook-whatsapp-and-slack-into-your-customer-engagement-platform">customer relationship management</a>, which shows the strategy, principles, or guidelines businesses use to organize and manage customers. 
I have listed down some of the <a href="https://aviyel.com/post/379/10-live-chat-tools-like-intercom-drift-zendesk-tawk-io-and-livechat-compared-with-chatwoot">top CRM tools</a> that have caught my attention for a while: - <a href="https://aviyel.com/projects/6/chatwoot">Chatwoot</a> - Zendesk - <a href="https://www.salesforce.com/in/">Salesforce</a> - Pipedrive - Insightly - <a href="https://www.forbes.com/advisor/business/software/hubspot-crm-review/">Hubspot</a> - Scoro - Keap - <a href="https://www.techradar.com/reviews/freshdesk-crm-review">Freshdesk</a> I recently started using Chatwoot for customer engagement needs and would be using the tool today to integrate live chat support with other websites. Unlike other mentioned software for integrating a live chat, Chatwoot stands out as an <a href="https://opensource.com/article/21/6/chatwoot?utm_campaign=intrel">open source non-proprietary solution</a> with tons of tweakable features. You can read more about that <a href="https://aviyel.com/post/805/chatwoot-an-open-source-customer-engagement-tool-that-challenges-freshworks-zendesk-and-intercom">here</a>. If you are an open source enthusiast, you can also look forward to contribute to <a href="https://aviyel.com/post/801/how-to-contribute-to-chatwoot-on-github">Chatwoot on GitHub.</a> ### Create your Chatwoot Account <img src="https://aviyel.com/assets/uploads/files/1636549055044-7-resized.png"> To create an account, visit this link: https://chatwoot.com to get started. The registration is very straightforward. Create an account and then verify your email. ### Add User Details <img src="https://aviyel.com/assets/uploads/files/1636549114157-8-resized.png"> After creating an account, you’ll see a conversations dashboard. On the bottom left, click **(Profile icon > Profile Settings)** which will take you to an account settings page. Under the account settings, fill in the boxes with your necessary details. Add a profile photo and a display name. 
Once you’re done, hit **Update Profile**. Now we’re done filling in our details, we can proceed to the next step. ### Add an Inbox <img src="https://aviyel.com/assets/uploads/files/1636549193387-9-resized.png"> On the sidebar to the left, click inboxes and select add a new inbox. You’ll be presented with a new screen that asks you to choose a channel you want to integrate the live chat in. Choose your preferred channel, in this case, website. Fill in the boxes with your website information. Add your website domain name into the website domain box. (E.g `www.mywebsite.com`) PS: Omit the https:// protocol. This is to make sure the live chat widget matches the style of your business website and the user experience is consistent. ### Pick an Agent for Your Live Chat Needs The next step is to pick a default agent for the live chat. Add Yourself as the primary agent. But If you want to add a new support agent, click the Agents tab on the left to do that. <img src="https://aviyel.com/assets/uploads/files/1636565745661-10-resized.png"> Now a code will be generated after picking an agent. This code is how we will add the live chat to our website. The next step is to **customize the widget** even further, you can skip this step and proceed to integrate the live chat widget on your website if you want, but I highly recommend you don’t skip it to avoid your chat widget from being too generic. ### Customize Your Widget - To customize the widget even further, click More Settings below the code. - First, **add a logo** for the chat widget, and then choose a name and your widget color. <img src="https://aviyel.com/assets/uploads/files/1636627186482-customize-your-widget-resized.png"> There are more settings available to customize your chat widget even further, you can add more features like - Welcome greeting message - Welcome tagline - A message when the visitor starts typing - An email field to collect visitors’ email addresses. 
- Automatic assignment of chat agents, and many more features. Figure out how you want the live widget to be like and then customize it to fit that description. You can refer to Chatwoot <a href="https://www.chatwoot.com/docs/user-guide/add-inbox-settings">docs</a> to learn more. ## How to Integrate Live Chat into your Shopify Website? Now we have completely set up our chatwoot account and added an inbox, we can proceed to add it to our Shopify website. Once you’re done customizing the widget, click update and select the configurations tab to see the code. Copy the code and go to your <a href="https://www.shopify.in/plus/integrate">Shopify website</a>. - Click edit code and navigate to `theme.liquid` and then paste the code inside the body tag and hit save. <img src="https://aviyel.com/assets/uploads/files/1636626743137-how-to-integrate-live-chat-into-your-shopify-website-resized.png"> - The next step is to test the chat widget, refresh the website page and now you should see the widget. - You can try typing a message on the chatbox to test it out. - Here’s a preview of the live chat on our Shopify website. <img src="https://aviyel.com/assets/uploads/files/1636626762961-how-to-integrate-live-chat-into-your-shopify-website-1-resized.png"> ## How to Integrate Live Chat to Your Webflow Website? The process is practically similar since we already created and set up our Chatwoot account, all we have to do is add a new inbox for our Webflow widget. Follow through the same steps as mentioned above, add your <a href="https://webflow.com/">Webflow website</a> domain, customize your profile to match the live widget, and then copy the code. Paste the code into the body tag and save it. Refresh your Webflow website to see the changes. ## How to Integrate Live Chat to Your WordPress Website? There are different ways through which we can add live chat to a <a href="https://wordpress.com/">WordPress site</a>, the simplest way is to add the code in the PHP file. 
- To do this, (after generating our WordPress website code) go to your project dashboard, and navigate to **Appearance > Theme Editor**. - Open the **Theme Footer (footer.php)** file and paste the code inside the `<body>` tag. ![how-to-integrate-live-chat-into-your-wordpress-website-resized.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1636854967170/UP-cq6XNh.png) - Once you’ve pasted the code, click **Update File** and go to your website, and refresh the page. You should see the **live chat widget** now. - Here’s a preview of the live chat in action. <img src="https://aviyel.com/assets/uploads/files/1636626800949-how-to-integrate-live-chat-into-your-wordpress-website-1-resized.png"> ## Final Words Aviyel is partnering with Chatwoot to scale, build and incentivize open source communities. To stay updated, follow the discussions <a href="https://aviyel.com/discussions">here</a>. Join Aviyel’s Twitter space at <a href="https://twitter.com/aviyelhq?lang=en">AviyelHQ</a>. ## TL; DR - <a href="https://aviyel.com/post/472/chatwoot-localization-of-chats">Localization of live chats</a> - <a href="https://www.chatwoot.com/docs/contributing-guide">Chatwoot contributor guidelines</a> - <a href="https://aviyel.com/post/263/chatwoot-with-next-js">Using Chatwoot with Next.js</a> - <a href="https://aviyel.com/post/430/chatwoot-compared-to-zendesk-intercom-drift-and-other-saas-customer-support-tools">Chatwoot versus SaaS</a>
victoreke
897,610
Store Alphabet in a list with Python
Step1: import string to your code. &gt; import string Step2: create alphabet list with...
0
2021-11-14T05:46:04
https://dev.to/hishakil/store-alphabet-in-a-list-with-python-34df
python, programming
Step 1: import the `string` module into your code. > import string Step 2: get the lowercase alphabet (use `string.ascii_uppercase` for the uppercase one). > small_letter = string.ascii_lowercase Step 3: store it in a list. > lowerletter_list = list(small_letter) Done! You can run it to see the result.
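Putting the steps together, with the uppercase variant added for completeness:

```python
import string

# Step 1-3 combined: build lists of the alphabet.
small_letter = string.ascii_lowercase
capital_letter = string.ascii_uppercase

lowerletter_list = list(small_letter)
upperletter_list = list(capital_letter)

print(lowerletter_list[:5])   # ['a', 'b', 'c', 'd', 'e']
print(upperletter_list[-3:])  # ['X', 'Y', 'Z']
```

Each list has 26 single-character strings, so slicing, indexing, and membership tests all work as with any other list.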
hishakil
897,655
Webhooks in Kraken CI for GitHub, GitLab and Gitea
Kraken CI is a new Continuous Integration tool. It is a modern, open-source, on-premise CI/CD system...
12,501
2021-11-15T11:49:39
https://kraken.ci/docs/guide-webhooks
ci, cd, devops, github
[Kraken CI](https://kraken.ci/) is a new Continuous Integration tool. It is a modern, open-source, on-premise CI/CD system that is highly scalable and focused on testing. It is licensed under Apache 2.0 license. Its source code is available on [Kraken CI GitHub page](https://github.com/Kraken-CI/kraken). This tutorial is the third installment of the series of articles about Kraken CI. [Part 1, Kraken CI, New Kid on the CI block](https://dev.to/godfryd/kraken-ci-new-kid-on-the-ci-block-4imo), presented the installation of Kraken. The second part covered [how to prepare a workflow for a simple Python project](https://dev.to/godfryd/your-first-workflow-in-kraken-ci-43hb). The last one was about [autoscaling on AWS and Azure](https://dev.to/godfryd/autoscaling-ci-with-kraken-ci-jij). This time we would like to show the latest feature that was developed in Kraken CI: webhooks for GitHub, GitLab and Gitea. ## Intro to Webhooks Kraken CI allows triggering a flow of a branch in a project using a webhook in a Git hosting service. A push to a regular branch e.g. main branch or to a branch associated with a pull or a merge request will cause a Git hosting service to call a webhook exposed by Kraken CI. To make it happen a webhook URL and a secret have to be stored in a project settings in Git hosting service. Currenty there are support the following Git hosting services: - GitHub - GitLab - Gitea The following guide shows how to configure webhooks: 1) how to enable them in a project in Kraken CI and then 2) how to set a webhook URL and a secret in Git hosting service. In the end, the usage of webhooks will be presented by doing a push to Git repository. ## Enable Webhooks in a Project In Kraken CI go to your project page and switch to `WebHooks` tab. There are available webhooks for several Git hosting services. Enable the one that you are using for hosting your Git repository. Enabled webhooks show an actuall webhook URL and a secret. 
This information should be copied and set on the webhook settings page of your Git hosting service. <Screen img="screen-webhooks.png" /> ## Set Webhook URL in Git Hosting Service The following sections show how to set the webhook URL pointing to Kraken CI and the secret used for authentication, and which event types should be selected to trigger a new flow in your Kraken CI project. ### GitHub <Screen img="webhooks-github.png" /> The arrows on the picture above indicate where the webhook URL and the secret from your Kraken CI project should be pasted on your GitHub repository webhooks settings page. In case of GitHub, two event types should be selected: - `Pull Requests` - they will trigger DEV flows in your Kraken CI project, in the branch indicated in the event - `Pushes` - they will trigger CI flows in your Kraken CI project, in the branch indicated in the event ### GitLab <Screen img="webhooks-gitlab.png" /> The arrows on the picture above indicate where the webhook URL and the secret from your Kraken CI project should be pasted on your GitLab repository webhooks settings page. In case of GitLab, two event types should be selected: - `Push events` - they will trigger CI flows in your Kraken CI project, in the branch indicated in the event - `Merge Request events` - they will trigger DEV flows in your Kraken CI project, in the branch indicated in the event ### Gitea <Screen img="webhooks-gitea.png" /> The arrows on the picture above indicate where the webhook URL and the secret from your Kraken CI project should be pasted on your Gitea repository webhooks settings page. 
In case of Gitea, two event types should be selected: - `Push` - they will trigger CI flows in your Kraken CI project, in the branch indicated in the event - `Pull Request` and `Pull Request Synchronized` - they will trigger DEV flows in your Kraken CI project, in the branch indicated in the event ## Trigger a Flow Now, in a folder with your source code repository, you may invoke the git push command: ```console $ git push ``` This should trigger a new flow in the CI branch of your Kraken CI project. If you create a new branch, do some commits to it, push this new branch, and then create a pull request (PR) or merge request (MR) in your Git hosting service UI, all this will trigger a new DEV flow in the branch that is the base branch for the PR or MR: ```console $ git checkout -b my-branch $ git push --set-upstream origin my-branch ``` That's it!
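One note on the secret shown next to the webhook URL: it is the standard way Git hosting services authenticate webhook deliveries. GitHub, for example, sends an HMAC-SHA256 of the request body in the `X-Hub-Signature-256` header. As a rough sketch of how a receiving server verifies such a signature (this illustrates the general GitHub-style scheme, not Kraken CI's actual implementation):

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body with the shared
    secret and compare it, in constant time, to the value the Git hosting
    service sent in its signature header (e.g. X-Hub-Signature-256)."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

`hmac.compare_digest` is used instead of `==` so that the comparison does not leak information through timing differences.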
godfryd
897,693
Keyoxide
This is an OpenPGP proof that connects my OpenPGP key to this dev.to account. For details check out...
0
2021-11-14T08:12:52
https://dev.to/rintan/keyoxide-18c0
This is an OpenPGP proof that connects [my OpenPGP key](https://keyoxide.org/4792BF13985872DC12546B6FD679E0532311765C) to [this dev.to account](https://dev.to/rintan). For details check out https://keyoxide.org/guides/openpgp-proofs [Verifying my OpenPGP key: openpgp4fpr:4792BF13985872DC12546B6FD679E0532311765C]
rintan
897,883
Weekly Digest 45/2021
Welcome to my Weekly Digest #45. This weekly digest contains a lot of interesting and inspiring...
10,701
2021-11-14T15:19:11
https://dev.to/marcobiedermann/weekly-digest-452021-2mok
css, javascript, react, webdev
Welcome to my Weekly Digest #45. This weekly digest contains a lot of interesting and inspiring articles, videos, tweets, podcasts, and designs I consumed during this week. --- ## Interesting articles to read ### Rust Is The Future of JavaScript Infrastructure Why is Rust being used to replace parts of the JavaScript web ecosystem like minification (Terser), transpilation (Babel), formatting (Prettier), bundling (webpack), linting (ESLint), and more? [Rust Is The Future of JavaScript Infrastructure - Lee Robinson](https://leerob.io/blog/rust) ### The Golden Ratio and User-Interface Design Although traditionally used in art and architecture, the golden ratio can be referenced to design aesthetically pleasing interfaces. [The Golden Ratio and User-Interface Design](https://www.nngroup.com/articles/golden-ratio-ui-design/) ### Interactive Rebase: Clean up your Commit History Interactive Rebase is the Swiss Army knife of Git commands: lots of use cases and lots of possibilities! [Interactive Rebase: Clean up your Commit History](https://css-tricks.com/interactive-rebase-clean-up-your-commit-history/) --- ## Some great videos I watched this week ### Cleaning Up Copilot Code GitHub Copilot is a great helper, but it's just the copilot. Let's take the pilot's chair and turn some not-so-great Copilot code into high-quality production-ready TypeScript code. {% youtube yo87SLp4jOo %} by [Jack Herrington](https://twitter.com/jherr) ### Knobs! Yair Even Or has a neat project called [Knobs](https://github.com/yairEO/knobs) that adds UI controls that adjust CSS Custom Properties instantly any way you need them to. There is more to the project than that, but that's how we use them here to control some generative art. 
{% youtube 09qdFxIlT7o %} by [Chris Coyier](https://twitter.com/chriscoyier) ### Webpack alternative: ESBuild {% youtube bKPelKsc4e4 %} by [Clem Tech](https://twitter.com/clem__tech) ### Shopify built a JS Framework Shopify just announced a React-based JavaScript framework called Hydrogen. It is similar to Next.js but has extra features for e-commerce and data fetching with GraphQL. {% youtube mAsM9c2sGjA %} by [Fireship](https://twitter.com/fireship_dev) ### Chrome 96 - What’s New in DevTools New CSS Overview panel, emulate CSS prefers-contrast media and Chrome’s auto dark mode, and more. {% youtube 3CXbhnaFNEw %} by [Google Chrome Developers](https://twitter.com/chromedevtools) --- ## Useful GitHub repositories ### **Fantasy Map Generator** Fantasy Map Generator is a free web application generating interactive and highly customizable SVG maps based on Voronoi diagrams. {% github Azgaar/Fantasy-Map-Generator %} ### GitHub Pages Deploy Action This [GitHub Action](https://github.com/features/actions) will automatically deploy your project to [GitHub Pages](https://pages.github.com/). It can be configured to push your production-ready code into any branch you'd like, including gh-pages and docs. {% github JamesIves/github-pages-deploy-action %} ### quicktype quicktype generates strongly-typed models and serializers from JSON, JSON Schema, TypeScript, and [GraphQL queries](https://blog.quicktype.io/graphql-with-quicktype/), making it a breeze to work with JSON type-safely in many programming languages. 
{% github quicktype/quicktype %} --- ## dribbble shots ### **Money transfer app mock-ups** ![by [Prakhar Neel Sharma](https://dribbble.com/shots/16836057-Money-transfer-app-mock-ups)](https://cdn.dribbble.com/users/452635/screenshots/16836057/media/d33c7641c81e6306f05e85d14f94b759.png) by [Prakhar Neel Sharma](https://dribbble.com/shots/16836057-Money-transfer-app-mock-ups) ### Cinely Streaming Real Project ![by [Arshia Amin Javahery](https://dribbble.com/shots/16852940-Cinely-Streaming-Real-Project)](https://cdn.dribbble.com/users/3798578/screenshots/16852940/media/9b4731c263f509d131bceba271bd85d1.png) by [Arshia Amin Javahery](https://dribbble.com/shots/16852940-Cinely-Streaming-Real-Project) ### Fitness App UI Exploration ![by [Saber Ali](https://dribbble.com/shots/16851920-Fitness-App-UI-Exploration)](https://cdn.dribbble.com/users/4554958/screenshots/16851920/media/865512ec547967d1494e318196a8472f.jpg) by [Saber Ali](https://dribbble.com/shots/16851920-Fitness-App-UI-Exploration) --- ## Tweets {% twitter 1457281480556781568 %} {% twitter 1457987203745918978 %} {% twitter 1458916318359564302 %} {% twitter 1459221197283999745 %} --- ## Picked Pens ### Paper plane {% codepen https://codepen.io/t_afif/pen/YzxaBbm %} by [Temani Afif](https://twitter.com/ChallengesCss) ### Generative Bauhaus Grid Patterns {% codepen https://codepen.io/georgedoescode/pen/qBXYama %} by [George Francis](https://twitter.com/georgedoescode) ### Diagonal Page Transitions {% codepen https://codepen.io/chriscoyier/pen/oNePEyW %} by [Chris Coyier](https://twitter.com/chriscoyier) --- ## Podcasts worth listening ### CodePen Radio – Challenges Marie and Chris talk about [CodePen Challenges](https://codepen.io/challenges), which have been going strong for many years now. 
{% spotify spotify:episode:2ojSbQxh7gdMyVhVZw8YLl %} ### Call with Kent – Learning Gaps & Cluelessness as a Developer Hello Kent, I learnt software development in a self-taught path and I really didn't do a good job because when I hear words like Serverless and others, I don't have the first clue what they are talking about. {% spotify spotify:episode:5G19fwX5J39Lymrlm2PqXf %} ### Syntax – Web Containers, StackBlitz, and Node.js in the Browser In this episode of Syntax, Scott and Wes talk with Tomek Sulkowski about web containers, StackBlitz and more! {% spotify spotify:episode:6Rm9SmYVe9mdLpF5Ru8XUP %} --- Thank you for reading, talk to you next week, and stay safe! 👋
marcobiedermann
898,015
Track App Interactions with TraceContext
This is a placeholder testing 123
0
2021-11-14T17:16:43
https://dev.to/capndave/track-app-interactions-with-tracecontext-5da9
programming, node, tutorial, webdev
## This is a placeholder testing 123
capndave
898,032
Welcome to the free open-source OLAP server project
Welcome to the site of the eMondrian project. eMondrian is a free open-source OLAP server. It is...
0
2021-11-15T13:59:06
https://dev.to/sergeisemenkov/welcome-to-the-free-open-source-olap-server-project-132f
Welcome to the site of the [**eMondrian**](https://sergeisemenkov.github.io/eMondrian) project. eMondrian is a free open-source [OLAP](https://en.wikipedia.org/wiki/Online_analytical_processing) server. It is based on the [Mondrian](https://github.com/pentaho/mondrian) project. An OLAP server allows you to represent your database as a multidimensional space with dimensions and measures. It hides the complexity of underlying tables and their relations and allows you to interactively analyze data from multiple perspectives. The eMondrian server can run on Windows and Linux operating systems. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j34edhvspxe64fjrw52k.png) eMondrian supports the XML for Analysis (XMLA) standard and OLE DB for OLAP at a level that makes it possible to connect to it from client tools such as Microsoft Excel, Power BI, Tableau and many others. The logic of the original Mondrian was changed in order to improve the performance of queries specific to these clients and to support clients’ features, for example Excel’s sessions, calculated members and sets. Any database that has a [JDBC](https://en.wikipedia.org/wiki/JDBC_driver) driver can be the source for eMondrian. eMondrian is a Relational OLAP (ROLAP) server, which means it always shows real-time data from a source. The server runs queries written in the MDX language, reads data from a source database and presents results in a multidimensional format. The most efficient way is to use [column store databases](https://en.wikipedia.org/wiki/Column-oriented_DBMS) as data sources for eMondrian. For example, [ClickHouse](https://clickhouse.com/) could run as a powerful and fast query engine while eMondrian works as a proxy representing data as cubes and executing MDX queries.
sergeisemenkov
898,046
Pie Time
Canvas pie chart clock with second, minute, &amp;&amp; hour progression.
0
2021-11-14T18:09:13
https://dev.to/libanzakariya9409081201/pie-time-170l
codepen
<p>Canvas pie chart clock with second, minute, &amp;&amp; hour progression.</p> {% codepen https://codepen.io/tmrDevelops/pen/VYKyge %}
libanzakariya9409081201
898,261
The First Git Commit
GitHub link: https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290
0
2021-11-14T23:30:53
https://dev-to-dev.hashnode.dev/the-first-git-commit
git
![First Git Commit](https://cdn.hashnode.com/res/hashnode/image/upload/v1636921446152/iHG8R8JDK.png) GitHub link: [https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290](https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290)
rosberglinhares
898,295
How to calculate accuracy ratio in Excel using only a formula
Risk practitioners often use accuracy ratio (AR) to measure the discriminatory power of binary...
0
2021-11-15T13:48:36
https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/
excel, risk, statistics, datascience
Risk practitioners often use accuracy ratio (AR) to measure the discriminatory power of [binary classification](https://en.wikipedia.org/wiki/Binary_classification) models, such as models of credit default and insurance fraud. The closer AR is to 1, the higher the discriminatory power of the model. ## Definition ![Cumulative Accuracy Profile](https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/cap.png?1641439855020) The above diagram shows the [cumulative accuracy profiles](https://en.wikipedia.org/wiki/Cumulative_accuracy_profile) of a realistic model, a random model, and a perfect model. As the proportion of observations increases, a perfect model would correctly classify all events before all non-events, but a random model would indiscriminately classify events and non-events together. A realistic model would be somewhere in between. AR is the ratio of the area between the cumulative accuracy profiles of the realistic and random models (B) to the area between the cumulative accuracy profiles of the perfect and random models (A+B). Unfortunately, Excel does not come with a native function for calculating AR. It is possible to calculate AR in Excel manually, but the process involves auxiliary rows and columns with complicated formulas that have to be adjusted as observations are added or removed. The problem is exacerbated with the standard error of AR. ## QRS.DISC.AR Fortunately, [QRS Toolbox for Excel](https://addin.qrstoolbox.com) includes the [QRS.DISC.AR](https://addin.qrstoolbox.com/pages/docs/disc.ar) function for calculating AR. It is applicable to both grouped and ungrouped data. 
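For readers who want to sanity-check results outside Excel: the area-based definition has a convenient equivalent, AR = 2·AUC − 1, where AUC is the probability that a randomly chosen event outscores a randomly chosen non-event. A minimal Python sketch of the ungrouped case (my own illustration of the quantity, not the add-in's implementation):

```python
def accuracy_ratio(scores, events):
    """Accuracy ratio via the identity AR = 2*AUC - 1, where AUC is the
    fraction of (event, non-event) pairs in which the event has the
    higher score; ties count half. O(n^2), fine for small samples."""
    pos = [s for s, e in zip(scores, events) if e == 1]
    neg = [s for s, e in zip(scores, events) if e == 0]
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return 2 * concordant / (len(pos) * len(neg)) - 1
```

When scores and events are negatively correlated, as in the workbook below, this returns a negative value, just as QRS.DISC.AR does.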
To try QRS.DISC.AR yourself, first [add QRS Toolbox](https://addin.qrstoolbox.com/pages/support/#how-to-add-toolbox) to your instance of Excel and then open the [example workbook](https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/example.xlsx). ![Ungrouped data](https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/data-ungrouped.png?1641439855020) The workbook contains 2 worksheets. In the UNGROUPED worksheet: * Cells A2&ndash;A2001 contain credit scores for 2000 borrowers. The scores range between 0 for least creditworthy and 100 for most creditworthy. * Cells B2&ndash;B2001 contain ones if credit default occurred and zeros otherwise. ![Grouped data](https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/data-grouped.png?1641439855020) In the GROUPED worksheet, cells A2&ndash;E8 contain data from the previous worksheet grouped into 7 score ranges, each with an alphabetical rating grade. ## Ungrouped data example ```excel =QRS.DISC.AR(A2:A2001, B2:B2001) ``` ![Formula](https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/formula-1.png?1642648218982) To calculate AR of the ungrouped data, open the UNGROUPED worksheet and enter the formula `=QRS.DISC.AR(A2:A2001, B2:B2001)` in cell D1. The result is -0.794, which is generally considered to be a large AR in absolute terms. The result is negative-valued, because the credit scores and credit default events in this example are negatively correlated by design. In a perfect model, score=0 corresponds to event=1, and score=100 corresponds to event=0. ### Significance test ```excel =QRS.DISC.AR(A2:A2001, B2:B2001, "TEST", "RAG") ``` ![Formula](https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/formula-2.png?1642648218982) To determine the statistical significance of the AR, add `"TEST", "RAG"` to the formula. 
The result now contains a second row with a red/amber/green rating that summarizes the significance test. The AR in this example has a green rating. A green/amber rating means the AR is significant at the 5%/10% significance level. A red rating means the AR is not significant at the 10% significance level. Please read the [documentation](https://addin.qrstoolbox.com/pages/docs/disc.ar) to learn how to return the p-value and other useful information about the significance test, as well as how to change the significance levels of the ratings. ### Labels and transpose ```excel =QRS.DISC.AR(A2:A2001, B2:B2001, "TEST", "RAG", "LABELS", TRUE) ``` ![Formula](https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/formula-3.png?1642648218982) To add labels to the result, add `"LABELS", TRUE` to the formula. To swap the rows and columns of the result, add `"TRANSPOSE", TRUE` to the formula. ## Grouped data example ```excel =QRS.DISC.AR(B2:B8, D2:E8) ``` ![Formula](https://addin.qrstoolbox.com/pages/demos/how-to-calculate-accuracy-ratio-excel/formula-4.png?1642648218982) To calculate AR of the grouped data, switch to the GROUPED worksheet and enter the formula `=QRS.DISC.AR(B2:B8, D2:E8)` in cell F9. The result is -0.797, which is similar to the AR of the ungrouped data. The `TEST`, `LABELS`, and `TRANSPOSE` options can be used as before. ## Final remarks If you find QRS.DISC.AR useful, please share this page with other potential users.
wynntee
911,011
Accessible card component with pure (S)CSS (no JavaScript, the pseudo-content trick)
When creating the card component, sometimes it’s advisable (or required by design) to make the whole...
0
2021-11-27T23:51:55
https://www.damianwajer.com/blog/accessible-card-component/
css, a11y, tutorial, webdev
When creating the card component, sometimes it’s advisable (or required by design) to make the whole card clickable. But how to do so without compromising the usability? Below I share a useful pseudo-content trick to make the whole card clickable and maintain its accessibility. ## Problem statement * the whole card needs to be clickable * within the card there is also a “read more” link * inside a card, there are other separate links to different URLs * you don’t want to harm the usability, e.g. allow the user to open all links in a new tab with a mouse right-click (context menu on touch devices) * support custom styles for hover and focus states * one last requirement: the user should be able to select and copy the text within a card to the clipboard How would you approach this task? Just a regular card component wrapped in an `a` element? Or maybe `onclick` in JavaScript directly on a `div` or `article` element (don’t do that!)? How to handle such a case and maintain the accessibility in a simple and elegant way? ## Possible solution 1. Set `position: relative` on the container element 2. Set `position: absolute` on the link’s `:after` pseudo-content 3. Set value of `0` for `top`, `right`, `bottom`, and `left` properties on link’s `:after` pseudo-content 4. Combine it with `:focus-within` and `:hover` to style different states 5. Enhance it even further and make the text selectable with `z-index` 6. If you want to add other links inside a card, use styles from `card__separate` to make it selectable (and/or clickable). ### HTML ```html <div class="card"> <p> <a href="#optional" class="card__separate">Optional</a> </p> <p> <span class="card__separate">Lorem ipsum</span> </p> <a href="#card-link" class="card__link">Link</a> </div> ``` ### SCSS ```scss .card { position: relative; border: 3px solid green; // Style hover and focus states. &:hover, &:focus-within { border-color: red; } // Make the content selectable. 
&__separate { position: relative; z-index: 2; } // Make the whole card clickable. &__link::after { position: absolute; top: 0; right: 0; bottom: 0; left: 0; content: ""; } } ``` ## Demo Check the clickable area and try tab keyboard navigation in the following example: {% jsfiddle https://jsfiddle.net/damianwajer/hbv2zajs result,html,css %} ## Credits & further reading I first saw the pseudo-content trick on the [Inclusive Components](https://inclusive-components.design/) website. Check it out for other solutions to card issues and more inclusive components examples.
damianwajer
911,369
UI Dev Newsletter #85
Links Backgrounds Chrome Developers share a module where you will learn how to style text...
0
2021-12-02T14:16:56
https://mentor.silvestar.codes/reads/2021-11-29/
html, css, javascript, webdev
## Links [Backgrounds](https://bit.ly/3Ed1VZO) Chrome Developers share a module where you will learn how to style the backgrounds of boxes using CSS. [My Custom CSS Reset](https://bit.ly/3xtz2pK) Josh Comeau shares his custom CSS reset and explains every rule in detail with examples. [How I made Google’s data grid scroll 10x faster with one line of CSS](https://bit.ly/3E1HvDc) Johan Isaksson describes how he improved Google Search Console page scrolling by making it 10x faster with a single line of CSS. [Modern CSS in a Nutshell](https://bit.ly/3nVhCPL) Scott Vandehey shares a high-level overview of the current state of CSS. [Using Position Sticky With CSS Grid](https://bit.ly/32tpXBV) Ahmad Shadeed explains why position: sticky isn’t working as expected with a child of a grid container. [Make your website stand out with a custom scrollbar](https://bit.ly/3E0YNAv) Estee Tey describes how to re-create the CSS Tricks scrollbar. [Cross-fading any two DOM elements is currently impossible](https://bit.ly/310jfDj) Jake Archibald shows why cross-fading is not working when neither element is fully opaque. [Squoosh](https://bit.ly/3cUPgia) Squoosh is an image optimizer that allows you to compress and compare images with different codecs in your browser. **Happy coding!** [Subscribe to the newsletter here!](https://buttondown.email/starbist)
starbist
913,737
Building a scraping tool with Python and storing it in Airtable (with real code)
A startup often needs extremely custom tools to achieve its goals. At Arbington.com we've had to...
15,325
2021-12-30T00:27:30
https://dev.to/kalobtaulien/building-a-scraping-tool-with-python-and-storing-it-in-airtable-with-real-code-4pbl
python
A startup often needs extremely custom tools to achieve its goals. At [Arbington.com](https://arbington.com) we've had to build scraping tools, data analytics tools, and custom email functions. > None of this required a database. We used files as our "database" but mostly we used Airtable. ## Scrapers Nobody wants to admit it, but scraping is pretty important for gathering huge amounts of useful data. It's frowned upon, but frankly, everyone does it. Whether they use an automated tool, or manually sift through thousands of websites to collect email addresses - most organizations do it. In fact, scraping is what made the world's best search engine: Google. And in Python, this is REALLY easy. The hardest part is reading through various forms of HTML, but even then, we have a tool for that. Let's take a look at an example that I've adjusted so you can scrape my website. We'll use [https://kalob.io/teaching/](https://kalob.io/teaching/) as the example and get all the courses I teach. First, we look for a pattern in the DOM. Open up that page, right click, inspect element, and look for all the blue buttons. You'll see they all have `class="btn btn-primary"`. Interesting, we've found a pattern. Great! We can work with that. Now let's jump right into the code. And if you're a Python dev, feel free to paste this into your terminal. ```python import requests response = requests.get("https://kalob.io/teaching/") print(response.content) ``` You'll see the HTML for my website. Now, all we need to do is parse the HTML. > Note: utf-8 encoding is most commonly used on the internet. So we'll want to decode the HTML we scraped into utf-8 compatible text (in a giant string). Our code now looks like this: ```python import requests response = requests.get("https://kalob.io/teaching/") html = response.content.decode("utf-8") print(html) ``` And you'll see the HTML looks a little nicer now. Now here's a big hairy problem: parsing HTML. 
Some people use `attr=""`, some people use `attr=''`, some people use XHTML, and some don't. So how do we get around this? Introducing: Beautiful Soup 4. In your Python environment, pip install this package: ```bash pip install beautifulsoup4 ``` And your code now looks like this: ```python import requests response = requests.get("https://kalob.io/teaching/") html = response.content.decode("utf-8") import bs4 # You'll need to `pip install beautifulsoup4` soup = bs4.BeautifulSoup(html, "html.parser") print(soup) # Shows the parsed HTML print(type(soup)) # Returns <class 'bs4.BeautifulSoup'> ``` So our `soup` variable is no longer a string, but an object. This means we can use object methods on it - like looking for certain elements in the HTML we scraped. Let's put together a list of all the links on this page. ```python import requests response = requests.get("https://kalob.io/teaching/") html = response.content.decode("utf-8") import bs4 # You'll need to `pip install beautifulsoup4` soup = bs4.BeautifulSoup(html, "html.parser") courses = soup.findAll("a", {"class": ["btn btn-primary"]}) print(courses) ``` Look at that! Now we have a list of buttons from the page we scraped at the beginning of this article. Lastly, let's loop through them to get the button text and the link: ```python for course in courses: print(course.get("href")) print(course.text.strip()) print("\n") ``` Listen, I wrote 3 print statements to make this clear - but typically I'd write this in a single line. Now we have something to work with! We have the entire HTML element, the `href` attribute, and the `innerText` without any whitespace. 
The entire script is 8 lines of code and looks like this: ```python import requests import bs4 # You'll need to `pip install beautifulsoup4` response = requests.get("https://kalob.io/teaching/") html = response.content.decode("utf-8") soup = bs4.BeautifulSoup(html, "html.parser") courses = soup.findAll("a", {"class": ["btn btn-primary"]}) for course in courses: print(f"{course.get('href')} -> {course.text.strip()}") ``` ## Moving this data somewhere useful. You know me, I'm a HUGE fan of Airtable. And instead of using a local database or a cloud-based database, I like to use Airtable so my team and I can work with the data and easily expand the tables if we need to. Like if we needed to add a column to see if a course meets our criteria to be on Arbington.com. For this we use Airtable's API and the Python package known as `airtable-python-wrapper`. Go ahead and install this through pip. ```bash pip install airtable-python-wrapper ``` Now before we continue, you'll need a [free Airtable account](https://airtable.com/invite/r/JXx8l4fX) 👈 that's our referral link. No need to use it, it's just a nice kickback for us for constantly promoting Airtable 😂 Once you have an account, you need to dig up your base ID (the `app...` string), your table name, and your API key (the `key...` string). It would look something like this in Python: ```python from airtable.airtable import Airtable airtable = Airtable('appXXXXXXXXX', 'Links', 'keyXXXXXXXXXX') ``` Lastly, all we need to do is create a dictionary of Airtable column names, and insert the record. 
```python import requests import bs4 # You'll need to `pip install beautifulsoup4` from airtable.airtable import Airtable response = requests.get("https://kalob.io/teaching/") html = response.content.decode("utf-8") soup = bs4.BeautifulSoup(html, "html.parser") courses = soup.findAll("a", {"class": ["btn btn-primary"]}) airtable = Airtable('appXXXXXXXXX', 'Links', 'keyXXXXXXXXXX') for course in courses: new_record = { "Link": course.get('href'), "Text": course.text.strip(), } airtable.insert(new_record) ``` Assuming you set up your Airtable columns, table and API keys properly, you should see my website links and URLs appear in your Airtable. Now you and your team can scrape webpages and store the data in Airtable for the rest of your team to use! ## Pulling data out to work with it Now that all the data we want is in Airtable, we can use the same Python package to pull the data out, work with it, scrape more data, and update each record. But that's for another day 😉 ## Want to learn Python? If you're looking for online courses, take a look at Arbington.com, [there are over 40 Python courses](https://arbington.com/search/?q=python) available. And it comes with a free 14-day trial to access over 1,500 courses immediately! 🔥
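P.S. As a quick preview of the "pulling data out" follow-up: under the hood, the wrapper calls Airtable's REST list-records endpoint. A hypothetical standard-library-only sketch (the base ID, table name and key are placeholders, and this only fetches the first page of records, not the wrapper's actual code):

```python
import json
import urllib.parse
import urllib.request

API_ROOT = "https://api.airtable.com/v0"

def list_records_request(base_id: str, table: str, api_key: str) -> urllib.request.Request:
    """Build the authenticated GET request for Airtable's REST
    'list records' endpoint for the given base and table."""
    url = f"{API_ROOT}/{base_id}/{urllib.parse.quote(table)}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

def fetch_links(base_id: str, table: str, api_key: str):
    """Return (Link, Text) pairs from the first page of records;
    Airtable pages at 100 records via an 'offset' token."""
    with urllib.request.urlopen(list_records_request(base_id, table, api_key)) as resp:
        records = json.load(resp)["records"]
    return [(r["fields"].get("Link"), r["fields"].get("Text")) for r in records]
```

A full implementation would loop while the response contains an `offset` token, but this is enough to get the scraped links back into Python.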
kalobtaulien
913,759
What type of developer would a startup hire?
"The world has a shortage of developers." You might have heard this before. It's fairly common for...
15,325
2022-01-04T16:39:53
https://dev.to/kalobtaulien/what-type-of-developer-would-a-startup-hire-3bbp
"The world has a shortage of developers." You might have heard this before. It's fairly common for people to say this. It's kind of true, but not really... Here's the truth. "The world has a shortage of **[senior]** developers." That's because companies don't like hiring junior or intermediate developers - they cost more to get up to speed. And the average developer only stays at a job for 2 years. But also... senior developers are _expensive_. At [Arbington.com](https://arbington.com/) we would love to hire a senior developer, but it'd also cost us like $120,000-$150,000 for the best possible candidate. And that's not an option at a lot of startups. So they hire intermediate developers, if possible. And if that's not an option, then they hire juniors. And they'll label the junior job as an "internship". Don't be fooled, interns **need** to make money too. I would never EVER advise anybody to work for free. But minimum wage if it's your first job? Possibly. So this begs the question... ## What are startups looking for in a developer? **Skills**, of course. Can you code? Great! Are you the world's best coder? It doesn't matter if you are or if you aren't. Not sure what stack they use? Just ask! Tech stacks aren't secrets. **But fullstack is key** because a full stack developer is basically a backend dev _and_ a frontend dev at the same time. They aren't usually AMAZING at either, but are super flexible and dangerous with what they're given. **Leadership** attributes. When joining a startup, you're not just a developer - you're a core part of the team! You help develop the future culture of the company, even if you are employee #20. **Adaptability**. Startups move FAST. You need to be able to adjust. If your dream is to work for the government, a startup is absolutely _not_ for you. **Flexible pay**. Startups are cash-poor, usually. Unless they just raised a few million dollars you're likely looking at a lower pay rate. 
But you can get **stock options** - so if you're really in love with what they do as a business, and you think it's going somewhere, stock options are a great way to support the company and get paid at the same time. **A curious personality** is also vital. Be curious about how to help the startup. Be curious about how to level up your code. Be curious about life in general! Because your curiosity is going to help develop the company culture, and drive you to become an insanely amazing developer. ## Do you need to be the best? No! But you do need to have a lot of the soft skills and desire to become better at the technical portions. ## How do you get in? I've mentioned this before in a [Twitter thread I wrote](https://twitter.com/KalobTaulien/status/1465785129826078721), but I'll say it again: > Resumes are dead. Friends hire friends. Get to know the founders. Email them, support them, offer them help.. earn your way into their circles. Actually, this is just good advice for any job. But once they get to know you a little bit they'll be more likely to hire you. Why? Because you aren't seen as a "risk" anymore, but rather you'll be seen as a friend to the company - and a GREAT candidate to hire. - I hope this was helpful! If you found any value in this article I would love if you could share it or follow me here on Dev.to.
kalobtaulien
913,900
Weekly web development resources #98
Twind A small, fast, most feature complete tailwind-in-js solution. Using...
0
2021-12-01T06:52:12
https://dev.to/vincenius/weekly-web-development-resources-98-3o6i
weekly, webdev
______ ##[Twind](https://twind.dev/) [![Twind](https://wweb.dev/weekly/content/98/twind.jpg)](https://twind.dev/) A small, fast, most feature complete tailwind-in-js solution. ______ ##[Using Emojis in HTML, CSS, and JavaScript](https://www.kirupa.com/html5/emoji.htm) [![Using Emojis in HTML, CSS, and JavaScript](https://wweb.dev/weekly/content/98/emojis.jpg)](https://www.kirupa.com/html5/emoji.htm) An article on how to use emojis in your web documents. ______ ##[nnnoise](https://fffuel.co/nnnoise/) [![nnnoise](https://wweb.dev/weekly/content/98/nnnoise.jpg)](https://fffuel.co/nnnoise/) A SVG generator for subtle or not-so-subtle noise textures. ______ ##[Icons in Pure CSS](https://antfu.me/posts/icons-in-pure-css) [![Icons in Pure CSS](https://wweb.dev/weekly/content/98/icons-css.jpg)](https://antfu.me/posts/icons-in-pure-css) An article on how to use any icons on-demand in purely CSS. ______ ##[Image Optimizer](https://github.com/antonreshetov/image-optimizer) [![Image Optimizer](https://wweb.dev/weekly/content/98/image-optimizer.jpg)](https://github.com/antonreshetov/image-optimizer) A free and open source tool for optimizing images and vector graphics. ______ ##[42 things I learned from building a production database](https://maheshba.bitbucket.io/blog/2021/10/19/42Things.html) [![42 things I learned from building a production database](https://wweb.dev/weekly/content/98/database.jpg)](https://maheshba.bitbucket.io/blog/2021/10/19/42Things.html) A nice article by Mahesh Balakrishnan. ______ ##[a11y myths](https://a11ymyths.com/) [![a11y myths](https://wweb.dev/weekly/content/98/a11y-myths.jpg)](https://a11ymyths.com/) A small website debunking common accessibility myths. ______ ##[EmailTimer](https://easytimer.app/) [![EmailTimer](https://wweb.dev/weekly/content/98/timer.jpg)](https://easytimer.app/) A free countdown timer for your emails. 
______ ##[Pexel](https://www.pexel.xyz) [![Pexel](https://wweb.dev/weekly/content/98/pexel.jpg)](https://www.pexel.xyz) A Canva style Social Media Kit for creating quick social media creatives. ______ To see all the weeklies check: [wweb.dev/weekly](https://wweb.dev/weekly)
vincenius
913,915
Add members to the .md file using github action
My Workflow This workflow reads all the files in the members folder and adds them to a table in the...
0
2021-12-01T07:28:42
https://dev.to/abhigoyani/add-members-to-the-md-file-using-github-action-mcf
actionshackathon21
### My Workflow This workflow reads all the files in the members folder and adds them to a table in the Members.md file. It runs on the push event. [GitHub Repo](https://github.com/abhigoyani/githubActionToAddMembersToFile) ### Submission Category: Wacky Wildcards ### Yaml File or Link to Code Workflow: ```YAML name: Add members to .md file on: push: branches: [main] jobs: build: runs-on: ubuntu-latest steps: - name: checkout repo content uses: actions/checkout@v2 # checkout the repository content to github runner - name: setup python uses: actions/setup-python@v2 with: python-version: '3.7.7' # install the python version needed - name: execute py script # run add.py to rebuild the members table run: python add.py - name: Commit files run: | git config --local user.email "github-actions[bot]@users.noreply.github.com" git config --local user.name "github-actions[bot]" git commit -m "Added member" -a - name: Push changes uses: ad-m/github-push-action@master with: github_token: ${{ secrets.GITHUB_TOKEN }} branch: ${{ github.ref }} ``` add.py : ```python import os users = [] data = '<td align="center"><a href="https://github.com/{username}"><img src="https://github.com/{username}.png" width="100px;" alt=""/><br /><sub><b>{name}</b></sub></a></td>\n' for filename in os.listdir("members"): with open(os.path.join("members", filename), 'r') as f: text = f.readlines() users.append({"name":text[1][7:-1],"username":text[2][11:-1]}) with open("Members.md","w") as f: f.write("<table>\n") for i in range(len(users)): if(i%10 == 0): f.write("<tr>\n") f.write(data.format(name=users[i]["name"],username=users[i]["username"])) if((i+1)%10 == 0): f.write("</tr>\n") f.write("</table>") ``` ### Additional Resources / Info We are using this action in [BauddhikGeeks](https://github.com/Bauddhik-Geeks/Welcome-to-Bauddhik-Geeks)
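As a side note, the row-grouping idea used in `add.py` can be illustrated on its own. The helper below is a hypothetical standalone sketch for this post (not part of the submitted workflow), which also closes the final `<tr>` when the member count is not a multiple of ten:

```python
def members_table(users, per_row=10):
    """Build an HTML table, starting a new <tr> every `per_row` members."""
    cell = '<td align="center">{name}</td>\n'
    parts = ["<table>\n"]
    for i, user in enumerate(users):
        if i % per_row == 0:
            parts.append("<tr>\n")  # start a new row every `per_row` members
        parts.append(cell.format(name=user["name"]))
        if (i + 1) % per_row == 0:
            parts.append("</tr>\n")  # close a completed row
    if users and len(users) % per_row != 0:
        parts.append("</tr>\n")  # close the last, partially filled row
    parts.append("</table>")
    return "".join(parts)
```

Calling `members_table` with twelve members produces two rows: one full row of ten and a closed partial row of two.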
abhigoyani
913,941
Bragging about what I've done with Rust, 2021
At first I meant to write this in preparation for a talk at Rustacean Bangkok 2.0.0, but I'm not free on the day of the event, so I'm collecting it into a blog...
0
2021-12-01T09:27:38
https://dev.to/veer66/omwaathamaairekiiywkab-rust-baang-33ib
--- title: Bragging about what I've done with Rust, 2021 published: true description: tags: //cover_image: https://direct_url_to_image.jpg --- At first I meant to write this in preparation for a talk at Rustacean Bangkok 2.0.0, but I'm not free on the day of the event, so I'm collecting it into a blog post for now. I'll recount what I've done with the Rust language; it won't be exhaustive, as I'm writing things down as I remember them. ## 2014 I first wrote Rust in 2014, at least as far as the code I put in this [gist](https://gist.github.com/veer66/43567e674922667ac25b): ```Rust #[deriving(Encodable, Decodable, Show, Clone)] pub struct Node { pub snode: ~[~[Range]], pub children: Option<~[Option<Node>]>, } ``` This was before Rust 1.0 came out, so the syntax looks quite different from today's. I mostly used it for work storing string-tree alignments, but let's skip the details of that. ## 2015 In 2015 I thought about a basic program that gets used all the time: a word tokenizer. If you want high accuracy, just use deepcut; if you want high accuracy but a bit more speed, use attacut; but if you want speed: libthai, chamkho, icu. In [chamkho](https://github.com/veer66/chamkho) I split tokenization into 3 steps: 1. Build a directed acyclic graph (DAG) of every possible tokenization 2. Find the shortest path on the DAG from step 1 3. Decode the path from step 2 into the string positions to cut at The part I keep adjusting is step 1, which I break down like this: 1.1 Create edges from dictionary words that match a sub-string 1.2 Create edges 1.3 Remove edges that conflict with cluster rules, i.e. substrings that must not be split (this step was added in 2021) That's all there is to it. To back up a little: in late 2002 I started building a tokenizer written in C, because I had tried NLTK, written entirely in Python, and it ran too slowly. Still, I thought the application itself should be in an easy-to-write language like Python, Ruby, or Lisp. The limitations of the tokenizers of that era were: 1. Many were written in C++, which I couldn't write even after studying it for a long time 2. They weren't well suited to binding with Ruby and other languages 3. 
Editing the word list and the rules was hard. So thaiwordseg put the word list in a file, used regexes for the rules, and shipped with a Ruby binding. You can find the project on [sourceforge](https://sourceforge.net/projects/thaiwordseg/files/wordcut/0.0.7/). There are still packages of this project on SUSE Linux, too. In late 2015 I gave a talk about Rust at Barcamp Bangkhen, roughly following [this document](https://gist.github.com/veer66/895c04528b2b7dccefaa). ## 2016 In 2016 I talked about Rust at the Thai Mozilla group; some [slides](https://file.veer66.rocks/rust-moz-20160312/present.html) still survive. In 2016 I founded a [Facebook group](https://www.facebook.com/groups/RustLangTH) for Thai Rustaceans. If I remember correctly, I modeled it on the Thai Clojure group, since having a group around seemed like a good thing. These days the RustLangTH group is looked after mainly by Nui Narongwet and Wasawat Somno; my deepest thanks to them. ## 2017 In 2017, together with [@iporsut](https://dev.to/iporsut), I benchmarked Rust used for tokenization with a word list, not yet using regexes. The conclusion: Rust was faster than Go and Java; the version written in Rust took only 60% of Go's running time, never mind the others that were slower still. As for Python, JavaScript, and Clojure, they were very slow, off by multiples. [See more details](https://veer66.rocks/posts/A-benchmark-of-Thai-word-tokenizers-written-in-various-programming-languages.html) In 2017 I met up with Thai Rustaceans once again, [per this blog](https://veer66.wordpress.com/2017/01/2). In 
2017 I founded a [Thai Rust group on Telegram](https://t.me/rustthai). For both the Telegram and the Facebook group I have handed over all admin rights, so even if I died right now there would certainly be no impact. In 2017 I fixed a bug in Servo, noted briefly in the [Servo blog](https://blog.servo.org/2017/02/06/twis-91/). It turned out to be a trivially simple bug: variables passed in swapped. Rust lets you declare distinct types so the compiler can catch swapped arguments, but in real-world use nobody declares types like that; it's i32 and u32 all over the place, and if you swap those the compiler can't catch it anyway. Between writing Python and Rust, I feel I still produce plenty of bugs in Rust. You could say I'm just dumb, but at least that's the controlled variable: I'm equally dumb whether I'm writing Python or Rust 😛 ## 2018 In 2018 I mostly improved old programs, and started writing an HTTP server to wrap the tokenizer, using Hyper. It probably doesn't run anymore; it looked like the code below. ```Rust impl Service for WordcutServer { type Request = Request; type Response = Response; type Error = hyper::Error; type Future = WebFuture; fn call(&self, req: Request) -> Self::Future { match (req.method(), req.path()) { (&Post, "/wordseg") => wordseg_handler(req), (&Post, "/dag") => dag_handler(req), _ => not_found(req) } } } ``` ## 2019 In 2019 it was mostly string-tree code like five years earlier, but the style had changed a lot. Roughly like the code below. ```Rust quick_error! { #[derive(Debug)] pub enum TextunitLoadingError { CannotLoadToks(lang: LangKey, err: Box<Error>) { } CannotLoadLines(lang: LangKey, err: Box<Error>) { } CannotLoadLinks(err: Box<Error>) { } CannotAlignToks(lang: LangKey, line_no: usize, err: Box<Error>) { } } } #[derive(Debug)] pub struct Textunit { pub bi_text: BiText, pub bi_rtoks: BiRToks, pub links: Vec<Link>, } ``` ## 2020 In 2020 I ported the Thai full-text search extension for PostgreSQL by [zdk](https://github.com/zdk) (Di Warachet S.) 
to Rust. I don't know how zdk sees it, but in my view, deploying something like ElasticSearch just to get basic full-text search, or even to query JSON documents, on a website with fewer than 100 concurrent users adds work and consumes a lot more RAM and SSD; being able to use PostgreSQL instead is probably better. I named the Rust version [chamkho-pg](https://github.com/veer66/chamkho-pg). In 2020 I also wrote a JSON parser for GNU Guile named guile-json-parse. ## 2021 In 2021 I modified the tokenizer to support regular expressions in the style of newmm and nlpo3, but found that nearly everything I had ever optimized evaporated, because fancy regexes in Rust are not faster than Python's regex engine, which is written in C 😰 To keep the speed I might have to write the rules directly in Rust the way chamkho used to, or follow libthai, which has rules similar to newmm's but written in C. That approach is certainly fast, but tiring to modify, and adding a new language is heavy going. Rust, however, has a third way out: Andrew Gallant's regex-automata, which, as the name says, converts a regex into automata the way it should be done. It can't accept newmm's extended regexes, though, so I had to hack the regexes a bit. With that done, newmm running on a Xeon takes 9.65 times the running time of chamkho. I think I should write about profiling as well, but I'll split that into another post. Another language that has become much faster lately, and for me is much easier to write than Rust, is Julia, but I'm stuck on 3 things: 1. there is no regex-automata on Julia 2. Julia's runtime is slow to start 3. building a C-imitating library is a hassle, which makes it hard to use with cffi in Common Lisp or with Ruby. Common Lisp is another language that, heavily optimized, is about as fast as Julia, but it runs into exactly the same issues. This leads me to conclude that Rust, beyond the language and the compiler, is a community that puts out a great many good libraries, many of them hard to find in other languages. In 
2021, another thing: to make binding to other languages easy, chamkho now has a wrapper that turns it into a fake C-language library, named [wordcutw](https://github.com/veer66/wordcutw). With this done, binding to Ruby or any other language requires no new thinking at all; you can just use FFI. The advantage of using FFI or cffi is that it isn't tied to any one implementation. For example, Common Lisp may run on SBCL or Clozure CL, but if you bind to wordcutw via cffi, it works on both SBCL and Clozure CL without changing any code. In 2021 I also thought we ought to chat over an open, decentralized protocol for a change, so I opened a [Thai Rust group on Matrix](https://matrix.to/#/#rust-lang-th:matrix.org). Another thing I did in 2021 was compare the speed of Apache Arrow and Pandas. The result: well-written Pandas can be fast, but Arrow is faster, with Rust faster still, and you don't have to watch out for conversions back and forth between Rust and Python types. [See more details](github.com/veer66/arrow-exper1) Other things in 2021: I wrote a parser for the Apertium stream format using Nom, which makes writing parsers much easier; and I ported the Attacut program to Rust under the name Khatson. It turned out no faster at all, but it taught me that tch in Rust, which wraps PyTorch, makes writing deep learning code about as convenient as using PyTorch itself, and for me it is less confusing than using PyTorch directly. ## The state of Rust in Thailand As for the current situation, I think Rust has taken off. The public's favorite companies are all Rust Foundation members: AWS, Facebook, Google, Huawei, Microsoft, Mozilla; even celebrities write Rust now. Thailand certainly can't resist the global current. The most concrete sign is that Jojo from the KubeOps Skills channel has made Thai-language videos teaching Rust; I think that's great. And as mentioned at the start, Natechewin Suthison is organizing [Rustacean Bangkok 2.0.0](https://m.facebook.com/story.php?story_fbid=4900432456656105&id=100000681975712&sfnsn=mo) on 13 December 2021; everyone can see the details via the [link to Facebook](https://m.facebook.com/story.php?story_fbid=4900432456656105&id=100000681975712&sfnsn=mo).
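As a footnote to the three-step tokenization described in the 2015 section, here is a toy sketch of my own for this post (not chamkho's actual code): the DAG edges are dictionary matches over substrings, and the shortest path, taken here as the fewest words, is found with simple dynamic programming and then read back into tokens.

```rust
use std::collections::HashSet;

/// Toy dictionary-based segmenter illustrating the three steps:
/// 1) DAG edges = dictionary words matching substrings,
/// 2) shortest path (fewest words) from position 0 to the end,
/// 3) read the path back into cut positions / tokens.
fn segment(text: &str, dict: &HashSet<&str>) -> Option<Vec<String>> {
    let chars: Vec<char> = text.chars().collect();
    let n = chars.len();
    // best[i] = Some((cost, prev)): cheapest known way to reach position i
    let mut best: Vec<Option<(usize, usize)>> = vec![None; n + 1];
    best[0] = Some((0, 0));
    for i in 0..n {
        let (cost, _) = match best[i] {
            Some(b) => b,
            None => continue, // position i is unreachable
        };
        for j in (i + 1)..=n {
            let word: String = chars[i..j].iter().collect();
            // steps 1+2 fused: relax the edge i -> j if the substring is a word
            if dict.contains(word.as_str()) && best[j].map_or(true, |(c, _)| cost + 1 < c) {
                best[j] = Some((cost + 1, i));
            }
        }
    }
    best[n]?; // no path covers the whole text
    // step 3: walk predecessors back from the end to recover cut positions
    let mut cuts = vec![n];
    while *cuts.last().unwrap() > 0 {
        let (_, prev) = best[*cuts.last().unwrap()].unwrap();
        cuts.push(prev);
    }
    cuts.reverse();
    Some(
        cuts.windows(2)
            .map(|w| chars[w[0]..w[1]].iter().collect())
            .collect(),
    )
}
```

For example, with the dictionary {"ab", "c", "abc", "d"}, `segment("abcd", &dict)` picks the two-word path ["abc", "d"] rather than the three-word one, and returns `None` when no full tokenization exists.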
veer66
913,997
November 21' Month Updates for Developers 🚀
NEW! This is the November 2021 release announcement. Here is a list of all new enhancements and...
0
2021-12-01T15:03:03
https://www.videosdk.live/blog/november-month-updates
--- title: November 21' Month Updates for Developers 🚀 published: true tags: webrtc, showdev, webdev, beginners canonical_url: https://www.videosdk.live/blog/november-month-updates --- ![November 21' Month Updates for Developers 🚀](http://blog.videosdk.live/content/images/2021/12/November-updates.jpg) **NEW! This is the November 2021 release announcement. Here is a list of all new enhancements and product updates on Video SDK.** **1. User dashboard** - Now **download chat** for new sessions as a CSV file. - **Domains** prefixed with **www** are now automatically **allowed**. - **New guide links** in quickstart and overview page. ![November 21' Month Updates for Developers 🚀](http://blog.videosdk.live/content/images/2021/12/https___s3.amazonaws.com_appforest_uf_f1628685577359x338695008335103200_giphy-2.gif) **2. RTC Javascript prebuilt v0.1.17** - The **prebuilt website** code is now [open-sourced](https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-ui) and available on our Github repo for **contribution** or use in your own website: [videosdk-live/videosdk-rtc-react-prebuilt-ui](https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-ui). **3. Flutter SDK v0.0.8** - **Toggle other participants' mic or webcam.** - **Remove any participant** from the meeting. - **Pause and resume** incoming participant video and audio **streams**. - Set **incoming video** stream quality to **low, med or high**. - **Presenter change event** available for screenshare from other platforms. **4. Android SDK v0.0.4** - **Toggle other participants' mic or webcam.** - **Remove any participant** from the meeting. - **Start cloud recording** for the meeting. - **Livestream** meeting to **Youtube and other RTMP** supported platforms. **5. iOS SDK v1.0.0** - **Toggle other participants' mic or webcam.** - **Remove any participant** from the meeting. - **Start cloud recording** for the meeting. - **Livestream** meeting to **Youtube and other RTMP** supported platforms. **6. 
React Native SDK v0.0.18** - **Support for projects ejected from expo.** **7. Code Samples** - Code sample released for [React Prebuilt UI](https://github.com/videosdk-live/videosdk-rtc-react-prebuilt-ui) and [React Native UI](https://github.com/videosdk-live/videosdk-rtc-react-native-prebuilt-ui) - (Experimental) [Prebuilt webview for Android](https://github.com/videosdk-live/videosdk-rtc-android-prebuilt-webview-example): ideal for one-to-one calls and small meetings with up to 6 on-screen participants. - (Experimental) [Prebuilt webview for iOS](https://github.com/videosdk-live/videosdk-rtc-ios-prebuilt-webview-example): ideal for one-to-one calls and small meetings with up to 6 on-screen participants. **8. Rest APIs** - Meeting **chat CSV file link** now available in [Meeting sessions API](https://docs.videosdk.live/docs/realtime-communication/rest-api-reference/list-meeting-sessions) **9. Documentation & Support** We are going live **every Tuesday** providing a live quickstart demo for all of our SDKs one by one. Here are the links for the previously hosted events. Stay tuned for more events on our [Linkedin page](https://www.linkedin.com/company/video-sdk/events/). **Build a Video Conferencing App in Flutter** {% youtube jvzE4j1Pj2Q %} **Create a Low Latency Live Streaming App in React Native** {% youtube BQ1vWEC5WrE %} **Plug & Play Live Shopping in E-commerce Website** {% youtube ZMLMBmkSwDA %} **Build Video Calling in any No-code Platform within 5 Minutes** {% youtube sI6vJGc2XH0 %} **Website & Support** - Join our [Discord](https://discord.gg/f2WsNDN9S5) Community You can always [connect with us](https://videosdk.live/contact) in case of any query or help. We are happy to assist you. Thanks for reading.
sagarkava
914,029
N, manage your Node versions easily
Before sharing a Node.js tool you should consider these things: Your tool has no bugs Your tool has...
0
2021-12-01T13:33:30
https://dev.to/gatomo_oficial/n-manage-easily-your-node-versions-f8k
javascript, node, tutorial, productivity
Before sharing a Node.js tool you should consider these things: - Your tool has no bugs - Your tool has documentation - **Your tool has compatibility between versions** Compatibility is important to keep in mind. Developers need different versions according to their needs, so your tool must support several versions. ## The problem is... The problem is that you need to install different versions to test it, and downloading and running the Node installer for each version takes time. Fortunately there are a lot of tools to manage versions quickly. Today I'm going to talk about **N, a simple Node version manager.** --- ## What is N? N is a really simple Node version manager. It helps you switch between versions with a single command. N supports Linux and macOS, but not Windows, unless you use [WSL](https://en.m.wikipedia.org/wiki/Windows_Subsystem_for_Linux). --- ## Here starts a short tutorial OK, you know what N is and why you need it. Now let's install it and learn some commands. ### Installation Install N globally with your favorite package manager. ![installing N](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wjwz87b3gbjhhad7l05y.png) Now you can use the N CLI with `n`. ### Install versions Install a version with `n <version>`. ![installing node from N](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b3le94f14e6e6niq5p5e.png) You can also put `latest` or `current` instead of `lts`. Once a version is installed, N caches it so it is available offline at any moment. Similar to Yarn with node modules 🧵 ### View installed versions If you have installed several versions (e.g. 16.5.0 and 14.18.2) you can view a list of cached versions and select which one to install. You should see something like this ![view and install](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lyqvfxejze2vr22y7x1c.png) Use the arrow keys to change versions, and press enter to install. 
### Uninstall versions If you want to remove some specific versions or the whole cache, you can use the `rm` and `prune` commands, respectively. ![Uninstall](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zmjykzfaemwxckqg6330.png) ## Does it really work? Yes, it works without problems. You can run `node --version` and check. --- Congratulations 🥳! You now have a useful tool to manage Node versions. You can check all the commands on the [NPM page](https://www.npmjs.com/package/n) and view the source code in the [GitHub repository](https://github.com/tj/n) 🦑 Are you going to use N? Do you prefer another version manager? Tell me in the comments whatever you want 😄
gatomo_oficial
914,049
Setup-Cpp
setup-cpp Install all the tools required for building and testing C++/C...
0
2021-12-06T16:51:47
https://dev.to/aminya/setup-cpp-3ia4
actionshackathon21, cpp, github, devops
# setup-cpp Install all the tools required for building and testing C++/C projects. ![Build Status (Github Actions)](https://github.com/aminya/setup-cpp/workflows/CI/badge.svg) Setting up a **cross-platform** environment for building and testing C++/C projects is a bit tricky. Each platform has its own compilers, and each of them requires a different installation procedure. This package aims to fix this issue. `setup-cpp` can be used locally from terminal, from CI services like GitHub Actions and GitLab Pipelines, and inside containers like Docker. `setup-cpp` is supported on many platforms. It is continuously tested on several configurations including Windows (11, 10, 2022, 2019), Linux (Ubuntu 22.04, Ubuntu 20.04, Fedora, ArchLinux), and macOS (12, 11, 10.15). `setup-cpp` is backed by unit tests for each tool and integration tests for compiling cpp projects. ## Features `setup-cpp` is **modular** and you can choose to install any of these tools: | category | tools | | --------------------- | ------------------------------------------------------------ | | compiler and analyzer | llvm, gcc, msvc, vcvarsall, cppcheck, clangtidy, clangformat | | build system | cmake, ninja, meson, make, task, bazel | | package manager | vcpkg, conan, choco, brew, nala | | cache | cppcache, sccache | | documentation | doxygen, graphviz | | coverage | gcovr, opencppcoverage, kcov | | other | python, powershell, sevenzip | `setup-cpp` automatically handles the dependencies of the selected tool (e.g., `python` is required for `conan`). ## Usage ### From Terminal #### With npm and Nodejs Run `setup-cpp` with the available options. 
```shell # Windows example (open PowerShell as admin) npx setup-cpp --compiler llvm --cmake true --ninja true --ccache true --vcpkg true RefreshEnv.cmd # activate the environment ``` ```shell # Linux/Macos example sudo npx setup-cpp --compiler llvm --cmake true --ninja true --ccache true --vcpkg true source ~/.cpprc ``` NOTE: In the `compiler` entry, you can specify the version after `-` like `llvm-11.0.0`. For the tools, you can pass a specific version instead of `true` that chooses the default version NOTE: On Unix systems, when `setup-cpp` is used locally or in other CI services like GitLab, the environment variables are added to `~/.cpprc`. You should run `source ~/.cpprc` to immediately activate the environment variables. This file is automatically sourced in the next shell restart from `~/.bashrc` or `~/.profile` if `SOURCE_CPPRC` is not set to `0`. To deactivate `.cpprc` in the next shell restart, rename/remove `~/.cpprc`. NOTE: On Unix systems, if you are already a root user (e.g., in a GitLab runner or Docker), you will not need to use `sudo`. #### With executable Download the executable for your platform from [here](https://github.com/aminya/setup-cpp/releases/tag/v0.33.0), and run it with the available options. You can also automate downloading using `wget`, `curl`, or other similar tools. 
An example that installs llvm, cmake, ninja, ccache, and vcpkg: ```shell # windows example (open PowerShell as admin) curl -LJO "https://github.com/aminya/setup-cpp/releases/download/v0.33.0/setup-cpp-x64-windows.exe" ./setup-cpp-x64-windows --compiler llvm --cmake true --ninja true --ccache true --vcpkg true RefreshEnv.cmd # activate cpp environment variables ``` ```shell # linux example wget "https://github.com/aminya/setup-cpp/releases/download/v0.33.0/setup-cpp-x64-linux" chmod +x ./setup-cpp-x64-linux sudo ./setup-cpp-x64-linux --compiler llvm --cmake true --ninja true --ccache true --vcpkg true source ~/.cpprc # activate cpp environment variables ``` ```shell # macos example wget "https://github.com/aminya/setup-cpp/releases/download/v0.33.0/setup-cpp-x64-macos" chmod +x ./setup-cpp-x64-macos sudo ./setup-cpp-x64-macos --compiler llvm --cmake true --ninja true --ccache true --vcpkg true source ~/.cpprc # activate cpp environment variables ``` ### Inside GitHub Actions Here is a complete cross-platform example that tests llvm, gcc, and msvc. It also uses cmake, ninja, vcpkg, and cppcheck. `.github/workflows/ci.yml`: ```yaml name: ci on: pull_request: push: branches: - main - master jobs: Test: runs-on: ${{ matrix.os }} strategy: fail-fast: false matrix: os: - windows-2022 - ubuntu-22.04 - macos-12 compiler: - llvm - gcc # you can specify the version after `-` like `llvm-13.0.0`. 
include: - os: "windows-2022" compiler: "msvc" steps: - uses: actions/checkout@v3 - name: Cache uses: actions/cache@v3 with: path: | ~/vcpkg ./build/vcpkg_installed ${{ env.HOME }}/.cache/vcpkg/archives ${{ env.XDG_CACHE_HOME }}/vcpkg/archives ${{ env.LOCALAPPDATA }}\vcpkg\archives ${{ env.APPDATA }}\vcpkg\archives key: ${{ runner.os }}-${{ matrix.compiler }}-${{ env.BUILD_TYPE }}-${{ hashFiles('**/CMakeLists.txt') }}-${{ hashFiles('./vcpkg.json')}} restore-keys: | ${{ runner.os }}-${{ env.BUILD_TYPE }}- - name: Setup Cpp uses: aminya/setup-cpp@v1 with: compiler: ${{ matrix.compiler }} vcvarsall: ${{ contains(matrix.os, 'windows') }} cmake: true ninja: true vcpkg: true cppcheck: true clangtidy: true # instead of `true`, which chooses the default version, you can pass a specific version. # ... ``` ### Inside Docker Here is an example for using setup-cpp to make a builder image that has the Cpp tools you need. ```dockerfile #### Base Image FROM ubuntu:22.04 as setup-cpp-ubuntu RUN apt-get update -qq && \ # install nodejs apt-get install -y --no-install-recommends nodejs npm && \ # install setup-cpp npm install -g setup-cpp@v0.33.0 && \ # install the compiler and tools setup-cpp \ --nala true \ --compiler llvm \ --cmake true \ --ninja true \ --task true \ --vcpkg true \ --python true \ --make true \ --cppcheck true \ --gcovr true \ --doxygen true \ --ccache true && \ # cleanup nala autoremove -y && \ nala autopurge -y && \ apt-get clean && \ nala clean --lists && \ rm -rf /var/lib/apt/lists/* && \ rm -rf /tmp/* ENTRYPOINT ["/bin/bash"] #### Building (example) FROM setup-cpp-ubuntu AS builder COPY ./dev/cpp_vcpkg_project /home/app WORKDIR /home/app RUN bash -c 'source ~/.cpprc \ && task build' #### Running environment # use a fresh image as the runner FROM ubuntu:22.04 as runner # copy the built binaries and their runtime dependencies COPY --from=builder /home/app/build/my_exe/Release/ /home/app/ WORKDIR /home/app/ ENTRYPOINT ["./my_exe"] ``` See [this 
folder](https://github.com/aminya/setup-cpp/tree/master/dev/docker), for some dockerfile examples. If you want to build the ones included, then run: ```shell git clone --recurse-submodules https://github.com/aminya/setup-cpp cd ./setup-cpp docker build -f ./dev/docker/setup-cpp-ubuntu.dockerfile -t setup-cpp-ubuntu . ``` Where you should use the path to the dockerfile after `-f`. After build, run the following to start an interactive shell in your container ```shell docker run -it setup-cpp ``` ### Inside Docker inside GitHub Actions You can use the docker file discussed in the previous section inside GitHub Actions like the following: ```yaml jobs: Docker: runs-on: ${{ matrix.os }} strategy: matrix: os: - ubuntu-22.04 steps: - uses: actions/checkout@v3 - name: Build id: docker_build run: | docker build -f ./dev/docker/ubuntu.dockerfile -t setup-cpp . ``` ### Inside GitLab pipelines The following gives an example for setting up a C++ environment inside GitLab pipelines. .gitlab-ci.yaml ```yaml image: ubuntu:22.04 stages: - test .setup_linux: &setup_linux | DEBIAN_FRONTEND=noninteractive # set time-zone TZ=Canada/Pacific ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone # for downloading apt-get update -qq apt-get install -y --no-install-recommends curl gnupg ca-certificates # keys used by apt apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3B4FE6ACC0B21F32 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1E9377A2BA9EF27F .setup-cpp: &setup-cpp | curl -LJO "https://github.com/aminya/setup-cpp/releases/download/v0.33.0/setup-cpp-x64-linux" chmod +x setup-cpp-x64-linux ./setup-cpp-x64-linux --compiler $compiler --cmake true --ninja true --ccache true --vcpkg true source ~/.cpprc .test: &test | # Build and Test # ... 
test_linux_llvm: stage: test variables: compiler: llvm script: - *setup_linux - *setup-cpp - *test test_linux_gcc: stage: test variables: compiler: gcc script: - *setup_linux - *setup-cpp - *test ``` ## Articles [Setup-Cpp on Dev.to](https://dev.to/aminya/setup-cpp-3ia4) ## Usage Examples - [cpp_vcpkg_project project](https://github.com/aminya/cpp_vcpkg_project) - [project_options](https://github.com/aminya/project_options) - [cpp-best-practices starter project](https://github.com/cpp-best-practices/cpp_starter_project) - [ftxui](https://github.com/ArthurSonzogni/FTXUI) - [inja](https://github.com/pantor/inja) - [teslamotors/fixed-containers](https://github.com/teslamotors/fixed-containers) - [zeromq.js](https://github.com/zeromq/zeromq.js) - [json2cpp](https://github.com/lefticus/json2cpp) - [lefticus/tools](https://github.com/lefticus/tools) - [watcher](https://github.com/e-dant/watcher) - [pinpoint-c-agent](https://github.com/pinpoint-apm/pinpoint-c-agent) - [dpp](https://github.com/atilaneves/dpp) - [DSpellCheck](https://github.com/Predelnik/DSpellCheck) - [simdjson-rust](https://github.com/SunDoge/simdjson-rust) - [CXXIter](https://github.com/seijikun/CXXIter) - [git-tui](https://github.com/ArthurSonzogni/git-tui) - [supercell](https://github.com/orex/supercell) - [libclang](https://github.com/atilaneves/libclang) - [d-tree-sitter](https://github.com/aminya/d-tree-sitter) - [atom-community/papm](https://github.com/atom-community/papm) - [ecs_benchmark](https://github.com/abeimler/ecs_benchmark) - [smk](https://github.com/ArthurSonzogni/smk) See all of the usage examples on GitHub [here](https://github.com/search?q=aminya%2Fsetup-cpp+path%3A.github%2Fworkflows%2F+language%3AYAML+fork%3Atrue&type=code).
aminya
914,087
Schema Validation with Zod and Express.js
Overview In the past I've written articles on how we can use libraries like Joi and Yup to...
0
2021-12-01T10:31:47
https://dev.to/franciscomendes10866/schema-validation-with-zod-and-expressjs-111p
node, javascript, tutorial, webdev
## Overview In the past I've written articles on how we can use libraries like [Joi](https://dev.to/franciscomendes10866/schema-validation-with-joi-and-node-js-1lma) and [Yup](https://dev.to/franciscomendes10866/schema-validation-with-yup-and-express-js-3l19) to create middleware that validates the input coming from the frontend. Although both libraries are similar, they end up having small differences in their implementations. If you are making the transition from JavaScript to TypeScript, neither poses any problem, because the only thing you need to do is install the type-definition dependencies and then infer the types in your code. However, most of these libraries are JavaScript-oriented. I don't mention this as a negative point, but there are libraries that are TypeScript-first and very easy to use. That's why I'm talking about [Zod](https://github.com/colinhacks/zod): if you've already tried Yup, or if you already have some experience, you'll feel right at home, because the APIs are similar. The main difference is that Zod has many more features for TypeScript developers. ## Today's example Today I'm going to do as in the other articles, where we created a middleware to perform the schema validation of a specific route. The only difference is that we are going to create the API in TypeScript. The idea is quite simple: let's create a middleware that receives a schema as its single argument and then validates the request against it. 
## Project setup As a first step, create a project directory and navigate into it: ```sh mkdir zod-example cd zod-example ``` Next, initialize a TypeScript project and add the necessary dependencies: ```sh npm init -y npm install typescript ts-node-dev @types/node --save-dev ``` Next, create a `tsconfig.json` file and add the following configuration to it: ```js { "compilerOptions": { "sourceMap": true, "outDir": "dist", "strict": true, "lib": ["esnext"], "esModuleInterop": true } } ``` Now let's add the following script to our `package.json` file. ```js { // ... "type": "module", "scripts": { "start": "ts-node-dev main.ts" }, // ... } ``` Now proceed with the installation of the Express and Zod dependencies (as well as their development dependencies): ```sh npm i express zod --save npm i @types/express --save-dev ``` # Let's code And now let's create a simple API: ```js // @/main.ts import express, { Request, Response } from "express"; const app = express(); app.use(express.json()); app.get("/", (req: Request, res: Response): Response => { return res.json({ message: "Validation with Zod 👊" }); }); const start = (): void => { try { app.listen(3333, () => { console.log("Server started on port 3333"); }); } catch (error) { console.error(error); process.exit(1); } }; start(); ``` For the API to be initialized on port 3333 just run the following command: ```sh npm start ``` Now we can start working with Zod. First let's define our schema; in this example we will only validate the request body, and we expect it to contain two properties: the fullName and the email. This way: ```js // @/main.ts import express, { Request, Response } from "express"; import { z } from "zod"; const app = express(); app.use(express.json()); const dataSchema = z.object({ body: z.object({ fullName: z.string({ required_error: "Full name is required", }), email: z .string({ required_error: "Email is required", }) .email("Not a valid email"), }), }); // ... 
``` Now we can create our middleware, but first we have to import `NextFunction` from Express and `AnyZodObject` from Zod. Then let's call our middleware validate and receive schema validation in the arguments. Finally, if it is properly filled in, we will go to the controller, otherwise we will send an error message to the user. ```js import express, { Request, Response, NextFunction } from "express"; import { z, AnyZodObject } from "zod"; // ... const validate = (schema: AnyZodObject) => async (req: Request, res: Response, next: NextFunction) => { try { await schema.parseAsync({ body: req.body, query: req.query, params: req.params, }); return next(); } catch (error) { return res.status(400).json(error); } }; // ... ``` Finally, we are going to create a route with the HTTP verb of POST type, which we will use our middleware to perform the validation of the body and, if successful, we will send the data submitted by the user. ```js app.post("/create", validate(dataSchema), (req: Request, res: Response): Response => { return res.json({ ...req.body }); } ); ``` The final code of the example would be as follows: ```js import express, { Request, Response, NextFunction } from "express"; import { z, AnyZodObject } from "zod"; const app = express(); app.use(express.json()); const dataSchema = z.object({ body: z.object({ fullName: z.string({ required_error: "Full name is required", }), email: z .string({ required_error: "Email is required", }) .email("Not a valid email"), }), }); const validate = (schema: AnyZodObject) => async (req: Request, res: Response, next: NextFunction) => { try { await schema.parseAsync({ body: req.body, query: req.query, params: req.params, }); return next(); } catch (error) { return res.status(400).json(error); } }; app.get("/", (req: Request, res: Response): Response => { return res.json({ message: "Validation with Zod 👊" }); }); app.post("/create", validate(dataSchema), (req: Request, res: Response): Response => { return res.json({ ...req.body 
}); } ); const start = (): void => { try { app.listen(3333, () => { console.log("Server started on port 3333"); }); } catch (error) { console.error(error); process.exit(1); } }; start(); ``` ## Conclusion As always, I hope you found it interesting. If you noticed any errors in this article, please mention them in the comments. 🧑🏻‍💻 Hope you have a great day! 🤗
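As a footnote: the `validate` middleware above is just a higher-order function — it takes a schema and returns a handler. The shape of that pattern can be seen without any dependencies in this small sketch (all names here are invented for illustration, and it uses a synchronous `parse()` instead of Zod's `parseAsync` for brevity):

```typescript
// Dependency-free sketch of the validate() pattern: a "schema" is anything
// with a parse() method, and validate() wraps it in an Express-style
// (req, res, next) handler. All names and types are simplified stand-ins.
interface SchemaLike {
  parse(data: unknown): unknown; // throws on invalid input
}

const validate =
  (schema: SchemaLike) =>
  (req: any, res: any, next: () => void): void => {
    try {
      schema.parse({ body: req.body });
      next(); // valid: hand over to the controller
    } catch (error) {
      res.status(400).json(error); // invalid: reply with 400 + details
    }
  };

// A tiny fake schema that rejects a missing fullName:
const fakeSchema: SchemaLike = {
  parse(data: any) {
    if (!data.body?.fullName) throw new Error("Full name is required");
    return data;
  },
};

// Exercise the middleware with stub req/res objects:
const calls: string[] = [];
const res = {
  status(code: number) { calls.push(`status:${code}`); return this; },
  json(_body: unknown) { calls.push("json"); },
};

validate(fakeSchema)({ body: {} }, res, () => calls.push("next"));
console.log(calls.join(",")); // prints: status:400,json
```

Swapping `fakeSchema` for a real Zod schema (and `parse` for `parseAsync`) gives you the middleware from the article.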
franciscomendes10866
914,144
How to recover the firmware of a NAS
QNAP Firmware Recovery Check Hardware Turn Off NAS. Unplug Harddrives. Use...
0
2021-12-01T11:46:53
https://dev.to/jeromesch/how-to-recover-the-firmware-of-a-nas-4nb
nas, recovery, backup, homeserver
# QNAP Firmware Recovery

## Check Hardware

1. Turn off the NAS.
2. Unplug the hard drives.
3. Use the HDMI / USB ports to control the NAS manually (keyboard / mouse / monitor).
4. Turn on the NAS.

If the BIOS won't boot without hard drives, the internal storage most likely has a problem:

A) This might be the case if you just updated the firmware version,
B) or the hardware might be broken.

In case B you can just send in the NAS and get a new one, since the internal storage is soldered. (Sh*t happens.)

If the BIOS boots up, check the hard drives. It is unusual for both hard drives to break at the same time, but _just in case..._

If the BIOS shows `"Uncompressing Linux..."`, it is trying to bring up the DOM. In general that's a good sign; if it stays stuck there, that's a bad sign, and it means we need to fix the DOM.

## Recover the DOM

### Make a bootable USB stick

Download the unetbootin utility and the matching ISO for your NAS:

- Unetbootin: http://unetbootin.sourceforge.net/
- DSL: http://distro.ibiblio.org/damnsmall/current/dsl-4.4.10-initrd.iso

**DSL = "Damn Small Linux"**

Plug in the USB stick and format it (min. 1 GB / FAT32). Install the software (unetbootin), pick "Damn Small Linux" as the distro, and flash the USB stick with the system/distro.

_A little aside:_ download the **right** system image. In my case (TS-251+) that is the TS-x51+ series image (where the 'x' stands for the number of installable hard drives):

`http://eu1.qnap.com/Storage/tsd/fullimage/F_TS-X51_20150605-1.3.0.img`

Copy the image to the `root` folder of the USB stick and rename the file to `dom.img`.

### Use the USB stick to repair the DOM

1. Turn off the NAS.
2. Unplug the hard drives.
3. Use the HDMI / USB ports to control the NAS manually (keyboard / mouse / monitor).
4. Turn on the NAS and press `F2` or `DEL` on start-up.
5. Choose the USB flash drive as the boot device. (If there are two USB boot devices, *don't choose* "USB DISK MODULE PMAP".)

Open a command line with `Ctrl + Alt + Del` and enter the following:

```
sudo su
fdisk -l
```

`/dev/sda` should be your flash drive; `/dev/sdb` or `/dev/hda` should be your DOM drive. Its size should be 128 MB or 512 MB. After confirming that, continue with:

```
mkdir usbdrive
mount /dev/sda1 /home/dsl/usbdrive
cd /home/dsl/usbdrive
cp dom.img /dev/sdb
reboot
```

(`dom.img` is, as mentioned earlier, your firmware image.)

After that the NAS should boot normally. If the firmware version is at `1.x.x`, the firmware needs to run updates, otherwise the hard drives can't be recognized.

> If none of this helped, your hard drive might be broken.

## Change a Hard Drive

If only one hard drive is broken, you can simply unplug the broken one and install a new one (of the same storage size). The moment you plug in the new one, the NAS automatically starts to synchronize the data from the first hard drive to the new one. _RAID magic._
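Before writing anything to the DOM, it's worth sanity-checking the downloaded image file: an image larger than the DOM usually means the wrong download. A minimal sketch (the 512 MB DOM size and GNU `stat -c%s` are assumptions; adjust for your model):

```shell
# Sanity-check the downloaded firmware image before flashing it to the DOM.
# Assumes a 512 MB DOM; use 128 MB for models with the smaller DOM.
IMG=dom.img
MAX=$((512 * 1024 * 1024))

if [ ! -f "$IMG" ]; then
  echo "no $IMG in the current directory"
elif [ "$(stat -c%s "$IMG")" -le "$MAX" ]; then
  echo "image fits on the DOM"
else
  echo "image is larger than the DOM -- wrong download?"
fi
```

This only checks the size; comparing a checksum against the one published by QNAP (when available) is an even better guard.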
jeromesch
914,174
first project starting
A post by Bijay
0
2021-12-01T12:24:59
https://dev.to/bijay05487063/first-project-starting-4nig
codepen
{% codepen https://codepen.io/bijay124r3/pen/ZEXYKYr %}
bijay05487063
914,177
What to start with as a begginer in Web Dev??
Well all the YouTube videos, some are true some are false.. But the statement I always say...
0
2021-12-01T12:27:04
https://dev.to/sumanta_thefrontdev/what-to-start-with-as-a-begginer-in-web-dev-4eg
discuss, webdev, beginners
Well, of all the YouTube videos out there, some are true and some are false. But the one thing I always say: don't spend money. It can happen that after spending money, the money goes to waste and you don't learn anything. So the first thing to do is start with HTML5, and use YouTube to start learning.

## CONCLUSION:

Start with HTML5, using YouTube videos.
sumanta_thefrontdev
914,384
Learn Solidity helping Santa Claus
Advent of Code is a yearly series of 25 puzzles that are released between the first and 25th of...
0
2021-12-01T15:52:08
https://dev.to/ethsgo_/learn-solidity-helping-santa-claus-129b
blockchain, adventofcode, solidity, javascript
Advent of Code is a yearly series of 25 puzzles that are released between the first and the 25th of December. You might have heard of them; lots of people do them – to have fun, to show off their speed, or to learn a new language.

We'll be going through these puzzles, doing them in Solidity (and JS): https://github.com/ethsgo/aoc

The first puzzle came out just today, so it's not too late to start following along. Here is what the solution for today looks like (not adding the code to this post itself, to avoid spoilers): https://github.com/ethsgo/aoc/blob/main/contracts/_01.sol#L18
ethsgo_
914,391
10 Best Free B2B Lead Generation Tools - A Comprehensive List 2022
How do you grow your B2B business without spending money on advertising? The answer lies in...
0
2021-12-01T16:12:36
https://dev.to/fahadconall/10-best-free-b2b-lead-generation-tools-a-comprehensive-list-2022-1pki
leadgeneration, bigdata, productivity, database
How do you grow your B2B business without spending money on advertising? The answer lies in generating your own leads through lead generation tools. While some of these tools may look similar, they all serve different purposes and can be used in combination with each other to maximize the number of leads generated while reducing the cost of running a campaign. Here are the 10 best free B2B lead generation tools that you should check out right now!

## 1) **CoRepo**

[CoRepo](https://corepo.org) is a company search engine and a free lead generation tool that fetches data from 60+ company data sources. There are over 130 million companies from more than 200 countries in CoRepo's database, and it updates daily. With just one click, you can find product and service providers near your area or anywhere in the world. Click on any listing to see detailed information on that particular business, including contact info, products and services offered, and geographical location (map). You can also read reviews from others who have done business with that company before, and get leads based on your preferred criteria, such as geographic location, SIC or NAICS code, number of employees, etc., helping you save time searching for potential suppliers by yourself.

## 2) **Formstack**

Formstack is a free web-based form builder that allows you to build forms for collecting information from clients or leads. There are over a dozen premade templates that can be customized with your own logo, color scheme, and information. All of your data is securely stored in your account on their servers and is available to download at any time. The tool also integrates with Microsoft Outlook, so if you receive an email from someone using one of your forms, you can easily add them as a contact, as per [My Ethos Market](https://myethosmarket.com/).
You can set up unlimited forms to collect all sorts of information like name, phone number, email address, etc.

## 3) **LeadChat**

LeadChat is a robust chatbot that helps you generate leads from your website. Once installed, LeadChat integrates with your site to capture leads in a streamlined process that frees up your team's time. The bot answers visitors' questions about products and services, and it can answer lead-specific questions as well. There are a number of ways to use LeadChat, including having it answer frequently asked questions on your website, providing product information, and even filling out forms. It's easy to set up and comes with an analytics dashboard that tracks activity and allows you to set triggers for when it should engage with visitors or provide new content.

## 4) **HubSpot Sales**

When it comes to lead generation, not all tactics are created equal. And while most marketers agree that email is one of their best tools, they also acknowledge that they aren't doing it correctly. HubSpot Sales can help you solve your customer relationship management problems and bring in more business with a complete CRM platform built specifically for sales teams. You get unlimited users and accounts, access to 20 free apps that integrate with Sales, and training on how to build customer relationships through effective selling techniques. It doesn't end there: they also offer robust reporting capabilities and useful templates, so you can track metrics like new leads generated per week or win rates for different deals in progress.

## 5) **Snov.io**

This is a very simple and easy tool for any business to generate leads and increase sales. The tool allows you to collect leads from all of your website visitors: it gathers info about your potential clients when they visit your site, so you can communicate with them later by email or phone. You will also be able to build custom forms for your site using Snov.io.
## 6) **Bitrix24** Bitrix24 is a social intranet built for businesses. It’s easy to use, and offers loads of tools that small businesses often need. From project management to sales lead tracking, and customer support to HR management; Bitrix24 seems to have it all. Moreover, if you're on a tight budget but need powerful software that does everything you want your business communication software to do then Bitrix24 might be just what you're looking for but you will need [copywriting services](https://copywritelab.com/) for that. The pricing is simple: there's no complex rate card or tier system involved; users are charged based on their company size (i.e. ## 7) **SendPulse** The SendPulse email marketing platform offers a real-time notification system that notifies users of subscribers opening, reading and clicking links in emails. It also allows you to schedule messages to send at later dates. The data it provides is fairly extensive, including details on open rates and total number of clicks over time, as well as broken down by link or recipient. You can even see how many times your message was forwarded from its original recipients. One of our favourite features is SendPulse’s easy-to-navigate interface that enables you to make changes without having any coding experience, something ideal for anyone looking to get a first taste of automation in email marketing without too much hassle. ## 8) **Lead Boost** Lead boost provides an easy and cost-effective way to generate leads on LinkedIn. When you find a high quality lead through Lead Boost, your information is added to that person’s profile in their Other Interests section, so they get in touch with you. By integrating with LinkedIn Sales Navigator, their sales team will see your pitch as well! One of our favorite features of Lead Boost is its excellent support team, who are always very friendly and ready to help make sure your success is taken care of. 
The free trial for Lead Boost gives you 50 credits; if it works for you and generates leads for your business, pricing starts at $20 per month for 1000 credits. ## 9) **Podawaa** Podawa's a fantastic b2b lead generation tool that surfaces companies that are actively looking for new vendors. All you have to do is fill out a form, and within 10 minutes (at most) your company will be listed on their page, making it easy for prospects to find you. It's a great way to get your business in front of decision makers without paying anything at all. Even better? They're super-responsive if you have any questions or suggestions about their service; they've already made significant updates based on my input and feedback from other users. One of our members told me he landed four new accounts in just two weeks because of Podawa! Try it free today! ## 10) **AeroLeads** The best free lead generation tool we’ve found is AeroLeads. You can import your existing contact list and sort leads by industry, company size, geography, and more. The free version of AeroLeads allows you to collect a maximum of 50 contacts per month with unlimited campaign traffic. If you want more than 50 leads per month or to remove any of their branding, you’ll need to upgrade to one of their paid plans. All in all, AeroLeads is a great option for new businesses looking for a simple way to generate leads on an ongoing basis.
fahadconall
914,648
How to Rename Local and Remote Git Branch
Have you ever wondered or come across a situation where you want to rename a Git branch? If yes then...
0
2021-12-01T18:26:59
https://kodewithchirag.com/how-to-rename-local-and-remote-git-branch
git, webdev, productivity, tutorial
Have you ever wondered, or come across a situation, where you want to rename a Git branch? If yes, then this article will help you with that.

Earlier, I faced the same situation where I wanted to rename a Git branch locally and on the remote, and luckily I found that Git allows us to rename branches very easily. Let's see how. I will share the solutions to rename a Git branch locally and on the remote.

## Rename a local Git branch

If you are already on the local branch that you want to rename, you can run this command:

```
git branch -m <new_name>
```

If you are on another branch and want to rename a branch, then run the command below:

```
git branch -m <old_name> <new_name>
```

Now check your current branch name by running:

```
git status
git branch
```

You will see the changes. Isn't it so simple? 😉

## Rename a remote Git branch

If your local branch has already been pushed to a remote repository and you want to rename it and reset the upstream branch, these commands will help:

```
git push origin -u <new_name>
git push origin --delete <old_name>
```

Now, if you want to check the changes, you can log in to GitHub, GitLab, or whatever Git hosting portal you are using and see the changes there.

## Conclusion

Renaming a local Git branch is just a matter of running a single `git branch -m` command. However, you can't directly rename a remote branch: you need to push the renamed local branch to the remote repository and then delete the old branch from there.

> Hope you liked this article and found it useful. Feel free to comment with your thoughts and opinions, and stay connected with me here and on [Twitter](https://twitter.com/KodeWithChirag) 🐦.
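The local rename can be tried out safely in a throwaway repository; here is a minimal sketch (the repository and branch names are placeholders, and `git init -b` requires Git 2.28+):

```shell
# Throwaway repo demonstrating the local rename (all names are placeholders).
cd "$(mktemp -d)"
git init -q -b demo-old                          # -b needs Git 2.28+
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git branch -m demo-old demo-new                  # the rename itself
git branch --show-current                        # prints: demo-new
```

The remote half (`git push origin -u <new_name>` followed by `git push origin --delete <old_name>`) works the same way once a remote exists.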
kodewithchirag
914,680
Factory Method Pattern
Factory Method(virtual constructor) provides an interface for creating objects in a super-class, but...
15,752
2021-12-01T20:44:12
https://dev.to/eyuelberga/factory-method-pattern-3od
algorithms, java, beginners, codenewbie
> Factory Method (virtual constructor) provides an interface for creating objects in a super-class, but defers instantiation to sub-classes.

## Motivation

Consider an application which has to access a class, but does not know which class to choose from among a set of sub-classes of the parent class. The application class can't predict the sub-class to instantiate, as the choice might depend on numerous factors, such as the state of the application at run-time or its configuration. The Factory Method pattern solves this problem by encapsulating the functionality of selecting and instantiating the appropriate class inside a designated method, which we refer to as the factory method.

## Applicability

The Factory Method pattern can be used in cases where:

- A class cannot know or predict the objects it must create
- A class wants to transfer the instantiation of objects to its sub-classes

## Structure

![Factory method class diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygsqvf9o2fsktzs7t2u7.png)

### Participants

- **Product:** interface of objects created by the factory method
- **ConcreteProduct:** implementation of the Product interface
- **Creator:** declares the factory method that returns a Product instance
- **ConcreteCreator:** overrides the factory method and returns a ConcreteProduct instance

### Collaborations

In case the `Creator` does not have a default implementation of the factory method, it relies on its sub-classes to define the factory method.

## Advantages

- Provides a hook for sub-classes, enabling sub-classes to affect their parents' behavior.
- Dynamic or concrete types are isolated from the client code, which promotes loose coupling.

## Disadvantages

- To create even one `ConcreteProduct` object, the `Creator` class must be sub-classed. This might overburden clients.

## Implementation

The implementation of the factory method has, in general, three variations:

1. The `Creator` class is an abstract class and does not have a default implementation for the factory method.
2. The `Creator` class has a default implementation for the factory method; sub-classes override the default implementation if necessary.
3. Factory methods are parameterized and can create multiple kinds of products corresponding to the value passed.

## Demo

Let us look at how we could apply the Factory Method pattern to the cafe discussed in the introduction article of the series.

> The cafe is growing popular and the owners decided to open a new branch in France. But there is a problem: the menu is in English, but the customers expect it to be in French. There is also confusion about the currency. In France, the currency is the Euro, but the system displays prices in US Dollars.

We can solve this problem by first creating an abstract `Menu` class, and then creating concrete classes that extend this abstract class to make French and other language versions of the menu. We then create a `GetMenuFactory` class to generate objects of the different language versions based on the information given.

{% replit @eyuelberga/Factory-Method-Pattern %}
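As a rough sketch of that Menu idea — the class and method names below are illustrative, not the actual code from the Replit demo — a parameterized factory method (variation 3 above) could look like:

```java
// Illustrative sketch of a parameterized factory method (variation 3);
// names are hypothetical and simplified from the cafe demo.
interface Menu {                                  // Product
    String greeting();
}

class EnglishMenu implements Menu {               // ConcreteProduct
    public String greeting() { return "Welcome"; }
}

class FrenchMenu implements Menu {                // ConcreteProduct
    public String greeting() { return "Bienvenue"; }
}

class MenuFactory {                               // Creator
    // The factory method: picks the concrete class from a locale code,
    // so client code never names EnglishMenu/FrenchMenu directly.
    static Menu getMenu(String locale) {
        return "fr".equals(locale) ? new FrenchMenu() : new EnglishMenu();
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        System.out.println(MenuFactory.getMenu("fr").greeting()); // prints Bienvenue
        System.out.println(MenuFactory.getMenu("en").greeting()); // prints Welcome
    }
}
```

The client only ever sees the `Menu` interface; adding, say, an `AmharicMenu` touches the factory but none of the client code.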
eyuelberga
915,354
C vs. C++: Who is the winner?
Hello Techies, It's Nomadev back with another blog. So today's blog is on one of the most debatable...
0
2021-12-02T12:20:01
https://dev.to/thenomadevel/c-vs-c-who-is-the-winner-4h12
Hello Techies, it's [Nomadev](https://twitter.com/thenomadevel) back with another blog. Today's blog is on one of the most debated topics: **Who's the winner — C vs. C++?** A lot of beginners have the same question when they start their programming journey. For a long time I didn't know the answer either, so I decided to write a blog about it.

![lets start.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1637914261576/5nfYC7vz_.gif)

## C programming language

C is one of the oldest programming languages and has been in demand right from the time it was developed. C is the mother of all modern programming languages, because many compilers and JVMs are written in C. C is a procedural and versatile language which allows maximum control with a minimal number of commands. C has a rich library collection covering most of the common arithmetic and logical operations, which are predefined. All modern languages have borrowed core concepts from C, like arrays, functions, file handling, and much more.

### Pros

- Very fast compilation
- Portable (unlike assembly languages)
- Open-source
- Supports both low- and high-level programming
- Built-in functions
- Extensible

### Cons

- Does not support the object-oriented programming paradigm (inheritance, polymorphism, encapsulation, abstraction, data hiding)
- No garbage collection
- Lacks constructors and destructors
- Lacks the concept of namespaces
- Comparatively hard to debug

<hr />

## C++

C++ is one of the most popular programming languages around; it is essentially the next level of C. It was developed with the aim of making a dynamic language that is efficient and has some additional features over C. It is used to develop games, operating systems, browsers, and so on. It is a powerful as well as very flexible language, having both high- and low-level language features.
C++ is an object-oriented programming language which includes concepts like classes, inheritance, polymorphism, data abstraction, and encapsulation that allow code reusability and make programs even more reliable. Earlier, C++ was known by the name **C with Classes**.

### Pros

- Object-oriented programming language
- Supports exception handling and inline functions
- Multi-paradigm and fast
- Has the Standard Template Library
- Extensible
- Static type system
- Large community

### Cons

- A little complex to learn
- Absence of garbage collection
- Security issues

<hr />

## Ease of Learning

As C is, to a large extent, a subset of C++, their syntax resembles each other. C++ is an extension of C, with built-in functions and a standard template library, which makes it comparatively easy to learn as a newbie.

Sample syntax:

> printf("Hello, World!"); // *for printing a string in C*
> scanf("%d", &testInt); // *for storing integer input in C*
> cout << "Hello, World"; // *for printing a string in C++*
> cin >> testInt; // *for storing integer input in C++*

<hr />

## Programming Paradigm

C is a procedure-oriented language: programs are divided into modules and procedures, which can make your code quite messy as it grows in size. C++, on the other hand, supports multiple paradigms, allowing both procedural and object-oriented styles of programming. Being object-oriented, C++ code can be organized in a proper way, which simultaneously increases productivity. The object-oriented nature of C++ helps developers build server-side software and fast applications.

<hr />

## Function Overloading

Function overloading is one of the most powerful features added to the C++ programming language, and a form of polymorphism. With it, a function with the same name can be used for different purposes. For instance, the function add() can be defined in two ways.
We can use it to calculate the sum of integer values, and, for the string case, to concatenate two (or more) strings. Unlike C++, the C programming language doesn't provide support for function overloading.

<hr />

## Application Development Area

C is a good option for embedded devices and system-level code. C++, on the contrary, is a top choice for developing games, networking, and server-side applications, and it is also a great option for developing device drivers. The authority of C++ lies in performance and speed; though C also offers both of these qualities, C++ takes them a step further.

<hr />

## The Standard Template Library (STL)

C++ offers the Standard Template Library, which provides template classes for most data structures and their components, implementing added, built-in functionality. You don't have to write the whole snippet every time you want to implement a data structure. C has no such library. C does support a graphics library, but after Python made graphics easier, its graphics library's popularity decreased.

<hr />

## From Where to Learn C and C++

Both languages are known as good first languages to learn in your programming journey.
To learn C:

- [CS50 2021 - Lecture 1 - C](https://www.youtube.com/watch?v=LMUCmgghEXs)
- [The C Beginner's Handbook: Learn C Programming Language basics in just a few hours](https://www.freecodecamp.org/news/the-c-beginners-handbook/)
- [C Programming Tutorial for Beginners](https://www.youtube.com/watch?v=KJgsSFOSQv0)

To learn C++:

- [Beginning C++ Programming - From Beginner to Beyond](https://www.udemy.com/course/beginning-c-plus-plus-programming/?LSNPUBID=JVFxdTr9V80&ranEAID=JVFxdTr9V80&ranMID=39197&ranSiteID=JVFxdTr9V80-0AGHx4c46bwVKt_wTdAvZw&utm_medium=udemyads&utm_source=aff-campaign)
- [C++ Tutorial for Beginners - Full Course](https://www.youtube.com/watch?v=vLnPwxZdW4Y)
- [Object-Oriented Programming in C++](https://faculty.ksu.edu.sa/sites/default/files/ObjectOrientedProgramminginC4thEdition.pdf)

<hr />

## Who is the winner?

C and C++ are both considered popular, evergreen programming languages, and both have their pros and cons. But in my humble opinion, if you are just getting started with programming, have enough time to learn, and want to make your fundamentals strong, you should learn C first, then master data structures and algorithms and implement them in it. Believe me, if you succeed, you will surely become a sigma programmer, and you will never face any problem switching to some other tech stack. That was my personal opinion; you can share yours in the comment section below.

So this was it. If you liked this blog, make sure to follow me on [Twitter](https://twitter.com/thenomadevel) for more tech content.
[![good-twitter.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1637395103449/aVaT64w2l.gif)](https://twitter.com/thenomadevel)

And if you want to appreciate my work, you can [buy me a coffee](https://www.buymeacoffee.com/nomadevel). **Your appreciation is my motivation.**

[![coffee.jfif](https://cdn.hashnode.com/res/hashnode/image/upload/v1637869195331/r4SfPP57h.jpeg)](https://www.buymeacoffee.com/nomadevel)

That's my time, Devs. See you in the next one. Happy coding.
thenomadevel
914,683
Introduction
Creational patterns hide how instance of classes are constructed. Thereby enabling more independence...
15,752
2021-12-01T19:54:05
https://dev.to/eyuelberga/introduction-4bal
Creational patterns hide how instances of classes are constructed, thereby enabling more independence and increasing flexibility in a system. In this series, we look at the most common creational patterns and their unique features.

To demonstrate the implementation of the creational patterns, we will use, as a running example, what we have named "The Design-pattern Cafe". The Design-pattern Cafe is a fictitious cafe based in Ethiopia. For each pattern, we have devised a problem scenario that could potentially be solved with that pattern.

All of the sample code used in the examples can be found on GitHub at:

{% github dp-team/creationaldesignpatterns %}
eyuelberga
915,063
What are Pet Tags and Details to include on them?
Pet Tags are used to identify the pet's owner or the pet that is wearing it. The tags are usually...
0
2021-12-02T05:53:28
https://dev.to/leashkings/what-are-pet-tags-and-details-to-include-on-them-59cp
Pet tags are used to identify a pet or the pet's owner. The tags are usually made of plastic, cloth, metal, or paper, and they are usually found on a pet's neck. Like ID for humans, they are used to identify the owner and the animal, and they can also be used in cases of lost animals.

A pet tag is an identification tag worn by a pet on its collar or harness. Some tags are made of plastic and carry an identification number which can be read with an electronic scanner. The information contained on pet tags may include:

- Owner information (name and address)
- Pet's name
- Veterinarian's phone number
- Microchip ID number
- ID number of the rabies vaccination
- License/registration ID

[Pet tags](https://leashkingtags.com/) are worn around the neck or on the collar of a pet and are usually made out of metal, plastic, or paper. They have been used since the 1800s; the first pet tag was registered with the American Kennel Club in 1884.

Pet tags are an important accessory for all pets. They can be used for identification purposes and to prevent theft. They come in many shapes and sizes and can be engraved or printed with the pet's name, phone number, and microchip ID. Animal shelters use them to identify animals that are lost.

A pet tag is typically a small metal plate with a name and address on it, attached to the collar or harness of the animal, and it has proven to be an effective way of identifying lost pets. If someone finds a lost pet, they will contact the owner if they can find out who the owner is from the contact details on the tag. Sometimes people will make up stories so they can keep a newly found stray pet, but this isn't recommended: it's illegal, and it's done by people who do not care about animals.
These are typically worn for identification, but they can also be used to identify animals that are lost. They come in different shapes and sizes, but the most common shape is the flat metal tag that is often described as looking like a license plate. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sm8rx3evklnbz33k2ouw.jpg)
leashkings