778,865
Convert string to list in Python
In this short tutorial, find how to convert string to list in Python. We look at all the ways you can...
0
2021-08-02T06:36:06
https://flexiple.com/convert-string-to-list-in-python/
python, programming, tutorial, beginners
In this short tutorial, we look at how to convert a string to a list in Python, covering all the ways you can achieve this along with their pros and cons. This tutorial is part of our initiative at [Flexiple](https://flexiple.com/) to write short, curated tutorials around often-used or interesting concepts.

### Table of Contents
- [Converting a string to a list in Python](#converting-string-to-list-in-python)
- [Solution 1: Using split()](#solution-1-using-split)
- [Solution 2: Using list()](#solution-2-using-list)
- [Limitations and caveats](#convert-string-to-list-in-python-closing-thoughts)

## Converting a string to a list in Python

Data type conversion, or type casting, is a very common practice in Python. However, converting a string to a list is not as simple as converting an int to a string or vice versa. Strings can be converted to lists using `list()`, which we will look at below. With that method, however, Python has no way of knowing where each item starts and ends, so it returns a list of individual characters. Hence, Python provides a few alternate methods that can be used to convert a string to a list.

## Solution 1: Using split()

The `split()` method splits a string on a specified delimiter and returns the pieces as a list, which makes it a natural way to convert a string to a list in Python.

### Syntax

```python
string.split(delimiter, maxsplit)
```

### Parameters

- **delimiter** - Optional. The delimiter to split the string on. If omitted, runs of whitespace are treated as the delimiter.
- **maxsplit** - Optional. The maximum number of splits to perform.

### Code to convert a string to a list in Python

```python
str_1 = "Hire the top 1% freelance developers"
list_1 = str_1.split()
print(list_1)
# Output:
# ['Hire', 'the', 'top', '1%', 'freelance', 'developers']
```

**Note**: Since no delimiter argument was passed, the string was split on whitespace. Now let's take a look at an instance where the delimiter is specified.
```python str_1 = "Hire-the-top-1%-freelance-developers" list_1 = str_1.split("-") print(list_1) #Output: #['Hire', 'the', 'top', '1%', 'freelance', 'developers'] ``` ## Solution 2: Using list(): As aforementioned, this method converts a string into a list of characters. Hence this method is not used quite often. I would recommend using this method only if you are certain that the list should contain each character as an item and if the string contains a set of characters or numbers that are not divided by a space. If not, the spaces would also be considered as a character and stored in a list. ### Code to Convert string to list in Python: ```python str_1 = "Hire freelance developers" list_1 = list(str_1.strip(" ")) print(list_1) #Output: ['H', 'i', 'r', 'e', ' ', 'f', 'r', 'e', 'e', 'l', 'a', 'n', 'c', 'e', ' ', 'd', 'e', 'v', 'e', 'l', 'o', 'p', 'e', 'r', 's'] ``` ### Convert string to list in Python - Closing thoughts The split() method is the recommended and most common method used to convert string to list in Python. This method does not have any significant cons. On the other hand, the typecast method is not widely recommended and is used only when the requirements are met. However, please feel free to practice both methods to facilitate a better understanding of the concept. Do let me know your thoughts in the comments section below. :)
hrishikesh1990
779,008
Refresh SwiftUI views
Refresh a SwiftUI view with the new refreshable modifier introduced at WWDC21
0
2021-08-02T08:37:53
https://dev.to/gualtierofr/refresh-swiftui-views-33n
swift, swiftui
---
title: Refresh SwiftUI views
published: true
description: Refresh a SwiftUI view with the new refreshable modifier introduced at WWDC21
tags: swift, swiftui
//cover_image: https://direct_url_to_image.jpg
---

A nice addition to SwiftUI at WWDC21 is the new `refreshable` modifier to refresh a view's contents. This new feature is powered by perhaps the biggest announcement at WWDC21: the new async/await pattern introduced in Swift 5.5. While this isn't an article about that specific topic, it is important to know that everything we see here is powered by it, and is therefore not backwards compatible with older versions of SwiftUI.

One of the features iOS users love is the ability to pull down a list of content to refresh it. Until this year there wasn't a first-party API to implement this functionality; if you're interested in a solution compatible with the first version of SwiftUI, you can check out my article about [pull down to refresh](https://dev.to/gualtierofr/pull-down-to-refresh-in-swiftui-4j26). We finally have that official API: a modifier called `refreshable`. I'm going to show you how to use it on a List, and how to apply it to a custom view. As usual, the code is available on [GitHub](https://github.com/gualtierofrigerio/SwiftUIScroll). If you want to add the refreshable scroll view to your project via SwiftPM, you can use [this repository](https://github.com/gualtierofrigerio/GFRefreshableScrollView).

## Refresh a List

Applying the new modifier to a List is straightforward:

```swift
List(posts) { post in
    PostView(post: post)
}
.refreshable {
    await refreshListAsync()
}
```

You add the `.refreshable` modifier and provide a function to refresh the content inside the closure. As I said, this feature is powered by async/await, so we need the new `await` keyword before the function call.
Let's have a look at the declaration of `refreshable`:

```swift
func refreshable(action: @escaping () async -> Void) -> some View
```

The action we are providing (our closure in the example above) is marked as `async`; that's the reason we need `await`. I haven't written an article about async/await yet, but you'll find plenty of them if you want to understand exactly what is going on. For now, this is all you need to do to implement pull down to refresh with the new modifier. That's because List implements the whole functionality and is able to display an indicator and hide it once the reload function (the one you implemented in the closure) ends.

## Refresh a custom view

All right, refreshing a List is really easy, but you may wonder how to use the new modifier in your own view. Maybe you cannot use a List and instead display an array of elements in a ForEach; the good news is that with a few more lines of code you can implement the same feature. I'm going to show you a simple example from the repository I linked before, starting with my custom ScrollView implementing pull down to refresh. See the code [here](https://github.com/gualtierofrigerio/SwiftUIScroll/blob/master/SwiftUIScroll/ScrollViewPullRefresh.swift).

```swift
struct ScrollViewPullRefresh<Content: View>: View {
    @Environment(\.refresh) var refreshAction: RefreshAction?

    init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }

    var body: some View {
        VStack {
            if isRefreshing {
                ProgressView()
            }
        }
        GeometryReader { geometry in
            ScrollView {
                content()
                    .anchorPreference(key: OffsetPreferenceKey.self, value: .top) {
                        geometry[$0].y
                    }
            }
            .onPreferenceChange(OffsetPreferenceKey.self) { offset in
                if offset > threshold && isRefreshing == false {
                    if let action = refreshAction {
                        Task {
                            isRefreshing = true
                            await action()
                            withAnimation {
                                isRefreshing = false
                            }
                        }
                    }
                }
            }
        }
    }

    private var content: () -> Content
    @State private var isRefreshing = false
    private let threshold = 50.0
}
```

This is the entire implementation. I won't explain how to actually implement the pull-down-to-refresh part, i.e. the onPreferenceChange modifier applied to the ScrollView; please refer to my [previous article](https://dev.to/gualtierofr/pull-down-to-refresh-in-swiftui-4j26) to find out more. In a nutshell, when the scroll view offset goes over a defined threshold I can perform an action, in this case the async action configured by the caller. But how do I know what action to execute? Alongside the refreshable modifier, SwiftUI introduced a new environment value called refresh:

```swift
@Environment(\.refresh) var refreshAction: RefreshAction?
```

By referring to this value, we can call the async function passed to refreshable. What is RefreshAction? Let's take a look at its definition:

```swift
public struct RefreshAction {
    ...
    public func callAsFunction() async
}
```

If you're not familiar with callAsFunction, it is a way to treat objects as functions. In this case, you can call refreshAction() and execute the code passed to the closure of refreshable. It is not important to know that, but I pasted the definition in case you were curious 🙂 OK, let's get back to our example. We have refreshAction, so we know what to call when the user pulls down the ScrollView.
```swift
.onPreferenceChange(OffsetPreferenceKey.self) { offset in
    if offset > threshold && isRefreshing == false {
        if let action = refreshAction {
            Task {
                isRefreshing = true
                await action()
                withAnimation {
                    isRefreshing = false
                }
            }
        }
    }
}
```

This is the modifier. It is important to keep a @State variable to know whether we're executing the refresh operation; otherwise we'd keep calling the async function as the user pulls down. If we're not refreshing and the action is set, we can call the async function. What is Task? I won't go into details, but it is a way to execute asynchronous code inside a synchronous context. In this case we set isRefreshing to true, execute the async function (so we need the await keyword), and then set the refreshing value back to false. I use withAnimation so the ProgressView disappears with an animation; otherwise you'd see the ScrollView jump to the top.

Let's see how to refresh our custom ScrollView, full implementation [here](https://github.com/gualtierofrigerio/SwiftUIScroll/blob/master/SwiftUIScroll/PostForEachPull.swift):

```swift
var body: some View {
    ScrollViewPullRefresh {
        VStack {
            ForEach(posts) { post in
                PostView(post: post)
            }
        }
    }
    .task {
        posts = await getPosts()
    }
    .refreshable {
        posts = await shufflePosts()
    }
}
```

As you can see, I added the refreshable modifier, and inside it I assign the posts variable the result of an async function that simply shuffles the posts. Let's take a look at it:

```swift
private func shufflePosts() async -> [Post] {
    await Task.sleep(2_000_000_000)
    return viewModel.allPosts.shuffled()
}
```

To simulate a network call, I use Task.sleep. As the name suggests, the function does nothing and sleeps for the given amount of nanoseconds. I find it useful for experimenting with async/await without setting up an API to retrieve data: you can simply keep a JSON file inside the project and await the result.
Before ending, you may have noticed the .task modifier I applied to the custom view before refreshable. This is another addition to SwiftUI that gives us the ability to perform an asynchronous task when the view appears. In this example I load the list of posts when the view appears, and refresh them every time the user pulls down to refresh. Of course you can implement alternative ways to refresh a list of contents. For example, you can use a Button, like this:

```swift
struct RefreshableView: View {
    @ObservedObject var viewModel: ListViewModel
    @Environment(\.refresh) var refreshAction: RefreshAction?

    var body: some View {
        VStack {
            Button {
                if let action = refreshAction {
                    Task {
                        await action()
                    }
                }
            } label: {
                Image(systemName: "arrow.counterclockwise")
            }
            ScrollView {
                ForEach(viewModel.beers) { beer in
                    HStack {
                        CustomImageView(url: URL(string: beer.imageUrl),
                                        placeHolder: Image(systemName: "xmark.octagon"))
                            .frame(width: 100, height: 100)
                        Text(beer.name)
                        Spacer()
                    }
                }
            }
        }
    }
}
```

and call the async function inside the Button's action, wrapping it with Task. That's it! I really like the new API and hope you'll find it easy to adopt in your projects. Happy coding 🙂

[Original article](https://www.gfrigerio.com/refresh-swiftui-views/)
gualtierofr
779,091
Cache me if you can 🏃
A Guide to keep your cache fresh as a daisy with stale-while-revalidate
0
2021-08-02T09:47:00
https://dev.to/iamshouvikmitra/cache-me-if-you-can-2g94
http, browser, cache, swr
---
title: Cache me if you can 🏃
published: true
description: A Guide to keep your cache fresh as a daisy with stale-while-revalidate
tags: http, browser, cache, swr
cover_image: https://askleo.askleomedia.com/wp-content/uploads/2013/11/cache.jpg
---

## A Guide to keep your cache fresh as a daisy with stale-while-revalidate

Today we are going to talk about an additional tool to help you maintain a fine balance between immediacy and freshness when delivering data to your web applications.

RFC 5861 defines two independent Cache-Control extensions that allow a cache to respond to a request with the most up-to-date response it holds:

1. The `stale-if-error` HTTP Cache-Control extension allows a cache to return a stale response when an error such as an Internal Server Error is encountered, rather than returning a hard error. This improves availability.
2. The `stale-while-revalidate` HTTP Cache-Control extension allows a cache to immediately return a stale response while it revalidates it in the background, thereby hiding latency (both in the network and on the server) from clients.

In this blog we will be talking more about the `stale-while-revalidate` header. The basic idea of this header is to reduce the latency of serving cached content to your application while giving the browser a refresh mechanism to update its own cache. `stale-while-revalidate` is used inside a Cache-Control header along with `max-age`. For example, a server response that includes

```
Cache-Control: max-age=60, stale-while-revalidate=10
```

means that if any request to the same endpoint is made within the next 60 seconds, the browser will serve the cached content with no further action. But if a request is made anywhere between 60 and 70 seconds after the initial response, the browser will not only serve the cached content but will also, at the same time, fire a re-validation request to the server in the background to update the content of its cache.
Subsequent requests will follow whatever Cache-Control header is returned by the re-validation request.

![image](https://lh3.googleusercontent.com/-_x3BS-A25E8/YQbeRYmWluI/AAAAAAAAPTo/bCEJHRk_WVIL50z2Z3EMNtIf9SqYJBO6gCLcBGAsYHQ/w640-h136/image.png)

## Usage

As of 2021, not all modern browsers support the HTTP implementation of stale-while-revalidate. The good news is that similar behavior is available using [service workers](https://developers.google.com/web/tools/workbox/modules/workbox-strategies#stale-while-revalidate), and in case you really do not want to deal with service workers, there are popular libraries such as [swr](https://github.com/vercel/swr) that implement something along similar lines; you can use it in your React project with custom fetchers such as axios, unfetch, and so on.
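To make the two windows concrete, here is a small illustrative sketch (the function and numbers are my own, not from this article) of the decision a cache makes under `max-age` plus `stale-while-revalidate`, given the cached response's age in seconds:

```python
def cache_state(age, max_age=60, swr=10):
    """Classify a cached response by its age, per RFC 5861 semantics.

    - "fresh": within max-age; serve from cache, no network traffic
    - "stale-while-revalidate": serve stale from cache immediately,
      revalidate with the server in the background
    - "expired": past both windows; must revalidate before responding
    """
    if age < max_age:
        return "fresh"
    if age < max_age + swr:
        return "stale-while-revalidate"
    return "expired"

# Cache-Control: max-age=60, stale-while-revalidate=10
print(cache_state(30))  # fresh
print(cache_state(65))  # stale-while-revalidate
print(cache_state(75))  # expired
```

Real caches also account for clock skew and the `Age` response header, but the three-way split above is the core of the mechanism.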
iamshouvikmitra
779,193
7 Different Ways To Create Objects In Javascript 2022
Watch this video if you don't know https://youtu.be/HRP-5MS9DkQ
0
2021-08-02T12:30:11
https://dev.to/ravics09/7-different-ways-to-create-objects-in-javascript-160j
javascript, javascriptobject
Watch this video if you don't know https://youtu.be/HRP-5MS9DkQ
ravics09
779,201
How to Create an AR Measuring Tape App in 15 Minutes or Less [Tutorial]
Here’s an easy demo for creating a simple measurement application (i.e., AR ruler app or tape...
0
2021-08-02T12:41:57
https://dev.to/echo3d/how-to-create-an-ar-measuring-tape-app-in-15-minutes-or-less-tutorial-420k
tutorial, augmentedreality, programming, unity3d
Here's an easy demo for creating a simple measurement application (i.e., AR ruler app or tape measurement app) using AR Foundation, Unity, and echoAR. Tested on Android devices. The full demo can also be found on echoAR's [GitHub](https://github.com/echoARxyz/Unity-ARFoundation-echoAR-demo-Measurement-with-AR).

![1.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1627907806331/80Jawzo8R.png)

## Register

If you don't have an echoAR API key yet, make sure to register for FREE at [echoAR](https://console.echoar.xyz/#/auth/register).

## Setup

* Clone this repository for prefabs, scenes, and custom scripts.
* Open the project in Unity and follow the instructions on our [documentation page](https://docs.echoar.xyz/unity/adding-ar-capabilities) to [set your API key](https://docs.echoar.xyz/unity/adding-ar-capabilities#3-set-you-api-key).
* Set your echoAR API key in the echoAR prefab.
* Add models to your echoAR project.

## Run

* [Build and run the AR application](https://docs.echoar.xyz/unity/adding-ar-capabilities#4-build-and-run-the-ar-application).

![2.jpg](https://cdn.hashnode.com/res/hashnode/image/upload/v1627907892853/ydmVtDVLI.jpeg)

You can measure real-life objects!

## Usage Instructions

The app has two modes:

* Object Placement mode, which allows you to place an echoAR object on any plane.
* Measurement Tape mode, which allows you to drag your finger between any two points on a plane to create a measurement.

You can easily switch between modes by clicking on their respective buttons in the UI. Note that because both modes depend on plane detection, you may need to move your camera around for a few seconds before the plane you are trying to measure registers. Remember: if the placed object is too large, you can modify its scale in the echoAR console (see https://docs.echoar.xyz/unity/transforming-content).

![33.jpg](https://cdn.hashnode.com/res/hashnode/image/upload/v1627908041182/1M79Rfeew.jpeg)

You can measure custom AR objects!
## Learn more

Refer to our [documentation](https://docs.echoar.xyz/unity/) to learn more about how to use Unity, AR Foundation, and echoAR.

## Support

Feel free to reach out at support@echoAR.xyz or join our [support channel on Slack](https://join.slack.com/t/echoar/shared_invite/enQtNTg4NjI5NjM3OTc1LWU1M2M2MTNlNTM3NGY1YTUxYmY3ZDNjNTc3YjA5M2QyNGZiOTgzMjVmZWZmZmFjNGJjYTcxZjhhNzk3YjNhNjE).

> echoAR (http://www.echoAR.xyz; Techstars '19) is a cloud platform for augmented reality (AR) and virtual reality (VR) that provides tools and server-side infrastructure to help developers & companies quickly build and deploy AR/VR apps and experiences.

![4.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1627907994496/rwSHGGzFa.png)
_echo3d_
779,220
Web Performance Optimization- II
Part-I About 𝐈𝐦𝐚𝐠𝐞 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧𝐬: with different file formats, Responsive Images Markup,...
0
2021-08-03T18:18:13
https://dev.to/bipul/web-performance-optimization-ii-2799
performance, javascript, css, webdev
[Part-I](https://dev.to/bipul/web-performance-optimization-i-5d39)

### About

**Image Optimizations**: different file formats, responsive images markup, manual and automatic optimizations, lazy loading.
**JS Optimization**: modularization, async/defer, lazy loading, minifiers.
**CSS Optimization**: modularization, critical CSS, using onload and disabled attributes.

**Glossary**

* Shallow depth of field - a very small zone of focus.
* Lossy and lossless images - lossy compression trades quality for smaller file size, while lossless compression preserves quality and results in bigger files.
* Transparency/opacity - an image region that is clear and shows whatever is behind it.
* Render blocking - JS stopping the DOM rendering.

## Image Optimization

Images are the leading cause of the slow web. We have two conflicting needs here: we want to post high-quality images online, but we also want our websites and apps to be performant, and images are the main reason they are not. So how do we solve this conundrum? With a multi-pronged approach, ranging from **compression** to careful **selection of image formats**, to how we **mark up** and **load** images in our applications.

Image performance is all about how much data is contained within an image and how easy that data is to compress. The more complex the image, the larger the data set necessary to display it and the more difficult it is to compress. **A shallow depth of field means better performance.** For photography including products, headshots, documentary, and others, a shallower depth of field is preferred.

If you want to squeeze as much performance as possible out of your images, downscaling each image to 87% of its size and then upscaling the result by 115% will actually improve the performance of the image as well.
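The arithmetic behind that downscale/upscale trick (a sketch of my own, not from the article): 87% followed by 115% is a net scale of roughly 1.0, so the displayed size is essentially unchanged while the two resampling passes discard fine detail:

```python
width = 1920                 # original image width in pixels
down = round(width * 0.87)   # downscale pass: resampling discards detail
up = round(down * 1.15)      # upscale pass: back to (almost) original size

print(down, up)              # 1670 1920
print(f"net scale: {0.87 * 1.15:.4f}")  # net scale: 1.0005, i.e. ~unchanged
```

The image ends up the same size on screen but with less pixel-level complexity, which is what makes it compress better.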
It turns out that when downscaling a photo to 87%, Photoshop takes away pixels and simplifies the image, reducing its complexity, and upscaling it by 115% preserves image quality well enough that humans can't tell the difference. So we get an image of the same display size that has significantly less complexity.

The image format or file type you choose for your images has a direct impact on performance. On the web we generally use one of five formats: JPEG, PNG, GIF, SVG, and WebP.

**JPG/JPEG**

* Meant for photos
* Lossy format with adjustable compression
* High compression means large artifacts (distortion)
* Use for photos when WebP is not an option

**PNG**

* Meant for graphics
* Lossless image format
* Optional transparent alpha layer
* Use for computer-generated graphics and transparency

**GIF**

* Meant for simple lo-fi graphics
* Lossless compression, but limited to 256 colors
* Can be animated (but don't use them)
* SVG/video is always a better option

**SVG**

* Meant for advanced scalable graphics
* Written in markup; can be included in HTML and CSS
* Very small when optimized
* Use for vector-based computer-generated graphics and icons

**WebP**

* Meant for web-based photos
* Up to 34% smaller than JPGs
* Not supported in older browsers (fallback required)
* Use for photos and complex, detailed images (with fallback)

**How to choose what to use?**

* For photos, use WebP (with JPG fallback)
* For very complex computer graphics, use PNG or JPG (whichever is smaller)
* For graphics with transparency, use PNG or WebP
* For scalable computer graphics, icons, and graphs, use SVG
* Avoid animated GIFs at all costs; use videos instead

**Manual Optimizations**

* Decide on the maximum visible size the image will have in the layout. No image should ever be displayed wider than a full HD monitor, 1920 pixels. Make sure you also restrict the display width of that image to 1920 pixels, and then center-align it.
Once you've settled on a width for an image, scale the image file to fit that size.
* Experiment with compression in WebP and JPG
* Simplify SVGs by removing unnecessary points and lines
* Compare file sizes of JPG, WebP, and PNG for computer graphics

**Automated Optimization**

* [Imagemin](https://www.npmjs.com/package/imagemin) is a good choice. You can use it to build a custom optimization function in Node.js, or add automated image optimization into your preferred build process. The Imagemin CLI provides lossless compression for JPEGs, PNGs, and GIFs.
* You can add dedicated lossy compression for each of them using plug-ins: [imagemin-mozjpeg](https://www.npmjs.com/package/imagemin-mozjpeg) for JPEGs, [imagemin-pngquant](https://www.npmjs.com/package/imagemin-pngquant) for PNGs, and [imagemin-webp](https://www.npmjs.com/package/imagemin-webp) for WebPs.
* [Squoosh](https://squoosh.app/) uses various compression algorithms to optimize images, and it has an [experimental CLI](https://www.npmjs.com/package/@squoosh/cli) you can use to automate that process.
* [Sharp](https://www.npmjs.com/package/sharp) is also available.

Even a fully optimized image can slow down the performance of your site if it's delivered to the wrong browser at the wrong time. This is the problem [Responsive Images Markup](https://developer.mozilla.org/en-US/docs/Learn/HTML/Multimedia_and_embedding/Responsive_images) is meant to solve. We have the responsive images attributes [srcset](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-srcset) and [sizes](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-sizes). `srcset` allows you to provide a list of image sources for the browser to choose from, and `sizes` defines a set of media conditions (e.g. screen widths) indicating which image size would be best to choose when certain media conditions are true. The `w` descriptor indicates the total pixel width of each of these images.
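As a rough illustrative sketch (the function, candidate widths, and numbers are my own, and real browsers apply extra heuristics), the selection among `w`-descriptor candidates works like this: compute the width actually needed from the display slot and the device pixel ratio, then take the smallest candidate that covers it:

```python
def pick_candidate(candidates, slot_css_px, dpr=1.0):
    """Simplified srcset selection: choose the smallest image whose
    intrinsic width covers the slot width times the device pixel ratio;
    fall back to the largest available if none is big enough."""
    needed = slot_css_px * dpr
    wide_enough = [w for w in candidates if w >= needed]
    return min(wide_enough) if wide_enough else max(candidates)

# srcset="photo-800.jpg 800w, photo-1200.jpg 1200w, photo-1920.jpg 1920w"
widths = [800, 1200, 1920]
print(pick_candidate(widths, 800))       # 800  - exact fit
print(pick_candidate(widths, 1000))      # 1200 - closest size upwards
print(pick_candidate(widths, 800, 2.0))  # 1920 - a 2x display needs 1600px
```

This is why providing a spread of widths matters: it gives the browser a close match for every slot size and screen density instead of forcing the largest file on everyone.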
For example:

![Screenshot (200)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0a5bo4jq42z2wu9d4vlm.png)

If the viewport of the browser is 800 pixels wide, the browser will pick the 1200-pixel-wide image because it is the closest size upwards. If you then scale up the viewport by scaling up the browser window, the browser will automatically pull down larger versions of the image to fill the space if necessary. The important thing is that, by carefully planning your image sizes, you can now deliver appropriately sized image files to all browsers and all devices.

But for most of your images, the actual displayed width is determined by CSS and media queries, and you rarely display all your images at full width in the browser. To address this, we have the `sizes` attribute, which holds a list of media queries and the corresponding width to serve. For this image, if the viewport is 1200 pixels or wider, the actual width the image will be displayed at will always be 1200 pixels. The reason I'm still providing the 1920-pixel image here is to serve a higher-resolution image to higher-resolution displays. The `100vw` at the end of the sizes attribute says that for all other conditions, meaning screen widths under 1200 pixels, the image is always full width, because this is a responsive layout. This is especially important when you have a design where an image has a max size smaller than the viewport width, which is almost every single image on the web.

**Lazy Loading Images**

Loading images, videos, and iframes the user never scrolls to has always been a major performance issue on the web. We're simply wasting data that we shouldn't be wasting.
To deal with this issue, developers started adding lazy-loading JavaScript libraries that would wait for the user to scroll close to an element before the browser loaded it, so that instead of loading all the images on a page, only the images the user would actually get to see inside the viewport were loaded.

![Screenshot (204)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yahgqo4wwo0u3uj84rz.png)

Native lazy loading is activated using the `loading` attribute on the element in question: `lazy` means the asset is loaded only when it's close to the viewport, and `eager` means the asset is loaded immediately, even if it's nowhere near the viewport. There's also a fallback value called `auto`, but it's not yet in the specification. The `loading` attribute is also non-destructive, meaning older browsers that do not understand it will simply ignore it and load all the assets as they normally would. If you want lazy-loading support in older browsers as well, you can use a JavaScript solution like [lazysizes](https://www.npmjs.com/package/lazysizes), which has an extension plugin called native loading that serves the JavaScript solution only to browsers that do not support the `loading` attribute and the new built-in lazy-loading feature.

***

## JavaScript Optimization

The code we write is optimized for humans, but if we want code to be as fast and performant as possible, it needs to be rewritten for size and effectiveness, and that makes it unreadable for us humans. We now have tools to do this job for us in the form of code minimizers, packagers, bundlers, and more. At minimum, you'll need a development track where the human-readable code is stored and a production track where the highly optimized and compressed machine-readable code is stored. How and when we compress, bundle, load, modularize, and execute JavaScript is becoming more and more important to improving performance.
The same can be said for CSS: modular and inline CSS, progressive loading, and other performance techniques are now essential to ensure the style of a site or application doesn't slow down its delivery.

The modern web platform supports JavaScript modules: separate JavaScript files that export and import objects, functions, and other primitives from each other. So bundling all JavaScript into one big file makes no sense on the modern web. From a performance perspective, here's what should happen: on initial load, any critical JavaScript necessary to get the app framework up and running and displaying something above the fold should be loaded. Once that's done and the user has something to look at, any JavaScript modules necessary for functionality should be loaded. And from there on out, the browser should progressively load JavaScript modules only when they become relevant.

JavaScript functionality should be modularized as much as possible and split into dedicated files. Several immediate benefits of this approach:

* React uses components; JavaScript modules are the exact same idea, except they run on the web platform itself and you don't need a bundler to make them work.
* Modularization makes ongoing development easier because it provides a clear separation of concerns.
* Modularizing JavaScript and loading modules only when they are needed brings significant performance benefits on initial load.
* Modularization means updating a feature in a JavaScript app does not require the browser to download the entire app bundle again; it just needs to download the updated module file, which is way smaller.

When and how the browser loads each JavaScript file it encounters has a significant impact on both performance and functionality. If we add JavaScript to the head of an HTML document, it will always load and execute as soon as the browser encounters it, which is always before the body is rendered. This will always cause render blocking.
To prevent this blocking, JavaScript has traditionally been added at the very bottom of the body element, but this too causes render blocking: as soon as the browser encounters a reference to JavaScript, it stops doing anything else, downloads the entire script, executes it, and only then goes back to rendering. So basically the entire page has to be parsed before the JavaScript is even loaded, which just adds to the performance problems.

We have the **async** and **defer** keywords, which instruct the browser to either load JavaScript files asynchronously while DOM rendering takes place and then execute them as soon as they're available, or to load the files asynchronously and defer execution until DOM rendering is done.

![Screenshot (209)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wb1v1lvi522drhwo1jol.png)

When we add the async attribute, the browser loads the JavaScript asynchronously, meaning it loads alongside the HTML parsing process. When the script is fully loaded, the browser pauses the rendering of the HTML until the script has executed, then continues. Already we're seeing a significant performance enhancement, because parsing isn't paused while the script is being downloaded. In JavaScript and other programming languages, a synchronous event means one event happens after another, in a chain; asynchronous means the events happen independently of one another, and one event doesn't have to wait for another to complete before it takes place. In the case of async JavaScript loading, the loading is asynchronous while the execution is synchronous. Use async any time you're loading JavaScript and you don't need to wait for the whole DOM to be created first.

Defer is slightly different. We still load the script asynchronously when the browser encounters it, without render blocking, and then we literally defer the execution of the JavaScript until the HTML parsing is complete.
This is effectively the same as placing the script tag at the end of the body element, except the script is loaded asynchronously and is therefore much better for performance, because we don't render out the entire HTML and then go download the JavaScript; the JavaScript is already downloaded. Use defer if you need to wait for the whole DOM to be loaded before executing the JavaScript, or if the JavaScript can wait. So here are your performance-focused JavaScript loading best practices:

* Call JavaScript by placing the script tag in the head.
* Any time you load JavaScript in the head, always put async on there unless you have a reason to use defer.
* Defer any scripts that need the DOM to be fully built, or scripts that you can defer because they don't need to execute right away.
* If, and only if, you need to support older browsers and you can't allow the browser to wait for things, load your script in the footer the old way and take the performance hit.

Lazy load JavaScript modules and their associated assets only when they're interacted with and needed, using [import](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import) statements. For example:

```js
import("/path/to/import-module.js")
  .then((module) => {
    // do something with the module
  });
```

With this you're not chaining the events; everything works conditionally on the user's behavior. So you're saving the user a ton of data and only pushing content to the browser when it's needed. This whole concept can be used with any JavaScript module, including external [ESM modules](https://nodejs.org/api/esm.html#esm_introduction). To rewrite everything and turn it into highly optimized, human-unreadable code, we can use minifiers and uglifiers. All major bundlers, including webpack, rollup, parcel, etc., ship with minifiers built in. The two most popular minifiers are [uglify-js](https://www.npmjs.com/package/@types/uglify-js) and [terser](https://www.npmjs.com/package/terser).
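A minimal sketch of this interaction-driven pattern (the element id and module path are hypothetical): the gallery module is only fetched the first time the user actually opens the gallery.

```html
<button id="open-gallery">Open gallery</button>
<script type="module">
  const button = document.getElementById("open-gallery");
  button.addEventListener("click", async () => {
    // The module (and anything it imports) is fetched on first click only
    const gallery = await import("/js/modules/gallery.js");
    gallery.init();
  }, { once: true });
</script>
```

The `{ once: true }` option removes the listener after the first click, so the dynamic import runs a single time even if the user clicks again.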
***

## CSS Optimization

The number one measure of perceived performance is how fast something loads in the viewport of the browser. For a page to render, all the CSS has to be fully loaded, because CSS is a cascade and the rule sets at the bottom of a style sheet may well impact the rules higher up. If we serve the browser a huge style sheet with all the styles for the site, it takes a long time to load that style sheet before the content can render, and performance suffers. To get around this problem, developers have come up with a clever hack called **critical CSS**. First, inline any styles impacting the content above the fold (in the viewport) in the HTML document itself, as a style tag in the head. Then lazy load and defer the rest of the CSS, using a clever JavaScript trick, so it only loads when the page is fully loaded. [Critical](https://www.npmjs.com/package/critical) helps us automate this process so that you don't have to manually copy and paste code every time you update something. Critical reads the HTML and CSS, figures out what rule sets should be inlined, automatically inlines that CSS into the HTML document, separates out the non-critical CSS into a separate style sheet, and then lazy loads the non-critical CSS. Because this tool is built into the tool chain, it can be set up to run at every build, so you don't have to keep tabs on what styles are critical. This tool also has a ton of options: the index or HTML file, the CSS, the viewport you're targeting, all of this can be configured. For example:

![Screenshot (212)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gm2e24nfsfbpznibe67e.png)

Critical actually spins up a browser and then displays the contents in a viewport size that we've defined.
It then looks at what CSS affects the content inside that viewport and splits that out into the critical CSS file. The viewport in the example is 320 wide, 480 high.

![Screenshot (213)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bo7l55v50erj7twdfxpc.png)

The critical CSS is inlined so it takes effect before the DOM is even built, which styles the content that's above the fold. Then below we have our link element, but it now points at the uncritical CSS, and you'll notice the media attribute is set to print. This is the JavaScript trick. A regular browser identifies itself as screen, so this style sheet will not be loaded because it's set to only load for print, meaning when you're actually printing something. Then onload, an event that fires when the page is fully loaded, changes the media attribute to all instead. At that point, once everything else is done, this extra style sheet is loaded. To see how much of your JavaScript, CSS, and other code is loaded unnecessarily into the browser, you can use the coverage view in the browser dev tools.

![Screenshot (220)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ev2qij6fh254i09ayrg7.png)

Anything marked in red here is a rule that is not currently being used on the page. This is what Critical does: it runs this type of process, identifies which rules are and aren't being used in the viewport, and then picks and chooses. If you have one giant style sheet, you'd need to compare all of your pages and do a bunch of work. A better solution is to modularize our CSS, split it into smaller components, and then load them only if they are needed. One way we can do that is by deferring the loading of CSS until something happens. You already saw an example of that in Critical.
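The media-swapping trick described above can be sketched in markup like this (file names are placeholders; Critical generates the real ones for you):

```html
<!-- Inlined critical CSS renders the above-the-fold content immediately -->
<style>
  /* ...critical rules extracted by Critical... */
</style>

<!-- Non-critical CSS: ignored on screen until onload flips media to "all" -->
<link rel="stylesheet" href="/css/uncritical.css" media="print"
      onload="this.media='all'">
```

Because the style sheet initially only applies to print, the browser downloads it at low priority without blocking rendering; flipping `media` to `all` on load then applies it to the screen.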
You'll remember when we used Critical, the critical CSS was inlined and then the rest of the styles were put in an uncritical CSS file and deferred. So, here's a different way of doing the same thing.

![Screenshot (221)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20a9yruwitjp1i4bb1kq.png)

Here we set the rel="preload" and as="style" attributes on the link element to tell the browser to preload this style sheet when there's processing capacity available, meaning the loading is delayed to avoid render blocking. Then the onload attribute fires when the CSS is fully loaded and sets the rel attribute to stylesheet, so the browser recognizes it and renders it. The noscript element at the bottom is a fallback for browsers that don't have JavaScript; in that case, they will just immediately load the style sheet. We could also:

![Screenshot (222)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7p7088njjqzt6pns10i6.png)

This style sheet will not be loaded by the browser at all until the disabled attribute is removed or set to false. You can then set up a JavaScript function to change the disabled attribute if, and only if, some event occurs, like activating a gallery or triggering some external function, and only then will the browser go to the internet, pull down the style sheet, and mount it in the browser. Lastly,

![Screenshot (224)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xauyg8563wb2z8gyjs19.png)

Loading style sheets in the body means each component can load its own style sheets on the fly. That way the component brings its own styles to the table and you don't have to load any styles you don't need. This makes for much cleaner and more manageable code, and it falls in line with modern component-based development practices.
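A hedged sketch of the two deferral techniques just described (all file names and the `open-gallery` element id are hypothetical placeholders):

```html
<!-- Preload, then promote to a stylesheet once loaded -->
<link rel="preload" as="style" href="/css/extra.css"
      onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="/css/extra.css"></noscript>

<!-- Not loaded at all until JavaScript removes the disabled attribute -->
<link rel="stylesheet" id="gallery-css" href="/css/gallery.css" disabled>
<script>
  // Example trigger: load the gallery styles only when the gallery opens
  document.getElementById("open-gallery").addEventListener("click", () => {
    document.getElementById("gallery-css").disabled = false;
  });
</script>
```

The preload variant fetches the CSS early without render blocking; the disabled variant fetches nothing until your own code decides the styles are needed.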
bipul
779,503
This branch is out-of-date
When you are working by yourself or with a small school project team the source control requirements...
0
2021-08-02T18:17:57
https://dev.to/ankitxg/this-branch-is-out-of-date-1hb6
github, productivity, devops, codereview
When you are working by yourself or with a small school project team the source control requirements are pretty low. You do not have to worry about keeping builds healthy all the time or the impact broken builds may have on others. But the story is very different when one is working in large engineering teams. Therefore, most engineering teams rely on Github protected branch rules to ensure stability of the application. One of the most common restrictions that developers use in Github protected branches is to require status checks to pass before merging. This ensures that the CI is completed successfully before merging the changes into the protected branch (typically master or main). ![Protected branch restrictions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mavbktkhzyesv788kng.png) There’s one additional restriction that requires branches to be up to date before merging. This ensures that the pull requests have been tested with the latest code before merging in the protected branch. This prompt typically shows up in the Pull Request asking developers to update their branch. This setting is less commonly used today, and we will look into why. ![Update branch](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/taufhe4w0nr18pi6rzev.png) ## Example So what’s the point of having branches up to date before merging? I recently ran into this scenario, so let me explain with an example: Let’s say Joseph and Ashley are working on a web application for food ordering service. They are working on a feature to offer a discount for users who would order ahead. **PR #1**: Joseph made the following change to calculate the discount offered to the users. ![Discount offered PR snippet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n1mgzy25ihbhpy9wexnc.png) **PR 2**: Ashley adds a new API endpoint that will be used to charge users. 
![API endpoint](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3bplughutd5winvaexi.png) As you can imagine, the changes independently look pretty safe and pass the test, but when combined together will break the build as the signature of charge_order has changed. These types of scenarios are very common in large teams where multiple engineers would be working on the same code base. In those cases, it is safer to enable the restriction that requires branches to be up to date before merging. ## Side effect Before you go and turn on that knob in Github, you should understand one side-effect from using this configuration. I remember after we turned it on, engineers started spending a bunch of time playing rebase-athon. So in the above example, let’s say both Ashley and Joseph are ready to merge the changes. Now if they do not communicate with each other, they may both rebase their branches with the latest master and wait for CI to finish. Whoever notices a finished CI would merge the changes, leaving the other developer to start the rebase process from scratch. This can be optimized using a custom Github action to keep branches up to date. Every time a PR is merged, a Github action can be triggered to update the rest of the branches. Some companies have internally built a system to manage this whole process, for example [SubmitQueue at Uber](https://eng.uber.com/research/keeping-master-green-at-scale/) or [Merge Queue at Shopify](https://shopify.engineering/introducing-the-merge-queue). There are also some plug and play versions of similar concepts, the one I’ve used in the past is called [MergeQueue](https://mergequeue.com/). The bottom line is that as your team grows, optimizing the code merging process may become critical. This Github setting to restrict branches to be up to date before merging may seem a bit of an overhead, but would pay off to give you and your team peace of mind.
ankitxg
779,632
Enhance your skills in Html Css and Js ?
Top 10 website where you can enhance your Html,Css and JS skills. 1. Css Battle 2. 100days of...
16,565
2021-08-19T19:57:31
https://dev.to/buddhadebchhetri/enhance-your-skills-in-html-css-and-js-4pne
challenge, javascript, html, css
Top 10 websites where you can enhance your HTML, CSS and JS skills.

**1. [Css Battle](https://cssbattle.dev/)**
![Css Battle](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8yc2keopwhzagdhafuik.jpg)

**2. [100days of css](https://100dayscss.com/)**
![100daysofcss](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39osochigwrhjya09wy3.jpg)

**3. [30 days of Js](https://javascript30.com/)**
![30 days of Js](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ivt775276ntr4b70dqe.jpg)

**4. [Css Challenge](https://css-challenges.com/)**
![Css Challenge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6w5wkd2zq223xnk1lsd1.jpg)

**5. [30days of tailwindcss](https://30daysoftailwindcss.com/)**
![30days of tailwindcss](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f01xsq6mefdajvvml0si.jpg)

**6. [Frontendmentor](https://www.frontendmentor.io/challenges)**
![Frontendmentor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9hekckvs3v5pzlnuavw.jpg)

**7. [Piccalil](https://piccalil.li/category/front-end-challenges-club/)**
![Piccalil](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fdzlhaxsinn4z8ngr06.jpg)

**8. [Ace Frontend](https://www.acefrontend.com/)**
![Ace Frontend](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gzas459xcsvroyhjhwtw.jpg)

**9. [Codier](https://codier.io/)**
![Codier](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d8bhfxc5ozcrgygftxuo.jpg)

**10. [1HTMLPageChallenge](https://onehtmlpagechallenge.com/)**
![1HTMLPageChallenge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxijnrj6qxup6pvpaked.jpg)

### Thank you so much for reading 💖

{% user buddhadebchhetri %}
buddhadebchhetri
779,692
Failed to resolve plugin for module @react-native-firebase/app
Debug Error in React Native and Firebase
0
2021-08-02T20:14:40
https://dev.to/lexycodestudio/failed-to-resolve-plugin-for-module-react-native-firebase-app-3k00
reactnative, firebase
---
title: Failed to resolve plugin for module @react-native-firebase/app
published: true
description: Debug Error in React Native and Firebase
tags: #reactnative #firebase
cover_image: https://images.app.goo.gl/6RmkEkTiphKWdBKo9
---

# Remove @react-native-firebase/app

Remove @react-native-firebase/app from node_modules using `yarn remove @react-native-firebase/app` or `npm remove @react-native-firebase/app`.

## Delete @react-native-firebase/app from App.json

The End.
lexycodestudio
779,697
A New Free Remote Webdev Bootcamp
Wanted to get started with web development but don't have the thousands of dollars bootcamps or training courses often cost? Here's a new, free, all-remote bootcamp.
0
2021-08-02T20:36:33
https://dev.to/jesslynnrose/a-new-free-remote-webdev-bootcamp-43kb
codenewbie, beginners, bootcamp, webdev
---
title: A New Free Remote Webdev Bootcamp
published: true
description: Wanted to get started with web development but don't have the thousands of dollars bootcamps or training courses often cost? Here's a new, free, all-remote bootcamp.
tags: codenewbie, beginners, bootcamp, webdev
//cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wxuny4ugz9jifgrjfk16.png
---

👋 Hi, I'm [Jess](https://twitter.com/jesslynnrose/). I'm a former teacher working in tech who gets really, really mad about expensive, exploitative programming bootcamps. These can cost tens of thousands of dollars for courses of widely variable quality. While there are some great paid bootcamps (and amazing free bootcamps) out there, I wondered how existing high-quality free resources could be extended to offer bootcamp-like support for learners. So I sat down with Class Central and we started designing a free web development bootcamp based around [freeCodeCamp's Responsive Web Design](https://www.freecodecamp.org/learn/responsive-web-design/) curriculum. We wanted it to be flexible and part time, but still have the peer support of cohort-based learning. So we built a cohort structure supported by a shared schedule and a peer-supported forum. To keep up with the cohort, we think learners will need to do 10-20 hours of work in their own time per week.

![A breakdown of bootcamp components, showing the weekly livestreams and a dedicated cohort-based forum have been added to freeCodeCamp's curriculum](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47c24lgvr3izeo3bi53g.png)

We wanted to make sure learners had live contact with instructors *and* that even learners who weren't enrolled with us could benefit. So we added open weekly [livestreams](https://www.twitch.tv/jesslynnrose) on multiple platforms to go over key lessons from the week, look at core concepts together and answer learner questions in real time.
We wanted to keep the risk low, so that learners who left the bootcamp or needed to take a break could keep learning on their own. So we based it around an existing learning platform and curriculum. Learners can come back to the freeCodeCamp responsive web design coursework at any time and pick back up where they left off. Most importantly, it needed to be free. freeCodeCamp was kind enough to allow us to base our program around their high quality learning materials, which gave us a great foundation. [Class Central](https://www.classcentral.com) made internal resources and budget available and I'm working on a reduced day rate. We're all very motivated to make this happen! In the future, we may explore funding options that could include tech toolmakers sponsoring optional supplementary workshops for our learners. But everyone involved is committed to keeping this and future programs 100% free for learners. We'll be covering HTML, CSS, visual design, accessibility, responsive design, CSS Flexbox and CSS Grid. We think this program is a great fit for learners new to building the web, or folks who have taken a break and want to return in a supported way. You can learn more about what we're doing and [enroll here](https://www.classcentral.com/groups/webdev-bootcamp-fall-2021). I know many of you here already have these skills. If you think this might be useful, I would be honored if you wanted to share this with friends and family who could benefit.
jesslynnrose
779,705
Passing Props to Child Components in React using TypeScript
I believe that if you are reading this article, you already have an idea of component hierarchy and...
0
2021-08-02T21:53:26
https://dev.to/franciscomendes10866/passing-props-to-child-components-in-react-using-typescript-2690
react, javascript, node, beginners
I believe that if you are reading this article, you already have an idea of component hierarchy, and that the normal flow is to pass props from the parent component to the child component. I believe we all had some friction trying to convert our JavaScript knowledge to TypeScript: even though the logic was the same, the code became more verbose, and suddenly you started questioning everything. I still face several challenges on a daily basis; however, today I am fully aware of the advantages that TypeScript brings to my application development experience in React. So today I'm going to give a brief example of how we can pass props between components using TypeScript.

# Let's code

Pretend that the main page of your application is as follows:

```js
// @src/App.tsx
import React, { useState } from "react";
import Form from "./components/Form";

const App = () => {
  const [state, setState] = useState("");

  const handleOnSubmit = (e) => {
    e.preventDefault();
    console.log({ state });
  };

  return (
    <Form
      state={state}
      setState={setState}
      handleOnSubmit={handleOnSubmit}
      placeholder="Type some letters"
    />
  );
};

export default App;
```

And the component of our form is as follows:

```js
// @src/components/Form.tsx
import React from "react";

const Form = ({
  state,
  setState,
  handleOnSubmit,
  placeholder,
}) => {
  return (
    <form onSubmit={handleOnSubmit}>
      <input
        type="text"
        value={state}
        onChange={(e) => setState(e.target.value)}
        placeholder={placeholder}
      />
      <button type="submit">Submit</button>
    </form>
  );
};

export default Form;
```

As you may have noticed, both components are written the same way you would write them in JavaScript. And you may have noticed that we passed the following properties from the parent component to the child component:

- `state` is a string;
- `setState` is a function;
- `handleOnSubmit` is a function;
- `placeholder` is a string;

But before typing the props, we have to type our function components themselves.
This way:

```js
// @src/App.tsx
const App: React.FC = () => {
  // Hidden for simplicity
}

// @src/components/Form.tsx
const Form: React.FC = () => {
  // Hidden for simplicity
}
```

So we can go to our `Form.tsx` and create a type called Props that will be used as a generic for our component.

```js
// @src/components/Form.tsx
import React from "react";

type Props = {
  state: string;
  setState: (val: string) => void;
  handleOnSubmit: () => void;
  placeholder: string;
};

const Form: React.FC<Props> = ({
  state,
  setState,
  handleOnSubmit,
  placeholder,
}) => {
  return (
    // Hidden for simplicity
  );
};

export default Form;
```

You may have noticed an inconsistency in the previous code: in `App.tsx` the **handleOnSubmit** function takes a single argument, which is an *event*, while in our `Props` type in `Form.tsx` it takes no arguments. For this we will use a React data type called `FormEvent` with a generic, which in this case will be `HTMLFormElement`. This way we already have the ideal data type to "handle" the form event. Like this:

```js
// @src/components/Form.tsx
import React, { FormEvent } from "react";

type SubmitEvent = FormEvent<HTMLFormElement>;

type Props = {
  state: string;
  setState: (val: string) => void;
  handleOnSubmit: (e: SubmitEvent) => void;
  placeholder: string;
};

const Form: React.FC<Props> = ({
  state,
  setState,
  handleOnSubmit,
  placeholder,
}) => {
  return (
    // Hidden for simplicity
  );
};

export default Form;
```

You may have also noticed that the input element has an onChange attribute, which is actually an event, so we have to type it in a very similar way to what we did before. First we will import a React data type called `ChangeEvent`, then we will assign a generic, which in this case will be `HTMLInputElement`.
This way: ```js // @src/components/Form.tsx import React, { ChangeEvent, FormEvent } from "react"; type SubmitEvent = FormEvent<HTMLFormElement>; type InputEvent = ChangeEvent<HTMLInputElement>; // Hidden for simplicity const Form: React.FC<Props> = ({ // Hidden for simplicity }) => { return ( <form onSubmit={handleOnSubmit}> <input type="text" value={state} onChange={(e: InputEvent) => setState(e.target.value)} placeholder={placeholder} /> <button type="submit">Submit</button> </form> ); }; export default Form; ``` Now we can go back to our `App.tsx`. We just need to create a type in the `handleOnSubmit` function argument, which, as you might have guessed, is an event. Like this: ```js // @src/App.tsx import React, { useState } from "react"; import Form from "./components/Form"; type FormEvent = React.FormEvent<HTMLFormElement>; const App: React.FC = () => { const [state, setState] = useState(""); const handleOnSubmit = (e: FormEvent) => { e.preventDefault(); console.log({ state }); }; return ( // Hidden for simplicity ); }; export default App; ``` Finally we can add a generic to our `useState()`, which in this case is a string. ```js // @src/App.tsx import React, { useState } from "react"; // Hidden for simplicity const App: React.FC = () => { const [state, setState] = useState<string>(""); // Hidden for simplicity }; export default App; ``` ## Conclusion As always, I hope I was clear and that I helped you. If you notice any errors in this article, please mention them in the comments. ✏️ Hope you have a great day! 🙌 🤩
franciscomendes10866
779,799
Is there a difference between Software and Hardware?
Yes, there is a difference. And a big one. Software is the part you curse at, hardware the part you...
0
2021-08-02T23:35:50
https://dev.to/lagcrs/tem-diferenca-entre-software-e-hardware-25b2
hardware, software
Yes, there is a difference. And a big one. Software is the part you curse at; hardware is the part you kick. Jokes aside, it is very common for people to confuse the two, since the names are similar. Both are part of what makes up a computer¹, yet they serve completely different purposes. Let's understand why.

### HARDWARE

Hardware is the physical part of the computer, the tangible part, the one you can see. It is the set of electronic components, parts, and equipment that make your computer work. We can say it is the machine's body, whether in desktop computers, laptops, phones, etc. The word hardware also refers to equipment attached to a product that needs some kind of computational processing. A car, for example.

#### Main Computer Components

##### [1] Motherboard

The heart of the computer. It works as the "command center", since it is responsible for managing and distributing instructions to all of the computer's components. This is only possible because it connects every part of the system, allowing information to flow between the processor, memory, expansion cards, etc. On top of all that, the motherboard is also responsible for distributing among the components the electrical power provided by the power supply.

##### [2] Processor

The processor is the brain, also called the CPU (Central Processing Unit). Many people mistakenly think the CPU is the computer case, but no: CPU is simply another name for the processor. It is one of the main parts of the hardware, responsible for the calculations and for executing the instructions given by the user. The speed at which the computer processes data and executes tasks is directly tied to the speed of the processor. Today there are two competing processor manufacturers: Intel and AMD.
##### [3] Video Card

Also called the graphics card or GPU (Graphics Processing Unit), this is the part of the hardware responsible for managing and controlling the video output shown on your monitor, the one that sends signals from the computer to the screen, presented as images. Every computer today has a video card with dedicated memory, either onboard² or offboard³.

##### [4] RAM

RAM (Random Access Memory) stores program data only while the computer is on, not permanently. It is a fast-access volatile memory, essential for keeping up with the speed of the processor.

##### [5] HD

A disk with a large data storage capacity. Unlike RAM, which keeps data only while the computer is on, the HD (Hard Disk) stores data permanently. This is where all the programs installed on the computer live, even when it is turned off.

##### [6] Power Supply

The power supply feeds all of the components described above. It converts the alternating current from your home into direct current, which is essential for the computer to work. It is vital: without it, the computer would be nothing but a useless box.

### SOFTWARE

Software is the logical part of the computer, the part that commands it, made up of intangible pieces. It consists of programs loaded into the hardware that perform various tasks. When the data is interpreted, the software carries out the functions it was designed for. To give you an idea: Word, Microsoft's text editor, is software. A game is software. A phone app is software too. It can be developed by specialized companies as well as by ordinary people. Software can be divided into:

#### [1] System Software

This is what allows the user to interact with the computer. It is often divided into the Operating System and Utility Programs.
The Operating System is the most important piece of software, since it bridges the computer and the user, and also lets us tell the machine what to do. Examples: Linux, Windows, Solaris, Mac OS, Android, among others. Utilities are smaller programs that perform additional tasks beyond those offered by the operating system: drivers, antivirus, backup, disk checking, etc.

#### [2] Application Software

These are programs that perform specific tasks, used to help with everyday work. Microsoft Word, mentioned above, falls into this category, since it is a text editor, as does Excel, a spreadsheet. Browsers are also application software, such as Mozilla Firefox, Google Chrome, Opera, and many others.

- A computer¹ is any electronic machine capable of storing, processing, and sending processed data back to the user;
- Onboard² means "on the board", i.e. already integrated into the motherboard.
- Offboard³ is the opposite of onboard.

**Sources:**
- techtudo.com.br
- mundoeducacao.bol.uol.com.br
- canaltech.com.br
- faqinformatica.com
- significados.com.br
- oficinadanet.com.br

*Photo by Pok Rie on Pexels*
lagcrs
779,809
[Android Dev] Add Google Maps Quick and Dirty
Sometimes we just want to add a Google Maps activity into a proof-of-concept or just a test...
0
2021-08-03T00:05:54
https://dev.to/dougylee/android-dev-add-google-maps-quick-and-dirty-4kf9
android
Sometimes we just want to add a Google Maps activity to a proof-of-concept or a test project. We want **less overhead** from making it well engineered. We just want to **get it running**.

---

## Use the Preset

The easiest way to get started with Google Maps is to use the preset Google Maps activity. This is a template that adds the code to insert a Google map view into your app. The code here is functional, so you don't have to do too much to get it working. [The documentation](https://developers.google.com/maps/documentation/android-sdk/get-api-key) is very easy to read and the steps are few.

---

## Starting from nothing

1. Set up the [Google Cloud console](https://developers.google.com/maps/documentation/android-sdk/cloud-setup).
2. Create a new project inside the Google Cloud console and [enable the Google Maps platform credentials](https://console.cloud.google.com/project/_/google/maps-apis/credentials).
3. Copy and paste the API key into the metadata tag in the Android manifest.

```xml
<meta-data
    android:name="com.google.android.geo.API_KEY"
    android:value="MAPS_API_KEY" />
```

---

## Gotchas

- The Google Maps template activity uses view binding, which can be confusing for new users.
- The way we handle the API key is not very secure, since we are adding it directly into our manifest file.
dougylee
779,829
Advantages of using Firebase for Mobile App Development
Introduction to Firebase for Mobile App As mobile apps evolved from being simple...
0
2021-08-03T02:39:10
https://dev.to/joseprest/advantages-of-using-firebase-for-mobile-app-development-2egn
firebase, javascript, mobile
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fsdizd8dbjugycy2rlkb.jpg)

## Introduction to Firebase for Mobile Apps

As mobile apps evolved from being simple entertainment platforms for people to more sophisticated and challenging tools for even enterprises, there was a huge need to improve their backend capabilities. By backend, we mean not just scalable storage, but also high-performance processing engines, flexible integration capabilities, and much more. Besides, today there is a huge demand for mobile apps to have analytical and artificial-intelligence-based capabilities that require even more processing power in the backend. So how can app development companies, or businesses that want to build a mobile app to serve customers, achieve this level of power in their backend? Well, you do not have to look beyond Firebase for this. Backed by Google, Firebase offers a suite of tools businesses need to build powerful apps without having to worry about managing infrastructure. It is designed to support backend developers at all stages of development and helps improve the quality of the overall app development exercise.

## Why is Firebase used for mobile app development?

It provides, as a service, a host of features and modules that an app developer needs, eliminating the need to create them from scratch. It includes everything from a scalable database to powerful analytics libraries. Firebase is in no way a replacement for backend development activity; rather, it is a platform that helps backend developers and engineers enhance the experience of the app without stressful coding and architectural planning.

Some interesting features of Firebase:

- Real-time data
- Security
- Built-in email and password authentication
- Static file hosting
- Storage fostered by Google Cloud

Now, here are 5 revolutionary advantages brought by Firebase to today's mobile app development practices.
### 1) Faster time to market

This is one of the key elements that decides how successful a mobile app turns out to be. When consumers get to use your app or its new features as early as possible, that becomes your biggest competitive advantage in extremely sensitive markets. For example, imagine that you own a financial management application that consumers use to manage their personal expenses. When customers increasingly want to use their smartphones for making payments across their physical and digital shopping destinations, it is imperative that you roll out that functionality in your app at the earliest, or risk losing market share to competitors who do it first. A typical backend development exercise for this capability would take months or even years to be production ready. Bring in Firebase, and you could be looking at launching the feature in a matter of weeks, thanks to the huge set of processing capabilities and scalable assets it offers. Firebase offers a competitive advantage that you simply cannot ignore.

### 2) Reduces development time and effort

Building a backend for your mobile app requires servers, hosting, a database, and numerous supporting backend services. You need dedicated development professionals to manage this backend activity as well as another team to work on the front-end mobile app code. This leads to increased development effort, integration time, and project management overhead. Above all, dependencies between these two teams will push development timelines further if one team gets held up with challenges. Communication and collaboration overheads are also high in this mode of application development, which ultimately increases the risk of errors that can have severe consequences on the final product. Firebase comes preloaded with all such prerequisites of backend development, so you need not reinvent the wheel for each activity.
This is a huge advantage of using Firebase for mobile apps, as a front-end developer who needs a certain backend data type or service can simply integrate their code with the Firebase suite and get the desired input. They and their team can manage the end-to-end development of the mobile app, reducing overheads and dependency-based challenges to a bare minimum, or even eliminating them altogether for smaller projects. This leads to a drastic reduction in the time and effort needed to build mobile apps, which is a boon to companies with smaller development teams.

### 3) Real-time database scalability

Cloud Firestore is the popular database platform within the Firebase suite of offerings. It allows development teams to quickly set up a highly scalable and flexible real-time database for their mobile, web, and server development activities. Eliminating a middle synchronization layer between the application and the backend database results in direct data access through the Firebase SDK. It ensures that data is well synchronized within the application environment irrespective of the internet connectivity it has. It is a NoSQL database, which has proven to be more effective than relational databases for mobile application workloads. It can be used to store large amounts of unstructured or structured data, run powerful processing algorithms, and scale on demand to support heavy spikes in data. It allows easy storage and management of user-generated content irrespective of the volume of data generated. Google’s cloud storage empowers easy scalability for the data storage as well as processing systems to accommodate large volumes seamlessly.

### 4) Google Analytics integration

Any kind of personalized experience you want to provide for your mobile app consumers requires insights about their behavior and usage patterns.
From a traditional standpoint, you need to either build an analytical engine into your mobile app’s backend to process these insights, or integrate powerful third-party analytical solutions to get the job done.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1mgezy4buh39vw0oev2c.jpg)

With Firebase, you have Google’s own Google Analytics integrated into the core solution and available for you to create your own views of customers based on target data behavior. You can easily monitor user behavior, identify journeys across devices, and do much more personalization in your mobile apps with the help of the built-in Google Analytics. It provides you data on where your app scores in terms of user engagement. This allows you to build the features and capabilities that customers want from your app, and hence improve brand loyalty and customer satisfaction. It helps place the right content in front of the right audience at the right time, ensuring that your customers’ needs are well addressed by your mobile app. Ultimately, analytics serves as a game-changing, highly advantageous feature in your mobile app’s journey toward building a sustainable growth channel for your business, powered by user feedback and interests.

### 5) Flexible cost

Firebase is free to begin with, and there are many subscription-based plans for the various services within Firebase. Enterprises or startups that want to begin their mobile app journey can use Firebase to explore possibilities at a small scale and later expand their capabilities on a pay-as-you-go basis. This provides enough financial flexibility for small businesses to compete with established businesses in the race to build innovative mobile-based business channels. From e-commerce to mobile payments, there is a huge ocean of opportunities they can explore within their limited financial capabilities and technology constraints.
The lower starting cost and on-demand pricing allow companies to monetize their mobile channels gradually and maintain low operational costs throughout the app’s lifecycle.

## Final Thoughts

Due to these tremendous advantages, Firebase has revolutionized the way mobile app backends are built today. From cost to effort, it has made a huge impact in making app backend development more accessible and seamless for businesses of all sizes. All a business needs to create a productive and sustainable roadmap for its mobile app journey is a knowledgeable workforce who can use tools like Firebase to create amazing mobile app experiences for its consumers. We understand your need to focus on core business activities and the limits that places on your resources. This is why CitrusBug offers state-of-the-art Firebase development services for mobile app development projects. Our consultants will identify the most profitable growth roadmap for your mobile app, build backend capabilities using Firebase, and create solid front-end mobile apps for your customers to use as well. Get in touch with us today to know more.
joseprest
779,831
Combining Jest and Cypress code coverage reports in your Angular app
Photo by Isaac Smith on Unsplash When writing front-end tests, code coverage is an important metric...
0
2021-08-03T02:55:54
https://fasterinnerlooper.medium.com/combining-jest-and-cypress-code-coverage-reports-in-your-angular-app-595b2bd4a125
jest, cypress, angular, testing
--- title: Combining Jest and Cypress code coverage reports in your Angular app published: true date: 2021-07-31 18:38:58 UTC tags: jest,cypress,angular,testing canonical_url: https://fasterinnerlooper.medium.com/combining-jest-and-cypress-code-coverage-reports-in-your-angular-app-595b2bd4a125 --- ![](https://cdn-images-1.medium.com/max/1024/1*pv2AJGKWmqbFC9NjAWXuCg.jpeg)<figcaption>Photo by <a href="https://unsplash.com/@isaacmsmith?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Isaac Smith</a> on <a href="https://unsplash.com/s/photos/report-graph?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption> When writing front-end tests, code coverage is an important metric that helps you determine how many critical paths of your application are covered. Cypress has their own tutorial and example repo which uses Babel, but in this post I’ll go through the process of doing this with an Angular application, without using Babel. ### Instrumenting and reporting Codecov has a feature whereby any code coverage metrics that are uploaded with the same branch name will be merged into one report. This is a useful feature if you have several different test runners and need them all to be combined into one report. For producing coverage reports for Jest and Cypress some features need to be added and enabled. Jest comes with the istanbul instrumenter and coverage reporting tool built-in, and since this is now the standard for instrumenting code, this is a useful feature. To use it, you have to run jest with the --coverage flag added. If you already have this in your package.json file, you can make the following modification: ``` - "test": "ng test" + "test": "ng test -- --coverage" ``` This will produce a coverage report for your Jest tests, but this isn’t enough. We also need Jest to collect coverage for all of the files that have no test. 
In your jest.config.js file, you will need to make this modification:

```
+ rootDirs: ['<rootDir>/src'],
+ collectCoverage: true,
- coverageReporters: ['json', 'html'],
+ coverageReporters: ['lcov'],
+ collectCoverageFrom: ['**/*.ts'],
+ coverageDirectory: './coverage/jest',
```

This will create an lcov format coverage report, but will also give us a proper understanding of what is and isn’t covered across our entire application. We’re also going to put our output in a separate directory so that when we run our Cypress tests, our Jest coverage report doesn’t get overwritten.

To get Cypress set up, there are a few more steps involved. As mentioned in the readme on [skylock/cypress-angular-coverage-example](https://github.com/skylock/cypress-angular-coverage-example), because the Cypress package doesn’t do instrumenting, we have to add the @cypress/code-coverage package and modify the build process. This will trigger instrumenting on our code so that we can produce a coverage report. This package uses Istanbul’s successor nyc, and as such a second step is required for us to generate an lcov format report. We also need to modify the webpack config for the development build so that the code gets instrumented correctly.

Begin by adding the @cypress/code-coverage package:

```
npm i -D @cypress/code-coverage
```

Then add the package to Cypress’ support/index.ts file:

```
+ import '@cypress/code-coverage/support';
```

Now that we have Cypress set up, we can move on to modifying the build process.
We achieve this by using the ngx-build-plus package, which can be installed with the following command:

```
npm i -D ngx-build-plus
```

In the cypress folder, a coverage.webpack.js needs to be added with the following content:

```
module.exports = {
  module: {
    rules: [
      {
        test: /\.(js|ts)$/,
        loader: 'istanbul-instrumenter-loader',
        options: { esModules: true },
        enforce: 'post',
        include: require('path').join(__dirname, '..', 'src'),
        exclude: [/\.(e2e|spec)\.ts$/, /node_modules/, /(ngfactory|ngstyle)\.js/],
      },
    ],
  },
};
```

This will allow us to instrument our code, but we also need to modify our angular.json file to use this webpack file:

```
- "builder": "@angular-devkit/build-angular:dev-server",
+ "builder": "ngx-build-plus:dev-server",
+ "options": {
    "browserTarget": "ng-new-app:build",
    "extraWebpackConfig": "./cypress/coverage.webpack.js"
  }
```

Now we can test our project by running

```
ng e2e
```

or

```
npm run <project>:cypress-run
```

Once the test run has completed, you still need to convert the nyc output to lcov so that you can proceed with the next step. You can achieve this by running this command:

```
npx nyc report --reporter=lcov
```

This will transform your nyc output into lcov output, so that both reports are in the same format. You can now decide how you wish to track your code coverage.

### Option 1: Codecov

When you upload multiple reports, Codecov will merge them together, so simply uploading the coverage directory’s contents will combine these results and produce an accurate coverage report. If you are using GitHub Actions, uploading the files is as simple as calling the relevant Action:

```
uses: codecov/codecov-action@v1
```

Once it is uploaded, you will get access to Codecov’s different coverage diagrams as well as a histogram of your coverage over time.
![](https://cdn-images-1.medium.com/max/1024/1*uFRKjxrF1KcM9uDBRn6SnQ.png)<figcaption>Coverage over time for an example project</figcaption> ### Option 2: Manually merge the reports If you simply want to manually merge the reports and feed them into your own code coverage tool, lcov reports can be appended to one another and still retain the same detail. After installing the lcov package in Ubuntu, you can generate a HTML report of the combined coverage by running the following command: ``` genhtml --prefix <directory> lcov.info --output-directory=./cov ``` ![](https://cdn-images-1.medium.com/max/1024/1*UiSx5f6quEy5kJC9MAtvYQ.png)<figcaption>An example of the genhtml output</figcaption> If you’d like to have a ready-to-go project that has all of this included, and more, check out this project I helped out with: [sardapv/angular-material-starter-template](https://github.com/sardapv/angular-material-starter-template) Now go forth, and cover!
fasterinnerlooper
779,841
Logflake, a NodeJS Console Logger with superpowers
Logflake is a NodeJS package that brings a better console log experience while developing
0
2021-08-04T15:38:54
https://dev.to/felipperegazio/logflake-a-nodejs-console-logger-with-superpowers-ek0
showdev, opensource, node, javascript
--- title: Logflake, a NodeJS Console Logger with superpowers published: true description: Logflake is a NodeJS package that brings a better console log experience while developing tags: #showdev #opensource #node #javascript cover_image: https://images.unsplash.com/photo-1603736260016-6dfe88537067?ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=1950&q=80 ---

I just finished this lib that I've been working on for the past few weeks. LogFlake is a NodeJS console logger with superpowers. It has the same API as the usual `Console` but with beautified output, a message header with timestamp and useful information, trackability, and a toolset for better control of your log messages. You can check out the lib and the docs at this link: https://www.npmjs.com/package/logflake.

I decided to write this lib because I like the simplicity of Console, but I miss some features. I was searching for a very simple, out-of-the-box tool just to get better output and control of console message logging. So I wrote "logflake", which is pretty neat and, despite its many options, requires zero configuration to use its basic features. The lib was written in TypeScript and tested with Jest. It has test coverage (unit and integration) near 90%, and it's available on npm. You can download it, or install it using npm/yarn.

## Getting started

I'll show some of the basic features. If you like it, please consider leaving a star on GitHub. PRs are very welcome! Hands on: you can install it using npm or Yarn:

```
npm install logflake
```

Then you must create your `log` function (you can give it the name you prefer). In CommonJS (CJS):

```js
const logger = require('logflake');
const log = logger();
```

or in ES modules (ESM):

```js
import logger from 'logflake';
const log = logger();
```

Now you can log things. As said, it has the same API as `Console`, with some advantages.
```js
log('Hello world');
```

Will output:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ey91kaz3lmsw3ed0lgeu.png)

The console header shows a namespace [ CONSOLE LOG ] (which is editable), followed by the O.S. identifier, O.S. username, current main file, date, and time. You can configure the header and decide which information you want to show. You can log anything you want, or as many things as you want. For example, this is the log function logged by itself:

```js
log('Hello %s', 'world', log);
```

Will output:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ptwrhlgcb29z053y5ir.png)

### Log levels

The first `log` function argument can be used to change the log level. You can use the following log levels:

- log (blue)
- info (cyan)
- warn (yellow)
- error (red)
- trace (magenta)
- quiet (no console output)

An error, for example, would be:

```js
log('error', 'Unexpected error', 500);
```

And would produce:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8vvauuu5l4mrp4fjggn.png)

### Namespaces

Now let's imagine that you have lots of logs in a huge and distributed application. You can add a namespace to each log function to make them easier to find:

```js
const logger = require('logflake');
const log = logger('Example'); // Example is the namespace

log('error', 'Unexpected error', 500);
```

Note the [ EXAMPLE ERROR ] prefix on the log header:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gv9q8b4tkirpc9sr640p.png)

### Options

Logflake accepts lots of options passed directly to the "logger". To illustrate some of them, let's say you want to count how many times a log was triggered, and save its output to a local file.
You could do:

```js
const logger = require('logflake');

const log = logger({
  prefix: 'Example', // now we pass the namespace as an option
  logDir: './',      // directory to save the logs
  callCount: true    // count how many times a log happened
});

/**
 * Now let's pretend this error happened 1000 times
 */
for (let i = 0; i < 1000; i++) {
  log('error', 'Unexpected error', 500).save();
}
```

This will output: (...)

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30oa5dya2g904ud91hgz.png)

Note that the function now has a count (x1000, for example). Since we passed the "callCount" option, it indicates how many times the `log` has been triggered in the current runtime. The `save()` method tells the logger to save each log output (of this specific call) to a file in the directory passed in the `logDir` option. The logger will automatically organize the different log files by date.

### Methods

Now let's say you don't want to pass the `save()` method to specific log calls; instead, you want to save all of them. Also, you don't want to pollute your log file with 1000 duplicated log entries; just one is enough to alarm the team. You can ask `LogFlake` to save all logs for you, and to save some of them only once, like this:

```js
const logger = require('logflake');

const log = logger({
  prefix: 'Example',  // now we pass the namespace as an option
  logDir: './',       // directory to save the logs
  alwaysSave: true,   // save all log outputs to a log file
  callCount: true     // count how many times a log happened
});

log("I'll be saved also :3");

for (let i = 0; i < 1000; i++) {
  log('error', 'Unexpected error', 500).once();
}
```

The code above will save the first log, then trigger and save the error log only once, despite its being inside a 1000x for loop, because of the `.once()` method. All logs are automatically saved due to the `alwaysSave` option; since `once` has been used on the error, it will be saved only once.
We can also imagine that this is a very important log for you, and you want to send an alarm with its content to Slack when and if it fires. `LogFlake` STILL doesn't do that (the Slack part), but you can get the log output and information and send it wherever you want:

```js
log('error', 'Unexpected error', 500)
  .once()
  .get((output, info) => {
    /* do whatever you want with output and info */
  });
```

As shown above, we get the log output using the `get` method. The `output` param will contain the string representing the log exactly as shown on the console. The `info` param is a useful object containing information about the log, such as level, call count, trace, etc. You can also automatically trap all log outputs, allowing you to send them to Slack, a database, or wherever you want.

### Conclusion

There are lots of options and uses for `LogFlake`, and it would take a huge post to show all of them; these were only some cool examples. If you liked it, you can check out the complete documentation and sources on GitHub: https://github.com/felippe-regazio/logflake. As already mentioned, this lib is intended to be VERY simple and useful. It's a very handy way to track and save runtime information while running your code.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3v7sw53nyh81zuyqa36d.png)

---

Cover image by Jason Richard on Unsplash
felipperegazio
780,037
Different Important Aspects and Guideline to Learn Web Design
In today’s technology era, many questions arise in the mind while learning web design from the...
0
2021-08-03T07:51:16
https://dev.to/aishaon/different-important-aspects-and-guideline-to-learn-web-design-2phm
webdev, technology, trend, guidelineforwebdesign
In today’s technology era, many questions arise while learning web design from scratch. When I first started learning web design back in 2009/2010, I was in the same situation. For those searching for answers to this type of question, this article is for them.

Guidelines for learning Web Design (Static Design):

- You have to learn HTML first.
- You need to learn CSS to style HTML. There are two versions of CSS in use now, version 2 and version 3. Versions 2 and 3 are almost the same; version 3 just adds some of the latest functionality. So keep learning without worrying about it. Of course, HTML5 is a little different. To learn HTML and CSS, go to W3Schools.com, or for Bengali training, you can check my free course.
- A mockup is created in Photoshop before designing any website. This Photoshop file is called a PSD. The PSD then has to be converted to HTML; this process is called PSD to HTML. You can also check my PSD to HTML course for free.
- Once the site is built with HTML and CSS, it needs JavaScript for animations, slideshows, etc. This is the makeup box of the site. To learn JavaScript, go to W3Schools.com.
- Once the site is created, it has to be made responsive. Responsive means arranging the display of the site according to different screen sizes. There are many frameworks for building responsive sites, such as Less Framework, Foundation, HTML KickStart, Bootstrap, Skeleton, HTML5 Boilerplate, etc.

Guidelines for learning Web Design (Dynamic Design):

- Server-side scripting is required to make the site dynamic. This means scripting is needed to store information from the site's forms in a database and to retrieve the information stored there. I will suggest two pairs here: one is PHP and MySQL, the other is ASP and MSSQL.
- Many people think a lot about small things like span tags. Many designers design without span tags. There is no reason to panic.

Now let’s talk about CMSs.
The most popular CMS is WordPress. The advantages of a CMS are that a lot can be done and a site can be designed in a short time, with lots of plugins available. Nothing beats a CMS for creating a site quickly using plugins. Many people hate CMSs; if you know raw coding, you can customize a CMS to your desire. Another popular CMS is Joomla. Joomla's security is weaker than WordPress's, so many people don't like it.

Plugins are available on WordPress for creating eCommerce sites; "WP eCommerce" is one such plugin. Joomla also has plugins for creating eCommerce sites, such as VirtueMart. There are also some CMSs built specifically for eCommerce sites, for example WooCommerce, Shopify, OpenCart, ZenCart, OsCommerce, NopCommerce, BigCommerce, PrestaShop, etc.

To create an eCommerce site, you have to integrate a payment method into the site. The most trusted of the international payment methods is PayPal. Other methods include Payoneer, 2Checkout, Stripe, and many more. Instructions for integrating these payment methods into a website can be found in the help files or support pages of each payment provider's site.

Many people want to add a Facebook Like box, Share button, Pinterest, Twitter, etc. These are available on the respective sites; if you search on Google, you will get the link. They will give you either an iframe code, a link, or HTML code: copy and paste it into your site. To set up a Google Map, go to Google Maps, and once you set the location and width/height, you will get a map of a certain size and place. Generate the code and put it in the HTML code of the site. If you can explain to Google what more you want to know, Google will answer step by step.

So, I think I have given you a brief explanation of the important aspects of and guidelines for learning web design. If you found this article helpful, do comment in the comment section and share it on your social media so that other people can also get help from it. 🙂
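To make "responsive" from the guidelines above a little more concrete, here is a tiny hand-written sketch (the 600px breakpoint and class names are my own, not taken from any of the frameworks mentioned): the same page shows two columns on wide screens and stacks them on narrow ones.

```html
<!-- The viewport meta tag is what lets media queries react to device width -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  .sidebar { float: left; width: 30%; }
  .content { float: left; width: 70%; }

  /* Below an illustrative 600px breakpoint, stack the columns */
  @media (max-width: 600px) {
    .sidebar, .content { float: none; width: 100%; }
  }
</style>
<div class="sidebar">Menu</div>
<div class="content">Main article</div>
```

Frameworks like Bootstrap or Foundation essentially ship a large, well-tested set of rules like these so you don't have to write each breakpoint by hand.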
aishaon
780,316
Python super() vs Base.__init__ Method
When defining a subclass, there are different ways to call the __init__ method of a parent class....
0
2024-01-08T08:58:15
https://bhavaniravi.medium.com/python-super-vs-base-init-method-d923ca595ad3
django, flask, python
--- title: Python super() vs Base.\_\_init\_\_ Method published: true date: 2024-01-08 08:58:00 UTC tags: django,flask,python,pythonprogramming canonical_url: https://bhavaniravi.medium.com/python-super-vs-base-init-method-d923ca595ad3 ---

When defining a subclass, there are different ways to call the \_\_init\_\_ method of a parent class. Let’s start with a base class and go through each of these methods. For this blog, it’s better if you open a sample.py Python file and follow along.

```python
class Base(object):
    def __init__(self):
        print("Base created")
```

### Method 1: Using the parent reference directly

```python
class ChildA(Base):
    def __init__(self):
        Base.__init__(self)
        print("Child A initialized")
```

### Method 2: Using super with the child class

```python
class ChildB(Base):
    def __init__(self):
        print("Child B initialized")
        super(ChildB, self).__init__()
```

### Method 3: Using the zero-argument super

```python
class ChildC(Base):
    def __init__(self):
        super().__init__()
        print("Child C initialized")
```

### Questions

1. What are the pros and cons of each method?
2. Is there one single right way to do this?

When you run this code as a single Python script, initializing child classes A, B, and C, you will notice absolutely no difference.

```python
cA = ChildA()
cB = ChildB()
cC = ChildC()
```

How can we demystify this? Let’s start with the documentation.

1. As of [Python 3](https://docs.python.org/3/library/functions.html#super), super().\_\_init\_\_() is the same as super(ChildB, self).\_\_init\_\_(). That rules out one of the three methods.
2. To compare Base.\_\_init\_\_(self) and super().\_\_init\_\_() we need multiple inheritance.

Consider the following snippet:

```python
class Base1:
    def __init__(self):
        print("Base 1 created")
        super().__init__()


class Base2:
    def __init__(self):
        print("Base 2 created")
        super().__init__()
```

Let’s write the subclasses:

```python
class A1(Base1, Base2):
    def __init__(self):
        super().__init__()
        print("Child A1 initialized")


class A2(Base2, Base1):
    def __init__(self):
        super().__init__()
        print("Child A2 initialized")
```

Let’s initialize the objects:

```python
a1 = A1()
print("\n\n")
a2 = A2()
```

On running the above snippet, we get the following output:

```
Base 1 created
Base 2 created
Child A1 initialized


Base 2 created
Base 1 created
Child A2 initialized
```

In the case of class A1(Base1, Base2), Base1 is initialized first, followed by Base2. It’s the inverse for class A2. We can conclude that the methods are called based on the order in which the base classes are specified.

1. When you use the Base1.\_\_init\_\_() method, you lose out on this feature of Python.
2. When you introduce new hierarchies, renaming the classes will become a nightmare.

So how does Python know which function to call first? Introducing MRO (Method Resolution Order).

### Method Resolution Order

> Method Resolution Order (MRO) denotes the way a programming language resolves a method or attribute.

In the case of single inheritance, the attribute is searched for at a single level; with multiple inheritance, the Python interpreter looks for the attribute in the class itself, then in its parents in the order of inheritance. In the case of A1, that order is A1 -> Base1 -> Base2.

One can use the mro function to find the method resolution order of any particular class.

```python
print(A2.mro())
```

**Output**

```python
[<class '__main__.A2'>, <class '__main__.Base2'>, <class '__main__.Base1'>, <class 'object'>]
```

### The Last Punch

Comment out the super calls in the base classes and check the output of your script.

```python
class Base1:
    def __init__(self):
        print("Base 1 created")
        # super().__init__()


class Base2:
    def __init__(self):
        print("Base 2 created")
        # super().__init__()
```

What do you see?

```
Base 1 created
Child A1 initialized
Base 2 created
Child A2 initialized
[<class '__main__.A2'>, <class '__main__.Base2'>, <class '__main__.Base1'>, <class 'object'>]
```

In spite of having Base1 and Base2 in the MRO list, the MRO won’t resolve the full order unless the super() call is propagated all the way up to the base class; i.e., Python propagates the search for the attribute only until it finds one. Comment out the \_\_init\_\_ method in Base1 and see for yourself.

```python
class Base1:
    pass
    # def __init__(self):
    #     self.prop1 = "Base 1"
    #     self.prop11 = "Base 11"
    #     print("Base 1 created")
    #     # super().__init__()
```

**Output**

Since Python can’t find the \_\_init\_\_ method in Base1, it checks Base2 before going all the way up to the object class:

```
Base 2 created
Child A1 initialized
Base 2 created
Child A2 initialized
[<class '__main__.A2'>, <class '__main__.Base2'>, <class '__main__.Base1'>, <class 'object'>]
```

People ask me why I love Python so much. It’s not because Python is simple and easy. It is all these things Python does to make things easy for us, and sometimes a little hard, too :)

[_Want to build a project with Python? Join the Python to Project Bootcamp_](https://gumroad.com/l/LaFSj)
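The real payoff of propagating super() shows up in a diamond-shaped hierarchy, where two parents share a common base. Here is a small sketch (the class names are mine, not from the post) showing that every \_\_init\_\_ in the diamond runs exactly once, in MRO order:

```python
class Root:
    def __init__(self):
        # Record the initialization order on the instance
        self.order = ["Root"]

class Left(Root):
    def __init__(self):
        super().__init__()
        self.order.append("Left")

class Right(Root):
    def __init__(self):
        super().__init__()
        self.order.append("Right")

class Bottom(Left, Right):
    def __init__(self):
        super().__init__()
        self.order.append("Bottom")

b = Bottom()
print(b.order)                              # ['Root', 'Right', 'Left', 'Bottom']
print([c.__name__ for c in Bottom.mro()])   # ['Bottom', 'Left', 'Right', 'Root', 'object']
```

Had Left and Right called Root.\_\_init\_\_(self) directly, Root would have been initialized twice; the cooperative super() chain lets the MRO deduplicate it.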
bhavaniravi
780,342
A simple way to build objects using Object.assign
When working with forms, we sometimes need a more practical way to get "things"...
0
2021-08-03T19:38:39
https://dev.to/michael08928874/maneira-simples-de-construir-objetos-utilizando-object-assign-55ml
When working with forms, we sometimes need a more practical way to get "things" done. I ran into a situation where I needed to merge two objects, literally a merge, and I found this solution, which I thought was amazing and which will no doubt make my life much easier from now on.

Let's look at an example! We have basic information about a customer.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3brh7mg2kgwmnvtyrd44.png)

We also have basic information about this customer's credit card.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sdd24jvgo1v81nnogl0p.png)

We can use Object.assign this way to merge these two objects into a consolidated result, forming a new object that, for our example, we will call fatura (invoice).

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3bls1r3j85ztvv24olz9.png)

As a result, we get the following object.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3na2k88rw1mbc7xyupg.png)

We can go a little further, and since you made it this far, I'll show you a nice trick. Now suppose we need to add an array inside this object; this array contains the description of the items to be paid. To do that, just create a new object holding an array and do the following:

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6frsxfevimt2d2itdlll.png)

In the example, I created a cobranca (billing) object that stores an object containing the array of pending installments. The next step is to use Object.assign to merge this cobranca object with the fatura object.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/adnojdq98axrc8y50bqo.png)

And we have the fatura object with the cobranca object inside it.

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f1j8ejnxlf5cd3aidetx.png)

Wrapping up...
Using Object.assign can be very useful and versatile. I currently use this feature a lot in Angular because, for some specific cases, it lets you build the JSON object more freely, which is very handy when working with forms that require a bit more flexibility.
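Since the examples above live in screenshots, here is a rough reconstruction of the same flow in plain JavaScript (the property names and values are illustrative guesses; the originals are only visible in the images):

```javascript
// Basic customer info (illustrative fields)
const customer = { name: 'John', email: 'john@example.com' };

// Basic credit card info for that customer
const creditCard = { cardNumber: '4111 1111 1111 1111', brand: 'Visa' };

// Merge both into a new "fatura" (invoice) object.
// Using {} as the target leaves customer and creditCard unchanged.
const fatura = Object.assign({}, customer, creditCard);

// Going further: a "cobranca" (billing) object holding
// the array of pending installments ("parcelas")
const cobranca = { parcelas: [{ descricao: 'Installment 1', valor: 100 }] };

// Merge the billing object into the invoice
Object.assign(fatura, cobranca);

console.log(fatura);
// fatura now carries name, email, cardNumber, brand and parcelas
```

Note that Object.assign performs a shallow copy: `fatura.parcelas` is the same array instance as `cobranca.parcelas`, which is usually fine for building request payloads from form data.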
michael08928874
780,568
bug report
Whenever I comment, the site automatically makes me like my own comment. Not sure if it is intentional...
0
2021-08-03T17:06:30
https://dev.to/aheisleycook/bug-report-4ck0
Whenever I comment, the site automatically makes me like my own comment. Not sure if it is intentional or not. Has anyone else experienced this?
aheisleycook
780,572
So you ever wonder what the heck is kubernetes
Well kubernetes is like a control panel of a space ship that manages and gives a form of control of...
0
2021-08-03T17:11:30
https://dev.to/greatness1504/so-you-ever-wonder-what-the-heck-is-kubernetes-36aj
kubernetes, devops, docker, aws
Well, Kubernetes is like the control panel of a spaceship: it manages, and gives the pilot a form of control over, the ship's different engines, network, and components. Kubernetes simply controls and manages a huge fleet of mini computers that are somewhat isolated from each other. Its management duties include, but are not limited to: deploying new mini computers when old ones fail, which makes it a fault-tolerant system; creating and managing the network between the various mini computers it manages; allocating resources and secrets, like passwords, between the various mini computers; shutting down old mini computers and deploying new ones; and observing, monitoring, and alerting on these mini computers. There is a whole lot more that is not mentioned here. These mini computers are actually running containers. Follow me to catch my upcoming series on how to become a DevOps engineer.
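The "deploying new mini computers when old ones fail" part is what is usually called a reconciliation loop. Here is a toy sketch of that idea in JavaScript (this is not real Kubernetes code or its API, just an illustration of the fault-tolerance concept):

```javascript
// Toy reconciliation loop: keep the actual number of running
// "mini computers" (containers) equal to the desired count,
// replacing any that have failed.
function reconcile(desiredCount, running) {
  // Drop failed containers, like Kubernetes removing unhealthy pods
  const healthy = running.filter((c) => c.healthy);
  // Start replacements until we are back at the desired count
  while (healthy.length < desiredCount) {
    healthy.push({ id: `replacement-${healthy.length}`, healthy: true });
  }
  return healthy;
}

const state = reconcile(3, [
  { id: 'a', healthy: true },
  { id: 'b', healthy: false }, // this one crashed
]);
console.log(state.length); // 3
```

Kubernetes runs loops like this continuously, comparing the desired state you declared against what is actually running.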
greatness1504
780,659
TIL: Wildcard SSL certificate does not support nested subdomains
A wildcard SSL certificate for *.example.net will match sub.example.net but not sub.sub.example.net....
0
2021-08-03T18:48:52
https://dev.to/jadia/til-wildcard-ssl-certificate-does-not-support-nested-subdomains-47bf
todayilearned, web
A wildcard SSL certificate for `*.example.net` will match `sub.example.net` but not `sub.sub.example.net`. You need to generate a separate certificate for `*.sub.example.net`. [Source](https://stackoverflow.com/questions/2115611/wildcard-ssl-on-sub-subdomain) A free Cloudflare account does not support creating a wildcard SSL certificate for `sub.sub.example.net`; [you need a $10/m subscription for Advanced Certificate Manager from Cloudflare](https://community.cloudflare.com/t/argo-tunnel-nested-subdomain/273061/2).
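The underlying rule is that a certificate wildcard only covers a single DNS label. That is easy to check yourself with a small JavaScript sketch of the matching logic (an approximation for illustration, not a full implementation of certificate name matching):

```javascript
// A wildcard in a certificate name only covers ONE DNS label.
function wildcardMatches(pattern, hostname) {
  const p = pattern.split('.');
  const h = hostname.split('.');
  // '*.example.net' has 3 labels, so the hostname must have exactly 3 too
  if (p.length !== h.length) return false;
  return p.every((label, i) => label === '*' || label === h[i]);
}

console.log(wildcardMatches('*.example.net', 'sub.example.net'));          // true
console.log(wildcardMatches('*.example.net', 'sub.sub.example.net'));      // false
console.log(wildcardMatches('*.sub.example.net', 'sub.sub.example.net')); // true
```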
jadia
780,817
Astro + Forestry CMS Revisited
Static sites powered by Forestry's git-based CMS, made even easier.
0
2021-08-07T17:13:35
https://navillus.dev/blog/astro-plus-forestry-revisited/
astro, cms
--- title: Astro + Foresty CMS Revisited description: Static sites powered by Forestry's git-based CMS, made even easier. published: true date: 2021-08-03 00:00:00 UTC cover_image: https://navillus.dev/posts/2021-08-03-astro-plus-forestry-revisited.jpg tags: astro, cms canonical_url: https://navillus.dev/blog/astro-plus-forestry-revisited/ --- It's been just over a month since the original [Astro + Forestry CMS](/blog/astro-plus-forestry) demo, but things have been [moving quickly](https://github.com/snowpackjs/astro/blob/main/packages/astro/CHANGELOG.md) in Astro land! We'll build upon the original demo, go ahead and check out the first post if you haven't done so already! **tl;dr;** We received some great feedback from [Forestry](https://twitter.com/forestryio) after the original demo was released. I had been holding off on revisiting that post until a few Astro features were released. Check out the [live demo](https://demo-astro-forestry.netlify.app), or dive into the main [diff](https://github.com/Navillus-BV/demo-astro-forestry/commit/8660fb54988390b3a27d65a3abfe784725d789df) that includes most of the updates listed below. Specifically, Astro's [Collections API](https://docs.astro.build/core-concepts/collections) was updated to handle even more use cases. Oh yeah, and their [docs site](https://docs.astro.build/) was launched! It's hard to believe this project was only announced a few months ago, the community has really grown quickly and countless hours were put in to build a great looking (and localized!) documentation site. ## Feedback straight from the source <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Awesome post 👏<br>We should give a try to <a href="https://twitter.com/astrodotbuild?ref_src=twsrc%5Etfw">@astrodotbuild</a> 🚀<br><br>Minor feedback:<br>1. You could set /images as your default media folder instead of the default /uploads to further reduce the diff. <br>2. 
Authors could be stored as JSON file(s) instead of Markdown if you don&#39;t need a body.</p>&mdash; Forestry CMS (@forestryio) <a href="https://twitter.com/forestryio/status/1409905329845030916?ref_src=twsrc%5Etfw">June 29, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> ### Default media folders Forestry's CMS is [extremely flexible](https://forestry.io/docs/quickstart/configure-cms/), and honestly it's crazy what a feature set they're able to offer while storing all your data in your own git repo. _No, this isn't a paid post. I'm a user of Forestry and big fan of the git-based CMS approach!_ One of many options when configuring Forestry is the default folder for [media uploads](https://forestry.io/docs/quickstart/configure-cms/#media-settings-examples). I definitely had an eye towards minimizing the diff in the original demo, I'm just in the habit of using a `/uploads` directory for user uploaded content. Old dogs, new tricks, and all that. ### Authors stored in JSON This was excellent feedback, and worth digging into a little further. I originally had author information stored in separate markdown files, `/data/authors/don.md` and `/data/authors/sancho.md`. This honestly didn't make much sense; markdown is a great way to combine properties and content (usually built to HTML). The blog demo doesn't have any author-specific content, just a few properties like `name` and `image`. Given that the site doesn't need to pull any HTML content for the author, it makes much more sense to store that data in a simple JSON file. 
Let's get rid of the author markdown files entirely, replacing them with `src/data/authors.json`:

```
{
  "don": {
    "name": "Don Quixote",
    "image": "/uploads/don.jpg"
  },
  "sancho": {
    "name": "Sancho Panza",
    "image": "/uploads/sancho.jpg"
  }
}
```

Forestry supports this out of the box; once you [set up the sidebar](https://forestry.io/docs/quickstart/configure-cms/#setting-up-sidebar-content-sections) to include the new JSON file, it recognizes that the file is a map and it **just works**. I honestly expected this to fight me a little bit, and was pleasantly surprised when I had no issue removing references to the old markdown files. I was even able to reuse the same [content model](https://forestry.io/docs/quickstart/configure-cms/#content-modeling). I did need to update the content model for posts to reference the new JSON file instead of a markdown file, but a few clicks in the settings menu and it was all hooked back up. ### Bonus points: Instant Previews Forestry's [instant previews](https://forestry.io/docs/previews/instant-previews/) run your development server in a docker container and allow you to preview CMS updates in realtime. That's one of those features that can push plenty of projects to use a hosted CMS platform, very cool to see live previews working so seamlessly in a git-based CMS. One issue I ran into when deploying the first demo was that Astro only supported node 14+. Instant Previews allow you to customize which docker image is used for your development server, but I couldn't quite get it to work with an early version of Astro and ran out of time. As of a couple weeks ago though, Astro now supports node 12 out of the box! After updating the demo project, setting up instant previews was as simple as going back to Forestry's default preview settings. I had tried a custom docker container with no luck, but the included node 12 + yarn image worked like a charm with the latest version of Astro. 
## The New Collections API The original collections API in Astro was designed before the beta was publicly released, and it turns out there were a few use cases that were more common than expected. There aren't any monumental changes here; you can dig through the [merged RFC](https://github.com/snowpackjs/astro/pull/703) if you're curious. A few of the API names were updated to be more clear, and the API was updated to work with the newer `Astro.props` API. You can check out the [diff here](https://github.com/Navillus-BV/demo-astro-forestry/commit/8660fb54988390b3a27d65a3abfe784725d789df#diff-a12b9a8302a65aacc7f592f6058bbc7b2eebcc2509a70ec64f182a67c9d54e45L3) to see exactly what I had to do to update the `$posts.astro` route for the new API. Personally, I'm a fan of the newer design and think the code is a bit cleaner and easier to read. ## Conclusion Astro has been moving quickly since its public beta launch! I was glad to see how easy it was to clean up the demo a bit and take even better advantage of Forestry. If you haven't worked with a git-based CMS before, I highly recommend you take an afternoon and give it a try. It may not be right for every project, but the developer experience of literally having all your CMS data on your local machine just can't be beat!
navillus_dev
781,026
JWT: JsonWebToken demystified
Warning: This is not a how-to, but a what-is. Somebody already wrote a really nice how-to here:...
0
2021-08-04T03:00:32
https://dev.to/khoinguyenkc/jwt-jsonwebtoken-demystified-3dc6
Warning: This is not a how-to, but a what-is. Somebody already wrote a really nice how-to here: https://medium.com/@nick.hartunian/knock-jwt-auth-for-rails-api-create-react-app-6765192e295a But I think it can be broken down to even simpler terms. We'll learn how you can use JWT in your backend and frontend. You'll learn it's rather simple. So read this guide before following the technical set-up guide above. ----------------- WHAT'S A TOKEN? First off, before we even learn how these things work, let's get the big scary guy out of the way. We'll learn what a token is. It's not as scary as it looks. A token is just a piece of gibberish like "asdfqwertyjkl123". That's it. When we're carrying tokens it means we're carrying that piece of gibberish. And that piece of gibberish might translate to "username: brucewiththeboots, email: brucespringsteen2014@gmail.com". The key thing is: you and I can't understand that. Nobody does besides JWT. JWT wrote that piece of gibberish and only JWT can understand it. Ahhhh... So it's like a secret language. The secrecy of the language makes the piece of gibberish trustworthy. Whoever composed this "asdfghqwertyjkl123" - he must've been one of us. That's how JWT thinks. So JWT believes the content of the message. ------ Now that you have a concrete picture in your mind of what a token is, hopefully you're more confident to dive in further and learn how we actually obtain such a token and what they are good for. We're gonna learn what the conveyor belt looks like. This is also not very complicated. But there will be several moving pieces. To see the big picture you need to look at them as a whole, at the same time. So my best advice to you is slow down. Slow way down. Take notes on physical paper and explain it to yourself aloud. It seems like overkill but it might just be the quickest and smoothest way to learn this. Before we begin, keep in mind these 3 things: 1. Don't question "how is this secure?". It's not. Nothing is 100% secure. 
You can evaluate how secure it is later. Also, don't be curious about the mechanism that churns out this gibberish. You can find out on your own later. We've already got too much on our plate. 2. Check your assumptions. You probably already have unconscious assumptions in your mind about what an auth system looks like. And when you learn about this new auth system, you instinctively assume this system also has these characteristics. But it might not. Which can lead to a wrong impression. So be cautious. Don't be that smug kid who thinks he knows what the teacher's going to say next. 3. Don't take the metaphors too far. Metaphors help put a concrete image in your mind, but they're not perfect representations. If I say a lime is kind of like a lemon because they both are so and so, don't start extrapolating and thinking it's just like a lemon. Sounds silly, but we do it all the time. Alright, time to dive in. Again, go slowly! So our app will have the backend and the frontend. We'll look at the backend first. Before we begin, remember this: the backend does not keep track of sessions. It does not "log people in". It does not keep track of which user is logged in. It is much simpler than you think. Now we're finally beginning for real. ------------------ ![Diagram Flow Illustration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/42nz605lckjp2qyf3k7d.jpg) BACKEND Our backend consists of just 2 things: the auth booth and the vending machine. 1. The auth booth The auth booth is kind of like a booth you check in at when you attend a conference. Then they might give you a lanyard with your name and picture on it. Similarly, the auth booth issues a token to the front end as you log in. The frontend might send data like {username: "brucewiththeboots", password: "drowssap"} to mywebsite.com/api/auth; the auth booth will verify that those credentials are correct. If correct, the auth booth delivers a token like "asdfqwertyjkl123" to the frontend. 
The name on the lanyard is a trustworthy ID card. Why? If you have a lanyard, it indicates you signed in at the booth. The guy working at the booth verified you paid for your ticket and wrote your name down before giving it to you. It means the name on your lanyard is indeed your name (if we assume the lanyard is impossible to fake and hard to steal from someone else). Your token "asdfqwertyjkl123" is your lanyard. It's also trustworthy like the lanyard because it's impossible to fake and hard to steal. The difference is, the lanyard can be read by anybody, whereas the token is a secret language. So it's definitely not a perfect metaphor. Summary: the auth booth takes a username and password, and sends a token down. 2. The vending machine Our backend will be an API. And what are APIs anyway? They're essentially JSON vending machines. For example, the backend receives a request like animalphotos.com/api/q?type=cat&amount=35. It delivers a JSON that includes data for 35 cat pictures. An API that uses JWT will also receive a request with a url like animalphotos.com/api/q?type=cat&amount=35, but that request will also include a token like "asdfghqwertyjkl123". Our backend will relay this request to the vending machine. The vending machine reads this request, translates the secret language, and realizes the token says it's brucewiththeboots and he wants 35 cat pictures. It can then decide whether to send him 35 cat pictures. Perhaps he already requested 300 pictures earlier and reached his daily limit. So it won't send down any cat pictures. Or perhaps he's getting more cats. Or, another example, facebook receiving a request with url facebook.com/api/q?datatype=photos&id=19834872323 with a token asdfghqwertyjkl123. That means brucewiththeboots wants access to a certain picture. The API can decide whether he's granted access. He might be authorized to see it or he might not. 
To extend the lanyard-at-a-conference metaphor: the lanyard means you're allowed to attend the lecture from the speaker. You're entitled to the free drinks and lunch offered. You're allowed to use their spotless restroom. Our vending machine also checks your token/lanyard to determine whether to serve you cat pictures. And that is basically what the vending machine does. It uses the token to find out who's requesting. It verifies if this person is authorized to receive the photos. Then it delivers accordingly. A request that doesn't come with a token will be rejected. Nothing will get delivered. ------------------- ------------------- FRONTEND I just sketched a rough picture of what the backend looks like. You only need these two parts to make it work! From here, you can vaguely imagine the role of the frontend. Let's visualize the flow of the frontend: It simply sends up your username and password, receives a token, and then keeps that token somewhere. For example, in localStorage. And how does it keep track of a session? It does not need to keep track of the username at all. Instead, it just keeps the token! The presence of a token indicates the user is logged in. That token will be all that the frontend needs! Your frontend will use that token on every single fetch request to the backend. For example,

```
fetch("animalphotos.com/api/q?type=cat&amount=35", {
  method: "GET",
  headers: {
    "Content-Type": "application/json",
    "timerange": "recent",
    Authorization: `Bearer asdfghqwertyjkl123`
  }
})
```

The backend reads the token, realizes it's brucewiththeboots, and decides if Bruce is authorized to access that particular content. As long as you stay logged in, it will keep that token. As you log out, it will delete the token. Next time you log in, it will use your username and password to get a brand new token. It won't say "asdfqwertyjkl123". 
It will be "oawifsdaklfsdafsd237842" for example, but that message also translates to "brucewiththeboots". ---------------- And those are all the parts! Nothing insane, but it takes a while for everything to sink in. Just stare at the picture for a bit. Redraw it without looking and start teaching yourself aloud like a crazy person. And suddenly, things will click.
khoinguyenkc
781,047
Hello World!
Hello world! I've been a long time lurker and have posted maybe once or twice under a different name,...
0
2021-08-04T04:21:42
https://dev.to/victorlamssc/hello-world-cna
devjournal, beginners
Hello world! I've been a long-time lurker and have posted maybe once or twice under a different name, but I'm switching over here since this is my serious name. All jokes aside, I'm making it a point now to focus on development, blogging, and sharpening my skills in order to both share knowledge and monetize a couple of blogs as a side hustle. Here's what I'm working on right now: - Being an admin of a Wordpress site and trying to do it for free. - Building and running my own blog built with Hugo to hopefully share knowledge with both beginners and folks who need some help with tech - My own personal portfolio site I'll be using this dev.to blog to bring anyone who is curious along the way. I'll also be looking to learn React, Gatsby, and whatever it takes to make my own portfolio site shine, since that will be the most complicated of the bunch. The other two I just want to be simple, fast, and to the point, and to make a difference in people's lives. So I hope this and my subsequent posts help someone out in the future with getting started with their own site or set of sites, and know that we all need to start somewhere!
victorlamssc
781,174
borb, the open source, pure Python PDF library
Hyphenation ensures your document just flows, without the hideous gaps borb allows you to use...
0
2021-08-04T06:24:11
https://dev.to/jorisschellekens/borb-the-open-source-pure-python-pdf-library-4b94
python, pdf, borb
1. Hyphenation ensures your document just flows, without the hideous gaps ![c64gpukzted71](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ef2kgo1mhmc65t5x77h.png) 2. borb allows you to use emoji, even if your font doesn't contain these characters. ![cj805rlzted71](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ji4eowebnv1fd8idxhoy.png) 3. Lists and Tables (or combinations thereof) are no issue for borb ![g4geunkzted71](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ouoojpzziremga0gv546.png) 4. borb comes with an extensive library of line-art, to ensure your documents are always on fleek. You can tweak line-thickness, stroke color, fill color, etc ![po41epkzted71](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yoxswds8csts2d85xah6.png) 5. borb can handle any MatPlotLib figure, as well as regular images (url, path, PIL) ![yf75bnkzted71](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/leadeph90wdc7gs7zvqb.png) Check out the project on [Github](https://github.com/jorisschellekens/borb), or download directly using [pip](https://pypi.org/project/borb/). Be sure to have a look at the extensive [EXAMPLES.md](https://github.com/jorisschellekens/borb/blob/master/EXAMPLES.md) file, which should answer most (if not all) of your questions. I would appreciate it if you star my repo.
jorisschellekens
781,201
Async/await operations - a new way of writing asynchronous code
Async/await mechanism is the new way to write asynchronous code that you traditionally write with a...
0
2021-08-09T11:38:54
https://www.barrage.net/blog/technology/async-await-operations
mobile, development, swift, ios
The async/await mechanism is the new way to write asynchronous code that you traditionally write with a completion handler (also known as a closure). Asynchronous functions allow asynchronous code to be written as if it were synchronous. For demonstration purposes, we used Xcode 13 beta with the Swift 5.5 version. <br/> ## A new way of writing asynchronous code Five main problems that async/await could solve are:

- Pyramid of doom
- Better error handling
- Conditional execution of the asynchronous function
- Forgetting to call, or incorrectly calling, the callback
- Eliminating the design and performance issues of synchronous APIs caused by awkward callback-based APIs

Async/await provides a mechanism for us to run asynchronous and concurrent functions in a sequential way, which helps make our code easier to read, maintain, and scale, all very important parameters in [software development](https://www.barrage.net/solutions/custom-software-development). <br/> ### Async/await vs. completion handlers First, we are going to show you how you would traditionally write functions with completion handlers. Everyone is familiar with this approach. Don't get us wrong, this isn't a bad way of writing functions with completion handlers. We've been doing it like this for years on our [projects](https://www.barrage.net/work); completion handlers are commonly used in Swift code to allow us to send back values after a function returns. Still, once you have many asynchronous operations that are nested, it's not really easy for the Swift compiler to check if there is anything weird going on in terms of any bugs introduced into your code. For cleaner code, we made a new Swift file called NetworkManager.swift, and in this file, we are handling our requests. Our User is a struct with values of name, email, and username. 
<br/> ``` struct User: Codable { let name: String let email: String let username: String } ``` <br/> ``` // MARK: fetch users with completion handler func fetchUsersFirstExample(completion: @escaping (Result<[User], NetworkingError>) -> Void) { let usersURL = URL(string: Constants.url) guard let url = usersURL else { completion(.failure(.invalidURL)) return } let task = URLSession.shared.dataTask(with: url) { (data, response, error) in if let _ = error { completion(.failure(.unableToComplete)) } guard let response = response as? HTTPURLResponse, response.statusCode == 200 else { completion(.failure(.invalidResponse)) return } guard let data = data else { completion(.failure(.invalidData)) return } do { let users = try JSONDecoder().decode([User].self, from: data) completion(.success(users)) } catch { completion(.failure(.invalidData)) } } task.resume() } ``` <br/> As you can see, a lot is going on here; it is pretty hard to debug, there is a lot of error handling for "just" fetching user data, and we ended up with a bunch of lines of code. Most programmers needed to write a couple of these in projects, which gets very repetitive and ugly. In our ViewController.swift file, we decided to show data inside a table view, so we made one and conformed to its data source. Once we decided to call this function, it would look something like this: ``` //MARK: 1. 
example -> with completion handlers private var users = [User]() ​ private func getUsersFirstExample() { NetworkManager.shared.fetchUsersFirstExample { [weak self] result in guard let weakself = self else { return } switch result { case .success(let users): DispatchQueue.main.async { weakself.users = users weakself.tableView.reloadData() } case .failure(let error): print(error) } } } ``` <br/> ##Defining and calling an asynchronous function Now, on the other hand, here is how you would write a function with async/await, which is doing everything like the example above: ``` //MARK: fetch users with async/await using Result type func fetchUsersSecondExample() async -> Result<[User], NetworkingError> { let usersURL = URL(string: Constants.url) guard let url = usersURL else { return .failure(.invalidURL) } do { let (data, _) = try await URLSession.shared.data(from: url) let users = try JSONDecoder().decode([User].self, from: data) return .success(users) } catch { return .failure(.invalidData) } } ``` <br/> An asynchronous function is a special kind of function that can be suspended while it's partway through execution. This is the opposite of synchronous functions, which either run to completion, throw an error, or never return. We are using two keywords, async and await. We use the async keyword to tell the compiler when a piece of code is asynchronous. And with await keyword, we tell our compiler that it has the option of suspending function until data or error is returned; and to indicate where the function might unblock the thread, similar to other languages such as C# and Javascript. Swift APIs, like URLSession, are also asynchronous. But then, how do we call an async marked function from within a context that’s not itself asynchronous, such as a UIKit-based view controller? What we’ll need to do is to wrap our call in an async closure, which in turn will create a task within which we can perform our asynchronous calls - like this: ``` //MARK: 2. 
example -> async/await with Result type private func getUsersSecondExample() { async { let result = await NetworkManager.shared.fetchUsersSecondExample() switch result { case .success(let users): DispatchQueue.main.async { self.users = users self.tableView.reloadData() } case .failure(let error): print(error) } } } ``` <br/> Result type was introduced in Swift 5.0, and its benefits are to improve completion handlers. They become less important now that async/await is introduced in Swift 5.5, of course. However, it's not useless (as you can see in the example above) since it's still the best way to store results. Here is one more example of usage async/await without Result type, which is even cleaner approach: ``` //MARK: fetch users with async/await third example, without Result type func fetchUsersThirdExample() async throws -> [User]{ let usersURL = URL(string: Constants.url) guard let url = usersURL else { return [] } let (data, _) = try await URLSession.shared.data(from: url) let users = try JSONDecoder().decode([User].self, from: data) return users } ``` <br/> Making a call of an async function in our ViewController.swift file: ``` override func viewDidLoad() { super.viewDidLoad() configureTableView() async { let users = await getUsersThirdExample() guard let users = users else { return } self.users = users self.tableView.reloadData() } } ​ //MARK: 3. example -> async/await without Result type private func getUsersThirdExample() async -> [User]? { do { let result = try await NetworkManager.shared.fetchUsersThirdExample() return result } catch { // handle errors } return nil } ``` <br/> One more really good thing about the above example is that we can use Swift's default error handling with do, try, catch, even when asynchronous calls are performed. 
As you can see in both our examples, there is no weak self capturing to avoid retain cycles, and there is no need to update the UI on the main thread since we have the main actor (@MainActor), which takes care of that (the main actor is accessible only if you are using Swift's new concurrency pattern). And voilà, here is our result: ![alt text](https://images.prismic.io/barrage/d8e5a87c-80ec-4f64-a420-5e800ded119a_async-await-operations-6.png) <br/> ## Conclusion Async/await is just a part of Swift Concurrency options; Apple's SDKs are starting to make heavy use of them. This offers a new way to write asynchronous code in Swift; the only drawback is that it's not compatible with older operating system versions, and we'll still need to interact with other code that doesn't yet use async/await. We hope this article will help you better understand asynchronous programming in Swift. If you want to learn more about this pattern, or about everything that was introduced with Swift concurrency, visit Apple's official website and read their documentation. Additionally, you can check out these useful links:

- [Use async/await with URLSession](https://developer.apple.com/videos/play/wwdc2021/10095/)
- [Meet async/await in Swift](https://developer.apple.com/videos/play/wwdc2021/10132/)
- [Swift Evolution Proposal - Async/await (GitHub)](https://github.com/apple/swift-evolution/blob/main/proposals/0296-async-await.md)
- [Swift Programming language - Concurrency](https://docs.swift.org/swift-book/LanguageGuide/Concurrency.html)

<br/> And if you are interested in source code, visit [my GitHub account](https://github.com/miranhrupacki/async-await-WWDC21-blog-post).
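The article notes that Swift's `await` behaves like the keyword in C# and JavaScript. For comparison, here is the same callback-vs-await contrast sketched in JavaScript (the `fetchUsers` helpers and `setTimeout` delay are a stand-in for a real network call, not the Swift code above):

```javascript
// Callback style: the caller passes a completion handler
function fetchUsersWithCallback(completion) {
  setTimeout(() => completion(null, ['Don', 'Sancho']), 10); // fake network delay
}

// async/await style: the same operation reads top-to-bottom
function fetchUsers() {
  return new Promise((resolve) => {
    setTimeout(() => resolve(['Don', 'Sancho']), 10);
  });
}

async function main() {
  // Suspends here without blocking the thread, like Swift's `try await`
  const users = await fetchUsers();
  console.log(users);
}

main();
```

As in the Swift examples, the awaited version avoids nesting and lets ordinary control flow (loops, try/catch) wrap the asynchronous call.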
igorcekelis
781,217
Why I fell out of love with inheritance
A tale of growing up, and realizing your heroes have flaws The Beginning Almost...
0
2021-08-18T08:32:12
https://dev.to/yonatankorem/why-i-fell-out-of-love-with-inheritance-1fh
design, oop
## A tale of growing up, and realizing your heroes have flaws #### The Beginning Almost nine years ago, when I was near the end of my BSc, I was sure that inheritance was the greatest thing an OO programmer has in their arsenal of tools. During most of my studies, we were taught the principles of OOP, why interfaces are important, and how inheritance is a great tool for reusing code, abstracting your code and design, and what a generally powerful tool it is. That's not incorrect. It is good for those things. But it took me a few years to realize that when I idolized those concepts, I tended to use inheritance where it shouldn't have been used. We were shown the example where inheritance fails: > A circle, and an ellipse. Which is the abstraction of which? But we were also told examples like this one:

```c#
class Employee {
    pay()
    generateHoursReport()
    assignVehicle()
}
class Manager extends Employee {
    assignEmployee()
}
class CEO extends Manager { }
```

When you think about it in terms of technical abstraction, it makes perfect sense. A CEO is a manager. A Manager is an employee. A CEO is an employee. All three share the basic functionality of the employee, and each extends those functions, and/or adds new functionality. This solution is exactly the solution I would come up with, before I started to internalize the first of the SOLID principles, and learned more about the delegation/composition pattern. #### Single Responsibility Principle Robert C. Martin wrote about the SOLID principles in [Agile Software Development](https://books.google.com/books?id=0HYhAQAAIAAJ&redir_esc=y), and the definition was: > A module (class, interface, package,…) should do only one thing. 
It was later refined by Martin to state > A module should have only one reason to change I won't go into details on the meaning of the refinement, and you'll find a lot written about it already, especially in [Martin's own blog](https://blog.cleancoder.com/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html). But let's apply this principle to our Employee example. Looking at the Employee class, we can see that it handles three things: financials, the motor department, and HR subjects. To discuss it with Martin’s refinement in mind, there are three departments that might give a reason to change something in the employee class. 1. The payroll department decides to update the way they calculate the salary of employees. 2. The motor department wants to make it so only managers can be assigned vehicles due to budget cuts. 3. HR replaced their reporting system and requires a new template for the hourly report. It took me a while to understand why this is even an issue. I would think that the `Employee` class knows how to handle the specifics for its own level, and once you travel up the chain, the `Manager` will probably override the things it needs, and so on. But... > * You implement the `pay` (finance) and `generateHoursReport` (HR) methods, and quickly notice a piece of repetitive code between the two that counts the total amount of hours. You’ve read about code smells and you want to be DRY, so you refactor it out to a third, private method: `getTotalHours`. Months later, HR wants to change the report so that it shows the remaining hours. The new developer is not familiar with the system and changes `getTotalHours` so that it returns the value times -1. `generateHoursReport` now works as expected, but `pay` is now very broken. The change requested in one method affected the other. > * We have this structure, and now we want to introduce a new type of employee: `Contractor`. 
The contractor needs to generate an hours report, but not to your system, and your system does not need to pay him because you pay his company. You want to have Contractor extend Employee, but the new class should not include a pay method. Maybe have Employee extend Contractor, but an employee is not a contractor. So you start to introduce base classes that do not actually represent anything; each is just a “base” class to extend from. I promise you this: a year later, your simple inheritance structure will have become a huge inheritance tree, with classes like `RootEmployee`, `ContractorWithVehicle`, and `TechnicalManager` that cannot have any employees. How can we change `Employee` and its structure to avoid these issues? How can we design the system to support all of these departments? One way to do it is to compartmentalize the logic into three different parts and change the design to one of composition & delegation. This creates four pieces of code, each responsible for one topic: HR, Payroll, Motor, and the Employee that composes the three and interfaces with them by providing information for the relevant employee. With this modification, our design now looks like this: ```c# interface IPaymentStrategy { pay() } class SalaryEmployeePayment implements IPaymentStrategy { pay() {...} } interface IReportFactory { generateHoursReport() } class HourlyEmployeeReportFactory implements IReportFactory { generateHoursReport() {...} } interface IVehicleDecorator { assignVehicle() } class LeasingVehicleDecorator implements IVehicleDecorator { assignVehicle() {...} } class Employee { private IPaymentStrategy paymentStrategy private IReportFactory reportFactory private IVehicleDecorator vehicleDecorator public Employee(IPaymentStrategy p, IReportFactory r, IVehicleDecorator v) { ... 
} pay() { this.paymentStrategy.pay() } generateHoursReport() { this.reportFactory.generateHoursReport() } assignVehicle() { this.vehicleDecorator.assignVehicle() } } var employee = new Employee( new SalaryEmployeePayment(), new HourlyEmployeeReportFactory(), new LeasingVehicleDecorator() ) var manager = new Employee( new ManagerPayment(), new HourlyEmployeeReportFactory(), new LeasingVehicleDecorator() ) var ceo = new Employee( new ManagerPayment(), new HourlyEmployeeReportFactory(), new CompensationVehicleDecorator() ) ``` With this separation, we only need one `Employee` class, and the variability is provided by giving it different behaviors for each part. Replacing the behavior of one element has no effect on the other parts of the system, which is basically the core idea behind the single responsibility principle. Inheritance may still be used, but it will be applied to a single element of behavior, and will not cause the `ContractorWithVehicle` type of problem or create a need for [multiple inheritance](https://en.wikipedia.org/wiki/Multiple_inheritance). #### Composition & Delegation Composition does not necessarily mean delegation. The composition design pattern describes a way to define an object by its connection to other objects: `TreeNode` or `ListNode` are the most basic examples of it. Delegation does not necessarily mean composition. Delegation (as is implied by the name) can be used when we want our object to fulfill a specific API without being coupled to the implementation of it.
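As a runnable sketch of the composition & delegation design above (plain JavaScript instead of the article's C#-style pseudocode; duck typing stands in for the explicit interfaces, and the method bodies are placeholder assumptions):

```javascript
// Each behavior lives in its own small class, so each can change
// (or be tested) independently of the others.
class SalaryEmployeePayment {
  pay() { return "paid: monthly salary"; }
}

class HourlyEmployeeReportFactory {
  generateHoursReport() { return "report: hours worked"; }
}

class LeasingVehicleDecorator {
  assignVehicle() { return "vehicle: leased car assigned"; }
}

// Employee composes the three behaviors and delegates every call to them.
class Employee {
  constructor(paymentStrategy, reportFactory, vehicleDecorator) {
    this.paymentStrategy = paymentStrategy;
    this.reportFactory = reportFactory;
    this.vehicleDecorator = vehicleDecorator;
  }
  pay() { return this.paymentStrategy.pay(); }
  generateHoursReport() { return this.reportFactory.generateHoursReport(); }
  assignVehicle() { return this.vehicleDecorator.assignVehicle(); }
}

const employee = new Employee(
  new SalaryEmployeePayment(),
  new HourlyEmployeeReportFactory(),
  new LeasingVehicleDecorator()
);

console.log(employee.pay());                 // "paid: monthly salary"
console.log(employee.generateHoursReport()); // "report: hours worked"
```

A `Contractor` then becomes just another combination of behaviors (e.g. a no-op payment strategy), with no extra classes in an inheritance tree.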
It makes our design more aligned with the Liskov Substitution Principle 3. It makes our design more aligned with the Interface Segregation Principle Composition and delegation together make us look at our classes in terms of behaviors, and not only as “X is a Y”. #### Wrap up I want to stress one final thing about “loving” a concept. In the past, thinking that a specific design is **best** often got me to a place where I tried to fit the problem to the design, and not the other way around. Remember that you have more than a hammer. Don't see your problems as only nails.
yonatankorem
781,220
How to make Infinity!
New Method to make Infinity! So... I found a new way to make Infinity in Javascript! ...
0
2021-08-04T08:46:17
https://dev.to/mafee6/how-to-make-infinity-1pa7
javascript
### New Method to make `Infinity`! > So... I found a new way to make `Infinity` in JavaScript! #### This is how: > Using `parseInt()` with `.repeat()`! 🤣 #### Meaning of `parseInt()` > `parseInt()` basically converts a string into an integer. If an invalid string is passed (like "hi"), it returns `NaN` #### Meaning of `.repeat()` > `string.repeat()` returns the same string, repeated the given number of times! #### Using it to make `Infinity` ∞! > Code: ```js parseInt(`${"9".repeat(999)}`) ``` > So if you just use ```js "9".repeat(999) ``` > It will return something like this: ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u44hxmbpnijjhe5ol62d.png) > The more 9's, the more digits the number has! (But I used `999`) > #### Now, `parseInt` converts the string with the `999` 9's into a number. The number is so big that JavaScript treats it as `Infinity`! > #### Yay, we got `Infinity` in JavaScript! #### 😳 I found this when trying to make a string with millions of `Hi!`s [**New**] Easier way ```js Math.pow(9999, 9999) ```
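Why this works (a quick check, not from the original post): JavaScript numbers are IEEE 754 doubles, so any value above `Number.MAX_VALUE` (about 1.8 × 10^308) overflows to `Infinity`, and a 999-digit number is far beyond that limit:

```javascript
// The largest finite JS number:
console.log(Number.MAX_VALUE); // 1.7976931348623157e+308

// A literal just above it already overflows:
console.log(2e308); // Infinity

// So parseInt on a 999-digit string overflows too.
// (The template literal in the post is redundant — repeat() already returns a string.)
console.log(parseInt("9".repeat(999))); // Infinity
```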
mafee6
781,226
AWS IAM Policy Simulator
Not the hero we want, but the hero we need TL;DR; You can use the Policy Simulator of AWS...
0
2021-08-07T20:05:21
https://dev.to/hristiyan/aws-iam-policy-simulator-3i2j
cloudskills, aws, iam
## Not the hero we want, but the hero we need TL;DR: You can use the Policy Simulator of AWS to test your IAM policies. This allows you to quickly find the correct access rights and to debug problems with your policies. Recently I started experimenting with AWS. Following the least privilege principle, I created a separate development account for my experiments. Right at the beginning I ran into a brick wall – AccessDenied! I know I am doing something wrong, but what exactly? This is where the AWS IAM Policy Simulator saved my day. Let me first express my frustration with AWS IAM. It could be that I am naïve and that the problem of access management is really complex. Or it could be that AWS built an overly complex solution and is now stuck with it. In both cases there is no excuse for an official AWS tutorial about Elastic Kubernetes Service starting with the prerequisite that you grant yourself full admin privileges. I decided to do it the right way and create a separate account with the least privileges it needs to work with EKS. Instead of using the AWS CLI, I took [eksctl](https://eksctl.io/). This is a third-party tool, but even AWS has officially accepted it for managing EKS clusters. Fortunately, eksctl has a list of the [necessary policies](https://eksctl.io/usage/minimum-iam-policies/). However, my fortune did not last long, because the first call with eksctl returned AccessDenied! Now what? ![Access Denied with eksctl](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d74zfpu6bisfb71xw434.png) By default, AWS blocks all calls that are not explicitly allowed. This leaves me with two options. Either I do not have sufficient rights to access EKS, or some other policy I have defined is explicitly blocking my access. What I find frustrating here is the lack of information. The “explicit deny” in the error message does not help me narrow it down. 
I tried to find more information in CloudTrail, but for some reason even my admin account could not access the service. One of those days… This is where the [IAM Policy Simulator](https://policysim.aws.amazon.com/home/index.jsp) helped me to bring some light into the dark world of AWS policies. Maybe I am new to AWS, but I don't understand why the simulator is not more visible. There is just one button on the Permissions page of a group. Otherwise, you have to know it exists in order to find it. Beware: you may need some [additional rights](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_iam_policy-sim-console.html) to access the simulator. ![Simulate button on the Permissions page of a group](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qijm6s4ho9i9rqelqep5.png) When you open the simulator, you can choose the user and what calls you want to simulate with this account. Don't worry about using a delete command, for example. No real calls will be sent to your AWS account. ![Simulator start page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5goo1061wohfsxt1057a.png) One of the reasons I love the simulator is that it doesn't just show “allowed” or “denied”. When my simulated call was denied, the exact part of the policy blocking my call was highlighted. ![Simulated eks call denied](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s0opgw3jrd28dv9au9y.png) And suddenly everything made sense. In my attempts to secure the AWS account, I had created a new policy. This policy allows non-admin users to manage their own credentials, like access keys, etc. But you can do this only after you have logged in with multi-factor authentication. Without MFA you are more or less only allowed to manage the MFA devices. What I overlooked was that sending CLI commands does not use MFA. I had added the correct access rights according to the eksctl documentation. 
However, my earlier experiments with MFA were the reason I was being blocked now. Because I had added the MFA constraint some time ago, I completely forgot about it. Removing the DenyAllExceptListedIfNoMFA block proved to be a quick fix in this case. I don't have bad feelings about this. I made a habit of always changing the generated passwords and creating a virtual MFA device with every new account. My team is small enough that we don't need such rigid control with explicit deny policies. ![Simulated eks call allowed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ge3ccl27t2uhuu910dmp.png) In retrospect, I should have found the problem even without the simulator. However, AWS IAM is not exactly easy to work with. I want a simple solution to manage identity and access. However, until we get this simple solution, I'll need all the help I can get. AWS IAM Policy Simulator – not the solution we want, but the help we currently need.
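For reference, the kind of statement that blocks non-MFA calls looks roughly like this (a trimmed sketch based on AWS's well-known example policy for letting users manage their own MFA devices, not the author's exact policy):

```json
{
  "Sid": "DenyAllExceptListedIfNoMFA",
  "Effect": "Deny",
  "NotAction": [
    "iam:CreateVirtualMFADevice",
    "iam:EnableMFADevice",
    "iam:ListMFADevices",
    "sts:GetSessionToken"
  ],
  "Resource": "*",
  "Condition": {
    "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
  }
}
```

Because plain CLI calls with long-term access keys carry no MFA context, `aws:MultiFactorAuthPresent` is absent, `BoolIfExists` treats it as `false`, and the explicit deny fires — exactly what the simulator highlighted.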
hristiyan
781,296
JINA AI | Cloud native neural search
Jina AI is a neural search company. Jina is the core product, released on April 28th, 2020. The...
0
2021-08-04T11:38:55
https://dev.to/adityamangal1/jina-ai-1e66
jina
Jina AI is a neural search company. Jina is the core product, released on April 28th, 2020. The official tagline Jina put on its GitHub repository is: Jina is a cloud-native neural search solution powered by state-of-the-art AI and deep learning. To put it simply, you can use Jina to search for anything: image-to-image, video-to-video, tweet-to-tweet, audio-to-audio, code-to-code, etc. To understand what we want to achieve at Jina AI, I often explain Jina with the following two expressions. Google recently announced they are using a “neural matching” algorithm to better understand concepts. Google's Danny Sullivan said it is being used for 30% of search queries. Google has recently published a research paper that successfully matches search queries to web pages using only the search query and the web pages. A "TensorFlow" for search. TensorFlow, PyTorch, MXNet, and MindSpore are all universal frameworks for deep learning. You can use them for recognizing cats from dogs, or playing Go and DOTA. They are powerful and universal but not optimized for a specific domain. At Jina, we are focusing on one domain only: search. We are building on top of the universal deep learning frameworks and providing an infrastructure for any AI-powered search application. The next figure illustrates how we position ourselves. A design pattern. There are design patterns for every era, from functional programming to object-oriented programming. The same goes for the search system. 30 years ago, it all started with a simple textbox. Many design patterns have been proposed for implementing the search system behind this textbox, some of which are incredibly successful commercially. In the era of neural search, a query can go beyond a few keywords; it could be an image, a video, a code snippet, or an audio file. Since traditional symbolic search systems cannot effectively handle those data formats, people need a new design pattern for the underlying neural search system. 
That's what Jina is: a new design pattern for this new era. ## JINA RESOURCES GitHub: https://github.com/jina-ai/jina/ Open source: https://opensource.jina.ai Website: https://jina.ai Twitter: https://twitter.com/jinaai_?lang=en LinkedIn: https://www.linkedin.com/company/jinaai Press: press@jina.ai
adityamangal1
781,326
Are you above average?
“It seems like everyone with more than 5 years coding experience thinks they’re above average.” — A...
0
2021-08-05T07:50:21
https://jhall.io/archive/2021/08/04/are-you-above-average/
seniordev, juniordev
--- title: Are you above average? published: true date: 2021-08-04 00:00:00 UTC tags: seniordev,juniordev canonical_url: https://jhall.io/archive/2021/08/04/are-you-above-average/ --- > “It seems like everyone with more than 5 years coding experience thinks they’re above average.” > — A developer friend of mine, in disgust Bob Martin once famously recounted that since the beginning of the computer industry, roughly starting with Alan Turing in the 1940s, the majority of software developers have had less than 5 years experience. Roughly, this holds true today, too, for the simple reason that the industry is growing faster than people leave the field. The newest [StackOverflow Developer Survey](https://insights.stackoverflow.com/survey/2021) seems to show a similar pattern: ![](https://jhall.io/archive/images/years-coding-survey.png) 49.53% of respondents had 9 years coding experience or less. And given that this survey has a large selection bias, we can assume that if we were to do a representative survey of all software developers, the numbers would include a lot more inexperienced developers. This suggests that Bob Martin’s assertion that the median developer experience is around 5 years is probably correct, and that my friend’s frustration was probably misplaced. What does this mean for you? Well, if you’re in the group of people with less than 5 years experience, don’t worry. You’re not alone. Literally half of the industry is in the same boat with you, making honest mistakes, and learning along the way. If you’re like me, and have more than 5 or 10 years experience, I can think of two ways to look at the situation. One is to simply rest on your laurels. If that’s your approach, I guess in some sense you’ve earned it. I hope you enjoy it. I tend to look at it as having a big responsibility. Maybe the IT industry is no longer in its infancy, but I think it’s fair to say it’s still in early adolescence. 
And there’s a lot still to be learned—and taught—across an entire gamut of IT experience, from technology, to management styles, to ethical implications of our craft. This is why I started [this mailing list](https://jhall.io/daily), and my [podcast](http://podcast.jhall.io/) and [YouTube channel](https://www.youtube.com/channel/UC5UfX0EgUWlcdQ2RDsq_fcA). I don’t necessarily think I have any “special” insight. But I do have more years experience than roughly 90% of the people in the IT industry. This is my way of giving back, and helping others avoid many of my past mistakes. * * * _If you enjoyed this message, [subscribe](https://jhall.io/daily) to <u>The Daily Commit</u> to get future messages to your inbox._
jhall
781,478
So you want to create a design system, pt. 2: Colors
Color is probably the most distinctive element of any design, and also the most important expression...
0
2021-08-05T09:12:14
https://dev.to/mobileit7/so-you-want-to-create-a-design-system-pt-2-colors-5a9f
design, uiweekly, ux, devops
Color is probably the most distinctive element of any design, and also the most important expression of brand identity (at least until Material You completely reverses this relationship, but it remains to be seen how it will be adopted). So how do we approach color when designing and implementing a design system so that our solution is usable, versatile, and scalable? ## Color me curious Besides conveying the brand and evoking emotions, colors have several other roles in current applications, including: * highlighting different application states such as errors, warnings, success, or info messages * ensuring usability, legibility, and accessibility of the application under all conditions * providing different themes from which the user (or system) can choose according to environmental conditions or personal preferences Regarding the last point, users nowadays expect support for at least light and dark themes. Often this is more than just an aesthetic choice — for example, a car navigation app that dazzles drivers at night with large areas of bright colors can be downright dangerous. And while the app supports switching between themes, it doesn’t have to stop at just these two basic ones, for example: * Is accessibility extremely important to your app? Add a specially designed high-contrast or colorblind-friendly theme. * Does the app owner currently run a major promotion, have an anniversary, or celebrate some significant event? Make it known with a special temporary theme. * Do you want to differentiate products or make it clear that the customer bought a premium version of the app or service? Add a special, more luxurious-looking theme. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/baqhagyt3e6h2eopas8a.png) Theme support is a feature that is unique in that it makes both users and the marketing department happy. But how to construct it so that both designers and developers can actually work with it and be productive? 
## Layers of indirection Let's start with what is definitely not suitable: hardcoding the colors in the design tool and therefore in the code. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4l6rspq9bx6hdhfidyzg.png) There are obvious drawbacks to this method, including the inability to change colors globally in a controlled manner (no, “find & replace” isn’t really a good idea in this case), and the need to copy and edit all designs for each additional theme we want to add (for designers), or cluttering the code with repetitive conditions (for developers). It also often leads to inconsistencies and it’s extremely error-prone - did you notice the mistake in the picture above? Unfortunately, we still occasionally encounter this approach because many design tools will happily automagically present all the colors used, even if they are hardcoded, creating the illusion that the colors are under control and well-specified. They aren’t. Don’t do this. So how to get rid of hardcoded colors? The first step is to hide them behind named constants and reuse these constants in all places. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/okyg7bknmdtqk2pq9gb9.png) This is definitely better - the colors can be changed globally in one place, but the problem arises when supporting multiple themes. The naive solution is to override each constant with a different value in every theme. This works as long as the colors in the different themes change 1:1. But consider the following situation: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/frji1m294bq20ivczbvx.png) Since it is usually not advisable to use large areas of prominent colors in a dark theme, although the toolbar and button in a light theme are the same color, the toolbar should be more subdued in a dark theme. This breaks the idea of overriding the colors 1:1 in different themes because where one theme uses a single color, another theme needs more colors. 
The solution to this situation is the so-called (and only slightly ironic) fundamental theorem of software engineering: “We can solve any problem by introducing an extra level of indirection.” In this case, that means another layer of named color constants. I kid you not - please stay with me, it’ll be worth it. ## The solution We achieve our goals, i.e. the ability to easily and safely change colors globally, and support any number of themes, by following these steps: * Define a set of semantic colors. These are colors named and applied based on their purpose in the design. Their names must not express specific colors or shades, but roles. For example, Google’s Material Design defines the following semantic colors: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yluw8wwxolh4m63g7lu4.png) These names are a good starting point, but of course, you can define your own, based on your needs. What's important is that semantic colors don't have concrete values by themselves, they are placeholders or proxies that only resolve to specific colors when applied within a specific theme, meaning one semantic color will probably have a different actual value in each theme. * Define a set of literal colors. These constants literally represent the individual colors of your chosen color palette. They are common to all themes, so there are usually more of them than semantic colors. Unlike semantic colors, they are named purely on the basis of their appearance. For example, an earlier version of Material Design defined the following shades: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl1tqlxbu4fxmys0on6p.png) Recently it has become a common practice to distinguish colors with different lightness using a number where 1000 is 0% lightness (i.e. black) and 0 is 100% lightness (white), but of course you can devise your own system. 
* Follow this rule in both design and code (no exceptions!): Semantic colors must be used exclusively and everywhere. Literal colors (or even hard-coded constants) must never be used directly. This means that the use of colors in design and implementation must have the possibility of being completely specified in the form of "wireframes" like this: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhzsp5m6xk7a8avs5noi.png) * Finally, map semantic colors to concrete literals per theme. This step ultimately produces a specific theme from the design specification, which is in itself independent of a particular theme. Based on our previous example, the final result will look like this: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gyse0nfwby6lto0atxbu.png) For example, the toolbar background color is specified as _Primary_, which in the _Light_ theme is mapped to the _Purple700_ literal color, but in the _Dark_ theme it resolves to _Purple900_. The most important thing is that the _Purple900_ or _Purple700_ literal colors aren't mentioned in the design specification, only in the theme definition. It's just a little extra work, but the benefits are enormous. We have successfully decoupled the definition of the colors from the actual colors used in various themes. ## Make it work for you There are usually questions that often arise or choices that need to be made when implementing this system. Here are some tips based on our experience: * Don't go overboard with the number of semantic colors. It's tempting to define a separate semantic color for every part of each UI element (e.g., _ButtonBackground_, _SwitchTrack_, _ProgressIndicatorCircle_), which has the theoretical advantage that you can then change them independently, but it also makes it much harder to navigate the design and implementation. The ideal amount of semantic colors is one where one can hold more or less all of them in one's head at once. 
Try to find a minimum set of sufficiently high-level names that will cover 90% of situations. You can always add more names later. * Naming is hard. Since semantic colors form the basis of the vocabulary used across the team and also appear everywhere in the code, it's a good idea to spend some time looking for the most appropriate names. If some of the chosen names turn out to be not that fitting, don't be afraid to refactor them. It's unpleasant, but living with inappropriate names for a long time is worse. * Never mix literal and semantic names. For example, a set of semantic colors containing _Orange_, _OrangeVariant_, _Secondary_, _Background_, _Disabled_, etc. isn’t going to work well, even if the main color of your brand is orange and everyone knows it. Even so, create a purely semantic name for such a color, like _Brand_ or _Primary_. * If you need multiple versions of a semantic color, never distinguish them with adjectives expressing properties of literal colors such as _BrandLight_, _BrandDark_, etc., because what is darker in one theme may be lighter in another and vice versa. Instead, use adjectives expressing purpose or hierarchy, such as _BrandPrimary_, _BrandAccent_, or even _BrandVariant_ (but if you have _Variant1_ through _Variant8_, you have, of course, a problem as well). * For each semantic color that can serve as a background color, define the corresponding semantic color for the content that can appear on that background. It's a good idea for these colors to contain the preposition _on_ or the word _content_ in the name, like _OnPrimary_ or _SurfaceContent_. 
Avoid the word text (e.g., _SurfaceText_), as this color will often be applied to other elements such as icons or illustrations, and try to avoid the word _foreground_ because sometimes the use of background and foreground colors can be visually inverted: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3r87z4gaubz79fmquftn.png) * The use of the alpha channel in literal colors is a difficult topic. Generally speaking, the colors that will be used as backgrounds should be 100% opaque to avoid unexpected combinations when several of them are layered on top of each other (unless this effect is intentional). Content colors, on the other hand, can theoretically contain an alpha channel (useful, for example, for defining global secondary or disabled content colors that work on different backgrounds), but in this case, it is necessary to verify that the given color with its alpha value works with any background. Another question is alpha channel support in your design tool and code - is the alpha value an integral part of the color, or can we combine separate predefined colors and separate predefined alpha values? * If your design tools don't directly support semantic colors or multiple themes at the same time, work around that. Tools come and go (or, in rare cases, are upgraded), but your design system and the code that implements it represents much more value and must last longer. Don’t be a slave to a particular tool. * All text should be legible and meet accessibility standards (icons on the other hand don’t need to do that, but it’s generally a good idea for them to be compliant as well) - see The Web Content Accessibility Guidelines (WCAG 2.0), and use automated tools that check for accessibility violations. ## Design is never finished ...so it's important that your design system is able to evolve sustainably. This way of defining color, although certainly not the simplest, allows for exactly that. 
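To recap the whole scheme in code, here is a minimal sketch (JavaScript; the names follow the article's examples, and the hex values are illustrative assumptions):

```javascript
// Literal colors: named purely by appearance, shared by all themes.
const literals = {
  Purple700: "#7B1FA2",
  Purple900: "#4A148C",
  White: "#FFFFFF",
  Gray900: "#212121",
};

// Per-theme mapping of semantic roles to literal colors.
// Note the toolbar example: Primary resolves to a different literal per theme.
const themes = {
  light: { Primary: literals.Purple700, Surface: literals.White, OnPrimary: literals.White },
  dark: { Primary: literals.Purple900, Surface: literals.Gray900, OnPrimary: literals.White },
};

// UI code only ever refers to semantic names — never to literals directly.
function resolve(themeName, semanticName) {
  return themes[themeName][semanticName];
}

console.log(resolve("light", "Primary")); // "#7B1FA2"
console.log(resolve("dark", "Primary"));  // "#4A148C"
```

Adding a new theme (high-contrast, promotional, premium) is then just one more entry in `themes`, with no changes to the UI code or the design specification.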
We'll look at other fundamental elements of design systems and how to handle them next time. Written by: @jhutarek
mobileit7
781,490
Android App Development: Common Mistakes You Should Avoid
Let us find out some of the many common mistakes that every developer should avoid when developing an...
0
2021-08-04T14:03:28
https://dev.to/technobyt/android-app-development-you-should-avoid-common-things-to-1p02
<div class="s-blog-body s-blog-padding"> <div class="s-repeatable s-block s-component s-mh "> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p style="text-align: justify;"><em>Let us find out some of the many common mistakes that every developer should avoid when developing an Android app.</em></p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><strong>Rewriting existing source code</strong></p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p>Arguably, one of the most common mistakes made by developers is rewriting existing code. Apps in this day and age often share a few common features such as image loading, social logins, JSON parsing, and network calls. In fact, the majority of the code used to implement these features is already available. This means the code has already been written and deployed many times. 
That is why rewriting existing code is definitely a mistake that should be avoided.</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><strong>Adding too many functions and features in just a single app</strong></p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p>Actually, this common mistake is normally committed by inexperienced app developers because they are tempted to include every possible feature in a single app. As an app developer, it is important to always concentrate on your app&rsquo;s unique functions and features. 
And on how it will benefit the users as well.</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><strong>Complicated User Interface (UI)</strong></p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p>When making an app, make sure that it features an easy-to-use and intuitive interface. The user interface must be adapted by users with ease even though they did not read or refer to the user&rsquo;s manual.</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p>Unfortunately, some developers tend to develop an app that has a complex user interface. 
As a matter of fact, average users or youngsters disregard apps that require them to read the user&rsquo;s manual.</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><strong>Poor Testing</strong></p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p>One of the many reasons why some applications became unsuccessful is due to poor testing. Some developers deploy codes to Google Play Store and release apps to their clients without checking them thoroughly and correctly. So as a result, some of these are apps reported with a lot of errors and bugs.</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p>In fact, these reports may possibly result in poor customer reviews and it will damage the app&rsquo;s rating. 
That is why, before deploying make sure to run the app on different devices such as smartphones and tablets.</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><strong>HokuApps : A Comprehensive&nbsp;</strong><a href="https://www.hokuapps.com/" target="_blank"><strong>Android Application Development Company</strong></a></p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p>With several application development companies providing pioneering technology, developing Android applications is now easier compared to before. 
And one of these solutions is the Android app development platform, such as the one offered by&nbsp;<a href="https://www.prnewswire.com/news-releases/hokuapps---the-engine-for-roofing-southwests-growth-to-us-national-prominence-301189609.html" target="_blank">HokuApps</a>.</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><a href="https://www.prnewswire.com/news-releases/firststring--the-recruitment-app-built-by-hokuapps-helping-job-seekers-during-covid-19-301213070.html" target="_blank">HokuApps</a>&nbsp;is an application development company that is situated in Singapore that is composed of knowledgeable and skilled individuals when it comes to application development. With the help of skilled and professional developers of HokuApps, you will definitely have a cost-effective Android app. 
Apart from that, HokuApps, tend to develop apps faster compared to other application development companies, but they are always making sure that they will come with top-quality and excellent apps.</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><strong>Few More Success Stories about HokuApps:</strong></p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><a href="https://finance.yahoo.com/news/hokuapps-creates-engaging-platform-sdi-010000612.html?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAAMUo7Fz3A9kPhGcLY0sRR08tUsTqs_8Hf9_sed9jGF2--FoZ7k87CY_tZfb5Hq4HgAZJs8IDSG_Kl7tcMvgGcSJyueoN1yf4AKyruvTvtjPJHMEF8OOa9I2T0C7c4GuVp1CT8uPtVGRJfG4nGXsLJOTWah8c3BUwXDpL1JlP5v4Q" target="_blank">HokuApps Creates Engaging Platform for SDI Academy Helping Migrants to Cope with the COVID-19 Induced Isolation</a>&nbsp;</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><a 
href="https://finance.yahoo.com/news/hokuapps-creates-engaging-platform-sdi-010000612.html?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAAMUo7Fz3A9kPhGcLY0sRR08tUsTqs_8Hf9_sed9jGF2--FoZ7k87CY_tZfb5Hq4HgAZJs8IDSG_Kl7tcMvgGcSJyueoN1yf4AKyruvTvtjPJHMEF8OOa9I2T0C7c4GuVp1CT8uPtVGRJfG4nGXsLJOTWah8c3BUwXDpL1JlP5v4Q" target="_blank">HokuApps Facilitates C2C Selling as a New Retail Avenue for De'Longhi Group</a>&nbsp;</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><a href="https://apnews.com/press-release/marketers-media/virus-outbreak-business-technology-media-asia-edb10f1a116c6b31c26525602890ef94" target="_blank">HokuApps Creates an Effective Solution for The Severely Hit Events Business During the Pandemic</a>&nbsp;</p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;"> <div class="container"> <div class="sixteen columns"> <div class="s-blog-section-inner"> <div class="s-component s-text"> <div class="s-component-content s-font-body"> <p><a href="https://www.accesswire.com/611718/HokuApps-Digitalizes-Mentoring-Framework-for-Early-Childhood-Educators-at-Busy-Bees" target="_blank">HokuApps Digitalizes Mentoring Framework for Early Childhood Educators at Busy Bees</a></p> </div> </div> </div> </div> </div> </div> <div class="s-block-item s-repeatable-item s-block-sortable-item s-blog-post-section blog-section" style="text-align: justify;">&nbsp;</div> </div> </div> <div class="s-blog-footer s-font-body s-blog-body">&nbsp;</div>
technobyt
781,499
tmux new-session
This one starts a new chapter in our series that is going to open up a whole new set of workflow...
13,642
2021-08-04T14:35:39
https://waylonwalker.com/tmux-new-session/
cli, linux, tmux
---
tags: ['cli', 'linux', 'tmux']
series: tmux
title: tmux new-session
canonical_url: https://waylonwalker.com/tmux-new-session/
published: true
---

This one starts a new chapter in our series, one that is going to open up a whole new set of workflow productivity options. Understanding the `new-session` command is a critical step in our adventure into tmux glory, and it opens the door for some seriously game-changing hotkeys and scripting.

``` bash
# create a new session
tmux new-session

# create a new session detached
tmux new-session -d

# create a new session and name it
tmux new-session -s me

# create a new named session, or attach to it if one already exists
tmux new-session -As me
```

Be sure to check out the full YouTube playlist and subscribe if you like it.

[![tmux playlist on YouTube](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v4cgo76y53s1hwykajo1.png)](https://www.youtube.com/playlist?list=PLTRNG6WIHETB4reAxbWza3CZeP9KL6Bkr)

{% post https://dev.to/waylonwalker/how-i-navigate-tmux-in-2021-2ina %}

> Also check out this long-form post for more about how I use tmux.
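As a small taste of the scripting this command enables, here is a minimal launcher sketch that attaches to a named session if it exists and creates it otherwise. The session name `my.project` is a made-up example, and the final command is printed rather than executed so the sketch is safe to run without tmux installed:

``` bash
# Hypothetical launcher sketch: attach to a named session if it exists,
# otherwise create it ("my.project" is a placeholder name).
session="my.project"
# tmux session names cannot contain dots, so replace them first
session=$(printf '%s' "$session" | tr . _)
cmd="tmux new-session -As $session"
# Drop the echo to actually launch/attach the session.
echo "$cmd"
```

Running it prints `tmux new-session -As my_project`; the `-A` flag is what makes the script safe to run repeatedly.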
waylonwalker
781,785
My first VSCode Theme...
Hi Folks 👋🏻, Hope you are safe out there! VSCode is a place where developers spent most of their...
0
2021-08-04T17:19:30
https://dev.to/rajezz/my-first-vscode-theme-o2a
vscode, extension, theme
Hi Folks 👋🏻, hope you are safe out there!

VSCode is where developers spend most of their time, right? So it should look appealing to us. I know there are innumerable themes for VSCode, but I thought, why can't I be a part of that? I also personally love playing around with themes. So I tried to create my own personalized colour theme for VSCode, and I did: I created a pair of themes, one light and one dark. Kindly do check them out 🙂.

---

![Coding Theme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ep5q8gatypyhs2todidn.png)

**Coding Theme - [here](https://marketplace.visualstudio.com/items?itemName=Rajeswaran.coding-theme)**

![Coding Theme Light](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ys2mvti1zc0nn4pp7yk.png)

**Coding Theme Light - [here](https://marketplace.visualstudio.com/items?itemName=Rajeswaran.coding-theme-light)**

---

And if you liked my theme, kindly share it with your friends. Have a great day :)
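For readers curious how a VSCode colour theme is put together: an extension's `package.json` registers each theme under `contributes.themes` (with a `label`, a `uiTheme` such as `vs` for light or `vs-dark` for dark, and a `path` to a theme file), and the theme file itself sets workbench colours and syntax token colours. The snippet below is a generic sketch of such a theme file, not the actual contents of the Coding Theme extension; every colour value is a placeholder:

```json
{
  "name": "My Theme",
  "type": "dark",
  "colors": {
    "editor.background": "#1e1e2e",
    "editor.foreground": "#cdd6f4",
    "activityBar.background": "#181825"
  },
  "tokenColors": [
    {
      "scope": ["comment"],
      "settings": { "foreground": "#6c7086", "fontStyle": "italic" }
    },
    {
      "scope": ["keyword", "storage.type"],
      "settings": { "foreground": "#cba6f7" }
    }
  ]
}
```

`colors` keys target the workbench UI, while `tokenColors` uses TextMate scopes to colour syntax — tweaking those two sections is most of the work of theme building.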
rajezz
781,828
Field Guide to Technical Editing
You’ve been tasked with editing material with deep knowledge—it’s complex and highly...
0
2021-08-04T19:36:38
https://draft.dev/learn/posts/field-guide-to-technical-editing
![](https://draft.dev/learn/assets/posts/lxa7fzn.jpg)

You’ve been tasked with editing material with deep knowledge—it’s complex and highly industry-specific. You’re an expert in the art of writing, but perhaps not exactly an expert on this particular topic. Perhaps even far from it. Now what? Don’t panic. I’ve been at this for years and I’ll gladly pass along my bag of editing tricks to you.

### Want to reach more software developers?

Download [**The Technical Content Manager's Playbook**](https://draft.dev/playbook?utm_source=academy&utm_medium=inline&utm_campaign=post&utm_content=/technical-editing)

Understand Who Your Writer Is
-----------------------------

First, it’s important to understand who the writer is on a professional level. In most cases, the writer of the piece you are editing is a subject matter expert first and a writer second. You can probably trust that the content is correct, or at least that a technical reviewer has given it a stamp of approval, so you can focus simply on the quality of the writing.

How much _writing_ experience this expert has can vary greatly. You may be working with a seasoned voice of authority who’s written their own blog for years, or you may be working with a junior developer writing their first tutorial. Knowing this will inform the depth of the edit you should be prepared for. Obviously, a more experienced writer will need a lighter edit than someone who’s just starting to write professionally. Either way, strive to treat the author with the same respect no matter their writing experience. But we’ll talk more about communicating with writers later.

Do Some Content Research
------------------------

If you’re going to be working with complex technical content, you’ll have to do at least some research. Being able to write on the topic yourself isn’t a requirement, of course, but if you’re out of your depth, reading at least the intro paragraph on Wikipedia can get you familiar with a few keywords.
### Handling Unfamiliar Jargon

Familiarizing yourself with keywords around the topic is important because, no surprise, deep technical knowledge is full of [jargon](https://www.merriam-webster.com/dictionary/jargon). Jargon can often come across as poor spelling or grammar if you don’t recognize it. Fortunately, you can usually answer the question _Is this a Tech Thing, or is this just awkward writing?_ before introducing errors with a well-intentioned but uninformed edit:

* Search online for the phrase as the author has written it. If it’s commonly used in the industry, you’ll see it pop up regularly in similar content.
* Double-check the intended audience. Hopefully this is information you’ve received when taking on the project. If not, this is a perfectly reasonable question to ask.
* If the audience is, for example, junior developers or nontechnical executives, you may decide to go ahead and ask the author to reword for broader accessibility.
* If, on the other hand, the audience is, say, senior developers, leave the jargon in the article as is.
* If you can’t find the jargon with an online search, query your contact (presumably either the writer or whoever assigned you the project) and ask for clarity.

### Handling Unfamiliar Concepts

A talented writer will craft sentences you can easily follow, even if you don’t understand the technicality of the content. As an experienced editor, you’ll be able to see points tie into each other and recognize smooth transitions into new sections of information. You know what a good sentence looks like; you don’t necessarily have to understand what every word in the sentence means.

However, you may run across a few paragraphs that impede this smoothness. The concept just isn’t falling into place, or the shape of the writer’s point is still too vague. Has the writer made some assumptions about their readers’ knowledge and elected to simply skip over a few details?
Is there really material missing that readers need to be able to follow an idea successfully? Or—say it isn’t so—is your writer out of their own depth, and the writing is actually weak or inaccurate?

First, try a short online search before querying your author. Remember, you’re working with very technical content written by and for people with a specific knowledge set. You can’t afford to ask for explanations of everything you as the editor are personally unfamiliar with. Not only would that lower your writer’s confidence in your edit, it would also waste time, and you probably have a deadline. As a general rule, don’t ask questions you can answer for yourself with a quick search. If you’re still unsure, try to ask the project manager (e.g., your supervisor or your assigning client) or leave a comment for the writer in the document.

Understand the Type of Edit You’ve Been Asked For
-------------------------------------------------

If you’ve been editing for any length of time, you probably appreciate that there are different types of edits. What have you been asked for? A [developmental edit](https://en.wikipedia.org/wiki/Developmental_editing)? A [line edit](https://en.wikipedia.org/wiki/Developmental_editing)? A [copy edit](https://en.wikipedia.org/wiki/Copy_editing)? Take care to adhere to the type of edit you’ve been asked for as much as possible. If you’ve been asked for a copy edit, for example, the amount of research you’ll have to do will be considerably less than for the other two.

If you do run across something that’s outside the bounds of the edit you’ve been asked for, but you still feel strongly that something needs adjusting, leave a comment with a suggested edit. If the entire document needs a heavier edit than what you’ve been asked for, it’s time to get in touch with your manager and let them know.
### Avoid Rewriting…Until You Can’t The line between editing and rewriting can get blurry, but here’s my quick and dirty differentiation: * **Editing:** Word-by-word adjustments, retaining the author’s voice while clarifying their meaning. * **Rewriting:** Scrapping what the author wrote entirely and creating something new from scratch. Sometimes this means half a sentence, sometimes this means an entire section. > When it comes to complex, technical material, avoid rewriting. The possibility of introducing errors is high. If something is simply not clear, query the writer. If you feel a rewrite is unavoidable, and you’re itching to make your own words appear on the page, make sure these boxes are checked first: * You’ve been asked for a heavier edit. * You have appropriate knowledge in the subject matter. * The writing skill has been consistently rough. An inexperienced writer may appreciate that they don’t have to redo a lot of work (that they may not have time for—your technical experts will most likely not be writing as their day job). An experienced writer will not thank you for rewriting their material when a simple query would let them know they need to rework something. Before you rewrite, make sure you’re assessing the situation correctly. ### A Word About ESL Edits When it comes to complex, technical material, it’s not uncommon for authors to be writing in a non-native language. For [English-as-a-second-language](https://en.wikipedia.org/wiki/English_as_a_second_or_foreign_language) writing, keep the following tips in mind as you edit: * Be mindful of their voice in English. Would they probably not use a baseball idiom? Do they avoid compound sentences? Don’t introduce them with your edits. * Punctuation and capitalization have different rules in different languages. English punctuation alone varies between countries (US vs UK) and style guides (CMS vs AP). Quietly tidy up copy edits according to the style guide your publication is working with. 
* Be respectful and don’t condescend. You are an expert in writing and editing. They are an expert in their subject matter. Recognize that you are helping each other bring a finished product into being, and keep your queries to the writer professional and respectful. Raise the Alarm for Big Problems -------------------------------- Occasionally, you’ll run into problems an edit just can’t handle. You may suspect that your writer has oversold their own knowledge of the subject matter, or perhaps your research uncovered the fact that your writer has plagiarized. These problems don’t go away, and if they make it to publication, they can explode in a bad way. Reputations can get called into question, business can be lost, posts can be deleted, and apologies can be necessary. ### Inaccurate or Superficial Writing If you suspect that a writer is simply not capable of writing the material accurately or at sufficient depth, it’s time to call in another expert. If you already have access to a technical reviewer, fantastic—have them look over the piece, highlighting your areas of concern. Otherwise, let the manager know there’s an issue and ask them to suggest a suitable expert, perhaps another developer they trust, for example. Hopefully the second expert can suggest more specific ways to request that the writer add new information or correct what’s in error. However, be open to the idea that the piece simply may need to be rewritten by someone else, understanding that it will most likely mean a new expense. ### Plagiarism If your writer is inexperienced, they may not understand how a quoted citation makes verbatim use of research okay, but it’s [plagiarism](https://en.wikipedia.org/wiki/Plagiarism) otherwise. The constant cross-sharing of internet content only blurs the line further. 
[Plagiarism checkers](https://www.grammarly.com/plagiarism-checker) are a thing, but you can also develop an eye for the red flags yourself: * A well-written sentence in the midst of an otherwise poorly written article. * A paragraph with a suddenly different tone/voice. * Points or sentences that seem to contradict each other (this can also be a sign that an author may not quite have the knowledge base they need or didn’t do careful enough research). Googling the sentence in question can turn up the original article it was pulled from (Wikipedia articles and other authoritative blog posts are the usual suspects). For a first offense, leave a nonaccusatory, professional query requesting a rewrite in the author’s own words, or a proper quote and citation if a rewrite doesn’t make sense. Obviously, an article that is entirely quoted paragraphs is not acceptable either. If plagiarism is a pattern with a writer, it’s time to let them go. Their material will be time-consuming to edit, the author’s rewrites will be poorly done, or worst case, you may have to bring in another writer entirely. That’s time and money no one wants to spend. Again, politely and professionally bring this to the manager’s attention. Conclusion ---------- Highly technical content isn’t the easiest material in the world to edit, but with care and attention, it can be done well. Know your writer, know your audience, know when to ask questions, and be prepared to put in some research, and you’re well on your way to crafting publishable work that’s helpful to its audience. It does take time to become a skilled editor, of course, and it’s not feasible for every business to employ their own editorial team for their technical publication. If you’d like to publish polished, informative technical writing without taxing your own staff, [contact us at Draft.dev](https://draft.dev/call) to see if we can help. #### By Chris Wolfgang
karllhughes
781,835
Beginner Kafka tutorial: Get started with distributed systems
Distributed systems are collections of computers that work together to form a single computer for...
0
2021-08-04T19:59:15
https://www.educative.io/blog/beginner-kafka-tutorial
beginners, tutorial, opensource, webdev
Distributed systems are collections of computers that work together to form a single computer for end-users. They allow us to scale at exponential rates, and they can handle billions of requests and upgrades without downtime. Apache Kafka has become one of the most widely used distributed systems on the market today. According to the official Kafka site, Apache Kafka is an *“open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.”* Kafka is used by most Fortune 100 companies, including big tech names like LinkedIn, Netflix, and Microsoft. In this Apache Kafka tutorial, we’ll discuss the uses, key features, and architectural components of the distributed streaming platform. Let’s get started! **We’ll cover**: * [What is Kafka?](#kafka) * [Key features of Kafka](#features) * [Components of Kafka architecture](#architecture) * [Advanced concepts to explore next](#nextsteps) <br> <a name="kafka"></a> ## What is Kafka? Apache Kafka is an open-source software platform written in the Scala and Java programming languages. Kafka started in 2011 as a messaging system for LinkedIn but has since grown to become a popular distributed event streaming platform. The platform is capable of handling **trillions of records** per day. Kafka is a distributed system comprised of servers and clients that communicate through a TCP network protocol. The system allows us to read, write, store, and process events. We can think of an event as an independent piece of information that needs to be relayed from a producer to a consumer. Some relevant examples of this include Amazon payment transactions, iPhone location updates, FedEx shipping orders, and much more. Kafka is primarily used for **building data pipelines** and implementing streaming solutions. 
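To make the producer/consumer vocabulary concrete, here is a tiny, self-contained Python sketch of the publish/subscribe idea: an in-memory "topic" split into partitions, a producer appending keyed events, and a toy consumer-group assignment in which each partition is read by exactly one consumer. This is a model for intuition only, not the Kafka client API — in real code you would use a client library against a running broker, and the names `Topic`, `assign`, and `payments` are all invented for this sketch:

```python
from collections import defaultdict

class Topic:
    """Toy in-memory topic: an append-only log per partition."""
    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Records with the same key always land in the same partition.
        # (Kafka hashes the key bytes with murmur2; we use a stable toy
        # hash so the sketch is deterministic.)
        p = sum(key.encode()) % len(self.partitions)
        self.partitions[p].append((key, value))

def assign(topic, consumers):
    """Toy group assignment: each partition goes to exactly one consumer
    (round-robin); a consumer may own several partitions."""
    assignment = defaultdict(list)
    for p in range(len(topic.partitions)):
        assignment[consumers[p % len(consumers)]].append(p)
    return dict(assignment)

payments = Topic(num_partitions=3)
for i in range(6):
    payments.produce(key=f"user-{i % 2}", value=f"txn-{i}")

# Two consumers in one group share three partitions
print(assign(payments, ["consumer-a", "consumer-b"]))
# {'consumer-a': [0, 2], 'consumer-b': [1]}
```

Note how the key-based routing preserves per-key ordering (all of `user-0`'s events sit in one partition, in production order), while the group assignment spreads partitions across consumers — the two properties the real system balances.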
Kafka allows us to build apps that can constantly and accurately consume and **process multiple streams** at very high speeds. It works with streaming data from thousands of different data sources. With Kafka, we can: * process records as they occur * store records accurately and consistently * publish or subscribe to data or event streams The Kafka publish-subscribe messaging system is extremely popular in the Big Data scene and integrates well with Apache Spark and Apache Storm. ### Kafka use cases You can use Kafka in many different ways, but here are some examples of different use cases shared on the official Kafka site: * Processing financial transactions in real-time * Tracking and monitoring transportation vehicles in real-time * Capturing and analyzing sensor data * Collecting and reacting to customer interactions * Monitoring hospital patients * Providing a foundation for data platforms, event-driven architectures, and microservices * Performing large-scale messaging * Serving as a commit-log for distributed systems * And much more <br> <a name="features"></a> ## Key features of Kafka Let’s take a look at some of the key features that make Kafka so popular: * **Scalability**: Kafka manages scalability in event connectors, consumers, producers, and processors. * **Fault tolerance**: Kafka is fault-tolerant and easily handles failures with masters and databases. * **Consistent**: Kafka can scale across many different servers and still maintain the ordering of your data. * **High performance**: Kafka has high throughput and low latency. It remains stable even when working with a multitude of data. * **Extensibility**: Many different applications have integrations with Kafka. * **Replication capabilities**: Kafka uses ingest pipelines and can easily replicate events. * **Availability**: Kafka can stretch clusters over availability zones or connect different clusters across different regions. Kafka uses ZooKeeper to manage clusters. 
* **Connectivity**: The Kafka Connect interface allows you to integrate with many different event sources such as JMS and AWS S3. * **Community**: Kafka is one of the most active projects in the Apache Software Foundation. The community holds events like the Kafka Summit by Confluent. <br> <a name="architecture"></a> ## Components of Kafka architecture Before we dive into some of the components of the Kafka architecture, let's take a look at some of the key concepts that will help us understand it: ### Kafka Consumer Groups Consumer groups consist of a cluster of related consumers that perform certain tasks, such as sending messages to a service. They can run multiple processes at one time. Kafka sends messages from partitions of a topic to the consumers in the group. When the messages are sent to the group, each partition is read by a single consumer within the larger group. ### Kafka Partitions Kafka topics are divided into partitions. These partitions are reproduced across different brokers. Within each partition, multiple consumers can read from a topic simultaneously. ### Topic Replication Factor The topic replication factor ensures that **data remains accessible** and that deployment runs smoothly and efficiently. If a broker goes down, topic replicas on different brokers stay within those brokers to make sure we can access our data. ### Kafka Topics Topics help us **organize our messages**. We can think of them as channels that our data goes through. Kafka producers can publish messages to topics, and Kafka consumers can read messages from topics that they are subscribed to. Now that we’ve covered some foundational concepts, we’re ready to get into the architectural components! <br> ### Kafka APIs Kafka has four essential APIs within its architecture. Let’s take a look at them! **Kafka Producer API** The Producer API allows apps to publish streams of records to Kafka topics. **Kafka Consumer API** The Consumer API allows apps to subscribe to Kafka topics. 
This API also allows the app to process streams of records. **Kafka Connector API** The Connector API connects apps or data systems to topics. This API helps us build and manage producers and consumers. It also enables us to reuse connections across different solutions. **Kafka Streams API** The Streams API allows apps to process data using stream processing. This API enables apps to take in input streams from different topics and process them with a stream processor. Then, the app can produce output streams and send them out to different topics. ### Kafka Brokers A single Kafka server is called a broker. Typically, multiple brokers operate as one Kafka cluster. The cluster is controlled by one of the brokers, called the controller. The controller is responsible for administrative actions like assigning partitions to other brokers and **monitoring for failures** and downtime. Partitions can be assigned to multiple brokers. If this happens, the partition is replicated. This creates redundancy in case one of the brokers fails. A broker is responsible for receiving messages from producers and committing them to disk. Brokers also receive requests from consumers and respond with messages taken from partitions. Here’s a visualization of a broker hosting several topic partitions: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21yxopy7s0q9do3pguzv.png) ### Kafka Consumers Consumers **receive messages** from Kafka topics. They subscribe to topics, then receive messages that producers write to a topic. Normally, each consumer belongs to a consumer group. In a consumer group, multiple consumers work together to read messages from a topic. Let’s take a look at some of the different configurations for consumers and partitions in a topic: **Number of consumers and partitions in a topic are equal** In this scenario, each consumer reads from one partition. 
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4xhbl3d2akc6rnlqqyj.png) **Number of partitions in a topic is greater than the number of consumers in a group** In this scenario, some or all of the consumers read from more than one partition. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uiq2we433r9nbpzbfmmd.png) **Single consumer with multiple partitions** In this scenario, all partitions are consumed by a single consumer. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lazodlua9ypz2xa0ixn2.png) **Number of partitions in a topic is less than the number of consumers in a group** In this scenario, some of the consumers will be idle. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yd2997vg1snphw6nfu1o.png) ### Kafka Producers Producers write messages to Kafka that consumers can read. <br> <a name="nextsteps"></a> ## Advanced concepts to explore next Congrats on taking your first steps with Apache Kafka! Kafka is an efficient and powerful distributed system. Kafka's scaling capabilities allow it to handle large workloads. It's often the preferred choice over other message queues for real-time data pipelines. Overall, it's a versatile platform that can support many use cases. You're now ready to move on to some more advanced Kafka topics such as: * Producer serialization * Consumer configurations * Partition allocation To get started learning these topics and a lot more, check out Educative's curated course [**Building Scalable Data Pipelines with Kafka**](https://www.educative.io/courses/scalable-data-pipelines-kafka). In this course, we'll introduce you to Kafka theory and provide you with a hands-on, interactive browser terminal to execute Kafka commands against a running Kafka broker. You'll learn more about the concepts we covered in this article, along with other important topics. 
By the end, you'll have a stronger understanding of how to build scalable data pipelines with Apache Kafka. *Happy learning!* ### Continue reading about distributed systems and big data * [Top 5 distributed system design patterns](https://www.educative.io/blog/distributed-system-design-patterns) * [An introduction to scaling distributed Python applications](https://www.educative.io/blog/scaling-in-python) * [Introduction to Apache Airflow: get started in 5 minutes](https://www.educative.io/blog/intro-apache-airflow)
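The consumer-group scenarios pictured earlier can be sketched as a toy model in plain Python. This is only an illustration of round-robin partition assignment within a single group; it is not the actual assignor a Kafka broker runs, and the partition/consumer names are made up:

```python
def assign_partitions(partitions, consumers):
    # Deal partitions out to consumers round-robin; surplus consumers stay idle.
    assignment = {consumer: [] for consumer in consumers}
    for i, partition in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(partition)
    return assignment

# Equal counts: each consumer reads from exactly one partition.
print(assign_partitions(["p0", "p1", "p2"], ["c0", "c1", "c2"]))

# More partitions than consumers: some consumers read several partitions.
print(assign_partitions(["p0", "p1", "p2", "p3"], ["c0", "c1"]))

# More consumers than partitions: the surplus consumer sits idle.
print(assign_partitions(["p0"], ["c0", "c1"]))
```

Running the three calls reproduces the three diagrams above: a one-to-one mapping, consumers holding multiple partitions, and an idle consumer.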
erineducative
781,955
OpenShift for Dummies - Part 2
Thank you for reading part two of OpenShift for Dummies! In this article, I will briefly outline...
13,945
2021-08-04T23:47:31
https://dev.to/stevenmcgown/openshift-for-dummies-part-2-2eg4
devops, docker, kubernetes, python
![openshiftlogo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3vomn5j6jtz8s58ki8kl.png)

Thank you for reading part two of OpenShift for Dummies! In this article, I will briefly outline the advantages and use cases of OpenShift. Additionally, I will go into technical detail on how you can get started using OpenShift. As a reminder, OpenShift is open source and has a free tier intended for experimentation and development, which is perfect for beginners. If you haven’t read OpenShift for Dummies - Part 1, please read it here before continuing.

In the last post, we talked about containers and their advantages over VMs, but these important questions still remain: why should you use OpenShift, and how do you get started using it?

<h1>Why Should I Use OpenShift?</h1>

In Kubernetes for Dummies, we talked about the need for a container orchestration system. In 2015, there were many different orchestration systems that people were using, including Cloud Foundry, Mesosphere, Docker Swarm, and Kubernetes, to name a few. Today, the market has consolidated and Kubernetes has come out on top. Red Hat bet early on K8s and is now the second-largest contributor to and influencer of its direction, behind only Google. K8s is the kernel of distributed systems, while OpenShift is a distribution of it. What this means for developers is that whenever there is a new version of Kubernetes available, Red Hat can take K8s from upstream, secure it, test it, and certify it with hardware and software vendors. In addition, Red Hat patches 97% of all security vulnerabilities within 24 hours and 99% within the first week, showing the difference between Red Hat and their competition.

<h3>OpenShift, the Platform of the Future</h3>

OpenShift is a platform that can run on premises, in a virtual environment, in a private cloud, or in a public cloud.
You can migrate all of your traditional applications to OpenShift so you can get all of the advantages of containerization, as well as software from independent software vendors. You can also build cloud-native greenfield applications (greenfield describes a completely new project that has to be executed from scratch) as well as integrate Machine Learning and Artificial Intelligence functions. ![openshiftfancy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8aae747uxd9rgumloub.jpg) OpenShift also provides automated operations, multi-tenancy, secure by default capabilities, network traffic control, and the option for chargeback and showback. OpenShift is also pluggable so you can introduce third party security vendors if you wish. Developers also get a self service provisioning portal so operations teams can define what is available for developers and developers can request controls as authorized by the operations team. The OpenShift platform is very versatile in that it runs on most public cloud services such as AWS, Azure, Google Cloud Platform, IBM Cloud, and of course it runs on-premises as well. <h1>OpenShift Demo</h1> You can use the trial version of OpenShift by visiting: https://www.redhat.com/en/products/trials?products=hybrid-cloud For this demo, you will need a Red Hat account. We will be selecting the option that plainly says ‘Red Hat OpenShift - An enterprise-ready Kubernetes container platform’ ![tryit](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttqv4q50uxghy21or7e0.png) Select ‘Start your trial’ under ‘Developer Sandbox.’ The developer sandbox will suffice for this walkthrough. Please note that the account created will be active for 30 days. At the end of the active period, your access will be deactivated and all your data on the Developer Sandbox will be deleted. 
Upon logging in, you should be brought to this webpage: ![devscreen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1z8r2nimglrqog0zlwjb.png) If you are not brought here, visit https://developers.redhat.com/developer-sandbox. Click ‘Get started in the Sandbox’ and then ‘Launch your Developer Sandbox for Red Hat OpenShift’ and then ‘Start using your sandbox.’ You may also need to verify your email address to continue. <h3>Welcome to OpenShift!</h3> On the side bar you can see different options to select from... ![devview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rp7cbyki55n5212xr3pr.png) <b>Perspective Switcher</b> You can toggle between Developer and Administrator perspectives using the perspective switcher. The Administrator perspective can be used to manage workload storage, networking, cluster settings, and more. This may require additional user access. Use the Developer perspective to build applications and associated components and services, define how they work together, and monitor their health over time. <b>Add</b> You can select a way to create an application component or service from one of the options. <b>Monitor</b> The monitoring tab allows you to monitor application metrics, create custom metrics queries, and view & silence alerts in your project. <b>Search</b> Search for resources in your Project by simply starting to type or by scrolling through a list of existing resources. Now, switch to the Administrator perspective and look under projects. ![adminview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p3sklqizyaih3fnpqckr.png) Under <b>projects</b>, you may see two different projects, one for development and one for staging. The projects section allows you to create projects based on domains within IT (Developers, Operations, Security, Network, Infrastructure, Storage, etc) and isolate their functions from one another. 
Normally these teams would have their own systems, but through OpenShift they all have one singular console where they can have control for their respective roles. ![oneplatform](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/60qvv38eupu25slj0y9v.png) Now, change back to the developer perspective. Under topology, we can see that we currently do not have any workloads. OpenShift gives us many options to create applications, components and services using the options listed. ![catalog](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6xqnzobinyb2647vv79.png) Let’s explore the catalog to see what we can choose from. Through the developer catalog, the developer does not need to request from the infrastructure team that they need a new developing environment, database, runtime, etc. Rather, the developer can choose from a list of pre-approved apps, services or source-to-image builders. For our purposes, we will be using python to create a front end. I will be using a sample random background color generator to demonstrate the use of python in OpenShift. This app will randomly generate a color and a welcome message to the user who opens the website. Simply type in ‘Python’ in the developer catalog or find it under Languages > Python and click the option that plainly says ‘Python.’ ![devcatalog](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e6iar6r1uoxjx50rvgkz.png) Next, click ‘Create Application’ ![createapplication](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uzise0any50k0f9noboh.png) From here, we will paste the link from the github repository that holds the python script we will use for our webpage: https://github.com/StevenMcGown/OpenShift_Demo You can also change the name of the application if you wish. For our purposes, we will leave everything in default settings. Once you click ‘Create’, OpenShift will begin to build the application. 
You can see the build process from the side bar by navigating Builds > open-shift-demo > Builds > open-shift-demo-1 > Logs. In this screenshot, we can see that OpenShift goes to the location of the source code, and copies the source code. Once the source code is copied, it is analyzed and it will build an application binary. Next, OpenShift creates a [dockerfile](https://dev.to/stevenmcgown/docker-for-dummies-2bff) which will install all of the app dependencies needed to run the application binary. The application dependencies are layered to make a container image, where it will be stored in a registry which is built into OpenShift. Finally, from that registry it will deploy an application file. ![build](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4piggcgwtstkqhzk5rv.png) Next, click on the Topology tab in the sidebar. We can see our python application in a bubble with 3 smaller bubbles attached. The green check mark shows that the build was successful, and we can actually check the build log we just saw by clicking on it. ![withoutcoderw](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2uj5g0t6c0kon4vrfho.png) The bubble on the bottom right with a red C allows us to edit our source code with CodeReady Workspaces. CodeReady Workspaces allows you to edit the code within the browser. Opening CodeReady Workspaces will take some time to open, but when it opens you should see an IDE similar to that of VSCode. ![codereadyworkspaces](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xxtwafjag418ym2kxkak.png) Looking back at the Topology of our application, we can now see that a CodeReady Workspaces icon is now added to our project. ![withcoderw](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8uc4qj3l3m0762bgvuf6.png) Clicking the bubble on the top-right on the python icon will open the container application. 
In this instance, it took green as the random color and we are welcomed with a message from the application open-shift-demo hosted on the ‘hkqbv’ container under the ‘7c749ff559’ replica set. ![green](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpqp2183ocota25hh345.png) As an administrator, we are interested in giving the application high availability by scaling, control routing, etc. Let’s look at the application from an administrator’s perspective now. In the admin perspective, we can view our application pods by navigating to Workloads > Pods. Here we can see that only one pod is serving our application. If we want to increase our availability, we can navigate to Workloads > Deployments and increase the number of pods to serve our application. As a reminder, a deployment is a set of pods and ensures that a sufficient number of pods are running at one time to service an application. If you need to brush up on [Kubernetes](https://dev.to/stevenmcgown/kubernetes-for-dummies-5hmh) concepts such as deployments and pods, please read [Kubernetes for Dummies.](https://dev.to/stevenmcgown/kubernetes-for-dummies-5hmh) ![increasepods](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1v0u0voouti6iamdevo.png) Traditionally, if you wanted to increase the availability of your app, you would have to create an additional VM, create a load balancer, install the application and only then would you be set to have high availability. In OpenShift, increasing the availability is as simple as incrementing or decrementing the pod counter under ‘Deployment Details,’ which is done in seconds. Increasing the number of pods means an application will be hosted on each pod, meaning that the application we have will use 3 pods and thus 3 random colors. After refreshing your page, you may notice that the app does not ever change color… What gives? 
From a networking perspective, the default configuration is to have a sticky session, meaning that the user will always be hosted by the same container once they connect to the application. To change this, we will navigate to Networking > Routes and click on the 3 dots to edit annotations. We will add these key-value pairs to our existing annotations: haproxy.router.openshift.io/balance : roundrobin haproxy.router.openshift.io/disable_cookies : true ![edit annotations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjq9ifjlfqy2wglie2dw.png) For more information on the round robin scheduling algorithm and cookies, visit these links: https://en.wikipedia.org/wiki/Round-robin_scheduling#Network_packet_scheduling https://en.wikipedia.org/wiki/HTTP_cookie When you refresh the page, you will receive a new message each time indicating that you are being serviced by a different pod for the application. The background color, however, might be the same as another container since the app is initialized with a random color from an array of 7 colors. <h3>Simulating a Crash</h3> Let’s simulate one of the pods crashing to test our availability. In the administrator view, navigate to Workloads > Pods. You should see 3 pods running under the Replica Set tag, indicating that the pods are created using the same data set. If we delete one of these pods, it would mean that the pod immediately fails. In doing so, Kubernetes will simultaneously create a container to replace the pod that failed. This shows that the controller is always looking at how many pods are running vs. how many are needed. In this case, K8s will detect that there are only 2 pods running and immediately create a new pod to replace the failed pod. ![afterdelete](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w3qoddve75y4nua0plv.png) Because the old pod was deleted, a new pod was created with a container ID ‘hxghh’ and purple background. 
![purple](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tldv65vt6zwe05f95kh4.png) <h3>Developer Updates</h3> Let's suppose the developer of the application updates the source code. When this happens, OpenShift needs to reflect the changes made by the developer. We can do this by building the project again by going into the developer perspective, clicking on the python icon, and clicking 'Start Build.' In this case, I added black to the array of colors. ![new build](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8qwj84trp0i667m7pi9q.png) One thing to note is a feature OpenShift uses called 'rolling updates.' Rolling updates ensure seamless transitions from one update to another. With rolling updates, new pods are commissioned while old ones are decommissioned one at a time until completion. This way, there is never a service loss for the end user. With some luck, we can now see a new color background for our web page as given by the developers. ![black](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhkq9ul9oatlnkcdrr6a.png) <h1>Conclusion</h1> That's all I have for now! Thank you so much for reading part 2 of OpenShift for Dummies. I plan on making more of these in the future, but please let me know if you have any questions or concerns for these posts! I hope you have enjoyed reading. If you did, please leave a like and a comment! Also, follow me on LinkedIn at https://www.linkedin.com/in/steven-mcgown/
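As a footnote to the routing section above: the two haproxy annotations we added through the console live on the Route object, and would look roughly like this in Route YAML. This is a hand-written sketch, not output from the cluster, and every field except the two annotation keys is illustrative:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: open-shift-demo
  annotations:
    # Rotate requests across pods instead of the default sticky session
    haproxy.router.openshift.io/balance: roundrobin
    # Annotation values must be strings, so "true" is quoted
    haproxy.router.openshift.io/disable_cookies: "true"
spec:
  to:
    kind: Service
    name: open-shift-demo
```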
stevenmcgown
781,964
Making beautiful websites: Top 5 FREE color palettes resources
Choosing a beautiful and pleasing color scheme is one of the most difficult tasks for every...
13,909
2021-08-05T17:07:10
https://dev.to/martinkr/making-beautiful-websites-top-5-free-color-palettes-resources-4jpd
webdev, design
Choosing a beautiful and pleasing color scheme is one of the most difficult tasks for every designer. Even with the general rule of using a base, an accent, and a neutral color for your palette, choosing the colors is mostly a matter of intuition and experience. Don't worry, I have you covered - check out the top five resources and choosing your perfect color palette will be a breeze.

PS: Don't forget to check the *bonus link*!

## [coolors](http://coolors.co)

The super fast and super fun color schemes generator. Gamification for the win. You can play for hours or just pick the first color scheme.

## [colorhunt](http://colorhunt.co)

Thousands of pre-made color palettes. Each appealing palette consists of four matching colors ready to use.

## [paletton](http://paletton.com)

Are you looking for a more traditional and professional way of choosing matching colors? Are you looking for "triad" or "tetrad" palettes? Try Paletton; it is the complete opposite of coolors.co: no gamification, but lots of options.

## [colors.muz.li](https://colors.muz.li/)

Upload a picture and Muzli Colors not only generates a matching palette from the predominant colors, but also gives you a sample page and related palettes.

## Bonus: [colormind](http://colormind.io/)

Use technology instead of intuition: Colormind is a color scheme generator that uses deep learning. It can learn color styles from photographs, movies, and popular art.

---

Follow me on [Twitter: @martinkr](http://twitter.com/_martinkr) and consider [buying me a coffee](https://www.buymeacoffee.com/martinkr)
martinkr
782,166
VS Code plugins to increase coding speed
Hey world, here is the list of plugins that I found helpful for react native developers to increase...
0
2021-08-17T05:08:21
https://dev.to/harikrshnan/vs-code-plugins-to-increase-coding-speed-4469
reactnative, vscode, react
Hey world, here is the list of plugins that I found helpful for react native developers to increase coding speed. 1. **[AutoRename Tag](https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-rename-tag)** Since VS code will not rename the paired tag automatically, this plugin will help you solve that issue. 2. **[Intellij IDEA keymap](https://plugins.jetbrains.com/plugin/12062-vscode-keymap)** If you are an android developer who recently migrated to react, learning new VS code shortcuts will be a tiresome job. And this plugin will be a life saver. It will allow you to use Intellij idea shortcuts in VS code without any further change. 3. **[Git Lens](https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens)** Git lens plugin will enhance your VS code git capabilities with features like viewing authorship, code comparison and blaming, navigation between repositories, etc. 4. **[Settings sync](https://marketplace.visualstudio.com/items?itemName=Shan.code-settings-sync)** Next in the list, we have the settings sync to sync all our customizations, keyboard shortcuts, extensions, and other settings so that you don't need to waste your time on setting up your IDE next time. You can use your GitHub account token and gist to upload your file. 5. **[Bookmarks](https://marketplace.visualstudio.com/items?itemName=alefragnani.Bookmarks)** Since VS code doesn't have an option to bookmark, this plugin will help you create bookmarks and navigate between them easily. Thanks for your minute :-), __Happy coding!__
harikrshnan
782,249
A photographer’s view on alt text
Images are visual communication. Consider what you're trying to communicate before you do.
0
2021-08-05T07:51:18
https://www.erikkroes.nl/blog/a-photographer-s-view-on-alt-text/
a11y, html, webdev
---
title: A photographer’s view on alt text
published: true
canonical_url: https://www.erikkroes.nl/blog/a-photographer-s-view-on-alt-text/
description: Images are visual communication. Consider what you're trying to communicate before you do.
tags: a11y, html, webdev
cover_image: https://www.erikkroes.nl/assets/media/81ec6af5-1080.jpeg
---

Images are visual communication. Consider what you're trying to communicate before you do.

<aside>The "cover image" of this writing is from a project I did while studying photography. I wanted to discuss photography but not specific photos. I discovered it's almost impossible to talk about photos without discussing what's in the photos. So I created images that "felt" like photos but didn't have a clear subject. I wrote a script that grabs random blobs of images from Flickr and turns them into a sort of subjectless collage.</aside>

<aside>Originally posted on [erikKroes.nl](https://www.erikkroes.nl/). Questions are also welcome on [Twitter](https://twitter.com/erikKroes)</aside>

## What is alt text?

When I say alt text I’m usually talking about the [alt-attribute](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-alt) for the [img-element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img) in HTML. But most of what I say goes for other text alternatives as well. So maybe the question should be: what is a text alternative?

An image says more than a thousand words. But if you can't see an image, well, then it doesn't say much, does it? A way to compensate for this is to add a text alternative; a bit of text that serves the same purpose as the image.

## What's the purpose of the image?

This is the big question if you ask me. Whether you add an image to an article you're writing, or you're adding it as an icon to a button, you can't avoid this question. What is the purpose? Why are you adding this image?
**What are you trying to communicate?**

In the end, an image is "just" a way of getting something across. When you write, you pick certain words. You write your sentences in a certain way. You can be aware of how you're communicating and what you're bringing across. An image isn't much different in my experience. When you pick an image, ask yourself the same question: what am I trying to communicate? (Notice this question is very much focused on the person creating the content and not on what a user wants.)

## What does that mean in practice?

![President Obama speaking from behind a pedestal](https://www.erikkroes.nl/assets/media/afde93a0-640.jpeg)

Let's take this image as an example. What does it denote? And I'm picking this word because it is one I picked up during my study in photography. The literal meaning of something is its [denotation](https://en.wikipedia.org/wiki/Denotation). In this case it could be something like the pretty generic alt text I added in the code: "President Obama speaking from behind a pedestal". The denotation is also right up the alley of image recognition by artificial intelligence (AI). You literally describe what's in the image. Although you could easily go for an even more literal description here, like: "A man in a suit behind a pedestal".

Stating it's President Obama is already more of a [connotation](https://en.wikipedia.org/wiki/Connotation). It's an interpretation of what we see. It's a cultural addition. Other connotations could include mentioning that he's the first black president, that he's a former president, or that he's talking about Donald Trump here.

And this is where the purpose of the image plays a role. If you're writing an article on the achievements of black people in the USA, you might add Obama as the first black president. If you're writing a course on public speaking, you might add Obama to illustrate that public speaking is very important for presidents.
If the image is to supplement information about pedestals, then you might want to focus on highlighting details of the pedestal in the picture. It's all about the purpose. Why are you adding the image? The denotation has value, but the connotation is often why we add an image.

When somebody adds a description of an image into the file (as discussed in this [Twitter thread](https://twitter.com/jonsneyers/status/1422646901439086592)), it might be enough to derive a connotation from it. But to really get the message across, write your own text alternative.

## Some more tips

* **Don't include that it's an image.** Or a picture, a graphic, a visual, etc. That only adds noise, as it's already clear from the context.
* **Write out text.**
* **Don't stylize text.** Italic and bold text don't change the message (and might not even be communicated). The same goes for anything beyond basic punctuation.
* **"Null" the alt of a decorative image.** In HTML, if an image is [decorative](https://www.w3.org/WAI/tutorials/images/decision-tree/), add an **empty** alt like `<img alt>` or `<img alt="">`. And I do mean empty. No spaces or other text like "image" (I'm looking at you, Twitter 👀).

## Why I care

For the past few years, I've been working as a specialist in digital accessibility and inclusive design. In this role I work with WCAG, and the addition of text alternatives is pretty much the [first thing](https://www.w3.org/WAI/WCAG21/Understanding/non-text-content.html) I check in an audit. Through this role, I've formed an opinion on text alternatives. Before this job, I was a photographer. I had 5 years of formal education in this direction. Visual communication is awesome, and I wish the theory had stuck with me even more.

## Concluding

Studying photography has taught me that it's all "just" visual communication, whether it's text or imagery. Think about the message you're trying to communicate, and shape your communication accordingly.
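To put the earlier tips into markup: an informative image gets a purposeful alt text, while a decorative one gets an empty alt. The file names and alt text here are made-up examples, not taken from the article:

```html
<!-- Informative: the alt text carries the message the image is there for -->
<img src="obama-speech.jpeg" alt="President Obama speaking from behind a pedestal">

<!-- Decorative: an empty alt tells assistive technology to skip the image -->
<img src="flourish.png" alt="">
```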
## Resources * [https://axesslab.com/alt-texts/](https://axesslab.com/alt-texts/ "https://axesslab.com/alt-texts/") * [https://jakearchibald.com/2021/great-alt-text/](https://jakearchibald.com/2021/great-alt-text/ "https://jakearchibald.com/2021/great-alt-text/") * [https://www.w3.org/WAI/tutorials/images/decision-tree/](https://www.w3.org/WAI/tutorials/images/decision-tree/ "https://www.w3.org/WAI/tutorials/images/decision-tree/") * [https://www.smashingmagazine.com/2021/06/img-alt-attribute-alternate-description-decorative/](https://www.smashingmagazine.com/2021/06/img-alt-attribute-alternate-description-decorative/ "https://www.smashingmagazine.com/2021/06/img-alt-attribute-alternate-description-decorative/") * [https://www.youtube.com/watch?v=IxHng2L_-aQ&t=19s&pp=sAQA](https://www.youtube.com/watch?v=IxHng2L_-aQ&t=19s&pp=sAQA "https://www.youtube.com/watch?v=IxHng2L_-aQ&t=19s&pp=sAQA")
erikkroes
782,381
A Fun Programming Joke To Start Your Day
Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)
4,070
2021-08-05T12:00:20
https://dev.to/dailydeveloperjokes/a-fun-programming-joke-to-start-your-day-44gp
jokes, dailydeveloperjokes
--- title: "A Fun Programming Joke To Start Your Day" description: "Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)" series: "Daily Developer Jokes" published: true tags: #jokes, #dailydeveloperjokes --- Hi there! Here's today's Daily Developer Joke. We hope you enjoy it; it's a good one. ![Joke Image](https://private.xtrp.io/projects/DailyDeveloperJokes/public_image_server/images/5e1259118e11e.png) --- For more jokes, and to submit your own joke to get featured, check out the [Daily Developer Jokes Website](https://dailydeveloperjokes.github.io/). We're also open sourced, so feel free to view [our GitHub Profile](https://github.com/dailydeveloperjokes). ### Leave this post a ❤️ if you liked today's joke, and stay tuned for tomorrow's joke too! _This joke comes from [Dad-Jokes GitHub Repo by Wes Bos](https://github.com/wesbos/dad-jokes) (thank you!), whose owner has given me permission to use this joke with credit._ <!-- Joke text: ___Q:___ What's a compiler developer's favorite spice? ___A:___ Parsley. -->
dailydeveloperjokes
782,389
Hello, World! in 10 different languages 🔥🔥
1. Python print("Hello World!") Enter fullscreen mode Exit fullscreen...
0
2021-08-05T12:15:41
https://dev.to/rohidisdev/hello-world-in-10-different-languages-6ko
programming, helloworld, coding, languages
#1. Python ```python print("Hello World!") ``` #2. Java ```java public class Main { public static void main(String[] args) { System.out.println("Hello, World!"); } } ``` #3. JavaScript ```javascript console.log("Hello, World!") ``` #4. C Sharp ```c# using System; class Hello { static void Main(string[] args) { Console.WriteLine("Hello, World!"); } } ``` #5. Swift ```swift print("Hello, World!") ``` #6. Dart ```dart void main() { print("Hello, World!"); } ``` #7. Go ```go package main import "fmt" func main() { fmt.Println("Hello, World!") } ``` #8. C++ ```c++ #include <iostream> using namespace std; int main() { cout<<"Hello, World!"; return 0; } ``` #9. C ```c #include <stdio.h> int main(void) { printf("Hello, World!"); } ``` #10. Kotlin ```kotlin fun main(args: Array<String>) { println("Hello, World!") } ```
rohidisdev
782,465
“Greenfield” doesn't exist in agile projects
Many engineers love the idea of working on greenfield projects. That is, new projects, where design...
0
2021-08-09T10:16:06
https://jhall.io/archive/2021/08/05/greenfield-doesnt-exist-in-agile-projects/
refactoring, greenfield, brownfield, rewrite
---
title: “Greenfield” doesn't exist in agile projects
published: true
date: 2021-08-05 00:00:00 UTC
tags: refactoring,greenfield,brownfield,rewrite
canonical_url: https://jhall.io/archive/2021/08/05/greenfield-doesnt-exist-in-agile-projects/
---

Many engineers love the idea of working on [greenfield projects](https://en.wikipedia.org/wiki/Greenfield_project). That is, new projects, where design mistakes and technical debt have not yet been accrued. I’ve worked on a number of such projects over the years, but there’s a problem that’s practically always overlooked with these types of projects.

A greenfield project is only greenfield for about a week. Very quickly, you’ll start bumping into decisions you made which don’t fit the current circumstances perfectly, and you’ll begin refactoring. You’ll begin to see that you’re working on a brownfield project.

What can you do? Stop dreaming of your next greenfield project, and just learn to deal with technical debt, legacy code, refactoring, and all the other things that come with the territory.

* * *

_If you enjoyed this message, [subscribe](https://jhall.io/daily) to <u>The Daily Commit</u> to get future messages to your inbox._
jhall
782,473
whatsapp message sending bot using selenium
In this post I will show how to send a WhatsApp message using Selenium. The only library required is...
0
2021-08-05T12:49:37
https://dev.to/vaibhav688/whatsapp-message-sending-bot-using-selenium-2i51
In this post I will show how to send a WhatsApp message using Selenium. (I am also a beginner in Python.) The only library required is `selenium`, and the code goes here:

```python
# Import webdriver from the selenium library
from selenium import webdriver

# Create a driver that controls your browser. I have used Chrome; for Chrome
# you need to download chromedriver from the ChromeDriver site and store it
# on any drive.
driver = webdriver.Chrome('C:/Users/lenovo/OneDrive/Documents/PYTHON COURSE/chromedriver.exe')
driver.implicitly_wait(15)

# Open the WhatsApp Web URL (scan the QR code to log in)
driver.get('https://web.whatsapp.com')

# Find the user or friend by name and open the chat
driver.find_element_by_css_selector("span[title='" + input("Enter name to spam: ") + "']").click()

# The message you want to send (input() already returns a string)
word = input("Enter your message: ")

# Click the message box, then type and send the message in a loop
driver.find_element_by_xpath('//*[@id="main"]/footer/div[1]/div[2]/div/div[2]').click()
while True:
    driver.find_element_by_xpath('//*[@id="main"]/footer/div[1]/div[2]/div/div[1]/div').send_keys(word)
    driver.find_element_by_xpath('//*[@id="main"]/footer/div[1]/div[2]/div/div[2]').click()
```
vaibhav688
783,783
Making Your First Game in Blue
Hello everyone! Today, I'm writing a post about how to get started with Blue. Blue is a creative,...
0
2021-08-11T23:18:40
https://dev.to/i8sumpi/making-your-first-game-in-blue-f8k
gamedev, javascript, beginners
Hello everyone! Today, I'm writing a post about how to get started with Blue. Blue is a creative, graphical, and browser-based programming language which makes it easy and enjoyable to get started with programming. First off, you can check it out at https://blue-js.herokuapp.com. Blue is also open source, and its GitHub is https://github.com/i8sumPi/blue.

In this tutorial, we'll be making a carrot-catching game with only 25 lines of code (try it [here](https://blue-js.herokuapp.com/view/610dd0565ee1172d24cdcf96)).

![carrot catch](https://blue-js.herokuapp.com/demos/carrot%20catch.png)

## The Code!

Let's start by drawing our main character. We can do this using:

```typescript
var player = new Premade(costume, x, y, size)
```

We replace the word "costume" with the character that we want, and x and y with the coordinates of where we want to place our new character. Blue uses the same coordinate system as Scratch. The x axis goes from -240 to 240, and the y axis goes from -180 to 180.

<br>

![graph](https://blue-js.herokuapp.com/graph.png)

In our case, we can use:

```typescript
var player = new Premade("bunny1_ready", 0, -112, 0.4)
```

This places the bunny in the bottom-middle and makes its size 0.4 of the original. Note that the name of the costume <b>must</b> be in quotations. If you would like to use a different character than the bunny, go into the documentation > premade characters and sounds > all images.

## The Background

Now let's draw a simple background. We can draw it using rectangles. Rectangles are created using:

```typescript
new Rectangle(x, y, width, height, color)
```

The `x` and `y` values of a rectangle represent the coordinates of the top-left corner. The color can be a string with the color name, like "red" or "blue", but if you want more detail, you can also use hexadecimal colors. You can find a hexadecimal color using <a href="https://htmlcolorcodes.com">htmlcolorcodes.com</a>.
In our case, we want a blue sky and a green ground, which can be done using: ```typescript new Rectangle(-240,180, 480, 360, "#D0F8FF") # sky new Rectangle(-240, -150, 480, 30, "green") # ground ``` <br> Note that the grey text after a `#` does not run. It is a comment, and its purpose is just to remind us of what we're doing. Note: if you don't see the bunny anymore after drawing the background, you drew the background over it. You can put the bunny on top by either putting the bunny's code after the background, or by adding the line `player.layer = 1`. A layer of 1 or more brings a character to the top, and a negative layer brings it underneath. ## Motion We need to make the bunny follow the mouse. We can do this with the following code: ```typescript forever: player.x = mouse.x ``` The code inside the forever loop runs constantly. The second line is setting the player's x position to the mouse's x position. This means that at every moment, the player is moving to where the mouse is, or in other words, the mouse is moving the player. How does Blue know what is inside or outside the forever loop? It's pretty simple—code that is inside the forever loop is indented. This indented chunk of code is known as a code block. Code that isn't inside the forever loop isn't indented. An example of this (that doesn't relate to the current project, so don't add this to your code): ```typescript forever: print("I am inside the forever loop") print("I am also inside the forever loop") print("I am not inside the forever loop") ``` Note that you can also have a code block within a code block, or a code block within a code block within a code block. To do this, you simply use multiple indentations. ## Clones Now we need to generate many many many carrots :D In order to keep track of the carrots, we will use a list. A list is a special kind of variable that can hold multiple values. 
We initialize (start) a new, empty list using: ```typescript var carrots = [] ```<br> We can add lots of carrots using: ```typescript var carrots = [] repeatEvery 0.3: carrots.push(new Premade("crop_carrot", random(-230, 230), 180)) ``` Let's break down this code. `new Premade("crop_carrot", random(-230, 230), 180)` is creating a new carrot with a random x value, and a y value of 180, which puts it at the top of the screen. `random(-230, 230)` returns a random value from -230 to 230. `carrots.push()` adds this newly generated carrot to our list called carrots. `repeatEvery 0.3` repeats the code below it every 0.3 seconds. You can change the difficulty of the game by changing this number, for example, if you used `repeatEvery 0.5` instead, the carrots would appear more slowly, and the game would be easier. When you run this code, you should see lots of carrots appearing at the top of the screen. ## Moving the carrots We can move each carrot down by using a `forEach loop`. The forEach loop will iterate through (or go through each one of) the carrots so that we can move each carrot down. We add it to the end of our already existing forever loop in order to do this constantly. Note that the first two lines of this code are from the forever loop that we already have. ```typescript forever: player.x = mouse.x forEach carrot in carrots: carrot.y -= 10 ```<br> `carrot.y -= 10` is shorthand for `carrot.y = carrot.y - 10`. It just moves the carrot's y position down by 10. ## Score We can display the score using a `text`. You create a new text using: ```typescript new Text(text, x, y, font size) ```<br> We need one variable to be the text that displays the score, and another to store the score itself. ```typescript var scoreCounter = new Text("Score: 0", 0, 0, 20) var score = 0 ```<br> In order to update the score whenever the bunny touches a carrot, we can use `distanceTo`. 
We add this to the end of our forEach loop: ```typescript if carrot.distanceTo(player) < 50: carrot.delete() score += 1 scoreCounter.text = "Score: "+score new Sound("jingles_PIZZI16", 0.2) ``` `carrot.delete()` deletes the carrot so it disappears. <br> `score += 1` adds 1 to the score. <br> `scoreCounter.text = "Score: "+score` updates the score display. <br> `new Sound("jingles_PIZZI16", 0.2)` plays the bu-dup sound. The 0.2 means it is 0.2 of the original volume. You can choose another sound in Blue Documentation > Premade Characters and Sounds > All sounds. ## Losing The last thing to add is making the game stop when you miss a carrot. We can do this by checking if any carrot's y is less than -240, which is just below the bottom of the screen (the visible area ends at y = -180, so the carrot has fully disappeared), and if so, stop the game. So, we can add this to the bottom of our forEach loop: ```typescript if carrot.y < -240: scoreCounter.text = "You missed a carrot! Your score was "+score+"." pause() new Sound("jingles_PIZZI01") ``` The `pause()` freezes the game at that moment. The `new Sound("jingles_PIZZI01")` plays the losing sound. ## Music As a final touch, we need to add some music to complete the vibe. The 1 means to keep 100% of the volume, and the true indicates that you want the music to loop as the game continues. ```typescript new Sound("bensound-jazzyfrenchy", 1, true) ``` ## You're Finished! Congrats on finishing your first game in Blue! Feel free to share it with your friends, and start another project of your own. Thanks for reading! 
## The Final Code: ```typescript new Sound("bensound-jazzyfrenchy", 1, true) # background music new Rectangle(-240,180, 480, 360, "#D0F8FF") # sky new Rectangle(-240, -150, 480, 30, "green") # ground var carrots = [] # store carrots var player = new Premade("bunny1_ready", 0, -112, 0.4) var scoreCounter = new Text("Score: 0", 0, 0, 20) var score = 0 forever: player.x = mouse.x forEach carrot in carrots: carrot.y -= 10 if carrot.distanceTo(player) < 50: carrot.delete() score += 1 scoreCounter.text = "Score: "+score new Sound("jingles_PIZZI16", 0.2) if carrot.y < -240: scoreCounter.text = "You missed a carrot! Your score was "+score+"." pause() new Sound("jingles_PIZZI01") repeatEvery 0.3: carrots.push(new Premade("crop_carrot", random(-230, 230), 180)) ```
i8sumpi
784,429
Programming Term: "glob"
glob patterns specify sets of filenames with wildcard characters. For example, the Unix Bash shell...
0
2021-08-07T10:23:52
https://dev.to/a510/programming-term-glob-5234
**glob patterns** specify sets of filenames with wildcard characters. For example, the Unix Bash shell command `mv *.txt textfiles/` moves (`mv`) all files with names ending in `.txt` from the current directory to the directory `textfiles`. Here, `*` is a wildcard standing for "any string of characters" and `*.txt` is a glob pattern. ([From Wikipedia](https://en.wikipedia.org/w/index.php?title=Glob_(programming)&oldid=1034211884))
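To make the wildcard semantics concrete, here is a hedged sketch (not from the quoted definition — real glob implementations also support character classes like `[abc]`) that translates a glob pattern into a regular expression:

```javascript
// Minimal glob matcher sketch: supports only the "*" and "?" wildcards.
function globToRegExp(pattern) {
  // Escape regex metacharacters, leaving "*" and "?" for translation.
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  // "*" matches any string of characters, "?" matches a single character.
  const body = escaped.replace(/\*/g, '.*').replace(/\?/g, '.');
  return new RegExp(`^${body}$`);
}

function globMatch(pattern, name) {
  return globToRegExp(pattern).test(name);
}

console.log(globMatch('*.txt', 'notes.txt')); // true
console.log(globMatch('*.txt', 'archive.tar.gz')); // false
console.log(globMatch('file?.log', 'file7.log')); // true
```

Most languages ship a full implementation of this, e.g. Python's `fnmatch` and `glob` standard-library modules.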
a510
788,907
How to execute shell commands in Javascript
Working in Javascript apps, you might have to use shell commands to retrieve some informations or...
11,206
2021-09-21T11:28:13
https://dev.to/mxglt/how-to-execute-shell-commands-in-javascript-123b
Working in Javascript apps, you might have to use shell commands to retrieve some information or perform some processing. So here is the snippet to do it! --- ## Code ```javascript const childProcess = require('child_process'); async function sh(cmd_to_execute) { return new Promise(function (resolve, reject) { childProcess.exec(cmd_to_execute, (err, stdout, stderr) => { if (err) { reject(err); } else { resolve({stdout, stderr}); } }); }); } ``` You can use this function, which will return the result of the command. --- I hope it will help you! 🍺
mxglt
789,875
A TelegramBot for true paranoids.
https://t.me/MasquerBot I watched Snowden in 2016. It was the year, I became paranoid....
0
2021-08-12T18:15:30
https://dev.to/ra101/a-telegrambot-for-true-paranoids-16po
telegram, python, showdev, cryptography
{% youtube yH3SVmCZD7Q %} ### https://t.me/MasquerBot I watched Snowden in 2016. It was the year, I became paranoid. 👻 Introducing 𝗠𝗮𝘀𝗾𝘂𝗲𝗿𝗕𝗼𝘁! It is a telegram_bot that can hide any given text message inside any given image, by manipulating the very pixels of that image (steganography) • URL changes every 6 hrs, with 130 char long, therefore making it impossible to trace by anyone other than Telegram. 𝘍𝘢𝘳 𝘣𝘦𝘵𝘵𝘦𝘳 𝘥𝘰𝘤𝘶𝘮𝘦𝘯𝘵𝘢𝘵𝘪𝘰𝘯 𝘪𝘴 𝘰𝘯 𝘨𝘪𝘵𝘩𝘶𝘣, 𝘭𝘪𝘯𝘬𝘴 𝘢𝘵 𝘵𝘩𝘦 𝘦𝘯𝘥. 📈 𝗪𝗼𝗿𝗸𝗳𝗹𝗼𝘄: • It works by creating an ECDSA encryption 🔑🗝key pair, returns you the 🔑public key, to distribute. • To encrypt, one will have to send your 🔑key along with ✉text and 🖼image. • The bot will encrypt the text and hide it then it will return the 🖼 encode-image. • You as a recipient will send the encoded-image to the bot. and it will take your 🗝private key from the 📒database and will send you the hidden ✉text. PS: Icon is not just eye-candy for otakus. try "/icon" command, within the bot. I bet you will love it. 😉 ⚡𝗟𝗶𝗻𝗸𝘀: Github: https://github.com/ra101/MasquerBot LBRY: https://lbry.tv/@ra101/MasquerBot
ra101
790,759
Web Log a Minimal Design Blogging Site
Web Log is a clean SEO friendly blog theme with minimal design architecture. It is a cool and perfect...
0
2021-08-14T06:11:05
https://wprefers.com/web-log-a-minimal-design-blogging-site/?utm_source=rss&utm_medium=rss&utm_campaign=web-log-a-minimal-design-blogging-site
themes, thememiles, weblog, wordpress
--- title: Web Log a Minimal Design Blogging Site published: true date: 2021-07-21 06:02:49 UTC tags: Themes,ThemeMiles,WebLog,WordPress canonical_url: https://wprefers.com/web-log-a-minimal-design-blogging-site/?utm_source=rss&utm_medium=rss&utm_campaign=web-log-a-minimal-design-blogging-site --- **[Web Log](https://www.thememiles.com/?wpam_id=25)** is a clean SEO friendly blog theme with minimal design architecture. It is a cool and perfect theme for writers who need to create simple and creative personal blogging sites. The main objective of this theme is to create effects to make readers feel the pleasure of reading blog posts and articles. Among the various other free **[WordPress](https://wprefers.com/blog-web-theme-review/)** themes available in **[WordPress](https://wprefers.com/business-trade-pro/)** repository, **[Web Log](https://www.thememiles.com/?wpam_id=25)** is best among them. The main attribute is classic styling that helps to create a simple and clean blog. With list view and grid view this theme is perfectly eye catching. **[Web Log](https://www.thememiles.com/?wpam_id=25)** can be used for Travel, Fashion, Beauty, Lifestyle and many more. ![](https://lh3.googleusercontent.com/kTA0WlRTCzUlyt9zizhmkFcTJ8swOxoiSq7xBDxUEc2boRh9dTCMPLU72PUJWDh8aIBTinhK57VaUdhCA_FWaYePVeJwQ_z1TEQvFyDqC78xvZFTC2W5WtQ50GP7T6FLYXuYHPpr) [**Web Log**](https://www.thememiles.com/?wpam_id=25) theme comes with classic styles that helps to create a simple and clean blog. It’s easy and way too simple to set up. One Click demo Import plug-in to view the cool and simple aspect of the theme. You will get a high quality, responsive, well crafted blog out of the box to make writers only focus on writing content and it has great typography to make your fans and followers focus on every word you write. Theme offers both single and multiple post layout options; Classic view, List view and Grid View added extra importance in the theme. 
Optimized SEO means that your website will rank higher on all the SERPs. Start up efficiently and make an impact on anyone. ![](https://lh4.googleusercontent.com/2VzKqgXpd0YYb8kLwJ9qhKR8EifUcZ967L5yL3AZ8KR6M1U4iEKx_AHydHDHyX7S_wVkfbXOYIQBP27juSgVM3bLz9cQLBbpW7x2otQCv4nb2VXo5Y2FC8oaJ5aojx8KXxLeQJ5x) #### Web Log – minimal design blogging site If you are looking for a blogging site then **[Web Log](https://www.thememiles.com/?wpam_id=25)** is best for you. **[Web Log](https://www.thememiles.com/?wpam_id=25)** is a clean SEO-friendly blogging theme with minimal design, completely responsive, and retina-ready; it adjusts to every device screen size. Furthermore, the theme user gets options such as color scheme, custom fonts and other easy customization options for a cool and classic look. ##### Free Vs Pro The theme comes in [both free and paid versions](https://www.thememiles.com/?wpam_id=25). You can download and install the free version easily, but the free version comes with limited options. The full-function blogging Pro version provides lots of options, some of which are different font family style options, typography options, social sharing, an author bio detail page, footer customization, footer widgets and many more. You can buy this theme for just $39.00 with 24/7 support from ThemeMiles. The theme fully supports RTL languages with almost no difference from LTR, so if your language is written from right to left, you can use the Web Log theme to impress your readers with an amazing minimal RTL blog; the Arabic language is used as an example. ![](https://lh4.googleusercontent.com/aiV1NNTQ1P097Igp7apHf4wHIAXz5VKIPC4-8DnE7TqQRLELSX14GpMP6Rxr3RvzumGh--NbYJiZAZzFLUZNEQhUgktfI31nmWe6eZ59E5n7qyxPS_GcE5FkKoPziuV2ohJH6t-m) ###### **Product Information:** · Rating: 5 stars · Latest version: 1.0.7 · Layout: 100% responsive · Browser: IE10, IE11, Firefox, Safari, Opera, Chrome, Edge · Columns: Two Columns ###### Key Features: 1. Elementor page builder compatible with WordPress 2. 
Advanced color customization 3. Additional layout pages 4. Primary color options 5. Cross-browser compatibility 6. SEO friendly 7. Best theme for all types of websites 8. Simple, clean and lightweight theme 9. Speed-optimized theme (90% GTMetrix Page Speed Score) 10. 100% Responsive 11. Sidebar options for different pages 12. Numeric pagination 13. Breadcrumb options 14. Translation ready 15. RTL-ready theme 16. Footer widget options 17. One-click demo import ##### Conclusion **[Web Log](https://www.thememiles.com/?wpam_id=25)** is a clean SEO-friendly blogging theme with minimal design for all those bloggers. A perfect theme for blog writers. The main motto of this theme is to make readers feel the pleasure of reading blog posts. With device compatibility, multiple options for customization, and an eye-catching responsive design, this theme is among the best. The post [Web Log a Minimal Design Blogging Site](https://wprefers.com/web-log-a-minimal-design-blogging-site/) appeared first on [WP Refers](https://wprefers.com).
wprefers
791,004
What was your win this week?
Got to all your meetings on time? Started a new project? Fixed a tricky bug?
0
2021-08-13T17:36:53
https://dev.to/devteam/what-was-your-win-this-week-46ik
discuss, weeklyretro
--- title: What was your win this week? published: true description: Got to all your meetings on time? Started a new project? Fixed a tricky bug? tags: discuss, weeklyretro cover_image: https://cl.ly/188e843c2985/download/Image%202019-02-15%20at%202.36.37%20PM.png --- Hey there! **Looking back on your week, what was something you're proud of?** All wins count — big or small 🎉 Examples of 'wins' include: - Starting a new project - Fixing a tricky bug - Cleaning up your workspace... or whatever else might spark joy ❤️ --- Congrats in advance! ![Happy Friday the 13th](https://media.giphy.com/media/3o7aD2saalBwwftBIY/giphy.gif)
graciegregory
791,157
Led Circuit Using Arduino
Loop Iteration Circuit Using Arduino Hi Fellows! During this project, we'll discuss the method to...
0
2021-08-13T18:31:16
https://dev.to/projectiot123/led-circuit-using-arduino-47cd
programming, arduion, beginners, arduino
Loop Iteration Circuit Using Arduino. Hi Fellows! In this project, we'll discuss how to blink LEDs using a for loop. The LEDs will light one after the other: the LEDs are turned on and off, in sequence, by the Arduino module. Let's begin. ## Introduction: There are few functions so useful that you find them everywhere. The for loop is one of those functions. A for loop repeats an action for a number of iterations, reducing the lines of code that need to be written and thus making the programmer's life easier. In this tutorial, six LEDs are interfaced to the Arduino Uno. This is not complicated; it is just like interfacing one LED to the [Arduino Library for Proteus](https://projectiot123.com/2019/01/04/arduino-library-for-proteus-simulation/). ## Principle: We will connect the six LEDs to pins 2, 3, 4, 5, 6, and 7 of the Arduino. The value of the current-limiting resistor should be about 220 ohms to set a safe current through the LEDs. This resistance is enough to light an LED without damaging the Arduino or the LED. We'll turn the LEDs ON/OFF one by one. ## Components Required: Arduino LED Resistor ## Circuit Connection: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sia7ph41u684f3mlxiuh.PNG) 1. Attach one leg of a resistor to Arduino pin 2; attach the other leg to a line on the LED. 2. Now attach a resistor to Arduino pin 3, and the other leg to a line on an LED. 3. Now attach a resistor to Arduino pin 4, and put the other leg in a line on an LED. 4. Now attach a resistor to Arduino pin 5, and put the other leg in a line on an LED. 5. Now attach a resistor to Arduino pin 6, and put the other leg in a line on an LED. 6. Now attach a resistor to Arduino pin 7, and put the other leg in a line on an LED. 7. 
Now connect all the LEDs in series and connect them to Arduino pin 13 for ground. 8. Connect an LED in the same manner. 9. Connect the Arduino to your computer for coding. ## Working: OK, now you're ready to run our code. If you did it properly, it should run and blink one LED once, then blink the other LED just once. Our objective in this exercise is to be able to independently control the LEDs. We'll want to blink one LED once in a row, then blink the other one once. A "blink" means turning the LED on, leaving it on for a quarter of a second, turning it off, and leaving it off for a quarter of a second. Then that sequence will be repeated. ## Application: • Always use a current-limiting resistor • Remember your resistor color codes • 220-470 ohm are good, all-purpose values for LEDs • Drive from Arduino on digital pins • Use PWM pins if you want to use analogWrite for dimming OK, hopefully you enjoyed this tutorial. See you later. Goodbye and have fun.
projectiot123
791,209
React Cookies management with simple hooks
The post has been moved to https://pavankjadda.dev/react-cookies-management-with-simple-hooks/
0
2021-08-13T21:58:28
https://dev.to/pavankjadda/react-cookies-management-with-simple-hooks-3h5i
react, javascript, typescript
The post has been moved to https://pavankjadda.dev/react-cookies-management-with-simple-hooks/
pavankjadda
791,326
SDS Internship Experience!
Camilo Cortes Blog SDS Introduction TransitHealth is a website that allows the general population to...
0
2021-08-14T02:56:25
https://dev.to/camilocortes/sds-internship-experience-7o
internship, computerscience, python, sql
Camilo Cortes Blog — SDS. Introduction: TransitHealth is a website that allows the general population to access comparative data and other metrics concerning the intersection of the CTA and health within the Chicago community. Data from the CTA is compiled and processed in an offline pipeline using custom metrics. I utilized CTA bus ridership data from 2019 to 2020 to analyze the quantity of trips taken daily. Although I ran out of time, I planned on using this data to form a timeline graph comparing the number of COVID-19 cases reported with the number of rides taken, in an attempt to find a correlation. My Interest: My interest in computer science started when I was six years old and my dad brought home our first computer! I was fascinated with it! Ever since, I've been captivated by the beauty of both hardware and software! My main goal is to somehow contribute to assistive technologies. And although I did run out of time, I was able to create scripts for endpoints crossing Chicago transit systems and health. (First image) This picture demonstrates my aggregation of data using SQL. (Second image) This photo demonstrates the subsequent transformation of data utilizing Python 3. (Third image) This photo demonstrates the beginnings of crossing the two reference points. Lessons Learned: I highly recommend this internship. You take a very hands-on approach to problem-solving, and this allows you to think more creatively. As you tackle issues on your own, you learn critical thinking skills, and you are always able to reach out to your mentor for assistance. I was able to witness the backend pipeline for the first time and understand its dynamics, an experience I really valued! Take your time, and you will see success!
camilocortes
791,653
Build a simple Pie Chart with HTML and CSS
You can create a Pie Chart in HTML using a simple CSS function called conic-gradient. First, we add...
0
2021-08-14T18:32:45
https://dev.to/cscarpitta/build-a-simple-pie-chart-with-html-and-css-32dn
css, webdev, html, beginners
You can create a **Pie Chart** in HTML using a simple CSS function called `conic-gradient`. First, we add a `<div>` element to our HTML page, which acts as a placeholder for our pie chart. ```html <div id="my-pie-chart"></div> ``` We need to provide a `width` and a `height` to the `<div>` element, which determine the size of our pie chart: ```css #my-pie-chart { height: 100px; width: 100px; } ``` Then, we need to make our pie chart circle-shaped by setting the `border-radius` value to `50%`: ```css #my-pie-chart { border-radius: 50%; } ``` And finally we are ready to populate the pie chart with our data. As an example, let's consider the world population data reported at the following link: https://www.worldometers.info/geography/7-continents/ We want to show the population distribution per continent using our pie chart. For each continent, we associate an arbitrary color and the population percentage taken from the above link. The data is summarized in the following table: | Continent | Color | Population | | ------------- |:------:| ----------:| | Asia | red | 59.54% | | Africa | orange | 17.20% | | Europe | yellow | 9.59% | | North America | green | 7.60% | | South America | blue | 5.53% | | Australia | black | 0.55% | | Antarctica | brown | 0.00% | To apply these values to our pie chart, we need to partition it into 7 sectors, a sector for each continent. To create the sectors, we can use the `conic-gradient` CSS function. Each sector has a color, a start position and a stop position. For example, Antarctica is represented by the brown color and has 0.00% of the world population. Therefore, we want a brown sector from 0.00% to 0.00%. Then, we want to plot a black sector representing Australia, which has 0.55% of the world population. This results in a black sector going from 0.00% to 0.55%. Similarly, to represent South America we want a blue sector going from 0.55% to 6.08% (= 0.55% + 5.53%). And so on. 
At the end we will have the following CSS background property: ```css #my-pie-chart { background: conic-gradient(brown 0.00%, black 0.00% 0.55%, blue 0.55% 6.08%, green 6.08% 13.68%, yellow 13.68% 23.27%, orange 23.27% 40.47%, red 40.47%); } ``` :bug: That's all. Now we are able to create a pie chart in CSS. {% codepen https://codepen.io/cscarpitta/pen/XWRQmxm %}
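Working out the cumulative stop positions by hand is error-prone; as a sketch (the `pieGradient` helper and its data layout are my own, not from the original post), the same stops can be generated programmatically:

```javascript
// Build a conic-gradient() value from (color, percentage) pairs by
// accumulating the stop positions, as described above.
function pieGradient(slices) {
  let acc = 0;
  const stops = slices.map(([color, pct], i) => {
    const start = acc;
    acc += pct;
    // The last slice only needs its start position; CSS fills to 100%.
    return i === slices.length - 1
      ? `${color} ${start.toFixed(2)}%`
      : `${color} ${start.toFixed(2)}% ${acc.toFixed(2)}%`;
  });
  return `conic-gradient(${stops.join(', ')})`;
}

// World population data from the table above, smallest slice first.
const data = [
  ['brown', 0.0],   // Antarctica
  ['black', 0.55],  // Australia
  ['blue', 5.53],   // South America
  ['green', 7.6],   // North America
  ['yellow', 9.59], // Europe
  ['orange', 17.2], // Africa
  ['red', 59.54],   // Asia
];

console.log(pieGradient(data));
```

The generated string matches the stops worked out above (the zero-width brown sector is written with an explicit start and end) and can be assigned to `element.style.background`.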
cscarpitta
791,838
How to replace an existing document in MongoDB
For a full overview of MongoDB and all my posts on it, check out my overview. MongoDB provides...
13,964
2021-08-14T15:53:24
https://donaldfeury.xyz/how-to-replace-an-existing-document-in-mongodb/
mongodb
--- title: How to replace an existing document in MongoDB published: true date: 2021-08-14 15:41:40 UTC tags: MongoDB canonical_url: https://donaldfeury.xyz/how-to-replace-an-existing-document-in-mongodb/ cover_image: https://donaldfeury.xyz/content/images/2021/08/MongoDB_Logo2-1.png series: "Small Bytes of MongoDB" --- ![How to replace an existing document in MongoDB](https://donaldfeury.xyz/content/images/2021/08/MongoDB_Logo2-1.png) --- For a full overview of MongoDB and all my posts on it, check out my [overview](https://donaldfeury.xyz/introduction-to-mongodb/). [MongoDB provides several ways to update specifically one document](https://donaldfeury.xyz/how-to-update-a-single-document-into-a-mongodb-collection-2/) that works great when doing partial updates. If you want to completely replace an existing document with a new one, you can use `replaceOne`. First, let's insert some data into a collection called `podcasts`: ``` db.podcasts.insertMany([ { "name": "Tech Over Tea", "episodeName": "#75 Welcome Our Hacker Neko Waifu | Cyan Nyan", "dateAired": ISODate("2021-08-02"), "listenedTo": true, }, { "name": "Tech Over Tea", "episodeName": "Neckbeards Anonymous - Tech Over Tea #20 - feat Donald Feury", "dateAired": ISODate("2020-07-13"), "listenedTo": true }, { "name": "Tech Over Tea", "episodeName": "#34 The Return Of The Clones - feat Bryan Jenks", "dateAired": ISODate("2020-10-19"), "listenedTo": false } ]) ``` Let's completely replace the podcast that aired on `2020-07-13` with a new podcast. Unlike `update` and `updateOne`, `replaceOne` does not use `update operators`. ``` db.podcasts.replaceOne( {dateAired: ISODate("2020-07-13")}, { "name": "Tech Over Tea", "episodeName": "#73 Is This A Gaming Podcast Now | Solo", "dateAired": ISODate("2021-07-19"), "listenedTo": false } ) ``` The arguments are similar to `update` and `updateOne` where the first argument is the query to match a document to replace. 
However, the second argument is a new document that will completely replace the first matched document.
dak425
792,021
Use of string concatenation within loops in JAVA
In Java, do you know about the pitfall of using string concatenation within loops?? Since strings are...
0
2021-08-14T21:18:12
https://dev.to/faizm4765/use-of-string-concatenation-within-loops-in-java-2j6l
java, strings, loops
In Java, do you know about the pitfall of using string concatenation within loops?? Since strings are immutable in Java, when we try to append another char to a string, it creates a new copy of the string and updates the copy instead of the original string.😯 Example: ```java String s = "value"; s.concat("d"); ``` On doing so, the value of string s is not changed; rather, a new updated copy of string s is created in the heap. Now think: if concatenating a string creates another copy of the string object, what would happen if we do string concatenation within a loop? ```java String s = "coding"; for(int i = 0;i < 100000;i++){ s += "java"; } ``` This would create 100000 new copies of the string object s!!!😲😶😲 Creating 100000 copies significantly impacts code performance. So you must be thinking, what's the solution?? We have two alternatives to tackle this issue! 🎉🎉 StringBuffer & StringBuilder 🙌🙌 Both serve the purpose of avoiding the creation of string objects upon concatenation! Example: ```java StringBuffer s3 = new StringBuffer("value"); String s2 = "value2"; for(int i = 0;i < 100000;i++){ s3.append(s2); // s3 = s3 + s2; } ``` In this scenario, no new string objects are created! Cool, isn't it?😁 The same can be achieved with StringBuilder, so what's the difference between the two then?? StringBuilder is not thread-safe while StringBuffer is! But more on threads later!! Hope you enjoyed this!!
faizm4765
792,036
10 Of The Most Amazing JS Libraries That Almost You Will Enjoy Using Them In Your Project!
Hello everybody, I'm Aya Bouchiha, in this post, I'll share with you 10 amazing javascript libraries....
14,581
2021-08-14T23:27:38
https://dev.to/ayabouchiha/10-of-the-most-amazing-js-libraries-that-almost-you-will-enjoy-using-them-in-your-project-3amo
javascript, typescript, webdev, tutorial
Hello everybody, I'm [Aya Bouchiha](developer.aya.b@gmail.com), in this post, I'll share with you 10 amazing javascript libraries. ## Chart.js **Chart.js** is an open-source library that lets you visualize data. + [github](https://github.com/chartjs/Chart.js) + [docs](https://www.chartjs.org/docs/) + [demo](https://www.chartjs.org/docs/latest/samples/bar/vertical.html) + [tutorial](https://www.youtube.com/watch?v=sE08f4iuOhA) ### cdn ```html <script src="https://cdn.jsdelivr.net/npm/chart.js"></script> ``` ### npm ```shell npm i chart.js ``` ## Anime.js **Anime.js**: is one of the most popular libraries which adds awesome animations to your web application. + [github](https://github.com/juliangarnier/anime) + [docs](https://animejs.com/documentation/) + [tutorial](https://www.youtube.com/watch?v=g7WnZ9hxUak&t=934s) ### cdn ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/animejs/3.2.1/anime.min.js" integrity="sha512-z4OUqw38qNLpn1libAN9BsoDx6nbNFio5lA6CuTp9NlK83b89hgyCVq+N5FdBJptINztxn1Z3SaKSKUS5UP60Q==" crossorigin="anonymous" referrerpolicy="no-referrer"></script> ``` ### npm ```shell npm i animejs ``` ## D3.js **D3.js** is a JavaScript library for manipulating documents based on data. + [github](https://github.com/d3/d3) + [docs](https://d3js.org/) + [tutorial](https://www.youtube.com/watch?v=_8V5o2UHG0E)(13h!) ### cdn ```js <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/7.0.0/d3.min.js" integrity="sha512-0x7/VCkKLLt4wnkFqI8Cgv6no+AaS1TDgmHLOoU3hy/WVtYta2J6gnOIHhYYDJlDxPqEqAYLPS4gzVex4mGJLw==" crossorigin="anonymous" referrerpolicy="no-referrer"></script> ``` ### npm ```shell npm i d3 ``` ## GSAP + **GSAP** is one of the most famous libraries that animates anything JavaScript can touch, such as CSS properties and SVG. 
+ [github](https://github.com/greensock/GSAP) + [docs](https://greensock.com/docs/) + [demo](https://greensock.com/showcase/) + [tutorial](https://www.youtube.com/watch?v=YqOhQWbouCE&t=4s) ### cdn ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/gsap/3.7.1/gsap.min.js" integrity="sha512-UxP+UhJaGRWuMG2YC6LPWYpFQnsSgnor0VUF3BHdD83PS/pOpN+FYbZmrYN+ISX8jnvgVUciqP/fILOXDjZSwg==" crossorigin="anonymous" referrerpolicy="no-referrer"></script> ``` ### npm ```shell npm i gsap ``` ## vivus.js **vivus**: is a lightweight JavaScript class that gives SVGs the appearance of being drawn. + [github](https://github.com/maxwellito/vivus) + [demo](http://maxwellito.github.io/vivus/) ### cdn ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/vivus/0.4.6/vivus.min.js" integrity="sha512-oUUeA7VTcWBqUJD/VYCBB4VeIE0g1pg5aRMiSUOMGnNNeCLRS39OlkcyyeJ0hYx2h3zxmIWhyKiUXKkfZ5Wryg==" crossorigin="anonymous" referrerpolicy="no-referrer"></script> ``` ### npm ```shell npm i vivus ``` ## TypeIt.js **TypeIt**: is a JavaScript tool for creating typewriter effects. + [github](https://github.com/alexmacarthur/typeit) + [docs](https://typeitjs.com/docs) + [demo](https://typeitjs.com/#examples) + [tutorial](https://www.youtube.com/watch?v=FM6cLucUqlw) ### cdn ```html <script src="https://cdn.jsdelivr.net/npm/typeit@7.0.4/dist/typeit.min.js"></script> ``` ### npm ```shell npm i typeit ``` ## dropzone.js **Dropzone** is a JavaScript open-source library that turns any HTML element into a dropzone. This means that a user can drag and drop a file onto it, and Dropzone will display file previews and upload progress, and handle the upload for you via XHR. 
+ [github](https://github.com/dropzone/dropzone) + [docs](https://dropzone.gitbook.io/dropzone/) + [demo](https://www.dropzonejs.com/) + [tutorial (with django)](https://www.youtube.com/watch?v=jUtCtlCRAT4&t=869s) ### cdn ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/dropzone/5.9.2/min/dropzone.min.js" integrity="sha512-VQQXLthlZQO00P+uEu4mJ4G4OAgqTtKG1hri56kQY1DtdLeIqhKUp9W/lllDDu3uN3SnUNawpW7lBda8+dSi7w==" crossorigin="anonymous" referrerpolicy="no-referrer"></script> ``` ### npm ```shell npm i dropzone ``` ## Scroll Out **ScrollOut** is a javascript library that detects changes in scroll for reveal, parallax, and CSS Variable effects. + [github](https://github.com/scroll-out/scroll-out) + [docs](https://scroll-out.github.io/guide.html) + [demo](https://codepen.io/collection/npPbNM/) + [tutorial](https://www.youtube.com/watch?v=m-MpXGFKomE) ### cdn ```html <script src="https://unpkg.com/scroll-out/dist/scroll-out.min.js"></script> ``` ### npm ```shell npm i scroll-out ``` ## Three.js **Three.js**: is a powerful javascript library that helps you to create 3D computer graphics. + [docs](https://threejs.org/docs/) + [demo](https://threejs.org/examples/#webgl_animation_cloth) + [github](https://github.com/mrdoob/three.js/) + [tutorial](https://www.youtube.com/watch?v=pUgWfqWZWmM&t=59s) ### cdn ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js" integrity="sha512-dLxUelApnYxpLt6K2iomGngnHO83iUvZytA3YjDUCjT0HDOHKXnVYdf3hU4JjM8uEhxf9nD1/ey98U3t2vZ0qQ==" crossorigin="anonymous" referrerpolicy="no-referrer"></script> ``` ### npm ```shell npm i three ``` ## leaflet **leaflet**: is an open-source JavaScript library for mobile-friendly interactive maps. 
+ [github](https://github.com/Leaflet/Leaflet) + [docs](https://leafletjs.com/reference-1.7.1.html) + [demo](https://leafletjs.com/) + [tutorial](https://www.youtube.com/watch?v=ls_Eue1xUtY) ### cdn ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.7.1/leaflet.js" integrity="sha512-XQoYMqMTK8LvdxXYG3nZ448hOEQiglfqkJs1NOQV44cWnUrBc8PkAOcXy20w0vlaXaVUearIOBhiXZ5V3ynxwA==" crossorigin="anonymous" referrerpolicy="no-referrer"></script> ``` ### npm ```shell npm i leaflet ``` ## Suggested Posts + [Youtube Courses, Projects To Master Javascript](https://dev.to/ayabouchiha/youtube-courses-projects-to-master-javascript-3lhc) + [Your Essential Guide To Map Built-in Object In Javascript](https://dev.to/ayabouchiha/the-essential-guide-to-map-built-in-object-in-javascript-17d2) + [All JS String Methods In One Post!](https://dev.to/ayabouchiha/all-js-string-methods-in-one-post-4h23) To Contact Me: + email: developer.aya.b@gmail.com + telegram: [Aya Bouchiha](https://t.me/AyaBouchiha) Happy coding!
ayabouchiha
792,084
Flutter & Dart Tips - Week In Review #10
Hello Reader, Welcome back to the 10th post of the Flutter &amp; Dart Tips series. Ten weeks ago,...
13,200
2021-08-15T00:28:36
https://dev.to/offlineprogrammer/flutter-dart-tips-week-in-review-10-55o8
flutter, dart, beginners, codenewbie
Hello Reader, Welcome back to the 10th post of the Flutter & Dart Tips series. ![dude-its-the-72b78733f8](https://i.imgur.com/tkwEyYD.jpg) Ten weeks ago, I started this series to share the tips I tweet during the week. My goal is to have at least 100 tips in this series. 1- LayoutBuilder helps to create a widget tree that can depend on the size of the parent widget. ```dart LayoutBuilder( builder: (context, constraints) { if (constraints.maxWidth >= 750) { return Container( color: Colors.green, height: 100, width: 100, ); } else { return Container( color: Colors.yellow, height: 100, width: 100, ); } }, ); ``` > Try it on DartPad <a href="https://dartpad.dev/?id=e4cac9818ebd1bfad62323c834b489b0&null_safety=true">here</a> ![LayoutBuilder](https://media.giphy.com/media/X50n7IZ9s0vdGwnwNL/giphy.gif) 2- Wrap is a widget that displays its children in horizontal or vertical runs. It will try to place each child next to the previous child on the main axis. If there is not enough space, it will create a new run adjacent to its existing children in the cross axis. ```dart Wrap( children: List.generate( 10, (index) => Container( margin: const EdgeInsets.all(10), color: Colors.green, height: 100, width: 100, ), ), ); ``` > Try it on DartPad <a href="https://dartpad.dev/?id=5c0b7e70d19ec7c640fbdd10583b7484&null_safety=true">here</a> ![Wrap](https://i.imgur.com/wrcMqRb.gif) 3- AnimatedIcon is a Flutter widget that animates the switching of one icon to another. ```dart AnimatedIcon( icon: AnimatedIcons.pause_play, size: 52, progress: myAnimation ), ``` > Try it on DartPad <a href="https://dartpad.dev/?id=ea830ec764f187e1bd372427509dfd93&null_safety=true">here</a> ![AnimatedIcon](https://media.giphy.com/media/KoU0Ba0j7kwCsau1F3/giphy.gif) 4- The AnimatedContainer widget is a container widget with animations. It can be animated by altering the values of its properties. 
```dart AnimatedContainer( width: _width, height: _height, decoration: BoxDecoration( color: _color, borderRadius: _borderRadius, ), duration: Duration(seconds: 1), curve: Curves.fastOutSlowIn, ), ``` > Try it on DartPad <a href="https://dartpad.dev/?id=3a4291b177882967d3e5ab48c00988dc&null_safety=true">here</a> ![AnimatedContainer](https://i.imgur.com/TLWAMwd.gif) 5- The SliverAppBar is a widget that gives a floating app bar. ```dart SliverAppBar( pinned: _pinned, snap: _snap, floating: _floating, expandedHeight: 160.0, flexibleSpace: const FlexibleSpaceBar( title: Text('SliverAppBar'), background: FlutterLogo(), ), ), ``` > Try it on DartPad <a href="https://dartpad.dev/?id=fba7a3ae6b78ab7933c7635e24d7ecf2&null_safety=true">here</a> ![SliverAppBar](https://media.giphy.com/media/nRPyZXLfNoniOeHzzp/giphy.gif) 6- AnimatedOpacity Widget automatically transitions the child’s opacity over a given duration whenever the given opacity changes. ```dart AnimatedOpacity( opacity: _opacity, duration: const Duration(seconds: 1), curve: Curves.bounceInOut, // The green box must be a child of the AnimatedOpacity widget. child: Container( width: 200.0, height: 200.0, color: Colors.green, ), ), ``` > Try it on DartPad <a href="https://dartpad.dev/?id=5d1c2edf4adbf720616a766466a0cdfc&null_safety=true">here</a> ![AnimatedOpacity](https://media.giphy.com/media/ciDOYTDpOkLxPPgy5i/giphy.gif) See you next week. 👋🏻 > Follow me on <a href="https://twitter.com/_Mo_Malaka_">Twitter</a> for more tips about #coding, #learning, #technology...etc. > Check my Apps on <a href="https://bit.ly/3h05gQ7">Google Play</a> & <a href="https://apple.co/3hZXoBx">Apple Store</a> <span>Cover image <a href="https://unsplash.com/@ryanquintal?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Ryan Quintal</a> on <a href="https://unsplash.com/s/photos/ten?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></span>
offlineprogrammer
792,096
Microtasks and (Macro)tasks in Event Loop
JavaScript has a concurrency model based on an event loop, which is responsible for executing the...
0
2021-08-15T02:01:40
https://dev.to/saravanakumarke/microtasks-and-macro-tasks-in-event-loop-4h2h
javascript, webdev
JavaScript has a concurrency model based on an **event loop**, which is responsible for executing the code, collecting and processing events, and executing queued sub-tasks. Here, we will look at microtasks and macrotasks in the event loop and how the event loop handles tasks. Let's dive in! 🏃‍♂️ Within the event loop, there are actually two types of queues: the (macro)task queue (often just called the task queue) and the microtask queue. The (macro)task queue is for (macro)tasks and the microtask queue is for microtasks. ### Microtask A **microtask** is a short function which is executed after the function or program which created it exits, and only when the **JavaScript execution stack is empty**. * Promise callbacks * queueMicrotask ### Macrotask A **macrotask** is a short function which is executed after the **JavaScript execution stack and the microtask queue are empty**. * setTimeout * setInterval * setImmediate ### Explanation When a Promise resolves and calls its then(), catch() or finally() method, the callback within the method gets added to the microtask queue! This means that the callback within the then(), catch() or finally() method isn't executed immediately, essentially adding some async behavior to our JavaScript code! So when is a then(), catch() or finally() callback executed? 🤷‍♂️ Here the **event loop gives a different priority to the tasks**. All functions that are currently in the **call stack get executed**. When they return a value, they get **popped** off the stack. When the **call stack is empty**, all queued-up **microtasks are popped** onto the call stack one by one and get executed! (Microtasks themselves can also queue new microtasks, effectively creating an infinite microtask loop.) If both the **call stack and microtask queue are empty**, the event loop checks if there are tasks left on the **(macro)task queue**. The tasks get popped onto the call stack, executed, and popped off! 
### Example Task1: a function that's added to the call stack immediately, for example by invoking it instantly in our code. Task2, Task3, Task4: microtasks, for example a promise then() callback, or a task added with queueMicrotask. Task5, Task6: (macro)tasks, for example a setTimeout or setImmediate callback. First, Task1 returns a value and gets popped off the call stack. Then, the engine checks for tasks queued in the microtask queue. Once all those tasks are put on the call stack and eventually popped off, the engine checks for tasks on the (macro)task queue, which get popped onto the call stack and popped off when they return a value. Here's a graphic illustration of the event loop 👇 ![alt text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kr6bsfd5o0ipc85kyaf8.gif) ### Conclusion Congratulations on reading until the end! In this article you've learned: * How microtasks and macrotasks work in the event loop. I hope you find this article helpful in understanding how microtasks and macrotasks work. **Suggestions are highly appreciated ❤️**
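The priority order described above is easy to see in a short script. Here's a minimal sketch (runnable in Node or a browser console) that queues one macrotask and two microtasks, and records the order in which everything actually runs:

```javascript
const order = [];

order.push('script start');

// Macrotask: goes on the (macro)task queue
setTimeout(() => order.push('setTimeout'), 0);

// Microtasks: go on the microtask queue
Promise.resolve().then(() => order.push('promise'));
queueMicrotask(() => order.push('queueMicrotask'));

order.push('script end');

// Inspect the result once both queues have drained:
setTimeout(() => {
  console.log(order);
  // → ['script start', 'script end', 'promise', 'queueMicrotask', 'setTimeout']
}, 0);
```

The synchronous code runs first, then the whole microtask queue is drained (the promise callback, then queueMicrotask, in FIFO order), and only then does the setTimeout callback get its turn, even though its delay was 0.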
saravanakumarke
792,116
Quasar's QTable: The ULTIMATE Component (4/6) - ALL The Slots!
What's black, blue, and PACKED full of QTable slots? ... The video version of this blog...
0
2021-08-19T13:19:03
https://dev.to/quasar/quasar-s-qtable-the-ultimate-component-4-6-all-the-slots-40g2
quasar, vue, javascript, webdev
What's black, blue, and PACKED full of QTable slots? ... The video version of this blog post! {% youtube cxNvoSkeLcM %} The ideal progression for customizing **rows** with Quasar's `QTable` is this: 1. **No slots**, only props 2. The **generic** "cell" slot (`#body-cell`) 3. **Specific** "cell" slots (`#body-cell-[name]`) 4. **Row** slots (`#body`) The further down the list, the more **flexibility** and **control** you wield! The further up the list, the more **ease** and abstraction. So keep that in mind! **If slots aren't needed, don't use them**. They're there to offer flexibility when the defaults and props aren't enough. Make sense? Sweet! With that in mind, we'll dive into Quasar's slots... Oh! And if you want to [learn all 72 of Quasar's components](https://quasarcomponents.com) through videos, check out [QuasarComponents.Com](https://quasarcomponents.com) 😉 ## Setup First, for all you **git cloners** out there, here's the [GitHub Repo](https://github.com/ldiebold/q-table-blog)! ... We'll use a similar setup to past examples with a couple of additions: First, install `lodash-es` ```shell yarn add lodash-es ``` Why lodash-es? Because it allows us to **import individual functions** easily without bringing in THE WHOLE OF LODASH which is a **MASSIVE** dependency! *ahem*, anywho... 
Here's the setup: ```javascript <script> import { copyToClipboard } from 'quasar' import { ref } from 'vue' import { sumBy, meanBy } from 'lodash-es' export default { setup () { const rows = ref([ { id: 1, name: 'Panda', email: 'panda@chihuahua.com', age: 6 }, { id: 2, name: 'Lily', email: 'lily@chihuahua.com', age: 5 } ]) const columns = ref([ { label: 'name', field: 'name', name: 'name', align: 'left' }, { label: 'email', field: 'email', name: 'email', align: 'left' }, { label: 'age', field: 'age', name: 'age', align: 'center' } ]) return { copyToClipboard, rows, columns, sumBy, meanBy } } } </script> ``` Quasar comes with a handy **copy to clipboard** utility function that we'll use in one of the examples. We'll also use `sumBy` and `meanBy` to build a **summary row** and an **average row**. I've also used `ref` for the columns. Usually, you shouldn't do this since columns are almost never reactive! I've done it here because in one of the examples we'll **make columns editable**! Okay, put on your snorkel and we'll **dive** in 🤿 ## Generic Cell Slots (#body-cell) Want to make all cells "copyable" with the press of a button? No problem! We can use the `#body-cell` slot for that... ```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #body-cell="props"> <q-td :props="props" > <q-btn flat color="primary" :label="props.value" @click="copyToClipboard(props.value)" /> </q-td> </template> </q-table> ``` ![Customizing Quasar Table Cells With Slots](https://i.imgur.com/WfIZFhg.png) This is an easy way to **target every cell**. Notice that we're passing `props` to `q-td`? This basically allows us to proxy "Quasar Table Cell Stuff" easily 👍 Also notice we have **access to the cell's value** with `props.value`! But what if we want to target **specific** cells... ## Specific Cell Slots (#body-cell-[name]) Tack on the column's "name" and you can target any cell you like within a row. You'll likely end up using this a lot; it's very handy! 
It's particularly useful for a **delete button** cell at the end of a row. In this example, we use it to simply alternate colors: ```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #body-cell-name="props"> <q-td class="bg-blue-1" :props="props" > {{ props.value }} </q-td> </template> <template #body-cell-email="props"> <q-td class="bg-blue-2" :props="props" > {{ props.value }} </q-td> </template> <template #body-cell-age="props" > <q-td class="bg-blue-1" :props="props" > {{ props.value }} </q-td> </template> </q-table> ``` ![Customizing Specific Quasar Table Cells With Slots](https://i.imgur.com/f3rlNKy.png) The API for `#body-cell-[name]` is almost identical to `#body-cell` (Classic Quasar! amazingly consistent API 🎉) ## Row Slots (#body) (editable cells) Before looking at this example, I want you to notice two things: 1. `props` is proxied to `q-tr` AND `q-td`. Once again, this is important as it allows Quasar to take control over the cell for things like "hiding columns" and setting the `row-key` 2. We use `dense` and `borderless` on `q-input`, otherwise it looks strange in a table cell! ```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #body="props"> <q-tr :props="props" > <q-td key="name" :props="props" > <q-input v-model="props.row.name" borderless dense /> </q-td> <q-td key="email" :props="props" > <q-input v-model="props.row.email" borderless dense /> </q-td> <q-td key="age" :props="props" > <q-input v-model="props.row.age" borderless dense input-class="text-center" /> </q-td> </q-tr> </template> </q-table> ``` ![Quasar QTable With Editable Cells](https://i.imgur.com/8cYmbJ9.png) Doesn't look like much, does it? But take a look at that code... we're using `QInput`s in the cells... **These cells are EDITABLE!!!** This is a common question in the community. >"How do we achieve editable cells with q-table?" 
Well, **that**, my friends ☝️☝️☝️, is how 😉 ----- The rest of this blog post will be very **example-driven** with less explanation. The aim is to make you aware of what's possible, so you can go to bed tonight **dreaming of table possibilities**! 💤💭😶‍🌫️ (I have no idea what that second emoji is. Found it on emojifinder.com when searching for "dream") **SO!** Ready for this? Sweet! Let's go nuts!!! ----- ## Header Cell Slots Pretty much the same concept as `body-cell` slots: ```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #header-cell="props"> <q-th style="font-size: 1.4em;" class="text-primary" :props="props" > {{ props.col.label }} </q-th> </template> </q-table> ``` ![Quasar QTable Header Cell Slots](https://i.imgur.com/ndADNte.png) ## Specific Header Cell Slot ```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #header-cell-email="props"> <q-th :props="props"> <q-icon size="sm" name="email" class="q-mr-sm" color="grey-7" />{{ props.col.label }} </q-th> </template> </q-table> ``` ![Quasar QTable Specific Header Cell Slot](https://i.imgur.com/89ISRBv.png) ## Header Row Slot In this example, we make the header cells editable! Cool stuff 😎 ```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #header="props"> <q-tr> <q-th key="name" :props="props" > <q-input v-model="columns[0].label" dense borderless input-class="text-bold" /> </q-th> <q-th key="email" :props="props" > <q-input v-model="columns[1].label" dense borderless input-class="text-bold" /> </q-th> <q-th key="age" :props="props" > <q-input v-model="columns[2].label" dense borderless input-class="text-bold text-center" /> </q-th> </q-tr> </template> </q-table> ``` ![Quasar QTable Header Row Slot](https://i.imgur.com/Utux5Zi.png) ## Bottom And Top Row Slot Great for aggregations and averages! This is where we use those lodash functions... 
```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #top-row> <q-tr class="bg-blue-1"> <q-td class="text-bold"> Average: </q-td> <q-td /> <q-td class="text-center"> {{ meanBy(rows, 'age') }} </q-td> </q-tr> </template> <template #bottom-row> <q-tr class="bg-green-1"> <q-td class="text-bold"> Total: </q-td> <q-td /> <q-td class="text-center"> {{ sumBy(rows, 'age') }} </q-td> </q-tr> </template> </q-table> ``` ![Quasar QTable Bottom And Top Row Slot](https://i.imgur.com/GoskK5f.png) ## Top Slot (**above** the actual table) Perfect for things like **filters** and a **search input** ```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #top> <div class="text-bold" style="font-size: 1.3em;" > Cute Pups </div> <q-input class="q-ml-md" dense outlined placeholder="Search" > <template #prepend> <q-icon name="search" /> </template> </q-input> </template> </q-table> ``` ![Quasar QTable Customizing the Top With A Search Input](https://i.imgur.com/5UcRa96.png) ## Bottom Slot (**below** the actual table) Of course, we have total control over the bottom slot! ```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #bottom> <span> dogs from <a href="https://poochypoochypooch.com">poochypoochypooch.com</a> </span> </template> </q-table> ``` ![Quasar QTable Customizing The Bottom](https://i.imgur.com/NoWgCLO.png) ## Top Left and Top Right Slot I like using `#top-left` and `#top-right` more than `#top`. I almost always want something on either side, so it feels nicer than just using `#top`... 
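The `sumBy` and `meanBy` calls above just reduce over a single field. If you'd rather skip the lodash dependency for a case this small, a plain-JavaScript sketch could look like this (these are my own minimal stand-ins, not lodash's actual implementations, and they assume every row has a numeric value under the given field):

```javascript
// Minimal stand-ins for lodash's sumBy/meanBy over a numeric field.
const sumBy = (rows, field) => rows.reduce((total, row) => total + row[field], 0);
const meanBy = (rows, field) => sumBy(rows, field) / rows.length;

// Same rows as the setup section:
const rows = [
  { id: 1, name: 'Panda', age: 6 },
  { id: 2, name: 'Lily', age: 5 },
];

console.log(sumBy(rows, 'age'));  // → 11
console.log(meanBy(rows, 'age')); // → 5.5
```

Swapping these in for the lodash imports in the setup section should produce the same totals in the `#top-row` and `#bottom-row` slots.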
```vue <q-table :rows="rows" :columns="columns" row-key="id" > <template #top-left> <div class="text-bold" style="font-size: 1.3em;" > Cute Pups </div> </template> <template #top-right> <q-input class="q-ml-md" dense outlined placeholder="Search" > <template #prepend> <q-icon name="search" /> </template> </q-input> </template> </q-table> ``` ![Quasar QTable Top Left and Top Right Slot](https://i.imgur.com/GJIH80S.png) ## No Data Slot Of course, we can completely overwrite the message for **no-data**... ```vue <q-table :rows="[]" :columns="columns" row-key="id" > <template #no-data> <div>Hmmm, I can't find any dang data!</div> </template> </q-table> ``` ![Quasar QTable No Data Slot Example](https://i.imgur.com/tmA1Ah9.png) ## And That's It! 🎉🍾🎊🤗 Now, a question... ## Can I Share My Story With You? If you enjoyed this post **half as much** as I enjoyed making it for you, we'll be best friends! And if you'd like to hear some of my story, head on over to [QuasarComponents.Com](https://quasarcomponents.com). I'll [share the journey that led to my love of Quasar](https://quasarcomponents.com), and tell you about the [**Massive** component series](https://quasarcomponents.com) I'm currently working on 🙃 So [Click Here](https://quasarcomponents.com), and I'll see you on the other side! ... Thanks for reading, and remember: there is **nothing** you can't build...
ldiebold
792,200
Rust Trait Objects Demystified
Dealing with Trait Objects in Rust is a trap for young players, especially when you want to obtain a...
0
2021-08-15T05:44:29
https://dev.to/bsodmike/rust-trait-objects-demystified-54dk
rust
Dealing with Trait Objects in Rust is a trap for young players, especially when you want to obtain a composition of traits. Here's a deep dive with code examples and a GitHub repo for you to play with - Enjoy! https://desilva.io/posts/rust-trait-objects-demystified
bsodmike
792,208
Create PDF documents with AWS Lambda + S3 with NodeJS and Puppeteer
This post was originally posted on my blog Intro Recently I had to create two serverless...
0
2021-08-15T06:31:25
https://dev.to/javiertoscano/create-pdf-documents-with-aws-lambda-s3-with-nodejs-and-puppeteer-3phi
aws, serverless, node, javascript
This post was originally posted on my [blog](https://javtoscano.com/create-pdf-documents-with-aws-lambda-s3-with-nodejs-and-puppeteer) # Intro Recently I had to create two serverless functions for a client that needed to create a PDF document from an existing HTML format and merge it with other PDF documents provided by users in an upload form. In this article, we will use examples based on real-world applications, going through project configuration, AWS configuration, and project deployment. # Content 1. [Setting Up](#setting-up) 2. [Setting up serverless configuration](#setting-up-serverless-configuration) 3. [Setting up a Lambda Layer](#setting-up-a-lambda-layer) 4. [Working with Puppeteer](#working-with-puppeteer) 5. [Uploading PDF to S3](#uploading-pdf-to-s3) 6. [Deploying to AWS](#deploying-to-aws) # TL;DR: - Lambda function [Github Repo](https://github.com/JavToscano/serverless-pdf-generator) - Login demo app [Github Repo](https://github.com/JavToscano/puppeteer-login-demo) ## Setting Up ### Serverless Framework We will be using the [Serverless Framework](https://www.serverless.com/) to easily deploy our resources to the cloud. Open up a terminal and type the following command to install Serverless globally using npm. ``` npm install -g serverless ``` ### Initial Project Setup Create a new serverless project: ``` serverless create --template aws-nodejs --path pdf-generator ``` This is going to create a new folder named `pdf-generator` with two files in it: `handler.js` and `serverless.yml`. For now, we will leave the files as-is. ### Installing Dependencies We will need the following dependencies to work with Puppeteer in our project. - **chrome-aws-lambda**: Chromium Binary for AWS Lambda and Google Cloud Functions. - **puppeteer-core**: a lightweight version of Puppeteer for launching an existing browser installation or for connecting to a remote one. - **aws-sdk**: AWS SDK library to interact with AWS services. 
- **serverless-webpack**: A Serverless v1.x & v2.x plugin to build your lambda functions with Webpack. - **node-loader**: Allows connecting native Node modules with the .node extension. ``` npm install chrome-aws-lambda puppeteer-core npm install -D aws-sdk node-loader serverless-webpack ``` ### Configuring Webpack Once we have our project dependencies installed, we are going to configure Webpack to package our code and reduce the size of our cloud function. This will save us a lot of problems, since Lambda packages can hit around 1GB of space and sometimes AWS rejects our package because of its size. Create the file `webpack.config.js` in our project root, and add the following code: ``` module.exports = { target: "node", mode: "development", module: { rules: [ { test: /\.node$/, loader: "node-loader", }, ], }, externals: ["aws-sdk", "chrome-aws-lambda"], }; ``` In the code above we are setting the following options for Webpack: - We are using development mode, so our code isn't minified and we can trace errors with `AWS CloudWatch` - We are importing node modules into our bundle using `node-loader` - We are excluding `aws-sdk` and `chrome-aws-lambda` from our bundle, since AWS has a built-in `aws-sdk` library, and for `chrome-aws-lambda` we are going to use a [Lambda Layer](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) since Webpack can't bundle the library as-is
``` custom: app_url: https://puppeteer-login-demo.vercel.app app_user: admin@admin.com app_pass: 123456789 ``` Here we are defining custom properties that we can access in our configuration file using the syntax `${self:someProperty}`; in our case, we can access our properties using the following syntax: `${self:custom.someProperty}` Now we define our environment variables inside our function to allow our handler to access these variables. ``` functions: generate-pdf: handler: handler.handler environment: APP_URL: ${self:custom.app_url} APP_USER: ${self:custom.app_user} APP_PASS: ${self:custom.app_pass} ``` Now add the plugins section at the end of our file, so we can use Webpack with our lambdas. ``` plugins: - serverless-webpack package: individually: true ``` So far our `serverless.yml` should look like the following: ``` service: pdf-generator frameworkVersion: '2' custom: app_url: https://puppeteer-login-demo.vercel.app app_user: admin@admin.com app_pass: 123456789 provider: name: aws stage: dev region: us-east-1 runtime: nodejs12.x lambdaHashingVersion: 20201221 functions: generate-pdf: handler: handler.handler environment: APP_URL: ${self:custom.app_url} APP_USER: ${self:custom.app_user} APP_PASS: ${self:custom.app_pass} plugins: - serverless-webpack package: individually: true ``` ## Setting up a Lambda Layer To use the library `chrome-aws-lambda` we need to treat it as an external library; for this, we can create our own [Lambda Layer](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) or use a community hosted one. Here I'll explain both options and you can decide whichever option you want to use. #### Own Hosted Layer First, we have to package the library as a zip file; open up the terminal, and type: ``` git clone --depth=1 https://github.com/alixaxel/chrome-aws-lambda.git && \ cd chrome-aws-lambda && \ make chrome_aws_lambda.zip ``` The above will create a `chrome_aws_lambda.zip` file, which can be uploaded to your Layers console. 
#### Community Hosted Layer [This repository](https://github.com/shelfio/chrome-aws-lambda-layer) hosts a community Lambda Layer so we can use it directly in our function. At this time the latest version is `24` ``` arn:aws:lambda:us-east-1:764866452798:layer:chrome-aws-lambda:24 ``` Now we have to add this layer to our `serverless.yml` file and specify that our function is going to use it; in this case, we are going to use the community version. ``` functions: generate-pdf: handler: handler.handler layers: - arn:aws:lambda:us-east-1:764866452798:layer:chrome-aws-lambda:24 ``` ## Working with Puppeteer Now that our project is configured, we are ready to start developing our lambda function. First, we start by loading the chromium library and creating a new instance in our `handler.js` file to work with Puppeteer. ``` "use strict"; const chromium = require("chrome-aws-lambda"); exports.handler = async (event, context) => { let browser = null; try { browser = await chromium.puppeteer.launch({ args: chromium.args, defaultViewport: chromium.defaultViewport, executablePath: await chromium.executablePath, headless: chromium.headless, ignoreHTTPSErrors: true, }); const page = await browser.newPage(); } catch (e) { console.log(e); } finally { if (browser !== null) { await browser.close(); } } }; ``` In this example, we will use an app that requires login to view the report that we want to convert to PDF, so first, we are going to navigate to the login page and use the environment variables to simulate a login to access the report. ``` await page.goto(`${process.env.APP_URL}/login`, { waitUntil: "networkidle0", }); await page.type("#email", process.env.APP_USER); await page.type("#password", process.env.APP_PASS); await page.click("#loginButton"); await page.waitForNavigation({ waitUntil: "networkidle0" }); ``` In the above code we carry out the following steps: 1. Navigate to the login page 2. 
Search for the inputs with IDs `email` and `password` and type the user and password credentials from the env variables. 3. Click on the button with ID `loginButton` 4. Wait for the next page to be fully loaded (in our example we are being redirected to a Dashboard) Now we are logged in, so we can navigate to the report URL that we want to convert to a PDF file. ``` await page.goto(`${process.env.APP_URL}/invoice`, { waitUntil: ["domcontentloaded", "networkidle0"], }); ``` Here we go to the `invoice` page and wait until the content is fully loaded. Now that we are on the page that we want to convert, we create our PDF file and save it in the `buffer` variable to upload it later to AWS S3. ``` const buffer = await page.pdf({ format: "letter", printBackground: true, margin: "0.5cm", }); ``` In the above code we added a few options to the `pdf` method: - **format**: the size of our file - **printBackground**: print background graphics - **margin**: add a margin of 0.5cm to the print area So far our `handler.js` should look like this: ``` "use strict"; const chromium = require("chrome-aws-lambda"); exports.handler = async (event, context) => { let browser = null; try { browser = await chromium.puppeteer.launch({ args: chromium.args, defaultViewport: chromium.defaultViewport, executablePath: await chromium.executablePath, headless: chromium.headless, ignoreHTTPSErrors: true, }); const page = await browser.newPage(); await page.goto(`${process.env.APP_URL}/login`, { waitUntil: "networkidle0", }); await page.type("#email", process.env.APP_USER); await page.type("#password", process.env.APP_PASS); await page.click("#loginButton"); await page.waitForNavigation({ waitUntil: "networkidle0" }); await page.goto(`${process.env.APP_URL}/invoice`, { waitUntil: ["domcontentloaded", "networkidle0"], }); const buffer = await page.pdf({ format: "letter", printBackground: true, margin: "0.5cm", }); } catch (e) { console.log(e); } finally { if (browser !== null) { await browser.close(); } } }; ``` 
## Uploading PDF to S3 Currently, we can generate our PDF file using Puppeteer; now we are going to configure our function to create a new S3 Bucket and upload our file to S3. First, we are going to define in our `serverless.yml` file the resources for the creation and usage of our S3 bucket. ``` service: pdf-generator frameworkVersion: '2' custom: app_url: https://puppeteer-login-demo.vercel.app app_user: admin@admin.com app_pass: 123456789 bucket: pdf-files provider: name: aws stage: dev region: us-east-1 iam: role: statements: - Effect: Allow Action: - s3:PutObject - s3:PutObjectAcl Resource: "arn:aws:s3:::${self:custom.bucket}/*" runtime: nodejs12.x lambdaHashingVersion: 20201221 functions: generate-pdf: handler: handler.handler timeout: 25 layers: - arn:aws:lambda:us-east-1:764866452798:layer:chrome-aws-lambda:24 environment: APP_URL: ${self:custom.app_url} APP_USER: ${self:custom.app_user} APP_PASS: ${self:custom.app_pass} S3_BUCKET: ${self:custom.bucket} plugins: - serverless-webpack package: individually: true resources: Resources: FilesBucket: Type: AWS::S3::Bucket Properties: BucketName: ${self:custom.bucket} ``` Here we defined our resource `FilesBucket` that Serverless is going to create, and we also defined the permissions that our Lambda has over the Bucket; for now, we just need permission to put files. Now in our `handler.js` we load the AWS library and instantiate a new S3 object. ``` const AWS = require("aws-sdk"); const s3 = new AWS.S3({ apiVersion: "2006-03-01" }); ``` Now, we just need to save our `buffer` variable to our S3 Bucket. ``` const s3result = await s3 .upload({ Bucket: process.env.S3_BUCKET, Key: `${Date.now()}.pdf`, Body: buffer, ContentType: "application/pdf", ACL: "public-read", }) .promise(); await page.close(); await browser.close(); return s3result.Location; ``` Here we uploaded our file to our Bucket, closed our `chromium` session, and returned the new file URL. 
## Deploying to AWS First, we need to add our AWS credentials to Serverless in order to deploy our functions; please visit the [serverless documentation](https://www.serverless.com/framework/docs/providers/aws/guide/credentials/) to select the appropriate auth method for you. Now, open the `package.json` file to add our deployment commands. ``` "scripts": { "deploy": "sls deploy", "remove": "sls remove" }, ``` Here we added two new commands, `deploy` and `remove`. Open up a terminal and type: ``` npm run deploy ``` Now our function is bundled and deployed to AWS Lambda!
javiertoscano
792,239
Big Shout to Zuri
Thank you Zuri team for giving me a chance to be part of this cohort(Frontend Track).The...
0
2021-08-15T08:05:02
https://dev.to/techmadi/big-shout-to-zuri-i75
react, figma
<main> Thank you Zuri team for giving me a chance to be part of this cohort (Frontend Track). The internship has several tracks: <ul> <li> Frontend Track </li><li> Backend Track </li> <li> Devops Track </li> <li> Entrepreneurship Track </li> <li> Digital marketing Track </li> <li> UI-UX Track </li> </ul> <h2>My Goals for Zuri Internship</h2> <ul> <li> Be a proficient Frontend developer (React)</li> <li> Use my skills to work on the SDGs (Sustainable Development Goals) </li> <li>Improve my soft skills</li> <li>Grow my skills with my team</li> </ul> <p>As I go along this journey, these will be my guiding tutorials:</p> <ul> <li> <a href="https://youtu.be/qz0aGYrrlhU">HTML and CSS </a> </li> <li> <a href="https://www.youtube.com/watch?v=Qqx_wzMmFeA&t=50s">Javascript</a> </li> <li> <a href="https://youtu.be/_uQrJ0TkZlc">Python</a> </li> <li> <a href="https://youtu.be/TlB_eWDSMt4">Node js</a> </li> <li> <a href="https://www.youtube.com/watch?v=xuB1Id2Wxak&t=4s">Github</a> </li> <li> <a href="https://www.youtube.com/watch?v=c9Wg6Cb_YlU">Introduction to Figma</a> </li> </ul> </main> <footer> <p>Sign up for the internship <a href="https://internship.zuri.team">Zuri Internship</a></p> </footer>
techmadi
792,241
Why I Ditched Windows for Linux
What Is Linux When it comes to this topic there are two kinds of people. People who react...
0
2021-08-15T08:17:51
https://dev.to/ishanpro/why-ditched-windows-for-linux-1og3
linux
## What Is Linux

When it comes to this topic there are two kinds of people: people who react with "What is Linux even?" and people who say "Yeah, I know Linux, and I have it on my PC." Well, Linux is not a single operating system but a whole family of them. A man named Linus Torvalds created the Linux kernel (a program made to manage all the devices connected to a system) and made it available as open source, and that was the starting point of Linux. From then on, people started adding their own code on top of it to create various operating systems. There are hundreds, no, thousands of operating systems under the Linux family, a whole new world you have probably not discovered. These operating systems publish their code as open source too, and then more and more operating systems use that existing code to create new ones. Some of the jewels in Linux's crown are Arch Linux, Fedora, Ubuntu, Debian and Linux Mint. Most of these operating systems are used to manage servers, but a few people use them as their daily operating system. The one I use is Linux Mint, which is based on Ubuntu; Ubuntu itself is based on Debian, and Debian just sits on top of the Linux kernel.

## Why Linux

Ok, we just discussed that Linux exists, but its mere existence does not mean that you have to use it over Windows (in this article I am comparing Windows and Linux, keeping macOS out of the arena because I never owned a Mac). It should have features that make it better than the computing giant Windows.

### Speed

The problems of having a 4 GB RAM laptop increase when you program on it. I used my laptop to code and it used to freeze every now and then. I was fed up with this problem and decided to do a quick Google search for the fastest operating system (I knew Linux existed but I had never actually tried it). The answer was Linux Mint, and after the switch I was relieved of the problem of repetitive hangs.
It still hangs, but not every now and then, only occasionally, or to be honest, weekly.

### Simplicity

If you own a Windows laptop, you know the pain of updating it every month or so, and the automatic Candy Crush install when you buy your PC. None of this happens in Linux. It tells you what is happening on your PC: just the apps you want and no forcibly installed updates.

### Reviving Old Systems

If I own a Windows laptop that is not that high-end, the biggest problem for me is that the operating system will keep hiccuping. Linux, on the other hand, can run on a 2 GB RAM laptop as well. Your dad's old PC has no need to be upgraded; use Linux and save money.

### Free

This is the most striking feature of Linux. It has better features than Windows but is still free. That is why Linux laptops are cheaper than Windows laptops.
ishanpro
792,279
HNG Internship Goals
If you're a developer with an eye out for internships, you must've heard of the HNG internship and...
0
2021-08-15T09:32:55
https://dev.to/web_walkerx/hng-internship-goals-221a
programming, goals, internship, zuri
If you're a developer with an eye out for internships, you must've heard of the HNG internship and the drills they put developers through to make them world-class software engineers. The internship is open to anyone and aims at creating a virtual work environment that fishes out the best candidates from a pool of participants. Operated by the [zuri](https://zuri.team) team, the internship is one developers around the world keep an eye out for.

You must be wondering if I am a participant. Well, you guessed right: I am in the 8th cycle of the HNG internship, and I'll be stating my goals in this article.

## Become a better developer

It's undeniable that working with other awesome developers around the world makes one better; you get to learn a lot of things quickly, and HNGi (HNG Internship) provides a fast-paced environment with the talent for such.

## Network with Great Minds

There is an excess supply of super awesome developers, designers and the like in such an environment, and I wouldn't let the opportunity to network pass me by.

## Work on Awesome Products

Building software is every developer's joy, even better when you're building it with great developers.

## Scale this Challenge

I have always considered the HNG internship a challenge, and I don't take challenges lightly. I was in a previous installment but crashed out halfway; this time, I am seeing it through to the end.

In conclusion, I aim to generally get better at what I do, which is building APIs and web apps with MongoDB, Express, React and NodeJS.

In case you're wondering, the internship is open to beginners, but keep in mind it is a fast-paced environment and as such you should be willing to learn at lightning speed. Basic knowledge of Git and GitHub is a plus; check out this [tutorial](https://opensource.com/article/18/1/step-step-guide-git) to get started with git.
Basic knowledge of Figma is also an added advantage; get up to speed with Figma [here](https://trydesignlab.com/figma-101-course/introduction-to-figma/). Check out this [tutorial](https://html.com/) to get the basic knowledge of HTML. I would also recommend getting the most basic know-how (at least for a start) of a backend language like [NodeJS](https://www.tutorialspoint.com/nodejs/index.htm).

With all those, you're more than ready to go; take the leap and dive right in. See you on the other side!
web_walkerx
792,363
✅ Tell Me About A Time You Worked With A Difficult Person | Facebook Behavioral Interview (Jedi) Series 🔥
Before we discuss this question, let us recap what the Behavioral Interview Round at Facebook...
12,638
2021-09-26T18:05:28
https://dev.to/theinterviewsage/tell-me-about-a-time-you-worked-with-a-difficult-person-facebook-behavioral-interview-jedi-series-1j79
beginners, tutorial, programming, career
{% youtube T_kM3daDx2k %} Before we discuss this question, let us recap what the Behavioral Interview Round at Facebook is. 1. Behavioral Interview Round is also known as the Jedi Interview round at Facebook. 2. It is about you and your history, your résumé, and your motivation. 3. The purpose of this interview is to assess whether the candidate will thrive in Facebook's peer-to-peer, minimal process, and unstructured engineering organization. For Software Engineers, the behavioral interview is actually part behavioral and part coding. The coding part is a shorter version of the usual coding interviews and is included to supplement the other two coding interviews to get an additional coding signal. # Tips & Tricks to effectively prepare for Behavioral Interviews ![Tips & Tricks to effectively prepare for Behavioral Interviews](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4y9t832s7yc0j58aycqb.png "Behavioral Interview Tips & Tricks") 1. Know yourself! Take the time to review your résumé, as the interviewer will almost certainly ask about key events in your work history. 2. Have concrete examples or anecdotes to support each of the questions. 3. Familiarize yourself with Facebook's mission statement and its five core values: - Be Bold - Focus on Impact - Move Fast - Be Open - Build Social Value 4. Be yourself! Be open and honest about your successes and failures. 5. Be humble and focus on teamwork, leadership, and mentorship qualities. Now, let us review how to effectively answer this question. 
--- # Question: Tell Me About A Time You Worked With A Difficult Person ![Tell Me About A Time You Worked With A Difficult Person](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/64c4uyzn2nl8o7ysetj6.png "Tell Me About A Time You Worked With A Difficult Person") > _[Video Explanation](https://www.youtube.com/watch?v=Hr5UJnKxwyg&t=452s) with Evaluation Criteria, Response Framework, Tips & Tricks, Sample Answer (Example), and a Special Case of "Never worked with a difficult person"._ "Tell me about a time you worked with a difficult person" is one of the most frequent questions asked in behavioral interviews. Interviewers sometimes phrase this question as "Tell me about a time you worked with someone challenging". ## Evaluation Criteria ![Evaluation Criteria for Tell Me About A Time You Worked With A Difficult Person](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zzt6w0jikazap38ea4g8.png "Evaluation Criteria for 'Tell Me About A Time You Worked With A Difficult Person'") ![Evaluation Criteria for Tell Me About A Time You Worked With A Difficult Person](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ubiny7f8s4c1qmejre3.png "Evaluation Criteria for 'Tell Me About A Time You Worked With A Difficult Person'") Once in a while, in every workplace, you will face a situation where you have to work with a colleague who has a difficult personality. By asking this question, the interviewer's goal is to assess how you work in difficult situations or unstructured environments. They are trying to judge your: - Maturity level, - Communication skills, and - Willingness to speak up irrespective of your coworkers' seniority. They are also evaluating whether you are empathetic and respectful towards your colleagues while understanding your coworker's motivations and viewpoints behind the conflict. 
A crucial element to this question is that the interviewer is looking for a positive resolution of the conflict that benefits the company and not just an individual. They are trying to see if you are flexible to compromise and open to learning from challenging experiences. ## Response Framework ![Response Framework for Tell Me About A Time You Worked With A Difficult Person](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uuistwhyp13ozqg8yhdv.png "Response Framework for 'Tell Me About A Time You Worked With A Difficult Person'") Our advice is to pick a compelling and honest story that can articulate an actual situation where you had to work with a colleague who had a difficult personality. Describe the situation, events that occurred, and explain what led to the conflict between you and your colleague. - It can be due to lack of communication and difference of opinions over a project design, code review, or some other disagreement. Present both sides of the arguments in a positive and empathetic way. This will help you to come across as level-headed and professional. It will demonstrate that you take time to understand other people's perspectives and are not narrow-minded when working with others. Explain the exact steps you took to address the challenging situation. - It can be a one-on-one discussion with your colleague, doing more research, creating an updated plan of action, or pair programming with your coworker to come to a resolution. - This will demonstrate your ownership and problem-solving skills. - It will give the interviewer an inside look at how well you work in an unstructured environment. Also, show that you proactively communicated the issue and its resolution to all the stakeholders to keep them well informed. Express how the outcome was beneficial to the project and the company and not just to you and your coworker. 
Finally, explain the learnings you took from the conflict and how they helped you to avoid similar disagreements from happening again in the future and to become a better engineer. ## Tips & Tricks ![Tips and Tricks for Tell Me About A Time You Worked With A Difficult Person](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c58nemm4bnbmu1t5hbpx.png "Tips & Tricks for 'Tell Me About A Time You Worked With A Difficult Person'") Here are some tips and tricks that will help you effectively prepare this question for the behavioral interview. 1. Always remain calm and professional. - Refrain from being negative and avoid blaming your employer, coworkers, or manager. - Companies generally do not like to hire people who are always pointing fingers at others. 2. Use a compelling story that is honest and believable. - Pick an example involving a business issue and avoid personal disputes. 3. Calmly explain both sides' points of view and show how a complete understanding or a compromise led to a better outcome for the company and not just an individual. 4. Do not sugarcoat your answer with irrelevant details. - Spend more time talking about the resolution than the conflict and mention the learnings that will help you avoid the same disagreements from happening again. 5. Show that you proactively communicated the issue and its resolution to all the stakeholders to keep them well informed. 6. Prepare the response for this question beforehand, as it will be tough to structure your answer on the spot during the interview. 7. Do not memorize the answer as it should come naturally, and you should sound confident to the interviewer. ## Sample Answer (Example) Here is Rachel. She is currently working as a Software Engineer at a major internet company. She is interviewing for the role of Senior Software Engineer at Facebook. 
🎧 Listen to her response to this question in this [YouTube Video](https://www.youtube.com/watch?v=Hr5UJnKxwyg&t=690s)

## Special Case: Never Worked With A Difficult Person

![Special Case: Never Worked With A Difficult Person](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/insqlqn81sw6e0fs5cir.png "Special Case: Never Worked With A Difficult Person")

It may be the case that you have actually never worked with a difficult person so far in your career. New grads and entry-level software engineers usually fall under this category. If you are in such a situation, do not end your answer by simply saying that you have never worked with a difficult person. Instead, provide your interviewer with a hypothetical situation and walk through how you would respond and modify your course of action, just as you would for a real past experience. This will help the interviewer evaluate you on the attributes mentioned earlier:

- How well you can handle a conflict,
- How you work in ambiguous situations, and
- Whether you are open-minded and flexible.

---

# Preparation Material

Learn more about the Evaluation Criteria, Response Framework, Tips & Tricks, and Sample Answers (Examples) to effectively prepare for and answer these top questions asked in the Behavioral Interviews at Facebook. Certain special cases that candidates usually face during these interviews are also discussed.
⬇️ [Detailed Notes on Top Facebook Behavioral Interview Questions - Part 2](https://www.buymeacoffee.com/interviewsage/e/40678)

---

# Cracking the Facebook Behavioral Interview

If you have not read our first article on Top Facebook Behavioral Interview Questions, we recommend reading it by clicking the below link:

{% post https://dev.to/theinterviewsage/top-facebook-behavioral-interview-questions-part-1-2a0o %}

---

# Cracking the Facebook System Design Interview

If you have not read our series on Cracking the Facebook System Design Interview, we recommend reading it by clicking the below link:

{% post https://dev.to/theinterviewsage/top-facebook-system-design-interview-questions-31np %}

---

# Useful Links

✅ [Educative.io Unlimited Plan [💰 10% off for first 100 users]](https://bit.ly/Educative-Unlimited)

✅ [TryExponent.com Membership [💰 Limited Time 10% offer]](https://bit.ly/Try-Exponent)

👩‍💻 [Best System Design Interview Course](https://www.educative.io/courses/grokking-the-system-design-interview?aff=KQZl)

🚀 [Complete SWE Interview Course [💰 Limited Time 10% offer]](https://bit.ly/SWE-Interview-Course)

🙋‍♀️ [Behavioral Interview Guide [💰 Special Discount]](https://www.buymeacoffee.com/interviewsage/e/30176)

📚 [Recommended Interview Preparation Book (on Amazon)](https://smarturl.it/InterviewPrepBook)

---

<center>

[![Buy Me a Coffee](https://dev-to-uploads.s3.amazonaws.com/i/o2l00b1bt3nl8fdfb0nn.png)](https://www.buymeacoffee.com/InterviewSage)

☕️ Buy us a Coffee at <a href="https://www.buymeacoffee.com/InterviewSage">BuyMeACoffee.com/InterviewSage</a>

</center>

---

<center>

To stay updated about new posts, Subscribe & Follow Us!
| [![Subscribe to our YouTube channel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3uqifmelnuenxiik8pwx.png "Subscribe to our YouTube channel")](https://www.youtube.com/TheInterviewSage?sub_confirmation=1) | [![Follow us on Instagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovhcxjpg0v5z2tidtydk.png "Follow us on Instagram")](https://www.instagram.com/TheInterviewSage) | [![Like & Follow us on Facebook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xyy3kyo44tmv645vvrqw.png "Like & Follow us on Facebook")](https://www.facebook.com/TheInterviewSage) | [![Follow us on Twitter](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xwwcu44k00a0a8hpfbmf.png "Follow us on Twitter")](http://twitter.com/intent/follow?source=followbutton&variant=1.0&screen_name=InterviewSage) | [![Follow & Connect on LinkedIn](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zcrwzwc9tnv0q3zmkih8.png "Follow & Connect on LinkedIn")](https://www.linkedin.com/in/TheInterviewSage) |
| :---: | :---: | :---: | :---: | :---: |

</center>

---

> This article is part of the series on Behavioral Interviews at Facebook. So, follow us to get notified when our next article in this series is published. Thanks for reading!

{% user theinterviewsage %}

📸 Some images used are from free<span>pik</span>.com: Freepik, pch.vector, vectorjuice, pikisuperstar, raw<span>pixel</span>.com, slidesgo, stories, Upklyak, jcomp, macrovector_official, syarifahbrit, redgreystock

[Full Disclosure & Disclaimer](https://disclosureanddisclaimer.theinterviewsage.com/)
theinterviewsage
792,369
How to make JARVIS in Python?
Hello and welcome in this post I am going to talk about how to make jarvis in python. I am going to...
0
2021-08-15T10:24:04
https://crunchythings.xyz/how-to-make-jarvis-in-python/
Hello and welcome! In this post I am going to talk about how to make JARVIS in Python. I am going to provide you with the complete source code of this project, and I will explain the code line by line before giving you the complete code. There is no limit to what this voice assistant can do: every voice command it responds to is programmed by you, its programmer. So let's get started and find out how to make JARVIS in Python.

There are a few things required before we start coding. Some Python packages need to be installed, and these packages are mentioned below along with their installation commands.

**SpeechRecognition** = pip install SpeechRecognition

**pyttsx3** = pip install pyttsx3

SpeechRecognition: – This module is required to capture speech or voice from the user's microphone. This package handles all the voice input needs with minimal coding.

pyttsx3: – This package is required to convert text into speech, so that our assistant can talk back to the user. (The other direction, turning what the user said into text we can print, is handled by SpeechRecognition above.)

How to make JARVIS In Python: – Now the main part comes. In this section we will start coding our JARVIS with a step-by-step guide. At the end of this post we will also give you the complete source code of this project.

**Importing Modules: –** Let's get started by importing some important packages. The code below will help us do this.

```
import wikipedia
import os
from sys import exec_prefix
from urllib.parse import quote
import pyttsx3
import speech_recognition as sr
import datetime
import webbrowser
```

**Setup pyttsx3 & Engine: –** Now let's set up our pyttsx3 engine and configure the voice. With the code below you can set it up.
```
engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)
engine.setProperty('rate', 150)
```

Above we create an `engine` variable holding an instance of pyttsx3 initialized with sapi5. SAPI5 is Microsoft's default speech API. Next we set properties on the engine to choose the voice and the speaking rate: `voices[0]` is a male voice and `voices[1]` is a female voice, and `('rate', 150)` defines how fast or slow it will speak.

**Set up speak function: –** Now we are going to set up a `speak` function so that our program can talk.

```
def speak(audio):
    engine.say(audio)
    engine.runAndWait()
```

**Setup Speech Recognition: –** Now we have to set up speech recognition so that our program can understand and recognize our voice. As you know, we have already imported the speech recognition module; we will use it to set up a source from which the input voice will come.

```
def takeCommand():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        r.adjust_for_ambient_noise(source)
        r.energy_threshold = 1000
        print("Listening...")
        audio = r.listen(source)
    try:
        print("Recognizing...")
        query = r.recognize_google(audio, language='en-in')
        print(f"You Said:{query}\n")
    except Exception as e:
        print("Say Again Please...")
        return "None"
    return query
```

Above we create a function named `takeCommand()` and in it we make a variable `r` for the recognizer. We set up our microphone as the source, then call `adjust_for_ambient_noise` so that it can also work in loud or noisy rooms. Next we listen on `r`, which is an instance of the recognizer, and save the result as `audio`. Then we need a try/except block, because two things can happen: we might get an error, or we can successfully print what the user said. The code below shows this.
```
    try:
        print("Recognizing...")
        query = r.recognize_google(audio, language='en-in')
        print(f"You Said:{query}\n")
    except Exception as e:
        print("Say Again Please...")
        return "None"
    return query
```

Now that everything is in place, the next step is to call our function in the main block so that it runs whenever the script executes.

```
if __name__ == "__main__":
    while True:
        query = takeCommand().lower()
        if 'hello jarvis' in query:
            speak("Hello Sir")
        elif 'what is your name' in query:
            print("My Name is jarvis")
            speak("My Name is jarvis")
```

Here in the main block we make a variable `query` to take the command and compare it against specific words. Whenever our program finds a matching word spoken by the user that is in the command list, the matching branch is executed. As in the code above, if you say "hello jarvis" it will speak and print "Hello Sir". You can add as many elif blocks as you want to add new functionality.

**Complete Jarvis Source Code: –** Below is the complete source code of this project.
If you like my post, please take a moment to follow me on [instagram](https://instagram.com/sushant102004)

```
import wikipedia
import os
from sys import exec_prefix
from urllib.parse import quote
import pyttsx3
import speech_recognition as sr

engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)
engine.setProperty('rate', 150)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def takeCommand():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        r.adjust_for_ambient_noise(source)
        r.energy_threshold = 1000
        print("Listening...")
        audio = r.listen(source)
    try:
        print("Recognizing...")
        query = r.recognize_google(audio, language='en-in')
        print(f"You Said:{query}\n")
    except Exception as e:
        print("Say Again Please...")
        return "None"
    return query

if __name__ == "__main__":
    while True:
        query = takeCommand().lower()
        if 'hello jarvis' in query:
            speak("Hello Sir")
        elif 'what is your name' in query:
            print("My Name is jarvis")
            speak("My Name is jarvis")
```
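The if/elif chain in the main loop grows quickly as you add commands. A common alternative (my own sketch, not part of the original tutorial) is a dictionary that maps trigger phrases to handler functions:

```python
def greet():
    return "Hello Sir"

def introduce():
    return "My Name is jarvis"

# Map trigger phrases to the functions that handle them.
COMMANDS = {
    "hello jarvis": greet,
    "what is your name": introduce,
}

def dispatch(query: str) -> str:
    """Run the handler of the first trigger phrase contained in the
    query; fall back to a default reply when nothing matches."""
    for phrase, handler in COMMANDS.items():
        if phrase in query.lower():
            return handler()
    return "Say Again Please..."

print(dispatch("Hey, hello jarvis!"))  # Hello Sir
print(dispatch("WHAT IS YOUR NAME"))   # My Name is jarvis
```

In the real assistant, the returned string would be passed to `speak()`; adding a new command is then just one new dictionary entry instead of another elif branch.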
sushantdhiman2004
792,421
My Expectations at Zuri Internship Program
ZURI INTERNSHIP Background I am Atwine Nickson, a software developer with 3 years of playing with...
0
2021-08-15T12:23:44
https://dev.to/atwinenickson/my-expectations-at-zuri-internship-program-2763
## Background

I am Atwine Nickson, a software developer with 3 years of experience playing with code. Apart from UI development (frontend), I am a full-stack developer proficient with Python (Django and Flask). I am always looking for new challenges to help me improve my skills, and that is how I found the Zuri Internship Program. I applied, was selected, and here is my first task: writing this article.

## About Zuri Internship

Zuri internship is an engineering-as-a-service business that helps companies build remote teams quickly and cost-effectively, with 1,000+ software engineers working as full-time, embedded members of development teams at over 200 leading tech companies. Zuri got its first set of interns in 2018.

Follow this link to learn more about Zuri Internship: https://internship.zuri.team/

## Benefits of Zuri Internship

- Learning
- Connections
- Jobs

## Who can Apply

- You have prior coding or design skills/knowledge.
- You are a professional looking to connect with others.
- You are looking for a challenge.

## My Expectations

- The chance to learn valuable work skills and gain useful experience
- Sensible working hours
- A reference upon completion of the internship
- Meet new software engineers
- Work on real-world projects
- Access to the Zuri network composed of past finalists
- Get recommended to potential employers
- Access to job opportunities
- Occasional stipends depending on performance during the internship
## Tutorials

- Git tutorial: https://www.youtube.com/watch?v=RGOj5yH7evk
- HTML tutorial: https://www.youtube.com/watch?v=qz0aGYrrlhU
- JavaScript tutorial: https://www.youtube.com/watch?v=W6NZfCO5SIk
- Figma tutorial: https://www.youtube.com/watch?v=g6rQFP9zCAM

Congratulations, you have reached the end of this article. Thank you for your time. Feel free to leave a comment below; advice for improvement is also welcome, as this is my first article.
atwinenickson
792,468
HNGi8 x I4G Internship Goals
My name is Pius Osuji. I am a 400L Computer Science student at Imo State University, Owerri Imo State...
0
2021-08-15T14:08:24
https://dev.to/pius_osuji/hngi8-x-i4g-internship-goals-5agc
webdev
My name is Pius Osuji. I am a 400L Computer Science student at Imo State University, Owerri, Imo State, and an aspiring Frontend Developer. I am super excited to be part of this internship program for this year, organized by I4G. I have heard a lot about this program and how they help build and push those who make it to the finals to greater heights in the tech industry. I am ready and determined to be part of the finalists for this year's edition of the program.

## STRUCTURE OF THE INTERNSHIP ##

The internship runs for 3 months with 10 stages. Interns who don't pass the tasks given are evicted at each stage. Those who get to stage 10 are those with the highest grit. Passing through the program means your skills will be tested and improved.

## MY GOALS FOR THE INTERNSHIP ##

- **Learn new skills:** This is a great hands-on way for me to learn new skills and technologies that will be vital in my journey to becoming a Frontend Developer, and to improve my already acquired skill set, including my soft skills.

- **Exposure:** This is a great opportunity for gaining valuable work experience and exposure, letting me see what really goes on in the tech industry as well as its best practices.

- **Build my CV:** With an internship such as HNG on my CV, I am sure to be more competitive and noticed by job recruiters and companies or organizations.

- **Meet new contacts:** The internship only lasts for three months, but it can lead to long-term benefits, as you will get to meet all sorts of individuals in different positions and from different places around the world. Plus, your colleagues and supervisors can be references for job applications, scholarship applications, etc.

So at the end of this internship, by God's grace and with the help of HNG, I would like to be a proficient Frontend Developer and web master, get a good-paying job, and start my very own YouTube channel to help people who plan to venture into Frontend Development.
Click the link to learn more about HNG and its programs: *[https://zuri.team](https://zuri.team)*

## Below are tutorial links to some beginner-friendly courses that might be useful to anyone who wants to start their development journey today: ##

HTML crash course for absolute beginners: *https://youtu.be/UB1O30fR-EE*

Git tutorial for beginners: *https://youtu.be/8JJ101D3knE*

Figma in 40 minutes: *https://youtu.be/4W4LvJnNegA*

JavaScript tutorial for beginners: *https://youtu.be/W6NZfCO5SIk*
pius_osuji
792,543
Understanding WebAssembly better by learning WebAssembly-Text
This article tries to teach you the low level of WebAssembly using WebAssembly-Text
0
2021-08-15T14:52:27
https://dev.to/fabriciopashaj/understanding-webassembly-better-by-learning-webassembly-text-50bj
webassembly, wat, lowlevel
# Understanding WebAssembly better by learning WebAssembly Text

WebAssembly is a true revolution in tech, not just on the web; thanks to WASI and friends, it is becoming available everywhere. One of the best things WebAssembly offers is being a compilation target instead of just another programming language. This has the potential to help a lot of non-JS developers get involved with web development.

WebAssembly also has a text version called... You got it, WebAssembly Text, or WAT for short! (MDN docs [here](https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format)). It can be compiled to the binary format using [WABT](https://github.com/WebAssembly/wabt).

## Prerequisites (to follow along)

- You know how to use WABT to assemble WAT.
- You know how to run WebAssembly binaries.

## Understanding the Syntax

WAT offers two ways of writing code: the traditional assembly style

```wat
local.get 0
local.get 1
i32.add
```

and a more LISPy way (called S-expression format)

```wat
(i32.add (local.get 0) (local.get 1))
```

The assembler will spit out the same result from both of them, but the former shows more clearly how the instructions are placed in the binary. We will be using that style in this article. The most basic valid (albeit useless) WAT file has the contents below:

```wat
(module)
```

## How WebAssembly's stack works

A stack is nothing more than a LIFO (last in, first out) linear data structure. Imagine it as an array where you can only `push` and `pop` and can't access the items in any other way. WebAssembly's stack is no different, but it has a few features that make it cooler and safer. One of these is stack splitting/framing. The name may look scary, but it is simpler than it sounds. It just puts a mark at the position of the item at the top of the stack at the moment it is split.
The mark is the bottom of the new frame and it simply says "Hey, this is a stack of its own that this **block** of code here operates on. Only this block of code has access to it and nothing else. The new stack's lifetime is limited to the time that this block of code requires to be fully executed". We are going to call the stack that is split the parent frame, and the new stack that results from the split the child frame.

A **block** is the part between a `block`/`if`/`loop` instruction and an `end` instruction. Every **block** can have a result, which means that when the block's stack frame reaches the end of its lifetime, the last item is popped. Then that frame is destroyed (i.e. the mark is removed) and the popped item is pushed to the parent frame. That's how functions work too, but they can have parameters and can be executed at any time.

## Hello, world!

Well, sort of. WebAssembly Text files always start with the module definition, and everything else is put between the `module` word and the last parenthesis. Let's see how we can write a simple "Hello, world!" program in WAT.

```wat
(module
  (func $hello_world (param $lhs i32) (param $rhs i32) (result i32)
    local.get $lhs
    local.get $rhs
    i32.add)
  (export "helloWorld" (func $hello_world)))
```

Okay, you might be saying "What the hell is this? I thought this is a 'Hello, world!' example!". Well, the point is that WASM wasn't created to print strings and interact with APIs; its purpose is to help JavaScript handle heavy computations by providing an interface to fast, low-level instructions.

### But what does the code do?

- `i32` is one of the four primary types of WebAssembly; it's a 32-bit integer type.
- `func` declares a function.
- `$hello_world` is a compile-time name/label we give the function (we'll see more about that later).
- `(param $lhs i32)` and `(param $rhs i32)` say that this function accepts two parameters: the first one labeled $lhs for left-hand side (notice the `$`) with a type of `i32`, the second one labeled $rhs for right-hand side with a type of `i32`.
- `(result i32)` says that the function's return type is an `i32`.
- `local.get $lhs` pushes the value of the parameter labeled $lhs onto the stack.
- `local.get $rhs` does the same as above, but instead it pushes the value of $rhs.
- `i32.add` pops two values of type `i32` from the stack and pushes the result of their addition back onto the stack.
- `(export "helloWorld" (func $hello_world))` exports the function labeled `$hello_world` to the host with the name "helloWorld".

### Where is the `return` statement?

WebAssembly has a `return` instruction, but it is only used when you need to return immediately and stop executing the function any further. Otherwise, there is an implicit return at the end of the function that pops the last value on the stack and returns it.

### What does a "label" actually mean?

All function calls and parameter/local accesses are done by index. Labels are just compile-time annotations to make code easier to read, write and understand.

### Are `local.get` and `i32.add` the only instructions?

Of course not. The WebAssembly instruction set is *huge*. Just the MVP (minimal viable product) has around 120 instructions. Most of them start with the type, which can only be `i32`, `i64`, `f32`, `f64`, followed by a dot and the name of the operation. There are also instructions for other purposes, like control flow (decision making, branching/jumping and looping). A list of all the instructions and their explanation can be found [here](https://github.com/sunfishcode/wasm-reference-manual/blob/master/WebAssembly.md#instructions).
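Before moving on, the stack behaviour of `$hello_world` can be mimicked with a small JavaScript sketch. This is only an analogy (a plain array standing in for the value stack), not real WebAssembly:

```javascript
// A toy stack machine mimicking $hello_world: the array plays the role
// of the value stack, and each comment names the WAT instruction.
function helloWorld(lhs, rhs) {
  const locals = [lhs, rhs]; // the function's parameters, indexed 0 and 1
  const stack = [];

  stack.push(locals[0]); // local.get $lhs
  stack.push(locals[1]); // local.get $rhs

  // i32.add: pop two values, push their sum (| 0 wraps to 32 bits like i32)
  const b = stack.pop();
  const a = stack.pop();
  stack.push((a + b) | 0);

  // implicit return: the last value left on the stack is the result
  return stack.pop();
}

console.log(helloWorld(2, 40)); // prints 42
```

Notice that the order of operations matches the WAT listing one-to-one; that is exactly why the flat assembly style mirrors the binary so closely.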
## Writing something useful

The example below shows how you can write a function that calculates the distance between two points using the Pythagorean theorem.

```wat
(module
  (func $distance (param $x1 i32) (param $y1 i32)
        (param $x2 i32) (param $y2 i32)
        (result f64)
    local.get $x1
    local.get $x2
    i32.sub ;; calculate the X axis distance (a)
    call $square ;; (a ^ 2)
    local.get $y1
    local.get $y2
    i32.sub ;; calculate the Y axis distance (b)
    call $square ;; (b ^ 2)
    i32.add ;; (c^2 = a^2 + b^2)
    f64.convert_s/i32 ;; convert to f64 so we can take the square root
    f64.sqrt)
  (export "distance" (func $distance))
  (func $square (param i32) (result i32)
    local.get 0
    local.get 0
    i32.mul))
```

### Breaking down the code (again)

- `;;` starts a single-line comment.
- `i32.sub` pops two numbers of type `i32` from the stack and pushes back the result of their subtraction.
- `call $square` calls a function.
- `f64.convert_s/i32` pops an `i32` from the stack and pushes back its value as a signed number converted to an `f64` (if it had been `f64.convert_u/i32`, it would convert from the unsigned form). If you don't understand the difference between signed and unsigned numbers, I suggest you read [this](https://dev.to/aidiri/signed-vs-unsigned-bit-integers-what-does-it-mean-and-what-s-the-difference-41a3).
- `f64.sqrt` pops a number of type `f64` from the stack and pushes the number's square root.
- `i32.mul` pops two numbers of type `i32` from the stack and pushes back the result of their multiplication. (In the `$square` function we multiply a number by itself to square it.)

## Globals

Globals are just "variables" that every function can access. They can be exported, but can only be mutated if they are declared as mutable.
The code below shows a way they can be used:

```wat
(module
  (; declare a mutable global with type `i32` and initial value 0 ;)
  (global $counter (mut i32) (i32.const 0))
  (export "counter" (global $counter)) ;; export it with the name "counter"
  (func $countUp (result i32)
    global.get $counter ;; push the value of $counter onto the stack
    i32.const 1
    i32.add ;; increment the value by 1
    global.set $counter ;; assign the incremented value to $counter
    global.get $counter) ;; return the new incremented value
  (export "countUp" (func $countUp)))
```

## Decision making

Although WebAssembly is a low-level bytecode format, it supports higher-level concepts like if statements and loops. The code below shows a function that receives two `i32` parameters and returns the larger of them.

```wat
(module
  (func $largest (param $0 i32) (param $1 i32) (result i32)
    local.get $0 ;; push $0's value onto the stack
    local.get $1 ;; push $1's value onto the stack
    i32.gt_s ;; compare whether $0 is greater than $1
    if (result i32)
      local.get $0
    else
      local.get $1
    end)
  (export "largest" (func $largest)))
```

The first 3 instructions are simple enough to be explained with comments, but if you still don't understand what they do:

- `local.get $0` and `local.get $1` push the values of the parameters labeled `$0` and `$1`.
- `i32.gt_s` pops two values of type `i32` from the stack and pushes back the result of their comparison (whether the value of `param $0` is greater than the value of `param $1`): `1` if true, `0` if false. It has the `_s` suffix because it compares the numbers as signed ones (i.e. they can be negative).

### To return `$0` or to return `$1`

When an `if` instruction occurs, the last item on the stack (the condition) is popped. It must be an `i32`. If the condition is not `0`, the instructions inside the `if..else/end` block are executed, otherwise the `else..end` part is executed. If there isn't an `else`, nothing happens. You might notice that the if "statement" has a result.
`if` is a **block**, which means it is able to return a result.

### Select

For simple decisions like picking a number, `if` might be a bit overkill. There is also a `select` instruction, which works like a ternary operator. To use `select` instead of `if`, we would do it like this:

```wat
local.get $0
local.get $1 ;; push the values of $0 and $1 onto the stack for selection
local.get $0
local.get $1 ;; push the values of $0 and $1 onto the stack for comparison
i32.gt_s ;; compare the last two `i32` items on the stack
select
```

The `select` instruction pops 3 values from the stack. It decides which of the first two values to push back according to the third one (the condition): if the condition is not `0`, it pushes the first value, otherwise the second.

## Looping and branching

WebAssembly supports loops, but not the kind of loops you might be thinking of. Take this code for example:

```wat
(module
  (func $fac (param $0 i32) (result i32)
        (local $acc i32)
        (local $num i32)
    local.get $0
    local.tee $acc
    local.set $num
    block $outer
      loop $loop
        local.get $num
        i32.const 1
        i32.le_u ;; we check if $num is 1 or lower
        br_if $outer
        local.get $acc
        local.get $num
        i32.const 1
        i32.sub
        local.tee $num
        i32.mul
        local.set $acc
        br $loop
      end
    end
    local.get $acc)
  (export "factorial" (func $fac)))
```

This code shows how to write a function that finds the factorial of a number (1 or greater) using a loop. It would have been easier to use recursion, but the point here is to understand loops and branching.

### Breaking down the code

Most of the things in this code have already been explained. The only new things here are `(local $acc i32)`, `block`, `loop`, `br_if $outer` and `br $loop`.

- `(local $acc i32)` is similar to what higher-level languages call a local variable. It is accessed the same way as the parameters. The first local's index is 1 more than the last parameter's index.
- `block $outer` does nothing special; it just encapsulates the code between it and the corresponding `end` instruction.
It is used when you need to branch out at different levels.
- `loop $loop` is a special kind of **block**: if you branch on a loop, you don't jump to the `end`, you go back to the beginning of the loop. (Branches behave like `break` on a `block` and `if`, and like `continue` on a `loop`.)
- `br $loop` is an unconditional branch. The label operand is a `loop`, so it will jump to the top, where the `loop` instruction occurs. If you know C/C++, you know about jumping using `goto`, but with `goto` you can jump anywhere, from anywhere in the code. WebAssembly is more restrictive in that you can branch only outwards and by label/index. The innermost **block** has the smallest index (`0`) and the outermost has the largest. We do the branch so we can continue looping instead of dropping out.
- `br_if $outer` is a conditional branch/jump instruction. It pops the last item from the stack (the condition, which has to be an `i32`), and if it is different from `0`, it executes the branch; otherwise it doesn't.

## Linear memory

WebAssembly offers another way to store data besides the stack: the linear memory. It can be seen as a resizable JavaScript `TypedArray`. Its main purpose is to store complex and/or contiguous data. There are 14 load and 9 store instructions, and 2 other instructions for manipulating and getting its size. With what we have learned so far, let's implement a function that generates the fibonacci sequence.
```wat
(module
  (memory 1)
  (export "memory" (memory 0))
  (func $fib (param $length i32)
        (local $offset i32)
    i32.const 8
    local.set $offset ;; assign offset to 8 (see below)
    i32.const 0
    i32.const 1
    i32.store offset=4 ;; store 1 at address 0 with a static offset of 4
    block $block
      loop $loop
        local.get $offset
        i32.const 4
        i32.div_u ;; divide by 4 (read below)
        local.get $length
        i32.gt_u ;; compare with the requested length
        br_if $block ;; branch out once the requested length is reached
        local.get $offset ;; get the offset for storing
        local.get $offset
        ;; ---------
        i32.const 8
        i32.sub
        i32.load
        local.get $offset
        i32.const 4
        i32.sub
        i32.load
        i32.add ;; load the two previous numbers from memory and add them
        ;; ---------
        i32.store ;; store the number at the current offset
        local.get $offset
        i32.const 4
        i32.add
        local.set $offset ;; increment the offset by 4
        br $loop
      end
    end)
  (export "fibonacci" (func $fib)))
```

### Explaining a few things

The code above uses almost all the knowledge that you have gained while reading this article. A few things to note:

- `i32.store offset=4` stores an `i32`. It pops two items from the stack: the address/offset (pushed first) and the value that will be stored (pushed second). `offset=4` is a static offset, which means it is added to the offset popped from the stack, without you needing an explicit `i32.const 4` and `i32.add` on the offset. In this example, the static offset was only used for demonstrative purposes.
- We use byte offsets instead of indices, because we don't have fancy things like arrays. Each `i32` takes 4 bytes, so we have to increment the offset by `4` instead of `1`.
- The offset variable points at the address in memory where the sum of the two previous numbers will be stored; that's why it is initially 8: two `i32`s take 8 bytes of memory.
- We divide the offset by 4 when comparing it, to treat it like an "index".

## The end.

I hope that this article gave you a deeper understanding of how WebAssembly works.
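As a closing check of the linear-memory example, its byte-offset arithmetic can be mirrored in JavaScript. This sketch is my own analogy (not the compiled module): an `ArrayBuffer` stands in for one page of linear memory, and a `DataView` plays the role of the load/store instructions:

```javascript
// A JavaScript analogy of the fibonacci example: the ArrayBuffer stands
// in for one 64 KiB page of linear memory, and every address is a byte
// offset, so each i32 slot is 4 bytes apart (little-endian, like WASM).
function fibonacci(length) {
  const memory = new ArrayBuffer(65536);
  const view = new DataView(memory);

  view.setInt32(4, 1, true); // i32.store offset=4 (address 0 keeps its initial 0)
  let offset = 8;            // first slot to fill, in bytes

  // Loop while offset / 4 (the "index") has not passed the requested length
  while (offset / 4 <= length) {
    const a = view.getInt32(offset - 8, true); // i32.load, two slots back
    const b = view.getInt32(offset - 4, true); // i32.load, one slot back
    view.setInt32(offset, a + b, true);        // i32.store at the current offset
    offset += 4;                               // advance by one i32
  }
  return view;
}

const mem = fibonacci(8);
console.log(mem.getInt32(28, true)); // slot 7 (byte offset 7 * 4) holds 13
```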
A big thanks to [rom](https://github.com/romdotdog) who edited and corrected my article a few times.
fabriciopashaj
792,603
[TECH] How to securely handle app settings on iOS/Android in Unity 🔑
Introduction: I needed to handle user data securely on iOS/Android, and after some research found that Android provides EncryptedSharedPreferences...
0
2021-08-15T15:46:39
https://zenn.dev/nikaera/articles/unity-ios-android-secret-manager
unity3d, ios, android
# Introduction

I needed to handle user data securely on iOS/Android. After some research, I learned that Android provides [EncryptedSharedPreferences](https://developer.android.com/reference/androidx/security/crypto/EncryptedSharedPreferences). iOS provides [Keychain Services](https://developer.apple.com/documentation/security/keychain_services).

This time I needed to implement secure storage of settings on the iOS/Android platforms in Unity, so I wrote native plugins that expose these APIs to Unity. Since this kind of requirement will probably come up again, I decided to record the steps and details in this article.

The code introduced in this article has been uploaded here:

https://github.com/nikaera/Unity-iOS-Android-SecretManager-Sample

# Environment

- MacBook Air (M1, 2020)
- Unity 2020.3.15f2
- Android 6.0 or later
  - versions where [EncryptedSharedPreferences](https://developer.android.com/reference/androidx/security/crypto/EncryptedSharedPreferences) can be used

# Creating the Android native plugin

On Android, we first use the [External Dependency Manager for Unity](https://github.com/googlesamples/unity-jar-resolver) to make `EncryptedSharedPreferences` available to Unity's Android native plugin.

## (Addendum) Installing the library with Gradle

As [shiena](https://twitter.com/shiena) kindly pointed out, the library can also be pulled in easily with Gradle, as shown in [this article](https://zenn.dev/shiena/articles/unity-sqlcipher#gradle%E3%82%92%E5%88%A9%E7%94%A8).

Refer to that article for the exact steps; when pulling in the external library via Gradle, `Assets/Plugins/Android/mainTemplate.gradle` and `Assets/Plugins/Android/gradleTemplate.properties` look like this:

```diff gradle
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
+   implementation 'androidx.security:security-crypto:1.1.0-alpha03'
**DEPS**}

android {
```

```diff properties
org.gradle.jvmargs=-Xmx**JVM_HEAP_SIZE**M
org.gradle.parallel=true
android.enableR8=**MINIFY_WITH_R_EIGHT**
+ android.useAndroidX=true
unityStreamingAssets=.unity3d**STREAMING_ASSETS**
**ADDITIONAL_PROPERTIES**
```

**If you use the Gradle approach, you can skip the next section, "Installing the required packages with External Dependency Manager for Unity", and continue from the "Adding the native code that uses EncryptedSharedPreferences" step.**

With the `External Dependency Manager for Unity` approach, library conflicts can occur inside the consuming project; the Gradle approach avoids this.[^1]

[^1]: Conversely, an advantage of the `External Dependency Manager for Unity` approach is that when you distribute your code as a library (e.g. as a UnityPackage), you can bundle the external packages it needs in order to run. (You do of course have to pay attention to the licenses...)

## Installing the required packages with External Dependency Manager for Unity

To import the `External Dependency Manager for Unity`, download the [unitypackage](https://github.com/googlesamples/unity-jar-resolver/blob/master/external-dependency-manager-latest.unitypackage), then **open the Unity project into which you want to introduce `EncryptedSharedPreferences` and click the `unitypackage` to import the `External Dependency Manager for Unity` into the project.**

![Click the downloaded unitypackage to import External Dependency Manager for Unity into the Unity project](https://i.gyazo.com/1af7cdf4d7d5749e59e151eef1ca5493.png)

In the Unity project's `Build Settings`, switch the platform to Android. Either answer to the `Enable Android Auto-resolution?` dialog is fine.[^2]

[^2]: This choice controls whether package dependencies are resolved automatically. Since this article runs Resolve explicitly, choosing `Disable` or `Enable` makes no difference for the steps here.

As described in the [README](https://github.com/googlesamples/unity-jar-resolver#android-resolver-usage), packages are managed with the External Dependency Manager for Unity **by placing a `*Dependencies.xml` file in an `Editor` folder.** To introduce `EncryptedSharedPreferences`, place the following xml file in the `Editor` folder:

```xml
<?xml version="1.0" encoding="utf-8"?>
<dependencies>
  <androidPackages>
    <!-- This article uses version 1.1.0-alpha03 -->
    <androidPackage spec="androidx.security:security-crypto:1.1.0-alpha03">
      <androidSdkPackageIds>
        <!-- Installed from Google's Maven repository,
             so specify extra-google-m2repository -->
        <androidSdkPackageId>extra-google-m2repository</androidSdkPackageId>
      </androidSdkPackageIds>
    </androidPackage>
  </androidPackages>
</dependencies>
```

Then, **select `Assets -> External Dependency Manager -> Android Resolver -> Force Resolve` from the Unity menu to automatically download the packages required by `EncryptedSharedPreferences`, based on `Assets/Editor/AndroidPluginDependencies.xml`, into the `Assets/Plugins/Android` folder.**

![1. Select Assets -> External Dependency Manager -> Android Resolver -> Force Resolve from the Unity menu](https://i.gyazo.com/df394e15149e54dae3e9a81848512ee9.png)
**1. Select `Assets -> External Dependency Manager -> Android Resolver -> Force Resolve` from the Unity menu**

![2. On success, the libraries required by EncryptedSharedPreferences are placed in the Assets/Plugins/Android folder](https://i.gyazo.com/f6d2ec95ef9c2afdc857fecef2b165e5.png)
**2. On success, the libraries required by `EncryptedSharedPreferences` are placed in the `Assets/Plugins/Android` folder**

From here, all that is left is to place the Android native code in the `Assets/Plugins/Android` folder and make it callable from Unity.

## Adding the native code that uses EncryptedSharedPreferences

Place the following Android native code at `Assets/Plugins/Android/SecretManager.java`:

```java
package com.nikaera;

import com.unity3d.player.UnityPlayerActivity;

import java.lang.Exception;

// Thanks to the External Dependency Manager for Unity, the required
// jars are present and EncryptedSharedPreferences becomes available
import androidx.security.crypto.EncryptedSharedPreferences;
import androidx.security.crypto.MasterKey;

import android.content.Context;
import android.content.SharedPreferences;
import android.content.SharedPreferences.Editor;

import android.os.Bundle;
import android.util.Log;

public class SecretManager {

    private SharedPreferences sharedPreferences;

    public SecretManager(Context context) {
        try {
            // Create, with default settings, the wrapper class for the
            // encryption key that EncryptedSharedPreferences uses
            // when storing values
            MasterKey masterKey = new MasterKey.Builder(context)
                .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
                .build();

            // Create the EncryptedSharedPreferences instance,
            // passing the masterKey created above
            this.sharedPreferences = EncryptedSharedPreferences.create(
                context,
                context.getPackageName(),
                masterKey,
                EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
                EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
            );
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
    * Stores a value under the given key.
    * @param key the key to store the value under
    * @param value the value to store
    * @return boolean whether the value was stored successfully
    */
    public boolean put(String key, String value) {
        SharedPreferences.Editor editor = sharedPreferences.edit();
        editor.putString(key, value);
        return editor.commit();
    }

    /**
    * Returns a value previously stored with <c>put</c>.
    * @param key the key of the value to fetch
    * @return String the value bound to the key, or null if it does not exist
    */
    public String get(String key) {
        return sharedPreferences.getString(key, null);
    }

    /**
    * Deletes the value stored under the given key.
    * @param key the key of the value to delete
    * @return boolean whether the value was deleted successfully
    */
    public boolean delete(String key) {
        SharedPreferences.Editor editor = sharedPreferences.edit();
        editor.remove(key);
        return editor.commit();
    }
}
```

Next, create a C# class that makes the code above callable from Unity scripts. In this article the file is placed at `Assets/Scripts/EncryptedSharedPreferences.cs`:

```csharp
using UnityEngine;

/// <summary>
/// The native code used here lives in <c>Assets/Plugins/Android/SecretManager.java</c>
/// </summary>
/// <remarks>
/// <a href="https://developer.android.com/reference/androidx/security/crypto/EncryptedSharedPreferences">EncryptedSharedPreferences</a>
/// </remarks>
class EncryptedSharedPreferences
{

    private readonly AndroidJavaObject _secretManager;

    public EncryptedSharedPreferences()
    {
        // Create an instance of com.nikaera.SecretManager in the constructor
        var activity = new AndroidJavaClass("com.unity3d.player.UnityPlayer")
            .GetStatic<AndroidJavaObject>("currentActivity");
        var context = activity.Call<AndroidJavaObject>("getApplicationContext");
        _secretManager = new AndroidJavaObject("com.nikaera.SecretManager", context);
    }

    public bool Put(string key, string value)
    {
        return _secretManager.Call<bool>("put", key, value);
    }

    public string Get(string key)
    {
        return _secretManager.Call<string>("get", key);
    }

    public bool Delete(string key)
    {
        return _secretManager.Call<bool>("delete", key);
    }
}
```

You can then store and retrieve settings with code like the following:

```csharp
// ...
var _sharedPreferences = new EncryptedSharedPreferences();

// Store the value "nikaera" under the key "name"
_sharedPreferences.Put("name", "nikaera");

// Fetch the value stored under the key "name"
var name = _sharedPreferences.Get("name");

// Prints "nikaera"
Debug.Log(name);

// Delete the value stored under the key "name"
_sharedPreferences.Delete("name");
// ...
```

# Creating the iOS native plugin

On iOS we don't use any external libraries, so the `External Dependency Manager for Unity` isn't needed. **Ideally we would pull in a trusted external framework and use Swift, but this time I'll write the native plugin in Objective-C.**[^3]

[^3]: [CocoaPods](https://cocoapods.org/) also appears to be supported, so pulling in external libraries on iOS should be just as easy as on Android. I'd like to use [KeychainAccess](https://github.com/kishikawakatsumi/KeychainAccess), for example.

## Adding the native code that uses Keychain Services

Place the following iOS native code at `Assets/Plugins/iOS/KeychainService.mm`:

```objc
// Use the Security framework to access Keychain Services
#import <Security/Security.h>

extern "C" {

    // Stores a value under the given key
    // - param
    //   - dataType: the key to store the value under
    //   - value: the value to store
    // - return
    //   - the status code of the store operation (non-zero means failure)
    int addItem(const char *dataType, const char *value) {
        NSMutableDictionary* attributes = nil;
        NSMutableDictionary* query = [NSMutableDictionary dictionary];
        NSData* data = [[NSString stringWithCString:value encoding:NSUTF8StringEncoding] dataUsingEncoding:NSUTF8StringEncoding];

        [query setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass];
        [query setObject:(id)[NSString stringWithCString:dataType encoding:NSUTF8StringEncoding] forKey:(id)kSecAttrAccount];

        OSStatus err = SecItemCopyMatching((CFDictionaryRef)query, NULL);
        if (err == noErr) {
            attributes = [NSMutableDictionary dictionary];
            [attributes setObject:data forKey:(id)kSecValueData];
            [attributes setObject:[NSDate date] forKey:(id)kSecAttrModificationDate];

            err = SecItemUpdate((CFDictionaryRef)query, (CFDictionaryRef)attributes);
            return (int)err;
        } else if (err == errSecItemNotFound) {
            attributes = [NSMutableDictionary dictionary];
            [attributes setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass];
            [attributes setObject:(id)[NSString stringWithCString:dataType encoding:NSUTF8StringEncoding] forKey:(id)kSecAttrAccount];
            [attributes setObject:data forKey:(id)kSecValueData];
            [attributes setObject:[NSDate date] forKey:(id)kSecAttrCreationDate];
            [attributes setObject:[NSDate date] forKey:(id)kSecAttrModificationDate];
            err = SecItemAdd((CFDictionaryRef)attributes, NULL);
            return (int)err;
        } else {
            return (int)err;
        }
    }

    // Returns the value stored under the given key
    // - param
    //   - dataType: the key of the value to fetch
    // - return
    //   - the value bound to the key, or NULL if it does not exist
    char* getItem(const char *dataType) {
        NSMutableDictionary* query = [NSMutableDictionary dictionary];
        [query setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass];
        [query setObject:(id)[NSString stringWithCString:dataType encoding:NSUTF8StringEncoding] forKey:(id)kSecAttrAccount];
        [query setObject:(id)kCFBooleanTrue forKey:(id)kSecReturnData];

        CFDataRef cfresult = NULL;
        OSStatus err = SecItemCopyMatching((CFDictionaryRef)query, (CFTypeRef*)&cfresult);
        if (err == noErr) {
            NSData* passwordData = (__bridge_transfer NSData *)cfresult;
            const char* value = [[[NSString alloc] initWithData:passwordData encoding:NSUTF8StringEncoding] UTF8String];
            char *str = strdup(value);
            return str;
        } else {
            return NULL;
        }
    }

    // Deletes the value stored under the given key
    // - param
    //   - dataType: the key of the value to delete
    // - return
    //   - the status code of the delete operation (non-zero means failure)
    int deleteItem(const char *dataType) {
        NSMutableDictionary* query = [NSMutableDictionary dictionary];
        [query setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass];
        [query setObject:(id)[NSString stringWithCString:dataType encoding:NSUTF8StringEncoding] forKey:(id)kSecAttrAccount];

        OSStatus err = SecItemDelete((CFDictionaryRef)query);
        if (err == noErr) {
            return 0;
        } else {
            return (int)err;
        }
    }
}
```

`Keychain Services` relies on the `Security` framework, so **you need to add a dependency on the `Security` framework for `KeychainService.mm`.**

![Enable use of the Security framework in KeychainService.mm](https://i.gyazo.com/ba82aaced24b83b37bf8c63e1ee7142f.png)
**Enable use of the `Security` framework in `KeychainService.mm`**

Next, create a C# class that makes the code above callable from Unity scripts. In this article the file is placed at `Assets/Scripts/KeychainService.cs`:

```csharp
using System.Runtime.InteropServices;

/// <summary>
/// The implementation lives in <c>Assets/Plugins/iOS/KeychainService.mm</c>
/// </summary>
/// <remarks>
/// <a href="https://developer.apple.com/documentation/security/keychain_services">Keychain Services</a>
/// </remarks>
class KeychainService
{

#if UNITY_IOS
    [DllImport("__Internal")]
    private static extern int addItem(string dataType, string value);

    [DllImport("__Internal")]
    private static extern string getItem(string dataType);

    [DllImport("__Internal")]
    private static extern int deleteItem(string dataType);
#endif

    public bool Put(string key, string value)
    {
#if UNITY_IOS
        // A returned status of 0 means success
        return addItem(key, value) == 0;
#else
        return false;
#endif
    }

    public string Get(string key)
    {
#if UNITY_IOS
        return getItem(key);
#else
        return null;
#endif
    }

    public bool Delete(string key)
    {
#if UNITY_IOS
        // A returned status of 0 means success
        return deleteItem(key) == 0;
#else
        return false;
#endif
    }
}
```

You can then store and retrieve settings with code like the following:

```csharp
// ...
var _keychainService = new KeychainService();

// Store the value "nikaera" under the key "name"
_keychainService.Put("name", "nikaera");

// Fetch the value stored under the key "name"
var name = _keychainService.Get("name");

// Prints "nikaera"
Debug.Log(name);

// Delete the value stored under the key "name"
_keychainService.Delete("name");
// ...
```

# (Aside) Unifying the iOS/Android behavior behind an interface

As things stand, we would have to rewrite code every time the platform changes, so let's unify the two implementations behind an interface.

```csharp
public interface ISecretManager
{
    /// <summary>
    /// Stores a value under the given key.
    /// </summary>
    /// <param name="key">key</param>
    /// <param name="value">value</param>
    /// <returns>whether the value was stored successfully</returns>
    bool Put(string key, string value);

    /// <summary>
    /// Returns the value stored under the given key.
    /// </summary>
    /// <param name="key">key</param>
    /// <returns>the value stored under the key, or null if it does not exist</returns>
    string Get(string key);

    /// <summary>
    /// Deletes the value stored under the given key.
    /// </summary>
    /// <param name="key">key</param>
    /// <returns>whether the value was deleted successfully</returns>
    bool Delete(string key);
}
```

Then bind `Assets/Scripts/EncryptedSharedPreferences.cs` and `Assets/Scripts/KeychainService.cs` to the `ISecretManager` interface as follows:

```csharp
using UnityEngine;

/// <summary>
/// The native code used here lives in <c>Assets/Plugins/Android/SecretManager.java</c>
/// </summary>
/// <remarks>
/// <a href="https://developer.android.com/reference/androidx/security/crypto/EncryptedSharedPreferences">EncryptedSharedPreferences</a>
/// </remarks>
class EncryptedSharedPreferences : ISecretManager
{

    private readonly AndroidJavaObject _secretManager;

    public EncryptedSharedPreferences()
    {
        var activity = new AndroidJavaClass("com.unity3d.player.UnityPlayer")
            .GetStatic<AndroidJavaObject>("currentActivity");
        var context = activity.Call<AndroidJavaObject>("getApplicationContext");
        _secretManager = new AndroidJavaObject("com.nikaera.SecretManager", context);
    }

    #region ISecretManager

    public bool Put(string key, string value)
    {
        return _secretManager.Call<bool>("put", key, value);
    }

    public string Get(string key)
    {
        return _secretManager.Call<string>("get", key);
    }

    public bool Delete(string key)
    {
        return _secretManager.Call<bool>("delete", key);
    }

    #endregion
}
```

```csharp
using System.Runtime.InteropServices;

/// <summary>
/// The implementation lives in <c>Assets/Plugins/iOS/KeychainService.mm</c>
/// </summary>
/// <remarks>
/// <a href="https://developer.apple.com/documentation/security/keychain_services">Keychain Services</a>
/// </remarks>
class KeychainService : ISecretManager
{

#if UNITY_IOS
    [DllImport("__Internal")]
    private static extern int addItem(string dataType, string value);

    [DllImport("__Internal")]
    private static extern string getItem(string dataType);

    [DllImport("__Internal")]
    private static extern int deleteItem(string dataType);
#endif

    // Call the functions defined in KeychainService.mm
    #region ISecretManager

    public bool Put(string key, string value)
    {
#if UNITY_IOS
        return addItem(key, value) == 0;
#else
        return false;
#endif
    }

    public string Get(string key)
    {
#if UNITY_IOS
        return getItem(key);
#else
        return null;
#endif
    }

    public bool Delete(string key)
    {
#if UNITY_IOS
        return deleteItem(key) == 0;
#else
        return false;
#endif
    }

    #endregion
}
```

Finally, create a `SecretManager` class that wraps the above appropriately:

```csharp
using UnityEngine;

/// <summary>
/// <em>PlayerPrefs is used only when running in the Editor</em>
/// </summary>
/// <remarks><see cref="KeychainService" />, <see cref="EncryptedSharedPreferences" /></remarks>
public static class SecretManager
{

#if UNITY_EDITOR
#elif UNITY_ANDROID
    private static ISecretManager _instance = new EncryptedSharedPreferences();
#elif UNITY_IOS
    private static ISecretManager _instance = new KeychainService();
#endif

    public static bool Put(string key, string value)
    {
#if UNITY_EDITOR
        PlayerPrefs.SetString(key, value);
        PlayerPrefs.Save();
        return true;
#elif UNITY_IOS || UNITY_ANDROID
        return _instance.Put(key, value);
#else
        Debug.Log("Not Implemented.");
        return false;
#endif
    }

    public static string Get(string key)
    {
#if UNITY_EDITOR
        return PlayerPrefs.GetString(key);
#elif UNITY_IOS || UNITY_ANDROID
        return _instance.Get(key);
#else
        Debug.Log("Not Implemented.");
        return null;
#endif
    }

    public static bool Delete(string key)
    {
#if UNITY_EDITOR
        PlayerPrefs.DeleteKey(key);
        PlayerPrefs.Save();
        return true;
#elif UNITY_IOS || UNITY_ANDROID
        return _instance.Delete(key);
#else
        Debug.Log("Not Implemented.");
        return false;
#endif
    }
}
```

Now you can store and retrieve settings as shown below without worrying about per-platform implementation differences. **If you want to support a platform other than iOS/Android, you can easily add it by combining [platform dependent compilation](https://docs.unity3d.com/ja/2021.1/Manual/PlatformDependentCompilation.html) with a new `ISecretManager` implementation class.**

```csharp
// ...
// Store the value "nikaera" under the key "name"
SecretManager.Put("name", "nikaera");

// Fetch the value stored under the key "name"
var name = SecretManager.Get("name");

// Prints "nikaera"
Debug.Log(name);

// Delete the value stored under the key "name"
SecretManager.Delete("name");
// ...
```

# Closing

In this article I summarized how to handle settings securely on iOS/Android. In practice, the `Keychain Services` side is painful to implement directly, so a setup that pulls in an external library such as [KeychainAccess](https://github.com/kishikawakatsumi/KeychainAccess), e.g. via the `External Dependency Manager for Unity`, is probably better.

If anything in this article is wrong, or if the implementation turns out not to be actually secure, I'd appreciate it if you pointed it out in the comments.

# References

- [Android デベロッパー \| Android Developers](https://developer.android.com/topic/security/data?hl=ja)
- [EncryptedSharedPreferences \| Android デベロッパー \| Android Developers](https://developer.android.com/reference/androidx/security/crypto/EncryptedSharedPreferences?hl=ja)
- [Keychain Services \| Apple Developer Documentation](https://developer.apple.com/documentation/security/keychain_services)
- [SharedPreferencesを自前で難読化するのはもう古い?これからはEncryptedSharedPrefenrecesを使おう \- Qiita](https://qiita.com/masaki_shoji/items/6c512c7ebb30a13cda1d)
- [iOSのキーチェーンについて \- Qiita](https://qiita.com/sachiko-kame/items/261d42c57207e4b7002a)
- [UnityでIOSにセキュアに値を保存するにはKeyChainを使おう \- Qiita](https://qiita.com/nyhk-oi/items/189236d0627d43e7d658)
- [googlesamples/unity\-jar\-resolver: Unity plugin which resolves Android & iOS dependencies and performs version management](https://github.com/googlesamples/unity-jar-resolver)
nikaera
792,656
Can anyone take me along on a C++/Python/JS project ... ? I want to actually learn things ... please ...
just a mail ... sarkar99ratul@gmail.com
0
2021-08-15T17:01:29
https://dev.to/noobdev/can-anyone-take-me-with-him-her-any-c-python-js-project-want-to-learn-the-things-actually-please-4pci
python, javascript, cpp
just a mail ... sarkar99ratul@gmail.com
noobdev
792,696
Frontend Mentor - Order Summary Component
Order Summary Component design from the website Frontend...
0
2021-08-15T18:41:49
https://dev.to/aituos/frontend-mentor-order-summary-component-3ffi
frontendmentor, webdev, css
![The finished design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38wfqn7195baj1z4m08j.jpg) Order Summary Component design from the website Frontend Mentor. https://www.frontendmentor.io/challenges/order-summary-component-QlPmajDUj You can see my finished version here: [Github repo](https://github.com/Aituos/FM--Order-Summary-Component) | [Live version](https://aituos.github.io/FM--Order-Summary-Component/) This design wasn't that difficult to create, but it was definitely fun. The only problem I had was with the background, more specifically I didn't know how to position it to look exactly like the original design. So I looked at some of its properties - `background-position` and `background-size` - more closely. I'm used to setting the position to `center`, and the size to `cover`, because it always did what I wanted :laughing:. Turns out it's possible to fine-tune the background's position. You can read more about it here: https://developer.mozilla.org/en-US/docs/Web/CSS/background-position In the end, I set the two properties like this: `background-position: top;` `background-size: 100%;` And it worked like a charm. I think. One more thing I wanted to mention is that I recently found out you can set `aria-hidden="true"` on any HTML element that you want to hide from screen readers, such as decorative images, so I did just that. If you use it on images, you still have to include the alt text, but set it to an empty string: `alt=""`
aituos
792,700
Backend shorts - Use database transactions
Database transactions have many use cases, but in this post we will be only looking at the simplest...
14,128
2021-08-15T19:38:41
https://dev.to/hbgl/backend-shorts-use-database-transactions-36fj
webdev, backend, laravel, shorts
Database transactions have many use cases, but in this post we will only be looking at the simplest and most common one: all-or-nothing semantics, aka atomicity. If you issue **multiple related** database queries that modify data (INSERT, UPDATE, DELETE), then you should most likely use a transaction. Here is a quick example using Laravel: ```php function submitOrder(Request $request) { // .... $order = new Order($someOrderData); $payment = new Payment($somePaymentData); DB::transaction(function () use ($order, $payment) { $order->save(); $payment->save(); }); // ... } ``` Maybe for a better understanding, here is what is going on: ```php function submitOrder(Request $request) { // .... // YOU: Hey DB, I want to save a bunch of data. DB::transaction(function () use ($order, $payment) { // DB: Ok, tell me everything you want to save. // YOU: This order right here. $order->save(); // YOU: And this payment right there. $payment->save(); // YOU: And that's all. Just those two. }); // DB: Alright, order and payment have been saved. // ... } ``` By wrapping the `save()` calls in a `DB::transaction`, we make sure that either all models are saved or none of them. Or said differently, our data is always in a consistent state, and keeping it that way is one of the primary tasks of a backend developer. There will never be an order without a payment. ## What could go wrong? Let's play through the scenario in which we did not use a transaction. Say the order was successfully saved, but just when calling `$payment->save()` the DB goes offline, or the PHP process times out, or it crashes, or the whole machine crashes. You now have an order without a payment, which is probably not something that your application was designed to handle correctly. The consequences will depend on the exact circumstances, but it is not something that you want to get an angry phone call at 8pm on a Sunday for.
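The atomicity guarantee in the examples above comes from the database itself, not from Laravel, so the same all-or-nothing behavior can be sketched with any driver. Here is a minimal illustration using Python's built-in `sqlite3` module (the table shapes and the simulated mid-request crash are invented for this sketch):

```python
import sqlite3

# In-memory database with the same order/payment shape as the Laravel example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, order_id INTEGER NOT NULL)")
conn.commit()

def submit_order(conn, total, fail_on_payment=False):
    """Save an order and its payment atomically: both rows are written, or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on exception
            cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
            if fail_on_payment:
                raise RuntimeError("DB went away mid-request")
            conn.execute("INSERT INTO payments (order_id) VALUES (?)", (cur.lastrowid,))
        return True
    except RuntimeError:
        return False

submit_order(conn, 9.99)                        # succeeds: one order, one payment
submit_order(conn, 5.00, fail_on_payment=True)  # fails: the order insert is rolled back

orders = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
payments = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
print(orders, payments)  # prints "1 1": no orphaned order survives the failed request
```

Without the `with conn:` block, the first `INSERT` would persist on its own and a failed request would leave exactly the orphaned order described above.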
hbgl
792,786
The HNG internship: My goals and aspirations
"You learn so much from taking chances, whether they work out or not. Either way, you can grow from...
0
2021-08-15T19:51:07
https://dev.to/dcwhitesnake/the-hng-internship-my-goals-and-aspirations-4pl7
internship, programming, hng, zuri
"*You learn so much from taking chances, whether they work out or not. Either way, you can grow from the experience and become stronger and smarter.*" ~ John Legend For the past 4 months, I have been asking myself, "what next?" Well, as luck would have it, that question got answered by a post I saw while walking the streets of Twitter, [HNG internship](https://internship.zuri.team/). An 8-week long, fast-paced deathmatch, giving people from different creative backgrounds a foot in the door into the professional environment and most importantly, a t-shirt as a badge of honour to really (in Nigerian lingo) "loud it". Let me introduce myself. My name is David Okeke, a software developer and an undergraduate of the Systems engineering department of the University of Lagos. Although I started programming in 2018, first with the [C# programming language](https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/introduction), succeeded by [Git and GitHub](https://www.youtube.com/watch?v=SWYqp7iY_Tc), [HTML](https://www.youtube.com/watch?v=UB1O30fR-EE), [Python](https://www.youtube.com/watch?v=rfscVS0vtbw), [CSS](https://www.freecodecamp.org/learn/responsive-web-design/) and [Figma](https://www.youtube.com/watch?v=jk1T0CdLxwU) in chronological order, I haven't built enough projects to develop a rock-solid portfolio. My goal is to create decent projects and strong relationships in the course of this internship. The first few days in have been eye-opening and a tad bit overwhelming, plus having to put myself out there and scour the channel by the hour for important information. The benefits however have outweighed the discomfort, such as the opportunity to increase my social network, access to resources that brighten my blind spots and premium access to mentors for free.
dcwhitesnake
792,954
What is Cloud native?
A post by jmbharathram
0
2021-08-15T21:44:28
https://dev.to/jmbharathram/what-is-cloud-native-1gmn
kubernetes, cloudnative, cloud
{% youtube PTnVHDXJ-sk %}
jmbharathram
792,965
My tech journey so far (HNG8)
My journey into tech started on twitter, when I saw a post from Zuri internship, offering to train...
0
2021-08-15T22:05:05
https://dev.to/toogood208/my-tech-journey-so-far-hng8-479m
flutter, kotlin, android
My journey into tech started on Twitter, when I saw a post from Zuri internship, offering to train people transitioning into tech for the first time. I registered for the training with a lot of expectations. I chose mobile development because I have always been curious about how mobile apps work. After 3 months of rigorous training, I learnt to: • Create UI with XML, • Work with RecyclerViews, • Use MVVM design patterns, • Persist data with Room DB, • Consume APIs with Retrofit, I have also built wonderful projects which include: • A sales kit for the Ajocard sales team, • An app that generates random phone numbers for cold calling, • An app that compares prices of products from two different vendors (Zuri final project) I am currently learning Flutter and I am fascinated; more projects will be rolling in. I want to send a special shout out to sammybloom and ehmagbugo of the Zuri team, you guys took out time to teach us about mobile development with Kotlin. I am now in the HNG internship. My expectation is that at the end of the 8 weeks of training, I will start practising as a mobile developer. I hope to build a lot of projects and also share my progress along the way. In a few years, I want to be the grand commander of the android battalion. If you are a beginner like me and want to start your journey, I would advise the following: • Join the Zuri internship: https://internship.zuri.team/ • Learn Figma: https://trydesignlab.com/figma-101-course/introduction-to-figma/ • Learn Kotlin, Google codelab is a good place to start: https://developer.android.com/courses/kotlin-android-fundamentals/toc • Learn Git: https://opensource.com/article/18/1/step-step-guide-git • Most importantly, follow me on twitter: @TooGood208. Cheers and see you at the top Ps: if you are in Nigeria check out this cool site for cars: https://cars.ng/
toogood208
792,971
MY GOALS FOR HNG INTERNSHIP
The HNG internship is a 3-month remote internship designed to find and develop the most talented...
0
2021-08-15T22:26:52
https://dev.to/pertrick/my-goals-for-hng-internship-1oh
The HNG internship is a 3-month remote internship designed to find and develop the most talented software developers, in which everyone is welcome to participate (there is no entrance exam). Interested interns can access the internship using their laptops. Tasks are given to interns weekly. Interns who complete the tasks advance forward. The intern coders are introduced to complex programming frameworks, and get to work on real-world software. The finalists are connected to the best companies in the tech ecosystem and get full-time jobs and contracts immediately. My goals for the HNG internship include: 1. To be driven to harness my skills and potentials. 2. Getting to work with tasks focused with real life experience. 3. Connection and Networking with fellow Interns. 4. Learn as much as possible from the available Tutorials and resources. Tutorials and learning resources Figma tutorial https://m.youtube.com/watch?v=FTFaQWZBqQ8 Github tutorial https://m.youtube.com/watch?v=SWYqp7iY_Tc Html and Css tutorials https://m.youtube.com/watch?v=vQWlgd7hV4A Php tutorials https://m.youtube.com/watch?v=2eebptXfEvw HNG link: https://internship.zuri.team/
pertrick
792,975
Tracking The Flow of Information in React.js
A major advantage of React is that it facilitates the overall process of writing components which can...
0
2021-08-15T22:55:31
https://dev.to/davidnnussbaum/tracking-the-flow-of-information-in-react-js-2589
A major advantage of React is that it facilitates the overall process of writing components which can then in turn be reused. It can become challenging to track the flow of information from one component to another, especially when one wants to keep track of the props. Having a flow chart to be used as an available reference would allow a person to easily view the progression of information among the components. In this example, an actual React project will be utilized. One would start in the parent component (which many times is App.js) and look at the components that are rendered within it. These components probably have props which are passed down. Here is an example: src/components/App.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44iifwul4n905hctv3jv.png) Login, Signup, and Medications are rendered. We therefore start with the following connections: App → Login App → Signup App → Medications src/components/Login.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8si9rlhprebnieln3oev.png) LoginForm is rendered. The flow chart is updated to the following. App → Login → LoginForm(End) App → Signup App → Medications LoginForm is opened and looking at the code reveals that it does not have any children, so that line is complete. Any component without children is not shown in this post. We now go to Signup. When there are no children, (End) is entered to clarify that no component was left out of that progression. src/components/Signup.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fk3omrnkiuk01crfepru.png) SignupForm is rendered and therefore added to the list. App → Login → LoginForm(End) App → Signup → SignupForm(End) App → Medications SignupForm does not have any children so that line is complete. We now go back to Medications. For the remainder of this posting, the logic will be continued. 
A progression will be followed and when a new progression is advanced, that will be an indication that the previous progression's last component had no children. If a branching occurs, every component up to that branching will be repeated. src/components/Medications.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/by9j14pvz6kqmuepar6y.png) App → Login → LoginForm(End) App → Signup → SignupForm(End) App → Medications → MedicationList App → Medications → CreateMedication src/lists/MedicationList.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33dunl3p65klgthy6n89.png) App → Login → LoginForm(End) App → Signup → SignupForm(End) App → Medications → MedicationList → Medication(End) App → Medications → MedicationList → ComplicationList App → Medications → CreateMedication src/lists/ComplicationList.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5vwzztkm33mgxirvcmuz.png) App → Login → LoginForm(End) App → Signup → SignupForm(End) App → Medications → MedicationList → Medication(End) App → Medications → MedicationList → ComplicationList → Complication App → Medications → MedicationList → ComplicationList → CreateComplication App → Medications → CreateMedication src/components/Complication.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4j2qghnms2hiupjlj95o.png) App → Login → LoginForm(End) App → Signup → SignupForm(End) App → Medications → MedicationList → Medication(End) App → Medications → MedicationList → ComplicationList → Complication → EditComplicationForm(End) App → Medications → MedicationList → ComplicationList → CreateComplication App → Medications → CreateMedication src/components/CreateComplication.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk0ew9hbjqln2a3b6o1v.png) App → Login → LoginForm(End) App → Signup → SignupForm(End) App → Medications → MedicationList → Medication(End) App → Medications → MedicationList → ComplicationList → Complication → 
EditComplicationForm(End) App → Medications → MedicationList → ComplicationList → CreateComplication → ComplicationForm(End) App → Medications → CreateMedication src/components/CreateMedication.js ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xto97gvo3veh62g45a44.png) App → Login → LoginForm(End) App → Signup → SignupForm(End) App → Medications → MedicationList → Medication(End) App → Medications → MedicationList → ComplicationList → Complication → EditComplicationForm(End) App → Medications → MedicationList → ComplicationList → CreateComplication → ComplicationForm(End) App → Medications → CreateMedication → MedicationForm(End) There is now a complete list of the progression of information which can be referenced while building or working with a React.js application.
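The tracing done by hand above is just a depth-first walk over the component tree, so the flow chart can also be generated mechanically. A small sketch (written in Python for brevity rather than JavaScript; the parent-to-children map below is transcribed from the components shown in this post):

```python
# Parent -> children map transcribed from the example project's components.
TREE = {
    "App": ["Login", "Signup", "Medications"],
    "Login": ["LoginForm"],
    "Signup": ["SignupForm"],
    "Medications": ["MedicationList", "CreateMedication"],
    "MedicationList": ["Medication", "ComplicationList"],
    "ComplicationList": ["Complication", "CreateComplication"],
    "Complication": ["EditComplicationForm"],
    "CreateComplication": ["ComplicationForm"],
    "CreateMedication": ["MedicationForm"],
}

def build_progressions(tree, root="App"):
    """Emit one 'A → B → C(End)' line per root-to-leaf path, depth first."""
    lines = []

    def walk(component, path):
        children = tree.get(component, [])
        if not children:  # leaf: close out this progression with (End)
            lines.append(" → ".join(path[:-1] + [path[-1] + "(End)"]))
            return
        for child in children:
            walk(child, path + [child])

    walk(root, [root])
    return lines

progressions = build_progressions(TREE)
# Prints the six finished lines of the flow chart, starting with
# "App → Login → LoginForm(End)".
for line in progressions:
    print(line)
```

Updating the map when components are added or removed keeps the reference chart current without redoing the trace by hand.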
davidnnussbaum
792,982
HNG Internship
I am currently in the HNG i8 internship which will last for a period of eight(8) weeks. The goal is...
0
2021-08-15T23:01:19
https://dev.to/jesmanto/hng-internship-5fa2
I am currently in the HNG i8 internship which will last for a period of eight (8) weeks. The goal is to test yourself and compete with great minds as well. It's gonna be a smooth as well as rough experience, even though the goal is to get a T-Shirt and brag about the completion of the internship... Lolz For me, I hope to connect with other developers in the internship and emerge as one of the finalists. I also hope to gain more confidence in myself as a developer by the end of the internship. Most importantly, I hope to secure a very nice high-paying tech job after successful completion of the internship. By the way, you too can be part of the internship by registering with either of the following links: https://zuri.team or https://internship.zuri.team or https://training.zuri.team . **_If you are new to programming and don't know where to begin, follow any of the links below to watch free tutorials on the track you are interested in._** FIGMA (For UI designers) - https://www.youtube.com/watch?v=jk1T0CdLxwU GIT (For version control) - https://www.youtube.com/watch?v=8JJ101D3knE HTML - https://www.youtube.com/watch?v=qz0aGYrrlhU PYTHON - https://www.youtube.com/watch?v=_uQrJ0TkZlc FLUTTER - https://www.youtube.com/watch?v=x0uinJvhNxI KOTLIN - https://www.youtube.com/watch?v=F9UC9DY-vIU
jesmanto
807,068
Finding Types at Runtime in .NET Core
One of the best features of .NET has always been the type system. In terms of rigor I place it...
0
2021-08-29T14:34:46
https://dev.to/bobrundle/finding-types-at-runtime-in-net-core-1lpg
dotnet
One of the best features of .NET has always been the type system. In terms of rigor I place it midway between the rigidness of C++ and the anything-goes of JavaScript, which in my view makes it just right. However, one of my frustrations over the years has been finding types at runtime. At compile time you find the integer type like so… ```csharp Type t00 = typeof(int); ``` But at runtime this doesn't work… ```csharp Type t01 = Type.GetType("int"); // null ``` You need to do this… ```csharp Type t02 = Type.GetType("System.Int32"); ``` Similarly for other system types such as DateTime… ```csharp Type t10 = typeof(DateTime); Type t11 = Type.GetType("DateTime"); // null Type t12 = Type.GetType("System.DateTime"); ``` Let's say you created your own local type… ```csharp public class ClassA : IClassAInterface { public string Hello() { return "In main program"; } } ``` Which references this interface in a separate assembly… ```csharp namespace ClassAInterface { public interface IClassAInterface { public string Hello(); } } ``` Again it is simpler to find it at compile time than at run time. ```csharp Type t20 = typeof(ClassA); Type t21 = Type.GetType("ClassA"); // null Type t22 = Type.GetType("TypeSupportExamples.ClassA"); ``` To find it at runtime you need to specify the full name of the type, which includes the namespace. This seems wrong because the code in this case is being executed in the namespace which contains the type. 
Finally, if the user-defined type you are looking for is defined in a different assembly, you need to provide the assembly name… ```csharp Type t30 = typeof(IClassAInterface); Type t31 = Type.GetType("IClassAInterface"); // null Type t32 = Type.GetType("ClassAInterface.IClassAInterface"); //null Type t34 = Type.GetType("ClassAInterface.IClassAInterface, ClassAInterface"); ``` What is happening, of course, is that the compiler can find types so easily because of the using statements at the top of the file… ```csharp using ClassAInterface; using System; ``` The using statements provide a scope that guides the compiler to the right type. No such scoping mechanism exists at runtime. Instead, at runtime, scoping is provided by the container the type is in. Types are contained in assemblies, which in turn are contained within load contexts, which in turn are contained within app domains. This strict top-down hierarchy is not required of namespaces, which can span multiple assemblies. The same type name might be used in multiple assemblies and the same assembly name might be used in multiple load contexts. Types with the same name in multiple assemblies are seen by the runtime as distinct types, even if their definitions are identical. I explored the ramifications of this in my previous post https://dev.to/bobrundle/forward-and-backward-compatibility-in-net-core-3c52 . A quick review of type names: there are 3 for each type: simple name, full name and assembly-qualified name… ```csharp // 3 Names of a type Console.WriteLine(t22.Name); Console.WriteLine(t22.FullName); Console.WriteLine(t22.AssemblyQualifiedName); ``` ``` ClassA TypeSupportExamples.ClassA TypeSupportExamples.ClassA, TypeSupportExamples, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null ``` I set out to make finding types at runtime easier, so I explored implementing a kind of runtime "using" statement. 
First I create a global dictionary of types, GlobalTypeMap, that allows simple names for built-in types and system types and requires full type names for all others. ```csharp Console.WriteLine(); var globalTM = new GlobalTypeMap(); Type t0 = globalTM.FindType("int"); Console.WriteLine(t0.FullName); Type t1 = globalTM.FindType("DateTime"); Console.WriteLine(t1.FullName); Type t2 = globalTM.FindType("TypeSupportExamples.ClassA"); Console.WriteLine(t2.FullName); ``` ``` System.Int32 System.DateTime TypeSupportExamples.ClassA ``` Then I create a child type map, ScopedTypeMap, where I apply using statements. ```csharp var scopedTM1 = new ScopedTypeMap(globalTM); scopedTM1.UsingNamespace("TypeSupportExamples"); Type t3 = scopedTM1.FindType("ClassA"); IClassAInterface d0 = Activator.CreateInstance(t3) as IClassAInterface; Console.WriteLine(); Console.WriteLine(d0.Hello()); // In main program ``` ``` In main program ``` If new assemblies are loaded, the global type map is updated and the change is reflected in the scoped type map. 
```csharp var scopedTM2 = new ScopedTypeMap(globalTM); string apath0 = Path.Combine(Directory.GetCurrentDirectory(), "AssemblyA.dll"); Assembly a0 = AssemblyLoadContext.Default.LoadFromAssemblyPath(apath0); scopedTM2.UsingNamespace("NamespaceA1"); Type t4 = scopedTM2.FindType("ClassA"); // This is NamespaceA1.ClassA in AssemblyA IClassAInterface d1 = Activator.CreateInstance(t4) as IClassAInterface; Console.WriteLine(d1.Hello()); ``` ``` This is NamespaceA1.ClassA in AssemblyA ``` The global type map also works across multiple load contexts… ```csharp var scopedTM3 = new ScopedTypeMap(globalTM); string apath1 = Path.Combine(Directory.GetCurrentDirectory(),@"AssemblyB.dll"); AssemblyLoadContext alc0 = new AssemblyLoadContext("alc0"); Assembly a1 = alc0.LoadFromAssemblyPath(apath1); scopedTM3.UsingNamespace("NamespaceB1"); Type t5 = scopedTM3.FindType("ClassA"); // This is NamespaceB1.ClassA in AssemblyB IClassAInterface d2 = Activator.CreateInstance(t5) as IClassAInterface; Console.WriteLine(d2.Hello()); ``` ``` This is NamespaceB1.ClassA in AssemblyB ``` Finally we might apply namespace scope to types that have identical simple names. This is also supported. ```csharp var scopedTM4 = new ScopedTypeMap(globalTM); scopedTM4.UsingNamespace("TypeSupportExamples"); scopedTM4.UsingNamespace("NamespaceA1"); scopedTM4.UsingNamespace("NamespaceA2"); scopedTM4.UsingNamespace("NamespaceB1"); Type[] tt = scopedTM4.FindTypes("ClassA"); Console.WriteLine(); foreach (var t in tt) Console.WriteLine(t.FullName); ``` ``` NamespaceA1.ClassA NamespaceA2.ClassA NamespaceB1.ClassA TypeSupportExamples.ClassA ``` ## Summary and Discussion What I have demonstrated is a runtime type facility to allow types to be more easily found. All the code for this facility including the examples above can be found at https://github.com/bobrundle/TypeSupport The reason I built this facility is that I want to use it for serializing types in a very lightweight way. 
This type serialization mechanism will be the subject of a future post. This runtime type facility is definitely not lightweight. A Hello World program contains over 2000 types. For certain applications, however, I think it will be useful. I did not support all of the capabilities of the .NET type system. For example, load contexts can be unloaded and this should properly remove all the relevant types from the global type map. I will add that later if I need it. I did not address generics but the runtime type facility will support them. You simply need to understand how the grammar of the type name system works. This is documented in https://docs.microsoft.com/en-us/dotnet/framework/reflection-and-codedom/specifying-fully-qualified-type-names. I did struggle with the issue of ambiguous types. At compile time, if you try to use a simple type name that is scoped in multiple namespaces, you get an ambiguous type error. For the scoped type map, I considered throwing an exception if you tried to find a single type by name and there was more than one defined. In the end I decided not to throw an exception and to simply return the first type in a sorted list. The sort for the type list moves types in the default load context to the front of the list. Perhaps I will change my mind on this later. I hope this post is useful. .NET types should be thoroughly understood, and I was surprised how much I learned about aspects of the type system that I thought I already thoroughly understood.
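The global-map-plus-scoped-map design translates to other runtimes as well. Here is a hedged Python sketch, where `importlib` stands in for the assembly loader; the class and method names merely echo this post's C# API and are not taken from the TypeSupport repo:

```python
import importlib

class GlobalTypeMap:
    """Resolve a type from its full dotted name, e.g. 'decimal.Decimal'."""
    def find_type(self, full_name):
        module_name, _, type_name = full_name.rpartition(".")
        # Bare names like "int" fall back to the builtins "namespace".
        module = importlib.import_module(module_name or "builtins")
        return getattr(module, type_name)

class ScopedTypeMap:
    """Layer runtime 'using' statements on top of a global map."""
    def __init__(self, global_map):
        self._global = global_map
        self._namespaces = []

    def using_namespace(self, namespace):
        self._namespaces.append(namespace)

    def find_type(self, simple_name):
        # Try each declared namespace in order, then fall back to the global map.
        for namespace in self._namespaces:
            try:
                return self._global.find_type(f"{namespace}.{simple_name}")
            except (ImportError, AttributeError):
                continue
        return self._global.find_type(simple_name)

global_tm = GlobalTypeMap()
scoped_tm = ScopedTypeMap(global_tm)
scoped_tm.using_namespace("datetime")
scoped_tm.using_namespace("decimal")

t = scoped_tm.find_type("Decimal")  # resolved as decimal.Decimal, no full name needed
print(t.__name__)  # prints "Decimal"
```

As in the C# version, ambiguity is resolved by scope order: the first namespace that can produce the name wins.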
bobrundle
793,073
Measuring Developer Relations
DevRel is hot but nobody knows how to measure it. That's because we don't agree on what effective DevRel *is*, and we don't agree on the tradeoffs of lagging vs leading metrics for a creative, unattributable, intimately human endeavor.
0
2021-08-16T01:06:41
https://www.swyx.io/measuring-devrel
dx, devrel, content
--- title: Measuring Developer Relations published: true description: DevRel is hot but nobody knows how to measure it. That's because we don't agree on what effective DevRel *is*, and we don't agree on the tradeoffs of lagging vs leading metrics for a creative, unattributable, intimately human endeavor. tags: DX, devrel, content slug: measuring-devrel canonical_url: https://www.swyx.io/measuring-devrel cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6b2x3y2oe6v1o7eu053.png --- > "Games are won by players who focus on the playing field –- not by those whose eyes are glued to the scoreboard." - Warren Buffett ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6b2x3y2oe6v1o7eu053.png) A few years ago the key theme of one of the DevRelCons was "metrics". People breathlessly tweeted about how this was **the most important topic in DevRel**, to general agreement. The 2 day schedule was packed with 48 speakers with impressive titles stack ranked by follower count. Hundreds of DevRels flew in to hear DevRels thoughtlead DevRel. Drinks were drunk, hands were shook, empty promises made. Then they went home. **Nothing changed.** Today in 2021 people continue to talk about how hard it is to measure DevRel. The full videos of talks to this highly anticipated paid event worth hundreds of dollars per ticket were released on YouTube, with a total view count of 601 (not a typo). The organizers did an incredible job garnering 18 sponsors, all duly ignored. My company was one of them, spending thousands on a Gold slot for which we got 4 tickets, presumably to learn DevRel best practices and hire great talent. ***We sent none.*** I did cherry pick an egregious anecdote, but it helps illustrate the typical state of DevRel accountability today. Lots of sizzle, questionable steak. **Loudly performing "DevRel", yet indistinguishable from "[Con](https://dev.to/swyx/my-life-as-a-con-man-6b8)"**. 
Yet despite lack of visibility, companies continue to invest! Because we *do* know that through all the noise and gladhanding, *some* value does get created and it *is* unique to all the other user acquisition channels available. I of course don't have the perfect answer. But I've lived it for a while and wanted to write down my evolving thoughts for others who are going down this same path. Instead of solutions, I'll offer structure. Instead of answering your questions, I hope to offer you better ones. *Note: I write for new DevRels as well as people running DevRel programs. This is a "201" blogpost rather than a "101" — I'm aiming to cover what introductory blogposts don't say. I will also have inconsistent capitalization and voice because I am condensing a massive amount of thoughts and trust you are smart enough to figure it out. Sometimes "DevRel" is one person (who actually holds the title "Developer Advocate" or "DX Engineer"), sometimes "Devrel" is an industry. If this upsets you, please read something else.* ## Bottom Line Up Front ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrvcdbq2vg48z492aber.png) Stop looking: **Your North Star metric is "monthly active developers"**. If growth is accelerating, good. If growth is constant, fine. If growth is 0% or worse then whatever you are doing isn't working. - Every other company measuring devrel eventually settles on this, so just cut right to the chase. You could try "monthly active clusters" but you'll want each cluster's usage to grow as well so you end up indexing on "developers" anyway. - However MAD *is* multi-causal and a lagging indicator so you need leading indicators which you have more direct control over. ## What kind of DevRel are you? 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijj8os3spy3q2zggzlg2.png) **Unhappiness arises when there is an expectations mismatch between what the company wants out of DevRel and what each DevRel is naturally good at.** People have natural affinities — you are more likely to do well with community if you already organize meetups, you are more likely to be great at content if you are a regular on the speaker circuit, you are more likely to understand how to plug user gaps with product feedback if you already built one, etc. Don't judge a fish by its ability to climb trees. There are three emerging sub-specialities of developer relations: **community**-focused, **content**-focused, and **product**-focused. How you are measured depends on what your strengths are and what your company *actually* thinks devrel is. We'll explore each in turn, but first some context: - A lot of companies will proclaim that their approach to developer *relations* is different from developer *evangelism* because it is a "two way street", meaning **you put product in front of people but also bring people's opinions back into product**. - The reality is often the ratio of this two way street traffic is 99% outbound and 1% inbound, because the product/engineering orgs haven't set aside any bandwidth for "shadow PM-ing" from devrel and all of devrel's metrics are outbound focused. Totally understandable, but could be better. - If you are in "devrel" but you walk like marketing, talk like marketing, and are measured like marketing, you are in marketing. **In fact, you are very expensive affiliate marketing**. - If you measure devrel by views, the established industry metric is "cost per mille" aka cost per thousand people reached. Indicative numbers: a typical developer content creator's CPM is $5, a normal devrel's CPM is $50-$100. 
In other words **you are paying 10-20x more to make content "in-house"** than just paying a "professional" to make something about you (often of higher quality, to a bigger audience). - Dev content agencies like [Draft](https://draft.dev/about) and [Ironhorse](https://ironhorse.io/developer-marketing/) exist but are an awkward half solution where they don't bring audience and don't have a deep relationship with your product. - So for a company devrel investment to make sense, **you have to provide some other form of value than raw anonymous reach**. This can be: depth, breadth, consistency, access, insight, community, email list building, etc. - Devrel specifically coexists as a separate discipline from marketing because developers hate marketing fluff and bottom-up + open source sales is so important for devtool adoption. - Career marketers have trouble relating to developers because so much of the conventional wisdom is inverted - you have to [sell features, not benefits](https://twitter.com/swyx/status/1361279902889086980), and be ready to dive into just-enough-technical-detail over using vague superlatives. However, marketers can still be very useful selling to other parts of the org. ## Bad Metrics I've been asked to measure all these in my work: GitHub stars on my demos (yuck), traffic attributed to my Google Analytics UTM tag (yuck yuck), number of badges I could scan at a conference (yuck yuck yuck). All well intentioned but ultimately not meaningful, because they value quantity over quality, breadth over depth, free-and-superficial over paid-and-indicating-serious-interest. - Sometimes these are justified as "something is better than nothing", but once in place, metrics have a strange hold on the imagination: I've seriously had a CTO carelessly reject my genuine idea out of hand because "it doesn't help OKRs", the same OKRs we previously agreed should not describe all that we do. 
I agree with Amir Shevat that we should "[do the right things over the easy to measure things](https://www.swyx.io/a16z_devrel_amir_shevat/#qa-with-amir-and-mikeal-rogers)":

- People are too flippant about using NPS. NPS is easily gamed, and rating a product on a 1-10 scale is meaningless to developers who know how this works. Try [the Sean Ellis question](https://review.firstround.com/how-superhuman-built-an-engine-to-find-product-market-fit#anchoring-around-a-metric-a-leading-indicator-for-productmarket-fit) instead: “How would you feel if you could no longer [use the product/be part of the community/read this content]?” and use it as a way to **segment/understand your audience** rather than being satisfied just keeping the NPS number forever high.
- You are hereby banned from suggesting to A/B test anything if you do not have the traffic or the infrastructure to easily implement an A/B test, like 90% of devtools startups. **A tight user feedback loop beats anonymous data.** You find out much more just showing a preview to users 1:1 or in small groups than keeping them at arm's length.
- [Will Klein suggests](https://twitter.com/willklein_/status/1420139597112037379?s=20) "How quickly new releases are adopted" for OSS devrel. This sounds good until you realize that **upgrade adoption is just a proxy for active usage**, that the curves pretty much all look like this, and that **no matter what you do**, you can only budge them a few %:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhfgj2fs5t3kua9pjjcp.png)

## Community-focused Developer Advocacy

People are waking up to the lasting power of community and now [Technical Community Builder is the Hottest New Job in Tech](https://dev.to/dx/technical-community-builder-is-the-hottest-new-job-in-tech-8cl).
Community metrics that I like (pick 1-3): - Number of active members in Slack/Discord - Number of weekly new topics in Discourse/StackOverflow/GitHub Discussions - Number of user contributions (whether it is PRs, questions, answers, blogposts, meetup talks, etc) - Number of [Orbit level 1](https://github.com/orbit-love/orbit-model) users - Number of events and number of attendees - Number of engaged "superusers" Things to look out for: - Beware the optics of regularly **asking your users to do free work for you**, and measuring yourself by metrics that are at best only loosely correlated with your community building work. - For example, most company "communities" are really just support forums, and increased support load is not always good nor does it have anything to do with your stewardship of it. Also beware of using devrel in place of a proper support team. - You need to give people a cause to rally around other than getting help. Having **great content is often the "minimal viable community"**. - A "big tent" serves your community over the company — invite your competitors to your event or conference, present together or let them do friendly trash talk, whatever makes your community an actual place where people get together to learn about your topic is good for you. This is especially effective when attempting [Category Creation](https://codingcareer.circle.so/c/devtools/category-creation-netlify-slack-datastax#comment_wrapper_2653625). - "True" community metrics are extremely hard to measure. For example, one definition of a good community is that people expect their association with your project/product to outlast their current employment. **The natural frequency for measuring this spans years** but no metrics system is this patient. - **Events** are underrated for fostering company community. To [paraphrase Marshall McLuhan](https://en.wikipedia.org/wiki/Understanding_Media), forums and chats are "cold", events are "hot". 
From company office hours, to social hours, to full-on conferences and hackathons, to user-organized meetups. Measure attendance and frequency of these events. I am dead serious about this — we had a Principal DevRel whose primary job was to help organize our 3 conferences (together with Marketing) and their associated community. If he did nothing else but that, he would already have been worth his weight in gold to us. - Live hackathons are appropriate if you have good docs, short time to value, fast iteration times (<1min to change and redeploy) and people can get to a tangible wow moment *that wins them the hackathon* (particularly in a multi-vendor hackathon). So you should develop a sense of what kinds of things win hackathons — anything realtime (because a live demo can include the audience), anything phone based (everyone has a phone), anything AI based (people like seeing machines act like people), anything visual design. Remote/async hackathons may be a better option if your product has a longer time to value. Work with [Major League Hacking](https://mlh.io/), [DevTo](https://dev.to/t/hackathon), or [FreeCodeCamp](https://www.freecodecamp.org/news/tag/hackathons/), or collab/sponsor with YouTubers in your niche. - Having an engaged and growing superuser community, e.g. AWS Heroes, Java Champions, GitHub Stars and Stripe Community Experts, can help reward your most ardent advocates with access and status. **A small group of highly engaged fans can make a community feel more alive than a large group of passive users.** - If your core product is open source, how much community involvement do you really want? [Runa Capital has a definition of Active Contributors](https://medium.com/runacapital/open-source-analysis-and-os-databases-1eb1fe840719), and the big dogs go up to 200 regular contributors a year. This is a *lot* of external community to manage - is this something your engineering/product team is equipped/prepared/willing to handle? 
- Bonus Community questions: - Does devrel or marketing handle the swag program? Do you [tie it to open source like Gatsby](https://store.gatsbyjs.org/)? Do you build your swag/CRM with your product as a demo or is that Not-Invented-Here syndrome? - How can you measure community-generated content and community-organized meetups? Devrel scales by how it *enables users* to tell (even brag to) each other about their usage. You can borrow ideas from non-devtool "user community" efforts like [Notion](https://gettogether.fm/episodes/notion). - How do you bring your community closer in touch with your engineers? Kelsey Hightower ran [empathy sessions](https://www.businessinsider.com/google-cloud-kelsey-hightower-customer-empathy-sessions-2021-8) where the Kubernetes team were just expected to use Kubernetes and failed. Obviously these interactions are valuable but how do you measure? - How can you help your users **hire each other**? At Temporal we set up a [careers page for our users](https://temporal.io/careers#external-jobs). Unfortunately most of these attempts will be too sporadic to seriously measure. ## Content-focused Developer Advocacy Content is the bread and butter of DevRel. If you have no idea where to start I recommend producing at least 1 piece of content on your company a week and just experimenting until you find your groove. Rewriting your blog once a year is fun, but not everyone has put in the reps to learn to regularly produce interesting content. 
Content metrics that I like (in rough order): - Number of Newsletter subscribers - Number of YouTube subscribers - Number of Twitter follows - Number of Workshop completions - Number of Conference/meetup appearances - [SEO Domain Authority](https://moz.com/learn/seo/domain-authority) Nuances to consider: - As a content creator you have the choice between [TOFU, MOFU, and BOFU content](https://swyx.transistor.fm/episodes/tofu-mofu-and-bofu-content): - Top of Funnel - never heard of you (Awareness) - Middle of Funnel - comparing you to others and learning basic features/concepts (Evaluation) - Bottom of Funnel - deciding to buy and put you into production (Conversion) - If you only measure website traffic then you naturally incentivize DevRel to create clickbait TOFU that doesn't have to convert at all and alienate your biggest fans. - Most companies create and measure the blogposts as their main metric and the newsletter is the afterthought. If you are building a company you should see this is ass backwards. Newsletter signup (incentivized by great company blogposts and updates) keeps you honest. Most marketers and professional creators view **1 email subscriber to be worth between 100-1000 social media followers**. Even though many developers don't like giving their email and you'll need other ways to reach them, you are not exempt from universal rules of media. - In fact, the more seriously you view yourself as "**building a media-company-within-a-company**" rather than "doing some content marketing", the better you will do. - This is how seriously a16z treats its media effort: its third GP was [Margit Wennmachers](https://en.wikipedia.org/wiki/Margit_Wennmachers), a PR/Comms exec, and just started [Future.com](http://future.com) with Sonal Chokshi to create its own alternative to tech journalists. 
- You also don't have to build this on your own — [HubSpot bought The Hustle](https://techcrunch.com/2021/02/04/hubspot-acquires-media-startup-the-hustle/) for exactly this reason.
- When you mash Alex Rampell's "[The battle between every startup and incumbent comes down to whether the startup gets distribution before the incumbent gets innovation](https://a16z.com/2015/11/05/distribution-v-innovation/)" with Justin Kan's "[second time founders are obsessed with distribution](https://twitter.com/justinkan/status/1059989657218248704?lang=en)", it just makes business sense to take distribution as seriously as product.
- Keep in mind **the [half life of your content](https://epipheo.com/learn/what-is-the-lifespan-of-social-media-posts/)** — how long until a piece of content gets half of the views it will ever get in its lifetime. Twitter's half-life is hours, YouTube's half-life is months, a blog's is years. Create accordingly.
- You can test things on more ephemeral media before developing them further on more permanent media.
- Most managers think they are helping by establishing content calendars. I have never seen a DevRel operation stick to these beyond month 1. Turns out it is much easier to say you will do things than to actually do things. Ultimately you will create content because you are inspired to, not because some calendar said so. Keep a list of topics that you want to cover and regularly pick them off based on subjective interest vs user need. Exception for YouTube - the recommendation algorithm does reward weekly output. See [Dan Luu on best practices for engineering blogs](https://danluu.com/corp-eng-blogs/).
- **Community sizing**: There are between 5-10 million developers on YouTube (my estimate).
There are between 1-2 million developers on Twitter (my estimate). There are about [6 million developers on Hacker News](https://twitter.com/swyx/status/1422043347011600386?s=20). Each have their own pet topics and sub communities. Consider/gauge/prioritize accordingly. - **Default to Consistency**: If you are new to professional content creator life, best way to get started is just [**default to consistency**](https://www.swyx.io/quality-vs-consistency/). That said, [10x content](https://sparktoro.com/blog/resources/10x-content-by-rand-fishkin/) has incredible half-life when you have the right idea. - **Workshops** are extremely underrated forms of content for achieving depth. Instead of 5,000 people watching your 5 minute video and then disappearing forever, you could have 50 people go through your 3-5 hour workshop and put it in production at work. - Record it and put it online - you're not going to get huge views on a 3-5 hour video that is so company specific, but for the people who prefer fastforwarding thru video or desperately need to solve some problem, it is hugely appreciated. - Most workshops can also be converted into self guided text based versions: see AWS Workshops like [Amplify](https://amplify-photo-sharing.workshop.aws/), [ECS](https://ecsworkshop.com/), and [EKS](https://www.eksworkshop.com/), Google [Colabs](https://colab.research.google.com/github/firebase/quickstart-python/blob/master/machine-learning/Firebase_ML_API_Tutorial.ipynb), GitHub [Learning Labs](https://lab.github.com/githubtraining/github-actions:-hello-world). - **Conference** appearances are probably overrated by most companies, by the simple fact that most developers don't go to conferences. Companies have historically overspent on devrel travel budgets if you just take into account attendee reach. However, millions of developers do watch good conference videos after the fact. 
You can abstractly view conference expenses as production costs for social proof and a really good YouTube video (on a channel you don't own). It's well worth producing 1-2 great conference talks a year at high profile events, and sometimes to get there, you need to practice and iterate at 5-10 smaller venues, but anything beyond that probably has diminishing returns compared to anything else you could be doing.
- **Meetups** are a different matter: high touch, low intensity. Being able to give the same elevator pitch over and over again at different meetups can be great for capturing the attention of a small set of developers. Use their questions and feedback to improve your elevator pitch ([2 word](https://www.swyx.io/two-words/), 1 sentence, 2 minute, 7 minute, 25 minute, 55 minute versions) as an infinite game. Raw # of meetup appearances is probably a good enough metric here.
- [I agree with Steph Smith](https://twitter.com/swyx/status/1408896221456986114) that **most companies focus too much on social channels**, with the twist that most developers discover solutions more by hearing about them from friends and thoughtleaders than by searching for them. Auth0 and Digital Ocean famously prioritized an SEO-based approach; note, however, that your devtool/brand needs to be general enough for this to work.
- Bonus "Content" things to consider:
  - who owns example repos and code samples and keeps them up to date?
  - are you creating content because your docs aren't clear enough? are you creating docs because your product isn't intuitive? All content ultimately has a half-life — sometimes you have to go upstream to fix the root cause instead of measuring someone on how well they apply bandaids.
  - how can you encourage real-engineer-content and user-generated content? Devrel scales by how it *enables others* to create content, not just by having a sole monopoly on content.
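To make the "half-life of your content" idea from the content bullets above measurable, here is a toy sketch: given a per-day view series for a piece of content, find the day by which half of its lifetime views have arrived. The numbers are invented purely for illustration, not real analytics:

```python
# Toy "content half-life": the day by which a piece of content has
# accumulated half of all the views it will ever get.
# The daily view counts below are invented for illustration only.
def half_life_day(daily_views):
    total = sum(daily_views)
    running = 0
    for day, views in enumerate(daily_views, start=1):
        running += views
        if running >= total / 2:
            return day
    return len(daily_views)

# A tweet front-loads nearly all of its views; a blogpost has a long tail.
print(half_life_day([900, 80, 15, 5, 0, 0, 0, 0]))        # tweet-like: day 1
print(half_life_day([300, 120, 90, 80, 70, 65, 60, 55]))  # blog-like: day 2
```

Real curves stretch over hours for Twitter and years for a blog; the point is simply that the metric is computable once you log views over time.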
## Product-focused Developer Advocacy

Product metrics that I like:

- Number of launch day users, and positive launch day mentions
- Number of prioritized user issues from DevRel
- Number of monthly users of integrations/tooling
- [the Sean Ellis question](https://review.firstround.com/how-superhuman-built-an-engine-to-find-product-market-fit#anchoring-around-a-metric-a-leading-indicator-for-productmarket-fit) (over NPS) for integrations/tooling managed by devrel
- [Gustaf Alstromer's PMF measure](https://youtu.be/T9ikpoF2GH0?t=958): Value metric, and % retention at ideal recurring frequency
- Something that measures [Developer Exceptions](https://www.swyx.io/developer-exception/), I haven't worked it out yet

Devrels can provide tremendous value in the lead-up to any product launch by beta testing (either personally or with users), or creating eye-catching/inspiring demos, blogposts, and videos for launch day/annual conference.

- Sometimes the best product feedback can be negative: "Hey, you need to postpone this launch, you aren't ready." Negative feedback is better coming from within than without, but the company culture needs to be receptive to criticism.
- If you habitually shoot the messenger, however mistaken they are, don't be surprised when you stop receiving bad news.
- Yes, people really do hire DevRels, tell them to give product feedback, and then ignore that feedback. This is the *norm*.

Devrel are also often responsible for maintaining non-core integrations and helper tooling.

- Examples: Netlify has an entire [Integrations Engineering](https://www.netlify.com/blog/2021/01/06/developer-experience-at-netlify/) team. Currently it just works on Next.js integrations, but it could also own, for example, the VS Code extension. In the past I helped build out [Netlify Dev](https://news.ycombinator.com/item?id=19615546) and [react-netlify-identity](https://github.com/netlify-labs/react-netlify-identity) as part of this function.
[Sourcegraph's Integrations](https://docs.sourcegraph.com/integration) are not core product, but help adoption under their [7 stage SDLC framework](https://beyang.org/time-to-build-dev-tools.html). Popular quick start tooling like [Docker Compose](https://github.com/temporalio/docker-compose) and [Helm Charts](https://github.com/temporalio/helm-charts) also fall under this function.
- A more involved version of this bleeds into "Solutions Engineering" where you help out with customer-specific custom demos and integration work. For large enough customers and small enough startups this is fine, but the bulk of the DevRel effort should focus on work that scales.

Most devrels spend most of their time advocating TO developers rather than FOR them. The user-to-product feedback loop is the most underdeveloped element of developer advocacy right now, basically because it is unclear who owns this function between Devrel and PM.

- It can be like grooming a "top 3 user asks" list like [Amir Shevat did at Slack and Twitch](https://www.swyx.io/a16z_devrel_amir_shevat/#tldr)
- or it can be building prototypes and "hacky MVPs" to validate ideas that you then hand off to engineering
- or it can be collecting data on what messaging resonates with developers based on how they explain it to others
- all of which is really hard to measure

## Conclusion

Ultimately I hope for more intellectual honesty within the industry that acknowledges the complex tradeoffs:

- between the need for communicating value and quantifying progress vs the dehumanizing effect of metrics on what is ultimately an extremely human endeavor
- between lagging metrics that are more meaningful but where you have less control, and leading metrics where you have more control but little accountability, and the primary link between leading and lagging is "merely" anecdotal and qualitative but ultimately your job
- where content is a personal, creative process which needs a culture that accepts risk taking and failure, hot
streaks and lulls, where consistency, quality, and scope are valued differently by different segments of your audience, and too many backseat drivers guarantees death of personality and ownership. I'll leave you with one anecdote: When I first hear about a technology I keep it on a mental backburner simply to see if it sticks around and makes progress. I typically end up only trying out a technology at least one year after first hearing about it and seeing consistent progress. It's not just me; many CTOs and VP Engs recommend a **multiyear adoption path** especially for core tech. Traditional marketing advice says people have to hear about you at least 7 times before they decide to buy; for developer tools, Matt Biilmann from Netlify says it's more like 14. Try fitting *that* into your quarterly OKR review cycles. > If you are interested in more DevRel thoughts, podcasts, and books, check [the DX Circle guide](https://codingcareer.circle.so/c/dev-communities/developer-relations-wip-guide?) for more! ## Resources and Further Reading - [My Notes from Amir Shevat on Measuring & Managing Developer Relations](https://twitter.com/swyx/status/1306303342071615488), Head of Developer Platform at Twitter and previously running DevRel at Twitch, Slack, Microsoft, and Google - [The Golden Age of DevRel](https://twitter.com/swyx/status/1500968247893856256) from Chris Trag (running DevRel at Stripe) has some good level-sets on metrics from his point of view - https://bitergia.com/ Software Development Analytics for understanding, reporting, and decision making process regarding community health, project sustainability, development efficiency, talent retention & acquisition, content creation, developer audience analysis, and more. - https://dots.community/ Scale your community while keeping it personal. Automate routine tasks, and understand member behaviors on Slack, Discord, and more! 
- https://www.commonroom.io/ Common Room gives you a unified view of what’s happening across platforms — Twitter, Slack, GitHub, Discourse, Discord, Intercom, Meetup, and more. - https://orbit.love/ Grow and measure your community across any platform with Orbit, mission control for your community.
*by swyx*
---

# Python + Flask Pt 2 ... adding React

*Published 2021-08-16 · https://dev.to/roadpilot/python-flask-pt-2-adding-react-4j8d*
The last post was about creating a Python + Flask application and deploying on Heroku. This post we are going to take that same Python + Flask application and add a React frontend to it.

First, a few finishing touches on the Python + Flask backend. To make it run locally, we'll need to make some changes:

In our app.py, we are going to add CORS functionality and we are going to change our output from a simple string to a JSON object.

```python
from flask import Flask
from flask_cors import CORS  # comment this on deployment

app = Flask(__name__)
CORS(app)  # comment this on deployment


@app.route("/")
def hello_world():
    return {
        'resultStatus': 'SUCCESS',
        'message': "Hello, World!"
    }
```

We can now test the Flask server:

```shell
// ♥ flask run
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
```

You may run into a lot of errors at this point. You may need to adjust your Python installation or other modules. When you get things finalized, what you see above will confirm that your Flask service is running.

When we view the output from the browser, we will see the JSON object output. This is how we are going to display output from our backend through the React frontend. It will look like this in the browser:

```json
{
  "message": "Hello, World!",
  "resultStatus": "SUCCESS"
}
```

Now it's time to create the React frontend. You can create a separate repository or just a new subdirectory in the existing one. That's all up to personal preference. Once you decide what your "root" folder will be, you will use the "create react app" command from that directory:

```shell
npx create-react-app .
```

This will create a new React app in that root directory. This could take some time and troubleshooting; the success message at the end will confirm that the app was created.
Once the installation is successful, find 'app.js' in the 'src' subdirectory and make the following changes: (some of this is referenced from the https://towardsdatascience.com/build-deploy-a-react-flask-app-47a89a5d17d9 tutorial)

```jsx
import logo from './logo.svg';
import './App.css';
import React, { useEffect, useState } from 'react';
import axios from 'axios'

function App() {
  const [getMessage, setGetMessage] = useState({})

  useEffect(() => {
    axios.get('http://localhost:5000').then(response => {
      console.log("SUCCESS", response)
      setGetMessage(response)
    }).catch(error => {
      console.log(error)
    })
  }, [])

  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>React + Flask Tutorial</p>
        <div>{getMessage.status === 200 ? <h3>{getMessage.data.message}</h3> : <h3>LOADING</h3>}</div>
      </header>
    </div>
  );
}

export default App;
```

The "stock" app.js file will already have some of this. What we are going to add is:

```jsx
import React, { useEffect, useState } from 'react';
import axios from 'axios'
```

"useEffect" and "useState" are "React Hooks", and you can Google to read about them. They essentially add functionality that allows you to use "state" and other React features without writing a class.

"axios" is not a React component but a promise-based HTTP client that works well with React for making async requests. It can request information from and send information to other services, and it exposes their responses as JavaScript objects. In this case we will get our "Hello, World" output from our backend server.

```jsx
const [getMessage, setGetMessage] = useState({})

useEffect(() => {
  axios.get('http://localhost:5000').then(response => {
    console.log("SUCCESS", response)
    setGetMessage(response)
  }).catch(error => {
    console.log(error)
  })
}, [])
```

"useState" creates a piece of state along with its setter: "getMessage" holds the current value (the axios response), and "setGetMessage" updates it. Inside "useEffect", the axios request (axios.get) fetches the backend response and stores it with "setGetMessage".
```jsx
return (
  <div className="App">
    <header className="App-header">
      <img src={logo} className="App-logo" alt="logo" />
      <p>React + Flask Tutorial</p>
      <div>{getMessage.status === 200 ? <h3>{getMessage.data.message}</h3> : <h3>LOADING</h3>}</div>
    </header>
  </div>
);
```

Next you will start the React server (yes, even though React is a "frontend framework", it runs in its own development server):

```shell
npm start
```

The React app "return" renders the output based on the response from the backend server. There is a conditional (ternary) view:

```jsx
<div>{
  getMessage.status === 200
    ? <h3>{getMessage.data.message}</h3>
    : <h3>LOADING</h3>
} </div>
```

where until the response returns a status code of 200, the React component will only show the word "LOADING".

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pjpee9chjakz6g9yl4ui.png)

Once the response returns the 200 status code, the React component will re-render and display "getMessage.data.message". Since the response object has a "key" of "message":

```json
{
  "message": "Hello, World!",
  "resultStatus": "SUCCESS"
}
```

the "value" of "message" is what is displayed, in this case "Hello, World!".

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0gnxra3l3m29jr1oa4ki.png)

"npm" is Node's package manager, and installs are rarely without bumps, so be ready to troubleshoot any "missing component" errors or other deprecation warnings. Google is your friend and Node is very complicated. You will end up with a "node_modules" folder that has anywhere from 20,000 to 30,000 files that Node uses to make React work.

When you get installation errors, sometimes simply rerunning the command that produced the errors is all you need to do. Other times, it's more complicated. Read the errors closely and try to find competent resources for steps to take for corrections. Good luck!
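As a closing sanity check on the backend half of this post, here is a sketch using Flask's built-in test client to verify the JSON that the React app expects, with no browser or running server required. It restates the app.py from earlier; flask-cors is omitted because CORS headers don't matter for an in-process test:

```python
# Verify the backend returns the JSON shape App.js expects,
# using Flask's test client (no server process needed).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return {
        'resultStatus': 'SUCCESS',
        'message': "Hello, World!"
    }

response = app.test_client().get("/")
assert response.status_code == 200  # App.js shows LOADING until it sees 200
assert response.get_json()["message"] == "Hello, World!"
print("backend OK")
```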
*by roadpilot*
---

# Basics of Object Detection (Part 1)

*Published 2021-08-16 · tagged `deeplearning` · https://dev.to/sally20921/basics-of-object-detection-part-1-1i52*
*This article is originally from the book "Modern Computer Vision with PyTorch"*

## Introduction

Imagine a scenario where we are leveraging computer vision for a self-driving car. It is not only necessary to detect whether the image of a road contains the images of vehicles, a sidewalk, and pedestrians, but it is also important to identify *where* those objects are located. The various techniques of object detection that we will study in this article will come in handy in such a scenario.

With the rise of autonomous cars, facial detection, smart video surveillance, and people-counting solutions, fast and accurate object detection systems are in great demand. These systems include not only object classification from an image, but also localization of each of the objects by drawing appropriate bounding boxes around them. This (drawing bounding boxes and classification) makes object detection a harder task than its traditional computer vision predecessor, image classification.

![Screen Shot 2021-08-16 at 2.10.18 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qgc1cav8ch27bdqxo9n.png)

To understand what the output of object detection looks like, let's go through the preceding diagram. In the diagram, we can see that, while a typical object classification merely mentions the class of object present in the image, object localization draws a bounding box around the objects present in the image. Object detection, on the other hand, involves drawing bounding boxes around individual objects in the image, along with identifying the class of each object within its bounding box when multiple objects are present in the image.

Training a typical object detection model involves the following steps:

1. Creating ground truth data that contains labels of the bounding box and class corresponding to various objects present in the image.
2. Coming up with mechanisms that scan through the image to identify regions (region proposals) that are likely to contain objects.
In this article, we will learn about leveraging region proposals generated by a method named *selective search*. We will also learn about leveraging anchor boxes to identify regions containing objects, and about leveraging positional embeddings in transformers to aid in identifying the regions containing an object.
3. Creating the target class variable by using the IoU metric.
4. Creating the target bounding box offset variable to make corrections to the location of the region proposal coming from the second step.
5. Building a model that can predict the class of object along with the target bounding box offset corresponding to the region proposal.
6. Measuring the accuracy of object detection using mean Average Precision (mAP).

## Creating a bounding box ground truth for training

We have learned that object detection gives us an output where a bounding box surrounds the object of interest in an image. To build an algorithm that detects the bounding box surrounding the object in an image, we have to create input-output combinations, where the input is the image and the output is the bounding boxes surrounding the objects in the given image, along with the classes corresponding to those objects.

To train a model that provides the bounding box, we need the image, and also the corresponding bounding box coordinates of all the objects in the image. In this section, we will learn about one way to create the training dataset, where the image is the input and the corresponding bounding boxes and classes of objects are stored in an XML file as output. We will use the *ybat* tool to annotate the bounding boxes and the corresponding classes.

Let's understand how to install and use *ybat* to create (annotate) bounding boxes around objects in an image. We will also inspect the XML files that contain the annotated class and bounding box information.
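Step 3 of the training recipe above compares candidate regions with ground-truth boxes using Intersection over Union (IoU). As a quick reference, here is a minimal sketch for boxes in `(xmin, ymin, xmax, ymax)` format, the same convention as the *bndbox* field in PASCAL VOC annotations (the helper name and sample boxes are mine, not from the book):

```python
# Intersection over Union (IoU) for two boxes in (xmin, ymin, xmax, ymax) format.
def iou(box_a, box_b):
    # coordinates of the intersection rectangle
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
```

A region proposal is typically assigned the class of the ground-truth box it overlaps most, provided the IoU clears a chosen threshold; otherwise it is treated as background.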
### Installing the image annotation tool

Let's start by downloading *ybat-master.zip* from the following [github](https://github.com/drainingsun/ybat) and unzipping it. Once unzipped, store it in a folder of your choice. Open *ybat.html* using a browser of your choice and you will see an empty page. The following screenshot shows a sample of what the folder looks like and how to open the *ybat.html* file.

![Screen Shot 2021-08-16 at 2.25.01 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4tjwg8i3fj893vvv9xz5.png)

Before we start creating the ground truth corresponding to an image, let's specify all the possible classes that we want to label across images and store them in the *classes.txt* file as follows:

![Screen Shot 2021-08-16 at 2.25.58 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8nryuirlg2m9dvrm28u.png)

Now, let's prepare the ground truth corresponding to an image. This involves drawing a bounding box around objects and assigning labels/classes to the objects present in the image, in the following steps:

1. Upload all the images you want to annotate.
2. Upload the *classes.txt* file.
3. Label each image by first selecting the filename and then drawing a crosshair around each object you want to label. Before drawing a crosshair, ensure you select the correct class in the classes region.
4. Save the data dump in the desired format. Each format was independently developed by a different research team, and all are equally valid. Based on their popularity and convenience, every implementation prefers a different format. For example, when we download the PASCAL VOC format, it downloads a zip of XML files.
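These PASCAL VOC XML files can be read with nothing but the Python standard library. A minimal parsing sketch (the inline XML string here is a fabricated example that follows the VOC layout, not output from ybat):

```python
# Parse one PASCAL VOC-style annotation with the standard library.
import xml.etree.ElementTree as ET

voc_xml = """
<annotation>
  <filename>sample.jpg</filename>
  <object>
    <name>person</name>
    <bndbox>
      <xmin>48</xmin><ymin>30</ymin><xmax>210</xmax><ymax>320</ymax>
    </bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(voc_xml)
boxes = []
for obj in root.iter("object"):
    label = obj.findtext("name")
    bb = obj.find("bndbox")
    # collect the box as (xmin, ymin, xmax, ymax)
    box = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
    boxes.append((label, box))

print(boxes)  # [('person', (48, 30, 210, 320))]
```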
A snapshot of the XML files after drawing the rectangular bounding box is as follows:

![Screen Shot 2021-08-16 at 2.33.05 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nggc8hy4aapsip1dodbh.png)

From the preceding screenshot, note that the *bndbox* field contains the minimum and maximum values of the *x* and *y* coordinates corresponding to the object of interest in the image. We should also be able to extract the classes corresponding to the objects in the image using the *name* field.

Now that we understand how to create a ground truth of the objects (class labels and bounding boxes) present in an image, in the following sections, we will dive into the building blocks of recognizing objects in an image. First, we will talk about region proposals, which help in highlighting the portions of the image that are most likely to contain an object.

## Understanding region proposals

Imagine a hypothetical scenario where the image of interest contains a person with the sky in the background. Furthermore, for this scenario, let's assume that there is little change in the pixel intensity of the background and considerable change in the pixel intensity of the foreground.

Just from the preceding description, we can conclude that there are two primary regions here: one is the person and the other is the sky. Furthermore, within the region of the person, the pixels corresponding to hair will have a different intensity from the pixels corresponding to the face, establishing that there can be multiple sub-regions within a region.

**Region proposal** is a technique that helps in identifying islands of regions where the pixels are similar to one another. Generating region proposals comes in handy for object detection, where we have to identify the locations of the objects present in the image. 
Furthermore, given that region proposal generates a proposal for each region, it aids in object localization, where the task is to identify a bounding box that fits exactly around the object in the image. We will learn how region proposals assist in object localization and detection in the later section on *Training R-CNN based custom object detectors*, but let's first understand how to generate region proposals from an image.

### Leveraging Selective Search to generate region proposals

Selective Search is a region proposal algorithm used for object localization, where it generates proposals of regions that are likely to be grouped together based on their pixel intensities. Selective Search groups pixels based on the hierarchical grouping of similar pixels, which, in turn, leverages the color, texture, size, and shape compatibility of content within an image.

Initially, Selective Search over-segments an image by grouping pixels based on the preceding attributes. Next, it iterates through these over-segmented groups and merges them based on similarity. At each iteration, it combines smaller regions to form a larger region.

Let's understand the *selective search* process through the following example:

```bash
## dependencies
pip install selectivesearch
pip install torch_snippets
```

```python
from torch_snippets import *
import selectivesearch
from skimage.segmentation import felzenszwalb

img = read('Hemanvi.jpeg', 1)
## extract the felzenszwalb segments (which are obtained based on the color,
## texture, size, and shape compatibility of content within an image) from
## the image. scale represents the number of clusters that can be formed
## within the segments of the image; the higher the value of scale, the
## greater the detail of the original image that is preserved.
segments_fz = felzenszwalb(img, scale=200)
subplots([img, segments_fz],
         titles=['Original Image', 'Image post \nfelzenszwalb segmentation'],
         sz=10, nc=2)
```

The preceding code results in the following output:

![Screen Shot 2021-08-16 at 2.51.38 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t722n9ve56fny0b72fs9.png)

From the preceding output, note that pixels that belong to the same group have similar pixel values.

### Implementing Selective Search to generate region proposals

In this section, we will define the *extract_candidates* function using *selectivesearch* so that it can be leveraged in the subsequent sections on training R-CNN and Fast R-CNN-based custom object detectors:

```python
from torch_snippets import *
import selectivesearch

# define the function that takes an image as the input parameter
def extract_candidates(img):
    # fetch the candidate regions within the image using the
    # selective_search method available in the selectivesearch package
    img_lbl, regions = selectivesearch.selective_search(img, scale=200, min_size=100)
    # calculate the image area and initialize a list (candidates) that will
    # store the candidates that pass a defined threshold
    img_area = np.prod(img.shape[:2])
    candidates = []
    # fetch only those candidates (regions) that cover over 5% of the total
    # image area and no more than 100% of the image area, and return them
    for r in regions:
        if list(r['rect']) in candidates:
            continue
        if r['size'] < (0.05 * img_area):
            continue
        if r['size'] > (1 * img_area):
            continue
        candidates.append(list(r['rect']))
    return candidates

img = read('Hemanvi.jpeg', 1)
candidates = extract_candidates(img)
show(img, bbs=candidates)
```

The preceding code generates the following output:

![Screen Shot 2021-08-16 at 3.01.11 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nl7cz9ccrvtsnu4cco0o.png)

The grid in the preceding diagram represents the candidate regions (region proposals) coming from the *selective_search* method. 
Now that we understand region proposal generation, one question remains unanswered: how do we leverage region proposals for object detection and localization?

A region proposal that has a high intersection with the location (ground truth) of an object in the image of interest is labeled as one that contains the object, and a region proposal with a low intersection is labeled as background. In the next section, we will learn how to calculate the intersection of a region proposal candidate with a ground truth bounding box, in our journey to understand the various techniques that form the backbone of building an object detection model.

### Understanding IoU

Imagine a scenario where we have predicted a bounding box for an object. How do we measure the accuracy of our prediction? The concept of **Intersection over Union (IoU)** comes in handy in such a scenario. *Intersection* measures how much the predicted and actual bounding boxes overlap, while *Union* measures the overall space possible for overlap. IoU is the ratio of the overlapping region between the two bounding boxes to the combined region of both bounding boxes.

This can be represented in a diagram as follows:

![Screen Shot 2021-08-16 at 3.07.25 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d0d0zn6d8j85okg7jf5p.png)

In the preceding diagram of two bounding boxes (rectangles), let's consider the left bounding box as the ground truth and the right bounding box as the predicted location of the object. IoU as a metric is the ratio of the overlapping region to the combined region of the two bounding boxes. 
In the following diagram, you can observe the variation in the IoU metric as the overlap between bounding boxes varies:

![Screen Shot 2021-08-16 at 3.09.26 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ynbi6rt8yj18j1hm59ba.png)

From the preceding diagram, we can see that as the overlap decreases, IoU decreases, and in the final case, where there is no overlap, the IoU metric is 0.

Now that we have an intuition of measuring IoU, let's implement it in code and create a function to calculate IoU, as we will leverage it in the sections on training R-CNN and Fast R-CNN. Let's define a function that takes two bounding boxes as input and returns IoU as the output:

```python
# get_iou takes boxA and boxB as inputs, where each box is specified by
# four values corresponding to its corners: [x_min, y_min, x_max, y_max]
def get_iou(boxA, boxB, epsilon=1e-5):
    # epsilon addresses the rare scenario where the union of the two boxes
    # is 0, which would otherwise cause a division-by-zero error
    # calculate the coordinates of the intersection box:
    # x1 stores the left-most x-value and y1 the top-most y-value, while
    # x2 and y2 store the right-most x-value and bottom-most y-value,
    # respectively, of the overlapping part
    x1 = max(boxA[0], boxB[0])
    y1 = max(boxA[1], boxB[1])
    x2 = min(boxA[2], boxB[2])
    y2 = min(boxA[3], boxB[3])
    # calculate the width and height corresponding to the intersection
    # area (overlapping region)
    width = (x2 - x1)
    height = (y2 - y1)
    # if the width or height of the overlapping region is negative, the
    # boxes do not intersect and the area of intersection is 0; otherwise,
    # we calculate the area of overlap the same way a rectangle's area is
    # calculated
    if (width < 0) or (height < 0):
        return 0.0
    area_overlap = width * height
    # calculate the combined (union) area of the two bounding boxes: the
    # sum of the individual areas minus the overlap, which would otherwise
    # be counted twice
    area_a = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
    area_b = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
    area_combined = area_a + area_b - area_overlap
    iou = area_overlap / (area_combined + epsilon)
    return iou
```
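As a quick sanity check, here is a self-contained re-statement of the IoU computation with a couple of worked examples (note that the union is the sum of both box areas minus the overlap, so that the overlap is not counted twice). Two 10x10 boxes offset by 5 pixels overlap in a 5x5 patch, so their IoU is 25 / 175:

```python
# self-contained re-statement of get_iou for a quick check
def get_iou(boxA, boxB, epsilon=1e-5):
    # intersection rectangle
    x1, y1 = max(boxA[0], boxB[0]), max(boxA[1], boxB[1])
    x2, y2 = min(boxA[2], boxB[2]), min(boxA[3], boxB[3])
    width, height = x2 - x1, y2 - y1
    if width < 0 or height < 0:  # no intersection
        return 0.0
    area_overlap = width * height
    area_a = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
    area_b = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
    # union = both areas minus the doubly counted overlap
    return area_overlap / (area_a + area_b - area_overlap + epsilon)

# two 10x10 boxes offset by 5 pixels: overlap = 25, union = 175
print(round(get_iou([0, 0, 10, 10], [5, 5, 15, 15]), 3))  # 0.143
# identical boxes give IoU ~ 1.0; disjoint boxes give IoU = 0.0
print(round(get_iou([0, 0, 10, 10], [0, 0, 10, 10]), 3))  # 1.0
print(get_iou([0, 0, 10, 10], [20, 20, 30, 30]))  # 0.0
```

These boundary cases (full overlap, partial overlap, no overlap) are exactly the situations shown in the preceding diagram.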
sally20921
793,146
HNG Internship - HNGi8: My coding goals for the next 8 weeks.
Zuri training is a free beginner training for complete tech novices handled by veteran tech experts...
0
2021-08-16T06:02:01
https://dev.to/austinug8/hng-internship-hngi8-311f
Zuri training is a free beginner training for complete tech novices, handled by veteran tech experts with track records in the tech industry. The HNG internship is an internship outfit of Zuri training aimed at providing training, support, mentorship, and a collaborative environment to its participants, to help them overcome the hurdles and challenges at the entry level of the tech industry. The HNG internship is well underway in its eighth edition, tagged HNGi8.

Fortunately, I am among the participants in HNGi8, which just kicked off, with goals that I would like to meet on or before the end of week 8 of HNGi8. According to Bruce Lee, "a goal is not always meant to be reached, it often serves as something to aim at". The goal is not to be perfect but to be making significant progress in life and in the course of HNGi8, knowing fully well that achieving my goals is important, but that it is not the most important part of goal setting. The most important thing is that you acquire values that would make you a better person in your craft.

In the course of HNGi8, I will be looking to build strong relationships and networks. HNGi8 is a pool of people with different potentials and skills, from different fields, stacks, and backgrounds, bambinos and veterans of diverse caliber. I will leverage the teamwork and collaboration to build relationships and networks.

Secondly, I will use HNGi8 as a platform to increase, enhance, and sharpen my tech skills and knowledge in FRONTEND WEB DEVELOPMENT. This I will do by being intentional about understanding the concepts behind any tech tool I am exposed to in HNGi8.

Furthermore, a personal website/portfolio gives one an opportunity to showcase one's skills through projects one has worked on and to show prospective employers what one can do. Among my goals in HNGi8 is to create my portfolio, where I will show the skills acquired and the projects I work on in the course of HNGi8. 
In either case, perfection is not necessarily the benchmark, but progress. One step at a time, I want to be able to complete each task that will move me to the next stage, until the final stage. I want to be a finalist by week 8 of HNGi8.

You can follow any of the tutorial links below to catch up on some technologies used in this internship:

JavaScript: https://youtube.com/playlist?list=PLillGF-RfqbbnEGy3ROiLWk7JMCuSyQtX
Figma: https://trydesignlab.com/figma-101-course/introduction-to-figma/ or https://www.headway.io/event-series/figma-tutorials-for-beginners
GIT: https://www.freecodecamp.org/news/the-beginners-guide-to-git-github/
HTML: https://youtu.be/UB1O30fR-EE
Python: https://www.freecodecamp.org/news/the-python-guide-for-beginners/
GO Lang: https://www.educative.io/blog/golang-tutorial
NodeJS: https://www.cloudbees.com/blog/node-js-tutorial

If you would like to know more about this internship, you can click on any of the following links: https://zuri.team or https://internship.zuri.team or https://training.zuri.team.

Thank you for reading through, and for your time and attention.
austinug8
793,290
Astro recipe collection website - Part 5 Hosting on Netlify
We finished our Astro recipe website, and now it's time to publish our fantastic website on the World...
0
2021-08-16T07:03:16
https://daily-dev-tips.com/posts/astro-recipe-collection-website-part-5-hosting-on-netlify/
astro, netlify
We finished our Astro recipe website, and now it's time to publish our fantastic website on the World Wide Web. We'll be using [Netlify](https://www.netlify.com/) as our hosting provider, as it's a super simple system and setup.

## Fixing our Astro source code

Before we do anything, let's make sure we add two steps to make our lives easier. Make sure your master branch is up to date and add a `netlify.toml` file.

```text
[build]
command = "npm run build"
publish = "dist"
```

This file will make sure Netlify takes the default configuration for this project. Next up, create a `.nvmrc` file. This tells Netlify which Node version to use, as by default it will use Node 12, and we want to use 14+ with Astro.

```text
v14.15.1
```

## Hosting Astro on Netlify

1. Head over to Netlify and create an account
2. Press the "New site from Git" button
3. Click on your Git provider and follow the steps
4. In the build settings, use the following settings:

![Netlify Astro build settings](https://cdn.hashnode.com/res/hashnode/image/upload/v1628525316876/8n63l7Yt4.png)

Then click the deploy button, and watch the magic happen! You will now have your website available at the domain provided by Netlify.

[Check out the Astro recipe website on Netlify](https://modest-galileo-019727.netlify.app/)

From here, you can even change this to your own domain.

## Updating Astro code on Netlify

But what if we need to update anything? Don't worry, Netlify makes this super easy, literally super easy! To get a new release online, all you have to do is push your changes to the master branch! (Or whichever branch you set up.)

I happen to like Netlify since it's super easy, and their free tier is massive. But if you'd like to explore other options, check out [Astro's documentation for hosting](https://docs.astro.build/guides/deploy).

### Thank you for reading, and let's connect!

Thank you for reading my blog. 
Feel free to subscribe to my email newsletter and connect on [Facebook](https://www.facebook.com/DailyDevTipsBlog) or [Twitter](https://twitter.com/DailyDevTips1)
dailydevtips1
793,300
BGP (Border Gateway Protocol)
BGP, genellikle Internet Service Provider (İnternet Servis Sağlayıcıları) tarafından kullanılan...
0
2021-08-16T07:19:37
https://dev.to/etkirac/bgp-border-gateway-protocol-6np
bgp
BGP is an advanced routing protocol generally used by Internet Service Providers. In BGP, routers are assigned an autonomous system (AS) number. To build its routing table, BGP computes a metric: the number of autonomous systems traversed on the way to the destination. In other words, BGP uses a Path Vector algorithm. However, unlike EIGRP, which uses a Distance Vector algorithm, BGP can also operate between different autonomous systems. BGP also supports CIDR (Classless Inter-Domain Routing), which allows IP addresses to be summarized.

Path Attributes

Path attributes, PAs for short, provide the granularity and control of routing policies within BGP, and BGP uses PAs for every associated network path. PAs are classified as follows:

Well-known mandatory
Well-known discretionary
Optional transitive
Optional non-transitive

Well-known mandatory attributes must be recognized by all BGP implementations and included in every prefix advertisement. Well-known discretionary attributes must be recognized, but do not have to be included in every prefix advertisement. The attributes marked optional may or may not be included in prefix advertisements. Non-transitive PAs are not shared from autonomous system to autonomous system in advertisements.

In BGP, the routing update for a given path, consisting of the prefix, the prefix length, and the BGP PAs, is called the NLRI (Network Layer Reachability Information).

Loop Prevention

BGP does not keep a complete topology of the network the way link-state routing protocols do. Behaving like a distance vector protocol, BGP guarantees that a path contains no loops. Among BGP's attributes, AS_Path is a well-known mandatory attribute, and it is used as BGP's loop-prevention mechanism. A prefix advertisement carries the autonomous system numbers (ASNs) of the autonomous systems traversed on the way from source to destination. 
Thanks to this advertisement, if a route would pass again through an autonomous system it has already traversed, the router considers the advertisement a loop and discards that prefix.

Address Families

BGP was originally designed to route IPv4 prefixes. Later, however, the Multi-Protocol BGP (MP-BGP) capability was added through an extension called the address family identifier (AFI). Each address family provides a separate database and configuration for each protocol in BGP. This allows different policies to be applied within the same BGP session by using different address families.

Communication Between Routers

BGP does not use "hello" packets to discover neighbors as IGP protocols do; it cannot discover neighbors dynamically. BGP neighbors are identified by their IP addresses. The first condition for establishing a BGP neighborship is that the devices in the underlay network must be able to reach each other; conceptually, this is called "next-hop reachability". The second condition is that TCP port 179 must be open, because BGP uses TCP port 179 for communication.

BGP is divided into two types:

1. EBGP (Exterior Border Gateway Protocol): Used so that routers in different autonomous systems can form neighborships with each other. Routes learned via EBGP are added to the routing table with an administrative distance of 20.
2. IBGP (Interior Border Gateway Protocol): Used so that routers in the same autonomous system can form neighborships with each other. Routes learned via IBGP are added to the routing table with an administrative distance of 200.

BGP Messages

BGP uses four message types for communication:

1. OPEN: OPEN messages are used to establish a BGP neighborship. This message contains the BGP version number, the autonomous system number, the hold time, and so on.
2. UPDATE: UPDATE messages advertise feasible routes, withdraw previously advertised routes, or do both. 
3. NOTIFICATION: NOTIFICATION messages are sent when an error is detected in the BGP session, such as a change in the hold time, a change in neighbor information, or a request to reset the BGP session. They cause the BGP connection to close.
4. KEEPALIVE: KEEPALIVE messages are sent every 60 seconds to ensure that the BGP neighborship is still alive.

BGP Neighbor States

BGP establishes a TCP session with neighboring routers; this is called a "peer". A finite-state machine (FSM) is used to maintain a table of BGP peers and their operational states. A BGP session moves through the following states:

1. IDLE: The IDLE state is the first stage of the BGP FSM. BGP detects a start event and tries to initiate a TCP connection to the BGP peer.
2. CONNECT: In the CONNECT state, BGP initiates the TCP connection. If the three-way TCP handshake completes, BGP sends an OPEN message to the neighbor and then moves to the OPENSENT state. If the handshake does not complete within the allotted time, a new TCP connection is attempted and the state becomes ACTIVE.
3. ACTIVE: In the ACTIVE state, a new three-way TCP handshake is started. If a connection is established, an OPEN message is sent and the state becomes OPENSENT. If this TCP connection fails, the state returns to CONNECT.
4. OPENSENT: In the OPENSENT state, an OPEN message has been sent by the originating router and an OPEN message is awaited from the other router. When this message is received, the two OPEN messages are checked against each other. The following items are examined:

The BGP versions must match.
The autonomous system number in the OPEN message must match what is configured for the neighbor.
Security parameters (such as password and TTL) must be set appropriately.

If there are no errors in the OPEN messages, a KEEPALIVE message is sent and the connection state moves to OPENCONFIRM. If an error is found in the OPEN message, a NOTIFICATION message is sent and the connection state returns to IDLE. If a TCP disconnect message is received, the connection state is set to ACTIVE. 
5. OPENCONFIRM: In the OPENCONFIRM state, BGP waits for a KEEPALIVE or NOTIFICATION message. If a KEEPALIVE message is received from the neighbor, the connection state becomes ESTABLISHED. If the hold time expires or a NOTIFICATION message is received, the state moves to IDLE.
6. ESTABLISHED: In the ESTABLISHED state, the BGP session is established. BGP neighbors exchange routes using UPDATE messages.

BGP Multihoming

One of the best and simplest ways to provide redundancy is to provide a second path. Adding a second path and establishing a second BGP session over that peer connection is called "multihoming", because there is more than one session for learning routes and establishing connectivity. By default, BGP advertises only the best path. This means that only one path is used to forward network traffic to the destination.

Regular Expressions (REGEX)

A regular expression is a construct that allows a sequence of characters, usually letters and symbols, to be expressed more concisely according to a set of rules. Regular expression tokens and their meanings are listed in the following table:

. (Period) -> Matches any single character.
[] (Brackets) -> Matches any one of the characters listed inside the brackets.
^ (Caret) -> Refers to a character at the beginning of the string.
? (Question Mark) -> Used when it is not certain whether the character is present (the character occurs zero times or once).
$ (Dollar Sign) -> Refers to a character at the end of the string.
* (Asterisk) -> Used when the character may occur zero or more times.
+ (Plus Sign) -> Used when the character may occur one or more times.
_ (Underscore) -> Matches a delimiter directly (such as a space, or the beginning or end of the string).
| (Pipe) -> Works like an OR function. 
- (Hyphen) -> Used to specify a numeric range inside brackets.
() (Parentheses) -> Used to specify a sequence.
[^] (Caret in brackets) -> Excludes the characters listed inside the brackets.

In BGP, regular expressions are used either to filter routes for inspection or to apply policy. The examples below show how some of these tokens are used in a BGP context. For example:

If the command " permit ^200_ " is used while applying a policy, only routes received from autonomous system 200 are seen.
If the command " permit _200$ " is used while applying a policy, only prefixes originated by autonomous system 200 are seen.

Route Maps

Route maps provide many different capabilities to routing protocols. In short, a route map is used to filter the network just like an ACL (access list), but it offers additional features beyond an ACL. Route maps are of critical importance for BGP. A route map has four components:

Sequence Number: Determines the processing order of the route map.
Conditional Matching Criteria: Defines prefix characteristics for a specific sequence.
Processing Action: Permits or denies the prefix.
Optional Action: Allows manipulations depending on how the route map is referenced on the router. Actions can include modifying, adding, or removing route characteristics.

To create a route map, the command " route-map {route-map-name} [permit|deny] [sequence number] " is used. Some rules apply:

If a processing action is not specified, prefixes are permitted by default.
If a sequence number is not specified, it defaults to 10 and is incremented automatically from there.

Conditional Matching

Once the route-map components and processing order are determined, this section expresses how a route can be matched. Some examples of conditional matching are listed below with explanations:

match as-path -> Prefixes are matched against regex requirements. 
match ip address -> Prefixes are matched against the IP addresses defined in an ACL.
match ip address prefix-list -> Prefixes are matched against the IP addresses defined in a prefix list.
match local-preference -> Prefixes are matched based on the BGP local-preference attribute.
match metric -> Prefixes are matched based on the metric value.
match tag -> Prefixes are matched based on a numeric tag.

These commands allow multiple matches.

Optional Actions

Additionally, when a prefix is permitted, the route map can modify some attributes of the routes. Some examples of optional actions are given in the table below:

set as-path prepend -> Prepends the autonomous system path for the prefix.
set ip next-hop -> Sets the next-hop IP address for any match.
set local-preference -> Sets the BGP local-preference attribute.
set metric -> Changes the metric value set for the route.
set tag -> Sets a numeric tag on the prefix.
set weight -> Sets the BGP weight attribute.

By default, a route map processes sequence numbers in order and executes the actions specified on the first match, then stops processing. This prevents further sequence numbers from being processed when there is more than one match. To avoid this behavior, the "continue" command must be added.

BGP Route Filtering

Route filtering is a method of selectively determining which routes are advertised to and received from neighboring routers. Route filtering is used to alter traffic flows, reduce memory usage, or improve security. BGP route filtering can be applied specifically to inbound and outbound traffic. Several methods are used to apply this filtering:

Distribute list: A distribute list filters prefixes with a standard or extended access list.
Prefix list: A prefix list permits or denies prefixes in top-down order, similar to an access list. 
AS path ACL/filtering: Allows prefixes from specified autonomous systems to be permitted or denied using regex commands.
Route map: A route map provides a method of conditional matching on a variety of prefix attributes and of performing a variety of actions. The actions could be a simple permit or deny, or could include modifying BGP path attributes.

BGP Communities

BGP communities provide additional capability for tagging routes and for modifying BGP routing policy on routers. A BGP community can be appended, removed, or modified as an attribute as a route travels from router to router.

Well-Known Communities

-Internet: A standardized community for identifying routes that should be advertised on the internet.
-No_Advertise: Routes with this community should not be advertised to any BGP neighbor (iBGP or eBGP).
-No_Export: When a route with this community is received, the route is not advertised to any eBGP peer. Routes with this community may be advertised to iBGP peers.

BGP Path Selection

BGP best-path selection affects how traffic enters and leaves the autonomous system. When BGP attributes are modified on some routers, inbound and outbound traffic is affected as well. Routers always select as the best path the routes whose prefix length matches most specifically; this is called the "longest prefix match". If the match is equal among these routes, the selection is made by examining the following criteria in order:

1. Weight (higher is preferred)
2. Local preference (higher is preferred)
3. Locally originated routes
4. AS_Path (the shortest autonomous system path is preferred)
5. Origin of the route (preference order: IGP, Incomplete)
6. MED value (the lowest metric is preferred)
7. eBGP is preferred over iBGP
8. Lowest IGP metric
9. If both are eBGP, the older route is preferred
10. Lowest router ID
11. Neighbor with the lowest IP address
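The tie-breaking order above can be sketched as a toy model in Python. This is a simplified illustration, not a real BGP implementation: the route dictionaries and their field names are hypothetical, and steps such as locally originated routes, IGP metric, and route age are omitted for brevity.

```python
# simplified sketch of the BGP best-path tie-breakers described above:
# higher weight, then higher local preference, then shortest AS_Path,
# then origin (IGP before Incomplete), then lowest MED, then eBGP over
# iBGP, then lowest router ID
ORIGIN_RANK = {'igp': 0, 'egp': 1, 'incomplete': 2}

def best_path(routes):
    # min() compares the tuples element by element, which mirrors
    # evaluating each tie-breaker in order until one differs
    return min(routes, key=lambda r: (
        -r['weight'],
        -r['local_pref'],
        len(r['as_path']),
        ORIGIN_RANK[r['origin']],
        r['med'],
        0 if r['ebgp'] else 1,
        r['router_id'],
    ))

r1 = dict(weight=0, local_pref=100, as_path=[65001, 65002], origin='igp',
          med=0, ebgp=True, router_id='1.1.1.1')
r2 = dict(weight=0, local_pref=200, as_path=[65003], origin='igp',
          med=0, ebgp=True, router_id='2.2.2.2')
# r2 wins on higher local preference before AS_Path is even compared
print(best_path([r1, r2])['router_id'])  # 2.2.2.2
```

Note how weight, the very first criterion, dominates everything below it: a route with a higher weight is chosen even if its AS_Path is longer and its MED is worse.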
etkirac
793,340
How to prepare for front-end interview for 2022?
As a front-end developer, 80% of the interview is based on JavaScript, then 15% of HTML and CSS. 5%...
0
2021-08-16T08:28:38
https://dev.to/codewithnithin/how-to-prepare-for-front-end-interview-for-2022-4mnm
javascript, webdev, codenewbie, discuss
As a front-end developer, 80% of the interview is based on JavaScript, then 15% on HTML and CSS, and 5% on JS frameworks. Am I right?
codewithnithin
793,350
Call To Action button with pure HTML CSS
Buttons are very important for any type of websites like static website, dynamic website, eCommerce...
0
2021-08-16T08:45:10
https://atulcodex.com/call-to-action-button-with-pure-html-css/
html, css, codenewbie, todayilearned
Buttons are very important for any type of website: static websites, dynamic websites, eCommerce websites, or any other kind of website. Buttons are designed to make someone take action. If you have ever filled in an online registration or signup form, you have definitely seen a button at the end of the form, like a submit or signup button.

## What are CTA buttons

CTA (call to action) buttons are very similar to normal buttons; they are mostly used on landing pages, like the following examples.

![CTA landing page example](https://atulcodex.com/wp-content/uploads/2021/08/CTA-on-landing-page-1024x481.png)

In these example images, you can see CTA buttons on every landing page, encouraging you to perform certain actions like "Generate free invoice", "Let's Chat", "Book a demo", and so on. Call to action buttons are mainly used to prompt specific tasks like "Book now", "Register Now", "Book Your Seat", or "Click to get the free ebook". So let's learn how to make CTA buttons with pure HTML and CSS, without a single line of JavaScript.

![CTA button snapshot](https://atulcodex.com/wp-content/uploads/2021/08/WhatsApp-CTA-Button-demo-1024x484.png)

First of all, open your favorite code editor and write this HTML boilerplate code.

![html boiler plate code](https://atulcodex.com/wp-content/uploads/2021/08/HTML-boiler-plate-code-1024x456.png)

After the HTML boilerplate code, write your page title, import an external CSS file, and, one more thing, import the Box Icons CDN link, which will give us SVG icons.

![base html code for CTA](https://atulcodex.com/wp-content/uploads/2021/08/base-html-code-for-CTA-button-1024x513.png)

Button HTML code

![HTML button code](https://atulcodex.com/wp-content/uploads/2021/08/HTML-button-code-1024x513.png)

That's it for the HTML part; let's start our CSS code. 
![default CSS code](https://atulcodex.com/wp-content/uploads/2021/08/Default-CSS-code-1024x274.png)

An HTML web page always has a default margin and padding, so first of all we will set the margin and padding of the HTML body to 0. `box-sizing: border-box;` is used because padding and borders would otherwise add to an element's declared size, which can make a child element overflow its parent div; with border-box, padding and borders are counted inside the declared width and height, preventing that overflow.

{% link https://dev.to/atulcodex/how-to-send-whatsapp-message-through-html-link-23f8 %}

Now we will style the HTML body to show our CTA button at the center of the web page. We have used CSS flexbox; the height is 100vh, the width is 100%, and the background colour is a light grey.

![CTA button web page styling](https://atulcodex.com/wp-content/uploads/2021/08/CTA-button-web-page-styling-1024x353.png)

So we have designed the body of the web page; now we will work on our CTA button element.

![CTA button CSS code](https://atulcodex.com/wp-content/uploads/2021/08/CTA-button-CSS-code-1024x566.png)

All this code is easy to understand, but don't worry: if you get stuck anywhere in this code snippet, simply comment your queries and I will definitely solve them. After that, we will add a hover effect to our call to action button.

![CTA button hover effect](https://atulcodex.com/wp-content/uploads/2021/08/CTA-button-hover-effect-1024x247.png)

We have just disabled the button's box-shadow, and the cursor becomes a pointer on hover. And one more thing: let's add some font size to the button icon and padding on its right side, because it sits too close to the text.

![box icon size](https://atulcodex.com/wp-content/uploads/2021/08/Box-icon-size-1024x247.png)

So that's how you can design and develop a CTA (call to action) button with just pure HTML and CSS. And this is the exact CTA button, with the source code on CodePen. 
{% codepen https://codepen.io/atulcodex/pen/OJmpRzY %} Finally, congratulations: you have invested your precious time and learned how to make a CTA button with pure HTML & CSS. Again, if you find any issues with the code or my explanation, or if you have any suggestions or ideas, just comment them below; I love solving your queries! Take care and “KEEP CODING”.
atulcodex
807,062
CSS Silly button generator for creative developers
Buttons! Buttons! Buttons! You know them, right? When you are visiting a webpage there are lots (At...
0
2021-08-29T13:53:55
https://www.tronic247.com/css-silly-button-generator-for-creative-developers/
css, showdev, codenewbie
Buttons! Buttons! Buttons! You know them, right? When you are visiting a webpage there are lots (at least one) of buttons. But most of them look boring and the same. Here’s a silly little button generator that might spark your creativity. {%codepen https://codepen.io/Tronic247/pen/MWoaGZv %} Each page load generates one of more than 500 styles for buttons. Some of them are nice, while some are ugly. But you can get inspiration from the generator on how to make a nice (or weird) button. If you have any thoughts, please share them below. 🙂
posandu
807,063
mysql database connection in php
Hi guys, Today, we will see about mysql database connection in php. In the world of web...
0
2021-08-29T13:57:47
https://dev.to/pavankumarsadhu/mysql-database-connection-in-php-lj
mysql, php, webdev, wordpress
Hi guys, today we will look at MySQL database connection in PHP. In the world of web development, data plays a key role: to give users the best experience, we need to keep track of their actions, and for that we definitely need to store data related to them. ```php <?php $servername = "localhost"; //your servername $username = "username"; // your db username $password = "password"; // your db password $database_name = "database_name"; // your db name // connect to the server $conn = new mysqli($servername,$username,$password,$database_name); //check whether it is connected or not. if($conn->connect_error) { echo "Not Connected: " . $conn->connect_error; } else { echo "Successfully Connected"; } ?> ``` Read: <a href="https://wordpress.org/plugins/increase-upload-limit/" target="_blank">How to Increase upload limit upto 1000GB in Wordpress using Increase Upload Limit Plugin for LIFETIME Free of Cost?</a> This is how we connect to a MySQL server using PHP. Thanks for taking your valuable time to read my post; if you have any queries, please leave a comment and I will respond. If you find this post useful, please share it. Social profile: <a href="https://www.linkedin.com/in/pavan-kumar-sadhu-47693818b/">LinkedIn</a> Have a nice day!
pavankumarsadhu
807,066
Animated no-element typewriter
After sharing a typewriter effect with CSS, ChallengesCss beat the drums of "CSS War" and created a different effect, and InHuOfficial hopped in and is preparing a "type-righter"... Here's another entry: an animated no-element cartoon of a typewriter
14,440
2021-08-29T14:22:33
https://dev.to/alvaromontoro/animated-no-element-typewriter-2835
css, html, webdev, art
--- title: Animated no-element typewriter published: true description: After sharing a typewriter effect with CSS, ChallengesCss beat the drums of "CSS War" and created a different effect, and InHuOfficial hopped in and is preparing a "type-righter"... Here's another entry: an animated no-element cartoon of a typewriter tags: css,html,webdev,art cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aoax5irkvxaqdoly6puo.png series: CSS Typewriter Effect --- After sharing a typewriter effect with CSS, @afif beat the drums of "CSS War" and created a different solution, and @inhuofficial hopped in and is preparing a "type-righter"... Here's another entry: an animated no-element cartoon of a typewriter. [And the demo on CodePen](https://codepen.io/alvaromontoro/full/yLXYYZY) (click the "re-run" button at the bottom right corner to see the animation again): {% codepen https://codepen.io/alvaromontoro/pen/yLXYYZY %} There is no HTML tab because there's no HTML code on this pen. The trick? CodePen adds a basic HTML structure, including the `<body>` tag so that we can add styles via CSS to the body, and it will look like there aren't any HTML elements, but there are some... they are just hiding!
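The body-only trick can be sketched in a few lines of CSS. This is not the code from the pen, just a hedged illustration of the idea: the body (plus its `::before`/`::after` pseudo-elements) gives you paintable, animatable surfaces without writing any HTML yourself:

```css
/* No HTML needed: CodePen supplies <html> and <body>, and we draw on them */
body {
  min-height: 100vh;
  margin: 0;
  /* a gradient can "draw" shapes: here, a dark circle on a light page */
  background: radial-gradient(circle at 50% 40%, #333 60px, transparent 61px) #eee;
}

/* pseudo-elements give two more "invisible" boxes to style and animate */
body::before {
  content: "";
  display: block;
  width: 120px;
  height: 20px;
  margin: 0 auto;
  background: #c00;
}
```

The actual pen layers many such gradients and keyframe animations to build the full typewriter cartoon.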
alvaromontoro
807,242
Causes and Symptoms of Conversion Disorder
What is conversion disorder? Classified as a combination of several mental health conditions, including anxiety, panic attacks, phobias, and obsessive-compulsive...
0
2021-08-29T19:11:24
https://dev.to/insanpsikolojisi/konversiyon-bozuklugunun-nedenleri-ve-belirtileri-2hea
What is conversion disorder? The question of what conversion disorder is, is a complex one, since it is classified as a combination of several mental health conditions, including anxiety, panic attacks, phobias, and obsessive-compulsive disorders. The true cause of this disorder is unknown, and it has only been defined in broad terms. Some experts believe it may be related to the way the brain responds to stressful events, or to how it remembers past traumatic experiences. Whatever the specific cause, the overwhelming feeling of anxiety that is part of conversion disorder is common among patients. https://insanpsikolojisi.com/konversiyon-bozuklugunun-nedenleri-ve-belirtileri/
insanpsikolojisi
807,263
#100daysofcode [Day - 07]
Alhumdulillah today is the seventh day of my 100 days of coding challenge, and today i tried to make...
0
2021-08-29T20:05:13
https://dev.to/thekawsarhossain/100daysofcode-day-07-38a6
javascript, programming
Alhamdulillah, today is the seventh day of my 100 days of coding challenge. Today I tried to make a site using the SportsDB API, where you can search for clubs by name and get a club's details by clicking a button. Here is the site link: https://football-club-api.netlify.app/ And here is the code link: https://github.com/thekawsarhossain/100-days-of-code
thekawsarhossain
807,507
Best Low Interest Personal Loans
Pretty much every advance – vehicle advance, home advance, business credit just as close to home...
0
2021-08-30T03:28:00
https://dev.to/jackasimpson1/best-low-interest-personal-loans-59kj
lowinterestpersonalloans, pinjamanperibadi, malaysiapersonalloans
Pretty much every loan – car loan, home loan, business loan, as well as personal loan – carries an interest rate (or ‘profit rate’ if it is an Islamic loan). Interest is calculated periodically and is charged on top of your principal loan amount. Interest can be calculated on a fixed or a floating basis. Some people prefer the former because it helps them budget their monthly expenses, while the latter is usually useful for those with a variable income. Which personal loans in Malaysia carry low interest rates? Citibank is one of the banks in Malaysia that offers personal loans with a low interest rate of 5.33% per year, with a repayment period of up to 5 years. Why do banks charge interest on personal loans? Money lending is a viable business only if the money comes back in a timely and reliable way. Banks charge interest as a “cost” of working with you and to cover the risk of default. The interest rate on a personal loan can be high or low depending on the borrower’s credit history. If you have a bad credit record, you are likely to be charged a higher interest rate than those with good credit ratings, because you are considered a riskier borrower. What factors encourage people to apply for low interest personal loans? Most people look for low interest personal loans because they are affordable. In truth, nobody likes to pay interest, so anything close to a minimal interest rate is very attractive to borrowers. A low interest personal loan gives you room to plan your budget and expenses. 
You could be looking into consolidating all your debts with a low interest loan, or financing funeral costs, settling large hospital bills, or paying for wedding expenses or school fees. How to get a low interest personal loan? Since there are various <a href="https://162.0.233.253/">low interest personal loans</a> available on the market, you should look for the benefits and features that you want, and read the fine print before agreeing to anything. Another way to get a low interest personal loan is to pledge assets as collateral for your loan. This collateral will be used to cover the outstanding loan in case you are no longer able to make your repayments. If you do not have collateral, you can ask a guarantor to back your loan application; the guarantor will then be responsible for the payments if you default. Personal loans with low interest rates will not be given to borrowers with poor credit ratings. Therefore, take the time to improve your credit score by building a stable financial position. Low interest personal loans in 2021: promotions are usually short-lived and come with terms and conditions attached. Even so, you certainly have the opportunity to manage your finances better. In this article, you will find the do’s and don’ts of a personal loan application, based on the experiences of ordinary Malaysians. Read on to see what a personal loan is, how you can apply for one at the lowest cost and highest amount, and what you should do after the loan is approved or rejected. What kind of personal loan should I apply for? 
Now that you’re familiar with some of the basics of personal loans, it’s time to address questions about preferences. By comparing personal loans, you can pick the one that best fits your needs. The interest rate cannot be the only factor when comparing personal loans; your preferences matter too, for example: “What is the difference between secured and unsecured loans?” “Do I want a conventional or a sharia-compliant loan?” “Do I need Takaful or insurance coverage?” “Should I apply for a personal loan at all?” Everybody has goals to reach in their life and, usually, they need money to get started. You can raise money through career-related arrangements, trading products or services, personal savings, and loans. Some people are fortunate enough to fund their goals with one or more combinations of the above. But what about others whose options are limited? Often, they avoid the last option, namely loans. If you look at this rationally, applying for a personal loan makes sense for several reasons: education, investment, emergency cash, business financing, buying property (a house, car, equipment, and so on), and debt consolidation. Personal loans can help with arrears: if you have a lot of debts with different amounts, banks, due dates, tenures, and interest rates, it can be a tough task to keep track of your finances. Debt consolidation loans are great because they merge your debts into a single facility. You can then repay at a lower interest rate over a longer period, reducing your monthly installments and leaving you more disposable income. 
Personal loan application guide: during a personal loan application, there are several things you need to think about, for example the amount you can apply for, the amount the bank can actually lend, the documents required, and where to apply for a low interest personal loan. How much can I borrow? The typical loan size you can borrow from a bank ranges from RM5,000 to RM200,000. This amount is also known as the principal sum. After you have settled on the size of your loan, you should choose a tenure that suits your budget. INrushTime has a personal loan calculator where you can work out your monthly repayment simply by entering your loan size, monthly income, and preferred tenure.
jackasimpson1
807,553
example
A post by surajpatil510
0
2021-08-30T06:02:00
https://dev.to/surajpatil510/javascript-behind-the-scene-3p64
surajpatil510
807,562
Answer: How to see docker image contents
answer re: How to see docker image...
0
2021-08-30T06:26:25
https://dev.to/icy1900/answer-how-to-see-docker-image-contents-27d5
docker
{% stackoverflow 46526598 %}
icy1900
807,777
Thoughts On Types
This article was initially published on my website. Introduction I've used multiple...
0
2021-08-30T11:10:50
https://sayedhajaj.com/posts/thoughts-on-types
programming
This article was initially published on [my website](https://sayedhajaj.com/posts/thoughts-on-types). ## Introduction I've used multiple programming languages, each with their own ideas of how types should be handled. Each of these approaches created some issues and solved others, and my goal in this article is to describe these pros and cons. Rather than describe them in a list format, I will try to examine what we do as programmers and how various approaches to handling types can help or hinder this. I'm writing my own programming language. I'll announce it here later. One of my goals with it is to analyse what I do in practice, and add in what I feel is missing. The thoughts I've expressed in this article have gone into my own programming language. ## Every program has types, whether you write them or not If you pause the execution of a program whilst it's evaluating an expression, that expression will have a type. There will be valid things you can do to it and things you can't. If an expression evaluates to an integer, you can use it for arithmetic operations you could not perform on strings. You might think that using static types is redundant in this case, but actually this makes them all the more valuable. It means that whether or not you're specifying types, you need to think about them to ensure that your program does not crash. If your compiler does not enforce this, then you will need to be extra vigilant to avoid run-time errors. Whenever you deal with a variable, you have to read the implementation and usage to make sure you are treating it in a valid way. With static types, your IDE or editor can notify you if you are writing invalid code. Static types are also a good form of documentation, because the compiler makes sure they are consistent with your program. ## Data structures make algorithms obvious If you have an array you will probably have a loop. If you have a tree you will probably have a recursive function. 
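As a small illustration of this point (my own sketch in TypeScript, not code from the article): the shape of the data tells you the shape of the code. An array invites a loop; a recursive type invites a recursive function.

```typescript
// A recursive type naturally suggests a recursive function.
type Tree = { value: number; children: Tree[] };

// Array: a loop falls out almost automatically.
function sumArray(xs: number[]): number {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}

// Tree: the recursion mirrors the type's own self-reference.
function sumTree(t: Tree): number {
  return t.value + t.children.map(sumTree).reduce((a, b) => a + b, 0);
}

const tree: Tree = {
  value: 1,
  children: [{ value: 2, children: [] }, { value: 3, children: [] }],
};
console.log(sumArray([1, 2, 3])); // 6
console.log(sumTree(tree)); // 6
```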
For any data structure, there are more and less natural ways of processing it. The first implication of this is that it's important for a programming language to make it easy to define different kinds of data structure. The second implication is that clearly communicating the data structure makes it easier to write the code to process it. If you can clearly see that a variable is an array, then you have a good idea of what kind of code you will write next. If you don't, and instead have to read implementation code from multiple files, then you will have a harder time figuring out what you should do with the variable. ## Some static types have a lot of boilerplate Java is, or was, a good example of this. A lot of Java code looks like this: ```java BufferedReader reader = new BufferedReader(new InputStreamReader(System.in)); ``` The compiler and IDE can easily tell that the variable refers to a BufferedReader instance from the fact that it instantiates a BufferedReader. There's no need to repeat that information. Some statically typed languages have type inference. This allows you to write something like this: ```kotlin val reader = BufferedReader(InputStreamReader(System.`in`)) ``` (This is Kotlin; note that `in` is a keyword in Kotlin, so it has to be escaped with backticks.) If you want to, you can still provide type annotations. The compiler still makes sure every variable and expression has a type and that only valid operations are called on the associated types. ## Many typed programming languages disallow programs that won't crash Will this program crash: ```javascript var item = "foo" print(item) item = 4 print(item * 2) // assume print converts automatically to string ``` No, it won't crash. By the time you multiply the variable by 2, the value contained in it is an integer, so the operation is valid. But many statically typed languages won't let this compile. Programmers who are used to dynamic types feel that valid, useful ways of writing a program are closed off for no good reason in statically typed languages. 
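To make the complaint concrete, here is roughly what the rejection looks like in TypeScript (a sketch of mine, not from the article):

```typescript
let item = "foo"; // type is inferred as string, even without an annotation

// item = 4;      // rejected at compile time:
                  // Type 'number' is not assignable to type 'string'.

// The reassigning program never crashes at runtime, but the compiler
// still refuses it, which is exactly what frustrates dynamic-typing fans.
const doubled = item.length * 2; // string operations remain available
console.log(doubled); // 6
```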
The way some statically typed languages have gotten around this is by having more flexible type systems, capable of representing more valid programs. For example, this can be dealt with by using union types: ```typescript let item: string | number = "foo" // now the compiler will tell you to check it is a number before multiplying it. ``` ## Powerful type systems have issues JavaScript is a very dynamic language. It's common for JavaScript users to iterate through all of the fields in an object to create a new one. In order to represent that in a typed programming language such as TypeScript, you often need a powerful type system. As a result, [the type system in TypeScript is Turing complete](https://github.com/microsoft/TypeScript/issues/14833). This is also the case for a few other languages. In TypeScript, for example, you can use the type system to make sure that a number is prime. This is cool, although it has problems. It means that you cannot tell ahead of time whether type checking will complete (the halting problem). In my opinion, this just pushes the problem developers deal with back a step. Instead of being able to glance at the type declaration and understand how you should write your program, you now need to act like a computer and follow along. ## A smellier alternative What if you could tell your compiler that in this particular instance, you know better? You could stick to using simple type system features, and force the language to do what you think will work in certain circumstances. For example, in Kotlin and TypeScript you can assert that a value is not null, or force a cast. This is generally considered a bad practice, and should be used sparingly, but it can be useful. ## Maybe optional typing? What if invalid types generated warnings rather than errors, so you can still deal with them but you don't have to make your programs awkward in areas that aren't amenable to static types? 
I think this is helpful for transitioning from dynamic types, and it still provides some of the benefits of static typing. ## Related rant on null The Java type system is lying to you: every variable could be referring to null. Tony Hoare referred to null references as his billion dollar mistake. As a result of this mistake, variables in Java often need null checks before they're used, and one of the most common causes of runtime exceptions in Java is the null pointer exception. Because of this, people do all the extra work associated with static types, but they don't really get all of the benefits, since they have to manually check that every variable really is the type it says it is. One way to get around this is non-nullable types. This way, null cannot be assigned to most variables by default, unless null is specified as a valid option; if it is, then the compiler requires null checks. This can also be achieved with union types. ## Engineers and tradeoffs So it looks like each avenue presents some issues. Some avenues try to get the best of both worlds, but nothing seems to be capable of avoiding all issues. So then the question is what to pick, and when. I think it depends on the kind of project. In a large project where you are working with multiple people, I think a slightly flexible statically typed language is the best option. It makes sure that everyone is on the same page regarding what every function expects, and it reduces runtime errors and cognitive load for developers. Whilst it might be tempting to go for something less stringent, I think it's ultimately not a good idea for a large project that's expected to last and work.
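To close with a concrete sketch of the null-handling approach described above (my own illustration, assuming TypeScript with `strictNullChecks` enabled; the `greet` function is made up):

```typescript
// null must be declared explicitly as part of the type...
function greet(name: string | null): string {
  if (name === null) {
    return "Hello, stranger";
  }
  // ...and once checked, `name` is narrowed to plain string,
  // so string methods are safe without further runtime checks.
  return "Hello, " + name.toUpperCase();
}

console.log(greet(null)); // Hello, stranger
console.log(greet("ada")); // Hello, ADA
```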
sayedhajaj