1,188,940
What's a cool project to make?
Just finished a project (a webpage) and I want to make something different. Does anyone have an idea?
0
2022-09-09T11:02:55
https://dev.to/pleasebcool/whats-a-cool-project-to-make-445
Just finished a project (a webpage) and I want to make something different. Does anyone have an idea?
pleasebcool
1,188,993
Yet another implementation for Slack Commands
We're using Slack for communication within our team. Using Slack slash commands, we also handle...
0
2022-09-19T13:27:48
https://dev.to/wajahataliabid/yet-another-implementation-for-slack-commands-4bmd
python, devops, api
We're using Slack for communication within our team. Using Slack slash commands, we also handle common day-to-day tasks like triggering deployments on different servers based on requirements. For a long time that remained our primary use case, so we had a single AWS Lambda function taking care of the job. However, as the team and business grew, we felt the need to handle several other tasks this way. Thus we created a few more Lambda functions that handled different tasks, but the behavior across the different Lambda functions wasn't consistent, since every Lambda was maintained separately.

## General Idea

We decided to create a single Lambda function that would be solely responsible for handling all the slash-command requirements. The commands follow the same structure as normal shell commands. Some examples:

| Command | Description |
| ----------- | ------------------------------------------------------------ |
| help | List all the slash commands with usage |
| help deploy | Show help text for the deploy command, including all the options and flags it takes |
| deploy | Deploy for a specific client |

Assuming our slash command is `/example`, some invocations look like this:

```
/example help
/example help deploy
/example deploy --client test-client --branch master
```

We can limit which users can issue a command, as well as which channels the command can be issued from, with a very simple configuration:

```json
{
    "command": "deploy",
    "users": ["example.user"],
    "channels": ["channel_id"]
}
```

## Implementation

We implemented this solution in Python with the help of some useful packages from the open source community.
Some of these packages are:

- [FastAPI](https://fastapi.tiangolo.com/): for implementing the APIs
- [Mangum](https://mangum.io/): for integrating AWS Lambda and API Gateway with FastAPI
- [Python Jenkins](https://python-jenkins.readthedocs.io/en/latest/): for interacting with the Jenkins server that handles deployments
- [argparse](https://docs.python.org/3/library/argparse.html): for argument parsing

The implementation works as follows:

- The user sends a command via Slack
- The Lambda function verifies:
  - that the command exists
  - that the user is allowed to run the command
  - that the command is allowed to be run from the specified channel
  - that the command has all the required parameters, and responds appropriately
- It executes the command and sends the response back to the Slack channel
- Long-running commands (e.g. database backup or restore) are offloaded to an ECS container

The general architecture of this solution is as follows:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8rczuzsg7f1cbaz0p22f.png)

This lets us implement new Slack commands really quickly. All the error handling is shared across the different commands, so we don't need to handle errors separately for every new command.
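The article does not include the dispatcher code itself, so here is a minimal sketch of how argparse can back such a slash-command handler. All names (`CONFIG`, `handle`, the command and user strings) are hypothetical, and the FastAPI/Mangum wiring and Jenkins call are omitted:

```python
import argparse
import shlex

# Hypothetical per-command configuration, mirroring the JSON shown above.
CONFIG = {
    "deploy": {"users": ["example.user"], "channels": ["channel_id"]},
}

# Build an argparse parser for the slash-command text.
parser = argparse.ArgumentParser(prog="/example", exit_on_error=False)
subparsers = parser.add_subparsers(dest="command")
deploy = subparsers.add_parser("deploy")
deploy.add_argument("--client", required=True)
deploy.add_argument("--branch", default="master")

def handle(text: str, user: str, channel: str) -> str:
    """Parse the raw slash-command text and enforce the per-command config."""
    args = parser.parse_args(shlex.split(text))
    rules = CONFIG.get(args.command, {})
    if user not in rules.get("users", []):
        return f"{user} is not allowed to run {args.command}"
    if channel not in rules.get("channels", []):
        return f"{args.command} cannot be run from this channel"
    # A real implementation would trigger Jenkins here via python-jenkins.
    return f"deploying {args.client} from branch {args.branch}"

print(handle("deploy --client test-client", "example.user", "channel_id"))
```

Because argparse already produces usage and error messages, the `help` and missing-parameter cases come nearly for free, which is presumably why the authors chose it.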
wajahataliabid
1,189,222
Choosing a PHP Framework over Core PHP
Some of you may be saying, it is quite obvious that using a PHP framework is better than using core...
0
2022-09-12T11:46:26
https://dev.to/kansoldev/choosing-a-php-framework-over-core-php-4i7g
php, frameworks, programming
Some of you may be saying that it is quite obvious that using a PHP framework is better than using core PHP, and that even debating this is a waste of time. You are right to an extent, because it looks that way at first glance. In this article I want to discuss some of the reasons why many developers and companies choose to build web applications and websites with a PHP framework rather than core PHP.

We software developers know that building software is all about choosing the right tools for the job. It is important to compare and contrast your options before going for what seems like the obvious best solution.

I have noticed how some developers encourage people who are new to programming to learn a framework without learning the language the framework was built on. This is not right, because the person would most likely find it hard to understand what the framework is doing. Writing code I really understand makes programming more fun and interesting, and I guess you agree with me, because you can manipulate the code the way you want to.

### So who should you listen to?

The choice is yours. Experienced developers encourage new devs to learn frameworks because of the many benefits they provide when building an application, but the same can be done by coding from scratch, even though it might take more time. In fact, there are developers who have built their own structure from the ground up and have been using it for years to build web applications and websites for people. But it takes time to reach that kind of level, and if you plan on reaching it, it would be better to build your own code structure as a side project, which gives you the opportunity to learn what works and what doesn't.

### Reasons for choosing PHP frameworks over core PHP

1. Core PHP is the PHP language itself.
With core PHP you have more control over your code: you create the structure for your project the way you want it, write helper functions, create config files and so on, without being tied to any particular style. Having this control is great, but with great control comes great responsibility. One thing to keep in mind is that you are handling and managing everything yourself, which can be quite daunting for a large application. This is why experienced developers do not recommend building real-world projects with core PHP: it can be very hard to maintain, especially if there is no proper documentation.

2. Another issue with core PHP is the difficulty of collaborating with other developers. Developers would first have to understand your code structure, which can take time if the structure is really complicated. Having a well-known code structure that even a new developer can understand improves the productivity of the whole team.

3. **Having to reinvent the wheel.** Who really wants to take the time to code their own database abstraction layer? Any hands? Didn't think so 😂😂. Writing code for a problem that has already been solved, tested, and approved by other developers slows down the development of an application by a huge amount of time. For instance, trying to build functionality that makes PDF files downloadable on a website is a waste of time when a package like [mPDF](https://github.com/mpdf/mpdf) already does that. Understand that **you really don't need to create everything from scratch**, and you will be more productive as a developer. It took me a long time to understand this; I always thought I would look like an amateur if I couldn't build things from scratch.
From my experience in this field, it is good to build something from scratch when you are learning something new or building a side project; that gives you the opportunity to understand what you are doing on a much deeper level. But when it comes to building a real-world project, please don't try to reinvent the wheel. It would waste both your time and the time of the client or company, unless you already have a good code structure in place.

4. The last reason concerns PHP frameworks themselves: they are the option most developers prefer, and for good reasons. Frameworks provide the structure you need; the foundation for a standard web application has been laid out, and the only thing you need to do is build on top of that foundation. With frameworks you don't have to reinvent the wheel, as common pieces have already been written (helper functions, classes, database configuration and so on) to give the developer a head start. Frameworks are easier for many developers to use and manage because of how they are structured and the support they provide. No matter how a developer writes code with a framework, any other developer will be able to understand what is going on, because the structure is common. This makes collaboration and prototyping easier, faster, and more efficient. The only downside of some frameworks like Laravel is how heavy they can be, but nothing in the software industry is 100% perfect. The most important question is: does it get the job done in the most efficient way possible? If the answer is yes, then you should be okay with what you have.

## Conclusion

Understanding which option to choose is very important, so that you know why everyone goes for the option they choose. My advice is to start from the basics and understand the language properly before learning how to use frameworks, so that you appreciate frameworks more.
kansoldev
1,189,385
Useful Tech Stack - App, Frontend, Backend, Server
This is my "complex" techstack and now in production. Principles of clean code, use of the best...
0
2022-09-09T22:54:08
https://dev.to/dennisglowiszyn/usefull-tech-stack-app-frontend-backend-server-1c4
techstack, webdev, programming, beginners
This is my "complex" tech stack, now in production. Principles of clean code, using the best tool in each area, and a clear separation of frontend and backend are important to me.

Server (Backend)
- Apache2
- PHP
- MySQL
- Symfony Framework (API)

Frontend (Web)
- Twig
- HTML, SCSS, JS
- Uses the API
- Standalone

Frontend (App)
- Android (Native): Java -> WebView
- iOS (Native): SwiftUI

Now the question: why so complicated? Answer: I think it makes sense to choose the best tool for each task. Client and server are in a perfect workflow.
dennisglowiszyn
1,189,470
On 3 - 5: Unsigned Integer Arithmetic in C
Would you believe that 3 - 5 can be positive? In C, if the operands are unsigned integers, then 3 - 5 is positive. This is because unsigned integers follow the rule of reduction modulo $2^n$, so 3 -...
0
2022-09-10T04:04:28
https://dev.to/codemee/cong-3-5-tan-c-yu-yan-wu-hao-zheng-shu-de-yun-suan-400i
c, cpp
Would you believe that `3 - 5` can be positive? In C, if the operands are unsigned integers, then `3 - 5` is positive. This is because unsigned integers [follow the rule of reduction modulo $2^n$](https://en.cppreference.com/w/c/language/operator_arithmetic#Overflows): the mathematical result of `3 - 5` is -2, but since an unsigned integer cannot be negative, the result is reduced according to that rule. Here is a [simple example](https://www.online-ide.com/PsifvoE2hx):

```c
#include <stdio.h>

unsigned char a = 5, b = 3, c;

int main() {
    c = b - a;
    printf("%d\n", c);
    return 0;
}
```

The output is:

```
254
```

This is because `unsigned char` is 8 bits, so by the rule the result is `-2 % 256`, which is 254.

By the same token, if an unsigned integer overflows, the same rule applies, as in [this example](https://www.online-ide.com/JSfQlguMp0):

```c
#include <stdio.h>

unsigned char a = 5;

int main() {
    a += 256;
    printf("%d\n", a);
    return 0;
}
```

The output is:

```
5
```

because `(5 + 256) % 256` is still 5.
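The modular rule is easy to check outside of C as well. For instance, Python's `%` operator always returns a non-negative remainder for a positive modulus, so both results from the C programs can be reproduced directly (this is a sanity check, not part of the original article):

```python
# unsigned char is 8 bits, so values are reduced modulo 2**8 = 256
BITS = 8
MOD = 2 ** BITS  # 256

# 3 - 5 is mathematically -2; as an unsigned 8-bit value it wraps to 254
print((3 - 5) % MOD)    # 254

# 5 + 256 wraps back around to 5
print((5 + 256) % MOD)  # 5
```

Note that this works because Python's `%` follows the same "reduce into [0, MOD)" convention that C uses for unsigned arithmetic; C's own `%` on signed ints would give -2, not 254.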
codemee
1,189,648
Welcome to my creative space! A space for nothing but fun programming projects.
This space is to document my voyage back into tech, and some more... Previously I worked in the tech...
0
2022-09-10T11:40:13
https://dev.to/masaix/welcome-to-my-creative-space-a-space-for-nothing-but-fun-programming-projects-1o6d
This space is to document my voyage back into tech, and some more... Previously I worked in the tech sector, with past employment at '[hosteurope](www.hosteurope.com)', '[ionos](www.ionos.de)' and '[iomart](www.iomart.com)'. These roles were as a Network Engineer and System Administrator, maintaining Linux & Unix servers. Other tasks included organizing, installing and supporting organizations' computer systems: the LAN, WAN and other data communication systems (firewalls, routers, etc.). Then about 2 years ago I decided to take a break from the daily grind and spend time travelling the world. I visited Tanzania, Ethiopia, UAE, Egypt and China.

This blog is a space to have fun, to explore intellectual curiosities, and to try to experience that creative exhilaration we can feel when building something from scratch, seeing it develop from a simple idea into something (hopefully!) useful. I understand software development is a vast field, and one that is creative, demanding and extremely rewarding. It is something I am interested in, and I like the field's cerebral nature. I am mastering full-stack web development (I notice the sector has changed enormously) and will continue into software engineering. Learning and growing technically is humbling and something to look forward to. I also intend to assist others; it is by teaching others that we learn most effectively.

*It is by teaching that we teach ourselves - Henri Frederic Amiel*

That's all for today!
masaix
1,189,656
N-Queen Visualization made with React as a tribute to Queen Elizabeth
I have a knack for making visualizations of data structures and algorithms to understand them...
0
2022-09-10T12:03:40
https://dev.to/ritamchakraborty/n-queen-visualization-made-with-react-as-a-tribute-to-queen-elizabeth-2ep9
react, algorithms, vite
I have a knack for making visualizations of data structures and algorithms to understand them correctly. Recently I worked on a visualization of the N-Queen problem, and I happened to complete the project on the same day Queen Elizabeth passed away. So I dedicated the application to her name. Check out the links if interested:

- [GitHub](https://github.com/RitamChakraborty/n_queen_visualization)
- [Application](https://ritamchakraborty.github.io/n_queen_visualization)
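For readers unfamiliar with the problem: the classic backtracking algorithm that N-Queen visualizations like this one animate can be sketched as below. This is a minimal generic version, not code from the linked repo:

```python
def solve_n_queens(n: int) -> list[list[int]]:
    """Return every valid placement as a list of column indices, one per row."""
    solutions: list[list[int]] = []
    cols: list[int] = []  # cols[r] = column of the queen placed on row r

    def safe(row: int, col: int) -> bool:
        # A new queen conflicts with a placed one if they share a column
        # or a diagonal (equal row and column distance).
        return all(
            c != col and abs(c - col) != row - r
            for r, c in enumerate(cols)
        )

    def place(row: int) -> None:
        if row == n:
            solutions.append(cols.copy())
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)
                place(row + 1)
                cols.pop()  # backtrack and try the next column

    place(0)
    return solutions

print(len(solve_n_queens(4)))  # 2 solutions on a 4x4 board
```

A visualization essentially draws the board state at each `append`/`pop`, which is what makes the backtracking steps easy to follow.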
ritamchakraborty
1,189,754
How to find React Native developers?
How and where to find React Native developers? You might be a recruiter or a company owner...
0
2022-09-10T15:08:57
https://dev.to/developerbishwas/how-to-find-react-native-developers-4o2
javascript, reactnative, react, mobile
# **How and where to find React Native developers?**

You might be a recruiter or a company owner who wants to find React Native developers to hire. In short, you can find React Native developers by doing the following:

* Create a job post on a job board and wait for React Native developers to apply.
* Search for React Native developers on social media: GitHub, LinkedIn, Reddit, Twitter, and StackOverflow.
* Search for them on Google.
* Use Fiverr, Upwork, Freelancer, PeoplePerHour, Guru, Toptal, Codeable, and many other freelance websites.
* Search for them on YouTube.

# **What are the main things that a React Native developer should know?**

A React Native developer should know the following things:

# **Technical skills**

1. JavaScript, CSS, HTML, functional programming
2. React fundamentals: components, props, state, JSX, lifecycle methods, the virtual DOM, and more
3. How to use the React Native CLI and Expo CLI
4. How to use React Native components, APIs, and modules
5. How to use React Native debugging tools

# **Soft skills**

1. Communication skills: React Native developers have to communicate with other developers, designers, and clients.
2. Teamwork skills: a developer with teamwork skills can work well and perform better.
3. Problem-solving skills: React Native developers have to solve problems they face while developing React Native apps.
4. Feedback skills: they should be able to give constructive feedback to other developers.
5. Tolerance and appreciation for other people's opinions: they should be able to tolerate, understand, and act on other people's opinions and criticism.
6. Ability to work under pressure: React Native developers have to work under pressure to meet deadlines.
# **Top 5 React Native developers**

Here are the top 5 React Native developers that I've handpicked for you:

# **[zeeshanaslam105](https://go.fiverr.com/visit/?bta=255123&brand=fiverrhybrid&landingPage=https%3A%2F%2Fwww.fiverr.com%2Fzeeshanaslam105%2Fcreate-react-native-mobile-application-for-ios-and-android)**

zeeshanaslam105 is a React Native developer with 340 ratings on Fiverr, all 5 stars. He uses technologies like React Native, Expo, Firebase, REST APIs, Google Maps, and Google APIs. He is a Level 2 seller on Fiverr, where you can hire him for $250 per project, with a lot of features and offers included.

# **[faheem25](https://go.fiverr.com/visit/?bta=255123&brand=fiverrhybrid&landingPage=https%3A%2F%2Fwww.fiverr.com%2Ffaheem25%2Ffix-wordpress-issues-errors-and-redesign-wordpress-site)**

faheem25 is a React Native developer with 81 ratings on Fiverr, all 5 stars. He is a Level Two seller and currently has 2 orders in progress. He says he has skills and expertise in Node.js, React Native, JavaScript, Angular, various frameworks, and WordPress websites. He is highly responsive in communication and always available to discuss the project. You can hire him on Fiverr for $225 per project: developing a React Native app, backend server, mobile operating system support, and database.

# **[anjum_multiware](https://go.fiverr.com/visit/?bta=255123&brand=fiverrhybrid&landingPage=https%3A%2F%2Fwww.fiverr.com%2Fanjum_multiware%2Fcreate-android-and-ios-application-in-react-native)**

anjum_multiware is a React Native full-stack mobile developer with 24 reviews on Fiverr, all 5 stars. His basic gig is just $5, and he is a growing Level 1 seller. He says he has more than 6 years of experience in the software industry focused on mobile-driven solutions, and can architect, design, develop, manage, and publish your app within days.
He has mobile development skills like React Native, Android, iOS, Realm, smart watches, plugin development, and IoT devices via Bluetooth and Wi-Fi. With backend skills like Node.js, MongoDB, CosmosDB, and MySQL, he can build your backend server and APIs for your mobile app. Have a great time hiring him.

# **Conclusion**

The last thing I want to say is that you can find React Native developers by searching on Google, YouTube, and social media. That's it for today. I hope you enjoyed this article. If you have any questions, feel free to ask in the comments section below; I will be happy to answer. If you want to learn more about web development, check out my other articles on [Web Dev](https://blog.webmatrices.com/tag/web-development/). See you in my next article. Bye!
developerbishwas
1,190,117
Podman (An alternative to Docker !?!) 🦭
Exploring new tech What is Podman? 🤔 Podman is a daemonless, open source,...
0
2022-09-11T09:16:43
https://dev.to/devangtomar/podman-an-alternative-to-docker--4n0e
#### Exploring new tech

#### What is Podman? 🤔

![Podman logo](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621354394/y2heDM_cb.jpeg)

Podman is a daemonless, open source, Linux-native tool for finding, running, building, sharing, and deploying applications that use Open Container Initiative (OCI) containers and container images.

> It operates very similarly to Docker and is simple to set up.
>
> Containers can run as root or in a rootless manner.

Documentation: [https://podman.io/](https://podman.io/)

### How is it different from Docker? 🐳

![Podman VS Docker](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621356180/UzyCvGA7M.jpeg)

The primary distinctions between Docker and Podman (the Pod Manager tool) are as follows:

- **Daemonless**: _This distinguishes it from Docker, which executes operations via the Docker daemon. Podman is lightweight and does not require an instance to be active at all times in order to execute containers._
- **Rootless**: _Podman can be operated as root or as a non-root user. We can run Podman containers as non-root users while remaining compatible with container runtimes._
- **Pods**: _The word "pods" was coined by Kubernetes. Pods are groups of containers that run as close together as feasible. Podman includes this capability by default for running many containers concurrently._

### Architecture 🏛

![[Architecture] Docker VS Podman](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621357724/Aulls7TgO.png)

> **_Many individuals still refer to a container as a "Docker container."_** This does not accurately represent the existing container ecosystem. Docker generates OCI container images that may be used with different runtimes. _Kubernetes is one such example, as is Podman._

As a result, the essential functionality of Podman and Docker overlaps.
Both generate images that may be used to run containers by the other. On top of the basic containerization functionality, the two runtimes provide their own specializations.

### Setup

Since it is OCI-compliant, Podman can be used as a drop-in replacement for the better-known Docker runtime. Most Docker commands can be directly translated to Podman commands.

> _Simply put_ **_alias docker=podman_** _on your machine and you're set._

Installation of Podman is pretty straightforward on a Mac. If you're on another OS, follow this [documentation](https://podman.io/getting-started/installation). WSL on Windows works: request admin access from the self-service app and then open a terminal.

The Mac client is available through [Homebrew](https://brew.sh/):

`brew install podman`

![Installing podman on MacOS](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621360144/_oZnhQx-8q.png)

To start the Podman-managed VM:

`podman machine init`

and then:

`podman machine start`

![Starting podman machine](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621362263/XRFjIYp5B.png)

Once the installation is complete, you can verify the installation information using:

`podman info`

![podman info](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621364957/eT3S6g3Q2.png)

### TLDR 📌

As stated before, Podman can stand in for Docker behind a simple alias. Whatever commands you can execute with Docker, you can execute with Podman.
#### Building an image

`podman build -t containername `

#### Dockerfile:

![Dockerfile](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621367528/ZqaRJJE7f.png)

#### Image building via the command:

`podman build`

#### Push an image

`podman push registryname/imagename:tag`

#### List images

`podman images`

> Command screenshot:

![podman images](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621370128/r3FrNemN0.png)

#### Run a container

`podman run imagename`

> Command screenshot:

![Podman run](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621372570/MfON-isSO.png)

#### Remove an image

`podman rmi imagename`

> Command screenshot:

![Podman container](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621375020/WOmGz_xTa.png)

#### Show Podman process status

`podman ps -a`

> The rest of the supported commands are shown below:

![Podman help](https://cdn.hashnode.com/res/hashnode/image/upload/v1661621381320/kgGEzVmxU.png)

If you use docker-compose, there is an alternative: [podman-compose](https://github.com/containers/podman-compose).

> **_Note:_** _docker-compose was not yet natively supported by Podman at the time of my testing (Podman release v4.0.0), and the alternative available is very unreliable._

The setup is straightforward. Install Python if you don't already have it, and then podman-compose:

`brew install python@3.10`

`pip3 install podman-compose`

Podman in conjunction with podman-compose works well. I've had trouble getting podman-compose working on a Mac, because there isn't an official installation document other than the one mentioned above.

### Conclusion 💁🏻

Podman wraps Docker's capabilities in a lightweight container runtime that can be used in daemonless and rootless modes.

### Limitations of Podman 🥲

- Linux-based.
- No support for Windows OS-based containers _(now supported with the help of WSL)_.
- The docker-compose component is still unreliable.
- No GUI, unlike Docker Desktop or Rancher Desktop.
- A new product, with bugs and with features still to come.
- No Docker Swarm.

That being said, Podman is still a new technology that is improving, and it may be best to wait and see until there is community acceptance of Podman and it becomes a more developed and reliable tool. You can certainly experiment with it on your local workstation and learn more about it, but bringing it into your production system may take some time. Please share your opinions about Podman and this topic in the comments section.

### Let's connect and chat! Open to anything under the sun 🏖🍹

Twitter: [devangtomar7](https://twitter.com/devangtomar7)

LinkedIn: [devangtomar](https://www.linkedin.com/in/devangtomar)

Stackoverflow: [devangtomar](https://stackoverflow.com/users/8198097/devangtomar)

Instagram: [be\_ayushmann](https://instagram.com/be_ayushmann)

Medium: [Devang Tomar](https://medium.com/u/8f5e1c86129d?source=post_page-----e42119a306ca--------------------------------)
devangtomar
1,256,779
What is Decision Intelligence – Here’s What You Should Know
What is Decision Intelligence? Gartner defines decision intelligence as “a practical...
0
2022-11-15T03:49:24
https://www.purpleslate.com/thoughts/think-about-how-people-think-decision-intelligence/
ai, news
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8u5k3nisax7flvi7if6i.png)

## [What is Decision Intelligence?](https://www.purpleslate.com/thoughts/think-about-how-people-think-decision-intelligence/)

Gartner defines decision intelligence as "a practical domain framing a wide range of decision-making techniques bringing multiple traditional and advanced disciplines together to design, model, align, execute, monitor and tune decision models and processes." Decision intelligence is one of the top three technology trends that will create a big impact on the markets by the end of 2023, per Gartner's 2022 report. "It's not a technology," says Gartner, "It's a discipline made of many different technologies."

The advent of big data and its allied science and technology paved the way to derive information from the humongous data made available from various avenues. But the question arises: with all this quantitative data, are we still able to get quality information? The jazzy dashboards, charts, and digital visualizations from business intelligence (BI) go overboard and become information overload for strategic decision-makers. The requirement is to get the right data; "right" is the keyword and qualifier, and it depends on the business, the ever-changing ecosystem, and so on.

## Understanding the Importance of Data-Driven Decisions

Even when quality or "right" information is available, the decision finally has to be made by one or more humans. The final choice or decision is made with human intervention. Though "choice" and "decision" are used interchangeably, they are not the same: a choice is picked from the varieties available at your disposal with little or no information, while a decision is an informed choice. So what makes a choice "informed"? An informed choice is what decision intelligence delivers.
It is a combination of science, technology, and humanity that contributes to making consistently high-quality informed decisions, with the help of technology that derives information from the prevailing ecosystem.

Decisions, decisions, and more decisions. In this fast-paced environment, being able to make quick decisions puts one ahead of competitors. Every day, in any organization, people in managerial and leadership positions are expected to make a multitude of decisions left, right, and center. The bigger the organization, the more the challenges of data-driven decision-making grow. The ability to respond quickly to the ever-changing environment is the primary skill of any successful leader. The requirement here is to provide more comprehensive decision-support platforms that augment the existing 2D dashboards from business intelligence and analytics by leveraging sophisticated tools enabled by artificial intelligence and machine learning.

## Overcoming Biases with AI-Powered Decision Making

Every person has a circle of competence, and every decision made is the outcome of a person's knowledge and circle of competence. If a decision is augmented with a wide variety of people's circles of competence, the collective decision-making skill is enhanced manifold. One of the biggest challenges in a collective decision-making process is the team running into cognitive and emotional biases, which have a big impact on the outcomes of decisions. People seek out data that supports their biases, more like reverse engineering. When people analyze data, they look at it through the lens of their experience, which may result in flawed conclusions. The algorithms in a decision intelligence system are not swayed by those biases; they evaluate the available facts objectively and provide an informed choice.
AI systems used in decision intelligence look at the data more closely than conventional data analytics systems and detect unseen patterns or abnormalities. Identifying and eliminating potential abnormalities has a big impact on the decision-making process and enhances it with more unseen data patterns. AI decision-making systems speed up the process by handling large amounts of scattered data, helping the organization make the right decision in a shorter period of time.

## Propelling 10X Organizational Growth with AI-enabled Decision Intelligence

Traditionally we have rule-based systems, which give linear growth for any organization; even with feedback loops, the change is incremental. For exponential growth, we need the edge that AI-based decision-making systems provide. With machine learning systems in place, they scour for patterns across the data, including the looped-back attributes, and come back with a multitude of opportunities to grow exponentially. Decision intelligence systems backed by AI constantly upgrade their algorithms through machine learning. Most of the time, decision trees become complex, increasing the risk of getting decisions wrong. Decision intelligence systems might not be able to replace all decision-making, but they can augment it. They concentrate on both the positive and negative attributes of success: while positive attributes enhance the success rate, negative attributes mitigate the failure rate. If a company does not have real historical data to start with for implementing a decision intelligence system, it can look at using synthetic data, which enables companies to identify black-swan events or unusual scenarios. According to Gartner, by 2024, 60% of the data used for the development of AI will be synthetically generated; today it stands at 1%.
## Understanding the Underlying Challenges in a Decision Intelligence System

One has to understand that the quality of decision outcomes and the quality of decisions are not the same. Sometimes a bad decision brings a successful outcome if one is lucky, but that is a one-off scenario: it can't be looped back into the decision system, the decision and its outcome can't be measured, and it becomes unsustainable in the longer run. Collecting the outcomes of these decisions and linking them back to the decision-making system is challenging, but once a decision intelligence system is in place and the outcomes are looped back, the outcomes become more effective and efficient.

Another facet of the problem is that, with a rapidly changing ecosystem, we can't just rely on past data to predict future trends. The Covid pandemic, which came out of the blue, became a game-changer, and the majority of decision-making attributes changed. Decisions taken purely on data-driven recommendations at certain times hit us hard, and the need for AI-driven decision intelligence becomes paramount for an organization's existence. But much of the time, even when the AI predicts rightly, human instincts take the foreground and discard the recommendations, causing considerable loss to the organization.

## Decision Intelligence is the Future of Corporate Decision Making

Is decision intelligence required for start-ups and SMEs? The answer is obviously yes. Even considering decision intelligence as a viable option is itself half the victory. Many may already be doing it without calling it by the name. Recommendation engines are a miniature subset of a DI system: they may not have all the necessary attributes, but they come close. Let us not think of DI only as a full-fledged system implemented across the whole organization; we can start using DI in one crucial department, say marketing, and expand it to other departments.
For a Decision Intelligence system to be successful, the underlying strategy needs to be right. Whoever implements it needs an in-depth understanding of the organization and the ecosystem it operates in, the resolve to evaluate both successful and failed outcomes, and, most important of all, the discipline to loop those outcomes back into the decision-making process.

Originally posted: [What is Decision Intelligence — Here’s What You Should Know](https://www.purpleslate.com/thoughts/think-about-how-people-think-decision-intelligence/)

Author: [Radha Srinivasan](https://www.linkedin.com/in/radha-srinivasan-1b69332/)
harinarayang
1,190,264
25 Customer Feedback Tools All Developers Need
As an indie maker, you know that your product is only as good as the feedback you get from your...
0
2022-09-11T13:29:40
https://dev.to/ayushjangra/25-customer-feedback-tools-all-developers-need-296m
productivity, showdev, news, webdev
As an indie maker, you know that your product is only as good as the feedback you get from your customers. The more you know about how your customers use your product and why, the better equipped you are to build a product that meets their needs. There are many different types of feedback tools and [product strategies](https://supahub.com/blog/product-strategy-framework) that can help you gather information about what your customers need and want. The problem is that many customer feedback tools are expensive, difficult to use, and poorly designed. This can make it hard for small businesses with limited resources to get started with customer feedback.

Here is a list of 25 customer feedback tools that will help you with your SaaS product:

## 1. Supahub

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p27lha84s8q5dqwy5aze.jpeg)

With the Supahub feedback tool, you can easily collect, manage, and prioritize your customers' feedback from different platforms in one place. It lets you get real input from your users instead of assuming their needs - let your users share their needs in their own voices.

**Read the list of all 25 feedback collection tools here: [25 exclusive customer feedback tools](https://supahub.com/blog/customer-feedback-tools)**

---

## Thanks for reading 🙏

I'm Ayush, co-founder of [Supahub](https://supahub.com). For more actionable tips and articles that will help you grow, follow me on [LinkedIn](https://www.linkedin.com/in/ayushjangra/) and [Twitter](https://twitter.com/_ayushjangra_)
ayushjangra
1,190,299
Finding remote work in 2022
A few years back I wrote how to find remote work in 2019 after a personal research of having to find...
0
2022-09-11T15:07:11
https://dev.to/k_ivanow/finding-remote-work-in-2022-26j
remote, interview, career, jobs
A few years back I wrote [how to find remote work in 2019](https://dev.to/k_ivanow/finding-remote-work-in-2019-6ce) after personally researching how to find remote work for the first time on my own, rather than by being approached by a headhunter. That article did better than a lot of other things I've written. Recently, however, I had to hire people for a very early-stage startup (the very first hires other than the co-founders) and noticed that the list and feedback in my original article needed some updates, especially with the business world embracing remote work more and more after the pandemic. So here they are: in this article I'll cover the most popular platforms I've seen used, with their pros and cons; in another I'll cover the perspective of someone looking to hire and the pros and cons of the platforms from that side, as they more often than not are quite different.

#### [We Work Remotely](https://weworkremotely.com)

Still my favorite several years later! I've found most of my job opportunities through WWR.

**Pros**

+ All companies I have contacted for job postings on this website were fast to respond.
+ All had a quickly moving hiring process.
+ It has several types of jobs - marketing, programming, etc.
+ It shows where candidates must be located for each job.
+ It has kept growing over the years and now has 28,530 postings.

**Cons**

- It doesn't offer an easy-apply option like other entries on the list.
- The increased number of companies somewhat dilutes how often companies reply and how quickly their processes move.
- There is no streamlined way of applying for a job. The apply button can lead anywhere - outside platforms or even bare email addresses, without any guidance on what information the company wants.

#### [Remote Tech Jobs](https://www.remotetechjobs.com/)

Built, according to its own description, as a replacement for Stack Overflow Jobs. 
**Pros**

+ It has much better filtering capabilities than other options, making it easier to find what you are looking for.
+ All jobs are taken down after 30 days at most, ensuring fresh listings.
+ It has a lot of open opportunities - almost 5,000 at the time of writing, which, combined with the previous point, indicates a lot of activity from companies looking for candidates.
+ It has an easy-apply option.

**Cons**

- It also aggregates jobs from 11 other platforms, aiming to be the go-to place for finding remote work. However, when I tested several postings from We Work Remotely, they didn't appear in Remote Tech Jobs' results.

#### [LinkedIn](https://www.linkedin.com/feed/)

**Pros**

+ The most obvious candidate for everyone.
+ The number of companies looking to hire through LinkedIn has increased over the last several years.
+ It has easy-apply for a lot of the open positions.
+ You can filter by location as well (in case you need or want to go to an office, prefer the simpler legal process of you and your employer being in the same country, or are considering moving there).
+ You can easily research a company's profile right there on LinkedIn, including active or past connections who might share some inside information.

**Cons**

- It still has mostly permanent positions.
- A lot of early-stage startups don't post on LinkedIn while they are still in stealth mode, so if you want to be one of the first hires at a company, entries such as AngelList or HackerNews are better options.

#### [AngelList](https://angel.co)

**Pros**

+ It has a lot of active listings.
+ Companies are required to list the salary range for each posting, so there is less chance of surprises.
+ Companies also list expected bonuses in terms of stock options.
+ It has a streamlined application process.

**Cons**

- There is a lot of competition, especially because of the easy application process, so you'll need to stand out.
- Most of the companies on AngelList are early-stage startups. 
This can be viewed as a benefit by some people as well.

#### [Otta](https://otta.com)

**Pros**

+ Each company posting jobs on Otta is screened by them, so the quality feels a lot better than on some other platforms.
+ Extremely good filtering.
+ Streamlined application process.

**Cons**

- Lower number of overall job postings.

#### HackerNews

This one is a bit more complicated, as it has a few different parts:

* https://news.ycombinator.com/jobs / https://www.ycombinator.com/jobs
* [Ask HN: Who is hiring? (September 2022)](https://news.ycombinator.com/item?id=32677265) - new thread each month
* [Ask HN: Who wants to be hired? (September 2022)](https://news.ycombinator.com/item?id=32677261) - new thread each month

Searchers built on top of the information from those threads:

* [https://kennytilton.github.io/whoishiring/](https://kennytilton.github.io/whoishiring/)
* [https://hnjobs.emilburzo.com/](https://hnjobs.emilburzo.com/)

**Pros**

+ A more personal approach for each position.
+ You'll most likely be interviewed by the actual people you'll work with, not an HR person, recruiter, or headhunter.
+ A lot of open positions and an overall active community.

**Cons**

- No easy filtering like other options on the list.
- Some listings might have expired without being closed.
- No easy way to find still-open listings (from the end of last month, for instance).
- If you post your own profile to get contacted, you'll need to repost it every month (if you don't find a job right away).

#### **Other noteworthy mentions that have declined over the years**

[No Desk](https://nodesk.co/remote-jobs)
[Dice](https://www.dice.com)
[Remote Work Hub](https://remoteworkhub.com/)
[Pilot](https://pilot.co)
[Upstack](http://Upstack.co)
[Gun.io](http://gun.io)

#### **Notes:**

Always read the job posting in its entirety! Some companies ask questions in their listing that they'd like answered when you apply - a sort of replacement for the usual stock cover letters. 
Prepare yourself for a lot of companies not closing their job postings after finding a candidate, especially on HackerNews.

P.S. Everything written above is from my own limited perspective, gathered while applying for different jobs or deciding which platform to post a job on for maximal results. It may vary for different disciplines and/or cases. I would love to discuss what others have come across while searching for job opportunities in the comments below or on [Twitter](https://twitter.com/k_ivanow)

_Last disclaimer - the image at the top of the article is from [Ostap Senyuk](https://unsplash.com/@kintecus) at [Unsplash](https://unsplash.com)_
k_ivanow
1,190,426
Artificial Intelligence
Hey guys!! Yay, you’ve made it to my first article! Let’s talk about Artificial Intelligence (Ai |...
0
2022-09-11T18:09:55
https://dev.to/agrimasharma/artificial-intelligence-341l
python, programming, pythondevlopment, ai
Hey guys!! Yay, you’ve made it to my first article! Let’s talk about Artificial Intelligence (Ai | Python). What is the importance of Artificial Intelligence in today’s modern era and what can we make out of Artificial Intelligence? Artificial Intelligence is a constellation of many different technologies working together to allow machines to perceive, understand, act, and learn at a human-like level of intelligence. One subset of AI is machine learning, which refers to the concept that computer programs can learn automatically and adapt to new data without the assistance of humans. Artificial intelligence is built around the principle that human intelligence can be defined so a machine can mimic it effortlessly and perform tasks, ranging from the simplest to the more complicated. AI allows computers and machines to emulate the perceptual, learning, problem-solving, and decision-making abilities of the human mind. Artificial intelligence (AI) is the sentience demonstrated by machines, as opposed to natural sentience demonstrated by humans and animals, which involves consciousness and emotion. Artificial intelligence (AI) is often applied to projects that design systems that are equipped with intellectual processes that are typical to humans, such as the ability to reason, discover meaning, generalize, or learn from past experiences. Strong AI, also known as Artificial General Intelligence (AGI), describes programming capable of replicating the cognitive abilities of a human brain. The Association for the Advancement of Artificial Intelligence (AAAI) (formerly known as the American Association for Artificial Intelligence) was founded in 1979 and is a non-profit, scientific society dedicated to the advancement of scientific understanding of the mechanisms that underlie sentient behaviors, as well as the implementation thereof in machines. 
The Artificial Intelligence Journal (AIJ) welcomes papers that address the broader aspects of AI constituting advances in the field as a whole, including, but not limited to, cognitive science and artificial intelligence, automated reasoning and inference, case-based reasoning, common sense reasoning, computer vision, constraint processing, ethical AI, heuristic search, human interfaces, intelligent robotics, knowledge representation, machine learning, multi-agent systems, natural language processing, planning and action, and reasoning under uncertainty.

While these definitions might sound abstract to the average person, they help bring artificial intelligence into sharper focus as a branch of computer science and offer a roadmap for integrating machine learning and other subsets of AI into machines and programs. AI systems may encompass everything from expert systems - problem-solving applications that make decisions based on complicated rules or if/then logic - to something resembling Pixar's fictional character Wall-E, a computer that evolves human-like intelligence, free will, and emotions. According to researchers Shubhendu and Vijay, such software systems make decisions that normally require a human level of expertise, and help humans anticipate problems or handle them as they arise. Some programs have reached the performance levels of human experts and practitioners at particular tasks, so that AI, in this narrow sense, is found in applications as varied as medical diagnosis, computer search engines, and speech or handwriting recognition.

**The argument from the artificial brain.** This argument asserts that brains can be simulated by machines, and since brains display intelligence, those simulated brains should display intelligence too - ergo, machines could be intelligent. A related open question is whether artificial general intelligence is possible at all: whether machines can solve every problem a human can solve using intelligence, or whether there are strict limits on what machines can achieve.

While Artificial Intelligence (AI) is the undisputed king of the tech world, several related technologies are making their way into the world. These technologies, collectively known as "siblings of AI", include Machine Learning (ML), Deep Learning (DL), and Natural Learning (NL). Each of these technologies has its own unique set of capabilities and applications. For instance, Machine Learning is widely used in predictive analytics and fraud detection, while Deep Learning is used in image and voice recognition. Natural Learning, on the other hand, is used in human-computer interaction and natural language processing. In this article, we will take a closer look at each of these technologies and their applications in the real world.

**_Ancestors Of Artificial Intelligence_**

An AI system has identified a previously unknown human ancestor who roamed the Earth tens of thousands of years ago, leaving genomic fingerprints on Asian individuals, scientists said. Researchers from the Institute for Evolutionary Biology (IBE), the Centro Nacional de Análisis Genómico (CNAG-CRG) at the Center for Genomic Regulation (CRG) in Spain, and the University of Tartu in Estonia used deep learning algorithms to pinpoint a new, previously unknown ancestor of humans who would have interbred with modern humans tens of thousands of years ago. By combining deep learning algorithms and statistical methods, the researchers found that the extinct species was a hybrid of Neanderthals and Denisovans and interbred with modern humans across Asia. 
After analyzing the models, the AI confirmed the existence of a "ghost" population of ancestral humans, which probably interbred with Denisovans and Neanderthals. The researchers used that computational power to their advantage, working backward and feeding the system models of ancient demography, including a Neanderthal-Denisovan hybrid, until it produced a genome resembling that of modern humans. Until now, the existence of an as-yet-undiscovered human species that would explain the origins of certain segments of the human genome was just a theory; it was the use of deep learning on DNA that revealed this population in the ancestral demography. According to the results, published in the journal Nature, the still-unknown species would be a fusion of Neanderthals and Denisovans, which would have, in turn, crossed with early Homo sapiens around 40,000 years ago, when they left Africa for Asia. While there has been a lot of controversy over just how much of the modern human lineage is due to Neanderthals and Denisovans, the new research using AI has identified the tracks of an unidentified third ancient human relative who interbred with Homo sapiens in the past. Using AI, several European evolutionary biologists now think humans had an ancient ancestor whose identity is not yet known to science. A recent study used machine learning techniques to analyze eight leading models for the origins and evolution of humans, and one program identified evidence of a "ghost" ancestor in the human genome. Separately, a group at Lund University in Sweden developed a machine learning technique to date the genomes of dead organisms, establishing how long ago the organisms lived. Specifically, the team trained their model on a publicly available dataset of human genomes, mostly dated using radiocarbon and archaeological methods. 
In a 2019 study analyzing humanity's complicated prehistory, scientists used artificial intelligence (AI) to pinpoint the unidentified species of human ancestor that modern humans encountered - and interbred with - during their extended journey out of Africa thousands of years ago.

**_Future Of Artificial Intelligence_**

What is the future of artificial intelligence, and what are its implications for human lives - is it a wonderful technology or a threat to humanity? These are questions addressed in a recent report by the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford University that will examine the state of AI technology and its impact on the world over the coming 100 years. AI has been the primary driving force behind emerging technologies such as big data, robotics, and the Internet of Things, and will continue to act as a technological innovator for the foreseeable future. The future of AI is shaping the future of practically every industry and every person. AI has already transformed nearly every industry, and its future promises to transform even more businesses. As artificial intelligence evolves, the world will witness new startups, a myriad of applications for businesses and consumers, the displacement of some jobs, and the creation of whole new ones. With all of these new AI use cases comes the uncomfortable question of whether machines will push humans to the point of obsolescence. Even beyond the persistent dream of human-like intelligence, the future of AI is expected to play a profoundly important role in consumer and business markets. By augmenting human intelligence, artificial intelligence is poised to revolutionize the process of scientific inquiry, ushering in a new golden age of scientific discovery in the coming years. 
Artificial general intelligence, the idea of a truly human-like AI, remains a subject of intense research interest, but one whose realization experts agree is still years away. The ultimate goal is an AGI: a self-teaching system capable of surpassing humans across a broad set of disciplines. AI is a branch of computer science that seeks to design intelligent machines capable of imitating human behavior, and a super-intelligent computer would be more capable than humans at everything we can do. As discussed earlier, today's systems are weak (narrow) AI, which can do one particular task, like predicting the weather; right now, AI can outperform humans only in such specific tasks, but it is expected that future AI will outperform humans on all cognitive tasks. Whether even a general-purpose AI could ever fully match human intelligence remains hotly debated. Any discussion about the future of artificial intelligence inevitably turns to the idea of AI recreating human-like patterns of learning and growth, or reaching some version of sentience. Tech companies, industry observers, critics, and scholars are all grappling with what AI's rapid advances mean for humanity's future. We spoke to Finale Doshi-Velez about the report, what it says about the role AI plays in our lives right now, and how that will change going forward. One topic currently under discussion is AI's relevance to jobs.

**_Applications Of Artificial Intelligence_**

Now let's look in detail at the types of AI and its applications in healthcare, finance, eCommerce, robotics, marketing, and more. Like other smart applications, AI products are software applications that use big data and machine learning to help businesses enhance processes, understand their customers, and provide products and services that delight users. 
Using various applications of AI, finance is incorporating adaptive intelligence, algorithmic trading, and machine learning into financial processes, and AI applications are also being used to optimize the trading industry. AI has been used to grow and improve several fields and industries, including finance, health, education, and transportation. Specific applications of AI include expert systems, natural language processing, voice recognition, and machine vision. Other applications include online virtual healthcare assistants and chatbots that help patients and healthcare customers find health information, schedule appointments, understand billing processes, and perform other administrative tasks. AI has applications in the financial industry too, where it is used to identify and flag suspicious activity in banking and finance, such as unusual debit card use and large deposits into accounts - all to aid the banks' fraud divisions. Information-intensive industries, such as marketing, health care, and financial services, are particularly likely to benefit from AI applications. Beyond faster and better diagnosis of diseases, many other meaningful applications of AI are possible in healthcare, since sophisticated algorithms can mimic human cognition to analyze and interpret complicated health data. Artificial neural networks and deep learning technologies are evolving rapidly, mostly because AI processes vast amounts of data far faster and makes predictions far more accurately than humans can. AI is proving to be a game-changer in health care, improving practically every aspect of the industry, from robot-assisted surgery to protecting personal records against cyber criminals. 
Owing to increasing computing power, advances in techniques and technologies, and an explosion of data, AI has established itself as an enabling technology across several fields, ranging from industry to commerce and education [1, 2, 3, 4]. Now that we understand various aspects of AI and its applications across sectors, let us look at a list of the top 15 applications of AI. Algorithms usually play a crucial role in structuring an AI system, with simpler algorithms used in simpler applications while more complicated ones frame stronger AI. AI models trained on large volumes of data are capable of making smart decisions. AI systems learn to interact efficiently with customers according to data and customer profiles, then deliver personalized messages at ideal times without intervention from the marketing team, enabling optimum performance. By using behavior analytics, pattern recognition, and other AI tools, marketers can serve highly targeted, tailored ads. AI has also been combined with several sensor technologies, such as digital spectrometry from IdeaCuria Inc., enabling applications such as at-home monitoring of water quality.
agrimasharma
1,190,865
Testing article
Lorem lipsome
0
2022-09-12T09:30:24
https://filecoinvm.hashnode.dev/testing-article
Lorem lipsome
truckerfling
1,190,869
Why ReactJS is an Ideal Choice for SaaS Product Development?
Most software developers and companies are familiar with the popular programming languages and...
0
2022-09-12T09:42:52
https://www.solutelabs.com/blog/reactjs-for-saas-product-development
react, saas, javascript
Most software developers and companies are familiar with popular programming languages and frameworks such as Java, Python, and PHP. One of the most talked-about web frameworks at the moment is ReactJS, with over 40% of developers preferring to use it. SaaS-based products commonly use it to build user interfaces, and big companies that use ReactJS include Facebook and Instagram. ReactJS is known for its large community, which constantly works on the framework to add more functionality. It is best known as an SPA framework, but you can also use it for server-side rendering. In this blog, we will discuss why ReactJS is a good fit for SaaS product development.

## Top Features of ReactJS

![Top Features Of ReactJS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbw0z96z0vbaauh29091.png)

### 1. Component structure

React is a JavaScript library that helps you build web apps with data binding and a component hierarchy. It's important to understand the basics of components to understand how React works and how you can use it in your projects. Components are the building blocks of React. They're reusable pieces of UI that can be nested within other components to create a hierarchical structure (for example: "Header" contains "Navbar," which contains "Left Nav"). Components can also pass data down through their hierarchy using props (the equivalent of instance variables or properties). You can think of props as being like HTML attributes. They are immutable by default, but a parent component can pass new values down to its children.

### 2. JSX

JSX is a JavaScript syntax extension. It allows you to write HTML-like code in JavaScript. With JSX, you can create components by composing them from other components and expressing their state, properties, and behavior as properties. This makes it easy for others to understand your codebase and make changes when required.

### 3. One-way Data Binding

One-way data binding means data flows in a single direction: the view is updated when the data changes, never the other way around. The advantage of one-way data binding over two-way data binding lies in its simplicity and performance. In two-way binding, when you change something in your model, the framework must also keep the view and model synchronized in both directions. With one-way data flow, you only set up changes from your model to your view - not vice versa. This means that if someone adds an item to an array, or removes one by calling a method on the array object directly, you won't see any effect in your view: no binding was set up to watch for additions or removals, so they are not reflected in the DOM automatically.

### 4. Virtual DOM

As you may have noticed, ReactJS uses the concept of a Virtual DOM to do all its calculations. The Virtual DOM is a JavaScript object that represents the HTML DOM. React calculates the difference between the current and previous Virtual DOM. The reason for using this approach is that it allows ReactJS to avoid modifying the real DOM unless necessary. That way, any changes made by ReactJS can be undone if need be - or another part of your application could overwrite them if you choose to do so (e.g., by setting state). This makes writing components considerably easier, since you only need to worry about how they render data, not how they interact with other parts of your application or manipulate their data over time.

### 5. Performance

ReactJS is fast and efficient. It's one of the fastest frontend frameworks out there because it uses a virtual DOM to make your apps run faster. The virtual DOM means that ReactJS only updates the parts of the page that need to be updated, rather than repainting everything on your screen every time you act. This also helps reduce memory usage by storing only what's necessary at any given time in memory. 
It also achieves this feat by being optimized for mobile devices - even before they were popular (we're talking about 2010 here). Its optimizations allow React apps to run as smoothly as native apps while maintaining cross-platform compatibility across all major browsers and devices.

### 6. Native Approach

Another feature that makes React so popular is the native approach to rendering. The "native approach" means that React can render HTML and other content without a browser, which means you can run React on the server. This has several benefits: you can share code between the client and server, making it easier to manage your application, and since React renders with JavaScript, it's easy to customize the user interface or create new components from scratch.

## Top Reasons for Using ReactJS for SaaS Product Development

![Top Reasons for Using ReactJS for SaaS Product Development](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yycaneowv6cxvtbe0uvz.png)

### 1. Open Source

React is free to use and licensed under the MIT license, meaning you can use it for whatever you want. Facebook maintains the library and provides an ecosystem of tools and services around it, but the core library is open source, maintained by Facebook alongside individual developers in the community and other companies like Instagram, Netflix, and Airbnb (all of which are on GitHub).

### 2. Easily scalable

Scalability is a key aspect of any SaaS product, and ReactJS makes it easy to expand your offerings without overhauling the entire app. This is because ReactJS allows developers to create self-contained components that can be reused throughout the application. For example, suppose you want to add a feature that displays charts in real time. You need only build this charting feature once and then reuse it multiple times across your application. 
You can do this without rewriting any code or changing existing functionality, which saves time and increases scalability. Each component has clear responsibilities and can be independently tested and maintained, while still interacting seamlessly with other parts of your system.

### 3. SEO

To stay one step ahead of the competition, your Google rankings and user experience must be appealing to users. The fast rendering speed of React allows your webpage to load quickly and hold up under the amount of traffic you are getting. One essential aspect of customer satisfaction is that a webpage loads completely within a matter of seconds. This is a highly competitive area that many companies chase, and having a stack that can manage high-volume websites increases your chances of not being left behind.

### 4. Full flexibility with an awesome interface

ReactJS is not a framework; it is a library. This means you can use React without any additional libraries or frameworks. It is also very small and lightweight, so it does not cause any performance issues. ReactJS is the view layer, not the entire front end. React focuses on the user interface and interaction with the user, but doesn't include anything else in its core implementation, like routing or state management. You need to use other libraries or frameworks, such as Redux or MobX, for those purposes (but again, those are just libraries). ReactJS is a pure JavaScript library created at Facebook by Jordan Walke around 2011 and open-sourced in 2013; it powers Facebook's own products as well as Instagram's. ReactJS works by setting up an initial component tree consisting of React components and other parts such as HTML elements or text contained within them (in case you need them). 
Once this setup has been created using JSX syntax, which allows you to mix HTML markup inside your JavaScript code, the next step is to render this component tree into a DOM element using the `render()` method provided by the react-dom package, which takes care of rendering everything inside the browser.

### 5. Server-side rendering compatibility

React is compatible with server-side rendering, meaning you can render your components on the server. When a user visits your site for the first time, they get ready-made HTML instead of waiting for JavaScript to load, which is especially valuable when many people visit your website at once (e.g., during a launch).

Isomorphic rendering: in an isomorphic architecture, the same JavaScript code runs on both the server (in Node) and the client, so the first page view is rendered on the server and the client-side code takes over from there. You can use a SPA library like React this way without worrying about whether a given render happens on the frontend or the backend, because the same components work on both sides.

Universal rendering: universal rendering allows you to extend your apps across all devices (mobile phones, laptops, and even TVs) by creating one codebase that works well everywhere, regardless of what device someone uses.

### 6. Virtual DOM

The virtual DOM is a lightweight JavaScript object tree that represents the HTML document and is used to render it. Every change made in the UI after the initial page load first takes place in this virtual DOM: React compares the old version of the tree with the new version, and only the affected parts of the real DOM are updated, which helps pages render faster. This diffing approach is a big part of what makes ReactJS so efficient.
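To illustrate the diffing idea (this is a toy sketch, not React's actual reconciliation algorithm), the following compares an old and a new tree of plain objects and collects only the nodes whose text changed, i.e. the only nodes a real renderer would need to patch:

```javascript
// Walk two virtual trees in parallel and record a "patch" for every
// node whose text differs. Children are matched by position, which is
// a simplification of what React really does (it also uses keys).
function diff(oldNode, newNode, path = "root", patches = []) {
  if (oldNode.text !== newNode.text) {
    patches.push({ path, text: newNode.text });
  }
  const newChildren = newNode.children || [];
  const oldChildren = oldNode.children || [];
  newChildren.forEach((child, i) => {
    if (oldChildren[i]) diff(oldChildren[i], child, `${path}/${i}`, patches);
  });
  return patches;
}

const oldTree = { text: "App", children: [{ text: "Hello" }, { text: "Count: 0" }] };
const newTree = { text: "App", children: [{ text: "Hello" }, { text: "Count: 1" }] };

// Only the single changed node is reported, not the whole tree.
const patches = diff(oldTree, newTree);
console.log(patches); // [{ path: "root/1", text: "Count: 1" }]
```

The payoff is exactly the one described above: instead of re-rendering the whole document, only the one changed node would be touched in the real DOM.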
ReactJS uses one-way reactive data binding, where data flows from parent components down to their children, whereas some other frameworks use two-way data binding between the view and the model. Two-way binding can be slower and harder to reason about in large apps, because changes keep travelling back and forth between the UI and the underlying state.

## Wrapping Up

ReactJS is something you should look into further. More and more companies are using it, its benefits shouldn't be ignored, and new companies are building apps around React almost daily.

ReactJS offers numerous advantages for SaaS product development. It's a popular, ever-evolving platform supported by a large and talented community of developers. It combines the performance of a mature library with the ease of use of a framework, making it an attractive solution for quickly getting your new app off the ground. If your business is considering launching a SaaS product in 2022, you may want to seriously consider ReactJS as your frontend framework.
karishmavijay
1,190,889
What to expect from your first week as a junior web developer
The weeks leading up to my first Junior developer job were terrifying. I had no idea what to...
0
2022-09-12T10:49:59
https://scrimba.com/articles/first-week-as-a-junior-web-developer/
beginners, webdev, junior, career
The weeks leading up to my first Junior developer job were terrifying.

- I had no idea what to expect
- What if I don't know what to do?
- **What if they hired me by accident 😱?**

In the end, it turns out I was worried about nothing. Teams are experienced at hiring people, and they don't hire you unless they believe you will be successful in the role. Still, an idea of what to expect would have allowed me to prepare and put my mind at ease. Now that I have been on the job for ten months, I am drawing on my real-world experience to outline what you should expect from your first week on the job. Week 3 is a doozy.

<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e31nsqp9bb423uoiuup9.png">
<figcaption>Hi, I am Akwasi 👋🏽! In this post, I am going to tell you what to expect from your first week</figcaption>
</figure>

## Day 1: Finding your way around

When you start a new developer job, there is always much to learn. Fortunately, you don't have to learn it all at once. Your team will understand you're new and afford you the time you need to learn about the company, meet the team, and settle in.

**If you work in an office,** you will likely meet with your manager to discuss the day and week ahead of you. They will give you a key card, show you around the office, and point out where different teams sit. This way, if you need help, you know where to go. If you are so inclined, they will be happy to show you how the coffee machine works!

**If you work remotely,** your manager will probably schedule a Zoom call. Every team has a unique approach to remote work, which you will learn about in this call.

Whether you are working in person or remotely, there will be a lot of introductions, new faces to learn, and information about the company to digest. Most companies have an internal documentation website, which your manager should give you access to right away.
Typically you'll find information about the following:

- A people directory, which tells you who works at the company and what they do (see an example below!)
- Company values
- The company's internal blog
- A list of customers
- Legal information
- HR information, like what day you are paid

<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3i1gbb3r5ebgfpyy610v.png">
<figcaption>Example from Scrimba</figcaption>
</figure>

While the information outlined above is relevant to everyone in the company, you will probably find subpages specifically for the engineering team. These subpages go into the nitty-gritty details like:

- What tools to install
- How to gain access to the company source code and servers
- Who to go to for help
- Common troubleshooting steps
- Postmortems
- Diagrams of how the services and apps connect and communicate with each other

Your first week will involve onboarding onto the company's systems. This may involve downloading relevant software, setting up development environments, or simply choosing your passwords! Maybe your company has access to some web development courses, like those on [Scrimba](https://scrimba.com/articles/first-week-as-a-junior-web-developer/scrimba.com) (or an education budget you can spend).

Onboarding can be pretty challenging: while there will likely be instructions, it will be your first time using production-relevant software such as Nginx, Docker, or SQL Server Management Studio. Don't worry if you need to start again or if you mess up any installations. Anything you do can likely be undone, and people are usually very understanding!

Sound exhausting? It was for me, but you'll get lots of 'downtime.' For instance, I was given over an hour to set up my emails, fill in my details on the company's HR page, and organize my calendar.

> 🗯️ Story time!
Oh boy, do I have a story for you… My onboarding experience was very smooth, apart from an issue downloading a package that all my colleagues used! When I paired with them, I was used to hearing, 'you really should get this package.' The issue was I had to chase up my manager's manager, who I knew was super busy (very approachable, but busy nonetheless). When I finally plucked up the courage after a month of waiting, he was surprised I had waited so long to follow up with him! Learn from my mistake: time-box a reasonable wait before following up with someone, and have the courage to do so!

During this first day, try to take notes - for example, make your own notes about who is who and what they do. While your manager will always be a helpful point of contact, noting the who's who in your company will save you time and make your onboarding experience easier:

- Who is best to contact if your laptop is faulty?
- Who is best to contact if you lose your keycard?
- Who is best to contact if you have questions about your employment contract?

This information may exist in the internal documentation already! If it doesn't, you could suggest to your manager that you update the documentation. This will leave a good impression on your manager for sure! More importantly, you are making the onboarding a bit better than you found it for the next new joiner.

## Day 2: Your first standup

<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lnryww4qhw8q6yb0z3ew.png">
<figcaption>An example of a "stand" up. The idea is you stand while sharing team updates. This team did not get the memo, courtesy of Unsplash: https://unsplash.com/photos/ZT5v0puBjZI </figcaption>
</figure>

The famous standup! If you've done some research about web development, you've probably come across this term, but assuming this is the first time you've heard it: a standup is where a team meets daily to discuss individual tasks.
If your whole team comes in, you may all meet in a room, but it'll likely be online, as so many people work from home now. Here you will get a chance to meet your team, get up to speed on current projects, and learn about your team's objectives for that sprint or quarter.

Your team will likely have some 'board' (think of a virtual whiteboard) that is used as a hub to manage and see where everyone is at with their 'tickets' (a word often used to represent a task, a bug to be fixed, or an investigation to be conducted). Don't worry if you feel like you would struggle to pick up one of these tickets straight away. It takes a long time for mid- and senior-level engineers to orient themselves around production-level codebases, and it will take some time to understand how things are done at your company! While you may not be able to tackle the most complex tickets, you will be able to contribute in due time, which leads to my next point.

You'll have lots of questions during your first few standups. Here's a tip: write down any questions you have and take note of any tickets that you find particularly interesting or want to learn about. Maybe you've not used modals before. Perhaps you're confused about how Redux works. Make notes and organize a catch-up with someone senior or with the person who picks up the ticket. It will give you an opportunity to pair program and learn!

> 💡 **Where do standups come from?** The idea of a standup comes from agile project management, which, simply put, is the industry standard for helping programmers and stakeholders achieve their goals fast and effectively.
>
> A daily standup should:
>
> - Ideally, be no longer than 15 minutes
> - Be an opportunity to share and reveal any blockers, or to communicate how your work may affect others

## Day 3: Understand the company's mission

Mission may seem like a dramatic word, but every company exists for a reason.
Understanding the problems your company is trying to solve, or the improvements it is making to your industry, is vital, because it will help you see the role you play in fulfilling this mission. If someone asks you on the Friday of your first week what's so special about your company's product or service and you can't give a good answer, you may not be in the job for long!

Try in your first week to engage with and understand your company's working culture as much as possible. Do people prefer to send messages on Slack or via email? Do your colleagues use emojis when communicating with you? Do your teammates turn their cameras on during standup or keep them off? Odds are, if they all turn their cameras on, they'll expect you to as well! The closer you're tied into the company, its mission, its culture, its fabric, the better time you'll have long term. Understanding how your company achieves its mission is just as important.

## Day 4: One on one

<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iv90wg3d8w12xrb6wmvt.png">
<figcaption>One to One meeting</figcaption>
</figure>

You'll likely have a 1-to-1 with your manager during your first week. The number of 1-to-1s you have in web development will probably be higher than in other industries; most developers have monthly 1-to-1s with their manager. Take advantage of this! Make sure to get an idea of the expectations your manager and the company will have of you, and inform your manager of your goals. It's a two-way street!

Try to get an idea of the formalities of working at your company:

- How much notice do you need to give to get the holidays you like?
- What's the best way to contact your manager in the case of a family emergency?

These questions may seem like overkill, but it's best to get them out earlier rather than later. Try to organize a few lunches and coffees with your team.
A simple message such as 'Hey, I just joined the team this week and would love to introduce myself properly! When you get a few minutes free, I would love to have a quick chat' will, surprisingly, go a long way! Don't expect people to come to you! It's tempting in a new office to think people should welcome you, but they're busy, or maybe they just don't think like that. Take responsibility for reaching out to people!

## Day 5: Getting stuck in

Here's where my advice and experience get less specific. Different companies do things differently. Some will assign you tickets that aren't time-sensitive on day one. Some will put you on an online course to upskill you. Others will have you shadowing senior developers to learn by watching. Things will be different across the board, but the important thing is that you start your first week keen to learn and read! A big part of web development is reading, and a smaller portion is writing code!

Be ready to learn and make mistakes!

- You'll commit your changes to the main branch by accident.
- You'll forget to make frequent commits and pay the price for it later!
- You'll face merge conflicts, lots of them, but it's all part of the process. Everyone has been through it, and you, too, will come out the other side!

When you face an issue, try to follow DGC, an acronym I made up, but it works for me!

**D - Documentation!** Always check your company's internal documentation. It may be a known issue. It's worth the check! With time you may swap this step with the Google step!

**G - Google!** Make sure to Google any questions you have. Learning how to Google effectively will help you down the line, as it will teach you how to search succinctly and get the best answers! Always timebox this according to the urgency of the task you are on. StackOverflow can be super time-consuming!
**C - Colleagues!** Make sure to let a colleague or buddy know about your issue and mention the things you have already tried. Having taken some initiative to try things out for yourself, you will leave a good impression on them and save them from trying something that you now know will not work!

## The verdict

Try not to overthink your first week! Will you forget someone's name? Yes! Will they hate you for it? No! People and your colleagues are likely to be much more forgiving than you think, and your first week will not make or break you! The coming weeks and months will be your opportunity to affirm your company's decision to hire you for the skills and potential you showcased to them during your interview!
thatlondondev
1,190,966
The price of healthy eating
It’s March 2020 and suddenly we’re all forced to spend time at home and because restaurants and bars...
0
2022-09-12T13:00:10
https://dataroots.io/research/contributions/the-price-of-healthy-eating
---
title: The price of healthy eating
published: true
date: 2022-09-12 06:07:00 UTC
tags:
canonical_url: https://dataroots.io/research/contributions/the-price-of-healthy-eating
---

It's March 2020 and suddenly we're all forced to spend time at home, and because restaurants and bars are closed, all together we rediscover our kitchen and our passion for banana bread. We're forced to learn how to cook and are eager to make healthy lifestyle changes by cooking healthy meals and exercising more. Fast forward to February 2022 and our passion for banana bread is threatened by a war causing a sudden increase in the price of flour. What do we do now? Are there other alternatives to discover?

Obviously, expensive banana bread is the least of our concerns compared to a worldwide pandemic and war. However, a less reported but apparent trend is also threatening our way of living: that of highly processed food, which strips out nutrients and replaces them with oh-so-delicious fats, sugar and salt. This is causing more and more people to become overweight, leading to a variety of chronic diseases and putting a burden on our healthcare systems and overall quality of life.

An easy remark is to say that obesity is simply a lack of discipline, as there are many healthy food choices that can easily be added to one's diet. Is it really that simple? Even for people with military discipline, those healthy choices are not as accessible as one might think. It's no secret that eating healthy is overall more expensive. But how expensive exactly? That's what we'll try to find out in this new data visualisation blog post.

# About Healthy food

This blog post is not at all intended to be opinionated, as is the case with most diet trends. The goal is to focus on quantifiable metrics to decide how accessible healthy food is and whether we really just need to grow some discipline.

How do you define how healthy a food is? There are still many unknowns about the human body and the impact of food on it.
On the back of the package, you can find the nutritional label. This includes information like the amount of (un)saturated fats, carbs, sugars, fibres, protein, salt and possibly more. How do all these interact with each other? How do they impact health? To answer this question in a straightforward way, some researchers came up with the Nutri-Score, which we'll also use for the sake of this analysis: one metric that scores a food by combining all nutritional metrics into a single formula. A very good job when it comes to communicating data: the simpler the better! Will it be the perfect metric? Probably not, but what you give up in accuracy, you largely gain back in interpretability! We will thus not dive into the details of the calculation of this score. The only thing you need to remember is that **A** is very healthy and **E** very unhealthy.

![](https://dataroots.ghost.io/content/images/2022/09/image1.png)

We probably don't have to tell you where to find healthy foods in the supermarket. We all know fresh vegetables and fruits are healthy and we probably shouldn't buy that pint of ice cream. There is, however, a whole range of products in between those extremes. To check our conception of healthy food, we scraped the food items from several online supermarkets and categorised their products. Luckily, most websites nowadays also give you the nutritional information, so we scraped that as well and calculated the Nutri-Score for every category.

![](https://dataroots.ghost.io/content/images/2022/09/image2.png)

It's nice to see our common sense seems to be right: the less processed the food, the healthier it usually is. So why not just follow our common sense and buy these healthy foods?

# Price of healthy eating

We've established how to measure how healthy food is; now it's time to measure "accessibility". We'll make the assumption that you have physical access to a supermarket as well as time to go there. The only thing that might hold you back is your budget.
We’ll thus take price as a proxy to measure how easy you have access to healthy foods. The general conception is that processed foods are usually cheaper than fresh and healthy alternatives. Let’s see if the healthy food categories identified earlier also correlate with higher prices. For every food category, we took the price per 100 kcal and took the median to represent that food category. Well, it turns out that many of the healthy food categories are also pretty damn expensive, requiring more than 1€ to consume as little as 100 kcal. At the bottom of the ranking, we see foods high in sugar and processed carbs, confirming its reputation as a cheap ingredient. If you want to eat enough veggies and some healthy protein, you require about 20€ per day. ![](https://dataroots.ghost.io/content/images/2022/09/image3.png) Of course, no diet should exist out of only one food category. What we are really interested in, is how accessible healthy and varied eating is for people with a tight budget. Let’s say we’re an average human being that needs to consume about 2000 kcal to fulfil their daily calorie requirement. The goal is to fill these 2000 kcal with food as nutritious as possible for a cost as low as possible. How much does our diet cost when we base it on the Nutri Score instead of food categories. We thus try to select healthy foods across multiple categories. What we see is that again, overall, healthy food choices are much more expensive. Filling your plate with only food items with a Nutri score of A, will cost you on average more than 15€ a day or thus about 450€ a month and that doesn’t even take into account the possibility of going out eating. If you’re a family of four, this means, 1800€ of your monthly budget would go to food. 
![](https://dataroots.ghost.io/content/images/2022/09/image4.png)

Available money is not something you can easily change, and especially for less well-off households, food can be something on which they prefer to save as much as possible. According to [research](https://vilt.be/nl/nieuws/145-procent-van-vlaams-gezinsbudget-gaat-naar-voeding) by the Flemish government, an average household in 2021 spent 5.268€ on food on a yearly basis, or about 2.337€ per person. For the 25% of households with the lowest income, this drops to 3.431€ per household; for the top 25% of earners, it is as high as 7.217€ per household. That's a factor 2.1 difference. There is thus some inequality when it comes to access to healthy food.

It's clear that fully relying on foods with Nutri label 'A' is pretty expensive and only affordable for the top earners of our society. Add to this the increased calorie requirement needed to maintain an active lifestyle and you quickly realise that maintaining a healthy lifestyle is only for the lucky few. Nutrition has an incredibly important impact on our overall health, which brings tons of other benefits like increased performance at school and work. These, in turn, allow you to earn more money. Do you notice the vicious circle you might get stuck in?

How do we break this vicious circle? Well, that's a political discussion about wages, taxes and subsidies in which we don't want to take a stand. As always, it will have to be a solution benefiting all parties, not just punishing one. After all, we're Belgians, so we're supposed to be good at compromising ;)

We're very curious to see how this analysis evolves as food prices go through the roof. We might come back to you with an updated analysis, or if you have an interesting question regarding rising food prices, we'd be more than happy to crunch the numbers and give you an objective answer.
bart6114
1,191,098
The simplest way to differentiate between the data engineers, data scientists and the data analyst.
Taking the three roles as a complete architectural model, we have architecture engineers designing...
0
2022-09-12T14:10:07
https://dev.to/ndurumo254/the-simplest-way-to-differentiate-between-the-data-engineers-data-scientists-and-the-data-analyst-21ei
datascience
Taking the three roles as a complete architectural model: we have architectural engineers designing and building the house, lorries and trucks bringing the building materials, and drivers delivering them to the construction site.

## Data engineers

Data engineers are like the architectural engineers. They design and build the pipelines that ingest data, and they are responsible for maintaining those pipelines. They are the brains behind how data flows from data lakes or data warehouses through the data pipelines.

## Data scientists

Data scientists are like the trucks and lorries bringing construction materials to the construction site. Just as the trucks carry the materials, data scientists carry the preprocessed data to the consumer. They use technologies such as machine learning to make predictions about the future, exploiting the data from the pipelines to draw complex insights.

## Data analysts

Data analysts are like the drivers who take the building materials to the site: they use their skills to drive the data to the consumer. Data analysts examine and combine several datasets to help the business understand its trends. They are the brains behind informed business decisions in an organization, working with current data to understand the organization's present situation.
ndurumo254
1,191,408
I made a Fee Token in Balancer
Balancer is a Project that has caught my attention because is innovating in the Defi space. Balancer...
0
2022-10-20T18:11:23
https://dev.to/filosofiacodigoen/i-made-a-fee-token-in-balancer-38kg
--- title: I made a Fee Token in Balancer published: true description: tags: cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8d0i1ntwcx0k6fu32346.png --- Balancer is a Project that has caught my attention because is innovating in the Defi space. Balancer allows investors getting passive income while keeping a portfolio that suits them, traders can do swaps with low gas fees and protected against MEV, bots have good arbitrage opportunities among other things. All of this while offering a modern UI and Smart contracts. In this video we will launch a token that charges transaction fees for each swap on the Balancer pools. {% youtube 6oVyjnfNnUk %} ## Before we start For this tutorial you will need [Metamask](https://metamask.io/) or another compatible wallet with Goerli funds that you can get from a [faucet](https://goerlifaucet.com/). ## Smart contracts with transactions fees in Balancer ```solidity // SPDX-License-Identifier: MIT pragma solidity 0.8.17; abstract contract Context { function _msgSender() internal view virtual returns (address) { return msg.sender; } function _msgData() internal view virtual returns (bytes calldata) { return msg.data; } } interface IERC20 { function totalSupply() external view returns (uint256); function balanceOf(address account) external view returns (uint256); function transfer(address to, uint256 amount) external returns (bool); function allowance(address owner, address spender) external view returns (uint256); function approve(address spender, uint256 amount) external returns (bool); function transferFrom( address from, address to, uint256 amount ) external returns (bool); event Transfer(address indexed from, address indexed to, uint256 value); event Approval(address indexed owner, address indexed spender, uint256 value); } interface IERC20Metadata is IERC20 { function name() external view returns (string memory); function symbol() external view returns (string memory); function decimals() external view 
returns (uint8); } contract BalancerToken is Context, IERC20, IERC20Metadata { // Openzeppelin variables mapping(address => uint256) private _balances; mapping(address => mapping(address => uint256)) private _allowances; uint256 private _totalSupply; string private _name; string private _symbol; // My variables address public balancerVault = 0xBA12222222228d8Ba445958a75a0704d566BF2C8; address public feeWallet; uint public _feeDecimal = 2; // index 0 = buy fee, index 1 = sell fee, index 2 = p2p fee uint[] public fees; mapping(address => bool) public isTaxless; // Openzeppelin functions constructor() { _name = "My Balancer Token"; _symbol = "BT"; feeWallet = 0x0000000000000000000000000000000000000000; fees.push(100); // 1% buy fee fees.push(200); // 2% sell fee fees.push(0); // 0% p2p fee isTaxless[msg.sender] = true; isTaxless[address(this)] = true; isTaxless[feeWallet] = true; isTaxless[address(0)] = true; _mint(msg.sender, 1_000_000 ether); } function name() public view virtual override returns (string memory) { return _name; } function symbol() public view virtual override returns (string memory) { return _symbol; } function decimals() public view virtual override returns (uint8) { return 18; } function totalSupply() public view virtual override returns (uint256) { return _totalSupply; } function balanceOf(address account) public view virtual override returns (uint256) { return _balances[account]; } function transfer(address to, uint256 amount) public virtual override returns (bool) { address owner = _msgSender(); _transfer(owner, to, amount); return true; } function allowance(address owner, address spender) public view virtual override returns (uint256) { return _allowances[owner][spender]; } function approve(address spender, uint256 amount) public virtual override returns (bool) { address owner = _msgSender(); _approve(owner, spender, amount); return true; } function transferFrom( address from, address to, uint256 amount ) public virtual override returns (bool) 
{ address spender = _msgSender(); _spendAllowance(from, spender, amount); _transfer(from, to, amount); return true; } function increaseAllowance(address spender, uint256 addedValue) public virtual returns (bool) { address owner = _msgSender(); _approve(owner, spender, _allowances[owner][spender] + addedValue); return true; } function decreaseAllowance(address spender, uint256 subtractedValue) public virtual returns (bool) { address owner = _msgSender(); uint256 currentAllowance = _allowances[owner][spender]; require(currentAllowance >= subtractedValue, "ERC20: decreased allowance below zero"); unchecked { _approve(owner, spender, currentAllowance - subtractedValue); } return true; } function _transfer( address from, address to, uint256 amount ) internal virtual { require(from != address(0), "ERC20: transfer from the zero address"); require(to != address(0), "ERC20: transfer to the zero address"); _beforeTokenTransfer(from, to, amount); // My implementation uint256 feesCollected; if (!isTaxless[from] && !isTaxless[to]) { bool sell = to == balancerVault; bool p2p = from != balancerVault && to != balancerVault; uint feeIndex = p2p ? 2 : sell ? 
1 : 0; feesCollected = (amount * fees[feeIndex]) / (10**(_feeDecimal + 2)); } amount -= feesCollected; _balances[from] -= feesCollected; _balances[feeWallet] += feesCollected; // End my implementation uint256 fromBalance = _balances[from]; require(fromBalance >= amount, "ERC20: transfer amount exceeds balance"); unchecked { _balances[from] = fromBalance - amount; } _balances[to] += amount; emit Transfer(from, to, amount); _afterTokenTransfer(from, to, amount); } function _mint(address account, uint256 amount) internal virtual { require(account != address(0), "ERC20: mint to the zero address"); _beforeTokenTransfer(address(0), account, amount); _totalSupply += amount; _balances[account] += amount; emit Transfer(address(0), account, amount); _afterTokenTransfer(address(0), account, amount); } function _burn(address account, uint256 amount) internal virtual { require(account != address(0), "ERC20: burn from the zero address"); _beforeTokenTransfer(account, address(0), amount); uint256 accountBalance = _balances[account]; require(accountBalance >= amount, "ERC20: burn amount exceeds balance"); unchecked { _balances[account] = accountBalance - amount; } _totalSupply -= amount; emit Transfer(account, address(0), amount); _afterTokenTransfer(account, address(0), amount); } function _approve( address owner, address spender, uint256 amount ) internal virtual { require(owner != address(0), "ERC20: approve from the zero address"); require(spender != address(0), "ERC20: approve to the zero address"); _allowances[owner][spender] = amount; emit Approval(owner, spender, amount); } function _spendAllowance( address owner, address spender, uint256 amount ) internal virtual { uint256 currentAllowance = allowance(owner, spender); if (currentAllowance != type(uint256).max) { require(currentAllowance >= amount, "ERC20: insufficient allowance"); unchecked { _approve(owner, spender, currentAllowance - amount); } } } function _beforeTokenTransfer( address from, address to, uint256 amount 
) internal virtual {} function _afterTokenTransfer( address from, address to, uint256 amount ) internal virtual {} } ``` **Thanks for watching this video!**
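The fee arithmetic in `_transfer` above is easier to follow with concrete numbers. The values below are hypothetical (the actual `fees` array and `_feeDecimal` are set elsewhere in the contract); this is just a quick sanity check of the formula:

```python
# Worked example of the fee formula used in _transfer:
#   feesCollected = (amount * fees[feeIndex]) / (10 ** (_feeDecimal + 2))
# NOTE: fee and fee_decimal are hypothetical values chosen for illustration;
# the real `fees` array and `_feeDecimal` are defined elsewhere in the contract.
fee = 2          # hypothetical fees[feeIndex], i.e. a 2% fee
fee_decimal = 0  # hypothetical _feeDecimal
amount = 1_000

# Solidity's integer division maps to Python floor division here.
fees_collected = (amount * fee) // (10 ** (fee_decimal + 2))
print(fees_collected)           # 20 -> credited to the fee wallet
print(amount - fees_collected)  # 980 -> transferred to the recipient
```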
turupawn
1,191,459
10 new Android Libraries And Projects To Inspire You In 2022
This is my new compilation of really inspirational, worthy to check, promising Android projects and...
0
2022-09-12T20:58:17
https://medium.com/@mmbialas/10-new-android-libraries-and-projects-to-inspire-you-in-2022-18eeb64ef70b
android, kotlin, programming, productivity
This is my new compilation of really inspirational, promising Android projects and libraries worth checking out, released or heavily refreshed in 2022. I have listed projects written in [Kotlin](https://kotlinlang.org/) and [Jetpack Compose](https://developer.android.com/jetpack/compose) in an unordered list so if you are eager to learn new things, check them all! ---------- ### 1. [Maestro](https://github.com/mobile-dev-inc/maestro) This is huge! [Maestro](https://github.com/mobile-dev-inc/maestro) presents a fresh approach to UI tests built on experience from Appium, Espresso, UIAutomator, and XCTest. It is built for both platforms (operates on emulators / simulators and physical devices, both) and uses `yaml` language to create a test flow. You can find a sample test flow below. ![Maestro-yaml](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zyqy4gke2v3etl7zv25j.png) The project is really [well documented](https://maestro.mobile.dev/getting-started/installing-maestro#android) and has a low learning curve. 100% recommendation! You can learn more about it from an article [Introducing: Maestro — Painless Mobile UI Automation](https://blog.mobile.dev/introducing-maestro-painless-mobile-ui-automation-bee4992d13c1). License: [Apache-2.0](https://github.com/mobile-dev-inc/maestro/blob/main/LICENSE) ---------- ### 2. [Page Curl](https://github.com/oleksandrbalan/pagecurl) This is a Jetpack Compose library for creating a turning-pages effect. Looks really cool! ![Page Curl](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61gcs4m8px9gvino1rvm.gif) Docs are pretty sufficient to quickly include the lib in your project. Most likely it isn’t for your production code, but for some side project for sure! License: [Apache-2.0](https://github.com/oleksandrbalan/pagecurl/blob/main/LICENSE) ---------- ### 3. [Redwood](https://github.com/cashapp/redwood) This is a library which could be a game changer in building reactive Android, iOS, and web UIs using Kotlin. 
Delivered by CashApp Development Team. What is Redwood? > Redwood integrates the Compose compiler, a design system, and a set of platform-specific displays. Each Redwood project is implemented in three parts: > - A design system. Redwood includes a sample design system called ‘Sunspot’. Most applications should customize this to match their product needs. > - Displays for UI platforms. The display draws the pixels of the design system on-screen. Displays can be implemented for any UI platform. Redwood includes sample displays for Sunspot for Android, iOS, and web. > - Composable Functions. This is client logic that accepts application state and returns elements of the design system. These have similar responsibilities to presenters in an MVP system. Btw, this is still under heavy development, so be careful ☠️. License: [Apache-2.0](https://github.com/cashapp/redwood/blob/trunk/LICENSE.txt) ---------- ### 4. [Compose Shimmer](https://github.com/valentinilk/compose-shimmer) This is a library which enables a shimmer effect for Android Jetpack Compose apps. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kk9jrcmvlh1p1du4oqvs.gif) This effect was developed in order to create a shimmering effect that traverses the whole screen, highlighting only a certain subset of child views. The library also offers quite advanced theming and usage, which you can check in the comprehensive [docs](https://github.com/valentinilk/compose-shimmer). License: [Apache-2.0](https://github.com/valentinilk/compose-shimmer/blob/master/LICENSE.md) ---------- ### 5. [Appyx](https://bumble-tech.github.io/appyx/) This is a model-driven navigation library for Jetpack Compose from [Bumble Engineering Team](https://twitter.com/BumbleEng). 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y86bgi4i98q93ueu4k7w.gif) Using this lib you can: > **- Navigate directly from code** — In a type-safe way, without boilerplate > **- Gain control of navigation state** — Making your navigation unit-testable > **- Complete control over operations and behaviour** — Use and extend the back stack or the view pager from the library, or build your own > **- Your own navigation** — With Appyx, you can define your own navigation models > **- Use any animation for transitions** — Anything you can represent with Compose `Modifiers` > Using a model-driven approach, navigation states are yours to define — Appyx makes it happen with any animation you can represent using Compose `Modifiers`. The best way to learn about this cool approach is to check the [docs](https://bumble-tech.github.io/appyx/) and the demo app in the project. License: [Apache-2.0](https://github.com/bumble-tech/appyx/blob/main/LICENSE) ---------- ### 6. [Twitter’s Jetpack Compose Rules](https://github.com/twitter/compose-rules) You probably know that Twitter’s engineering team heavily refactors their codebase to adopt Jetpack Compose. But when a big team with a large codebase starts working on a challenging task like migrating to Compose, not everybody can be on the same page and follow all the rules. If you use static analysis tools, [these rules](https://github.com/twitter/compose-rules) will come to the rescue and help you adopt Jetpack Compose in your project. You can use them with: - [ktlint](https://github.com/pinterest/ktlint) - [detekt](https://github.com/detekt/detekt) - [Spotless](https://github.com/diffplug/spotless) License: [Apache-2.0](https://github.com/twitter/compose-rules/blob/main/LICENSE.md) ---------- ### 7. 
[Pokedex](https://github.com/skydoves/Pokedex) Despite the fact I wrote about this project in 2020 on Medium in [The 25 Best Android Libraries and Projects of 2020 — Summer Edition](https://betterprogramming.pub/25-best-android-libraries-projects-of-2020-summer-edition-dfb030a7fb0a), it is still an up-to-date [project](https://github.com/skydoves/Pokedex) worth mentioning. It is developed by [Jaewoong Eum](https://medium.com/u/9bb203a4ab2e). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xlq33ddfdetowk0xju3o.gif) > Pokedex demonstrates modern Android development with Hilt, Coroutines, Flow, Jetpack (Room, ViewModel), and Material Design based on MVVM architecture. There is really neat documentation too. License: [Apache-2.0](https://github.com/skydoves/Pokedex/blob/main/LICENSE) ---------- ### 8. [Permission Flow for Android](https://github.com/PatilShreyas/permission-flow-android) This library, developed using [Kotlin Flow](https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-flow/), lets you observe the real-time state of Android app permissions. You can observe permissions with `StateFlow` as well as in Jetpack Compose using the `@Composable` annotation: ``` @Composable fun ExampleSinglePermission() { val state by rememberPermissionState(Manifest.permission.CAMERA) if (state.isGranted) { // Render something } else { // Render something else } } ``` The library is well written and has 87% test coverage. Also, the `README` is comprehensive and explains the usage of the lib well. License: [Apache-2.0](https://github.com/PatilShreyas/permission-flow-android/blob/main/LICENSE) ---------- ### 9. [Seal](https://github.com/JunkFood02/Seal) Seal is an open-source Video/Audio downloader, designed and themed with [Material You](https://material.io/blog/start-building-with-material-you). 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bl247heff7hruh096fe.jpeg) By studying [this project](https://github.com/JunkFood02/Seal) you can learn how to download videos and audio files from video platforms supported by [yt-dlp](https://github.com/yt-dlp/yt-dlp). You can use it to learn more about [Material Design 3](https://m3.material.io/) UI styling, especially the [dynamic color](https://m3.material.io/foundations/customization) theme. The project is written according to [MAD Skills](https://developer.android.com/series/mad-skills) principles. License: [GPL-3.0](https://github.com/JunkFood02/Seal/blob/main/LICENSE) ---------- ### 10. [UhuruPhotos. A LibrePhotos client](https://github.com/savvasdalkitsis/uhuruphotos-android) UhuruPhotos is an Android client for [LibrePhotos](https://github.com/LibrePhotos/librephotos) written using the latest Android technologies ([Jetpack Compose](https://developer.android.com/jetpack/compose), [SQLDelight](https://github.com/cashapp/sqldelight), [Coroutines](https://kotlinlang.org/docs/coroutines-overview.html)) based on MVI architecture. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0y4kayovc9xb11wp1uaf.png) With features such as offline support, backup, and syncing, it aims to become a Google Photos alternative with a lot of similar features. You can follow the development process on GitHub, or even participate in it. You can also join a closed beta on the Google Play store. License: [Apache-2.0](https://github.com/savvasdalkitsis/uhuruphotos-android/blob/main/LICENSE) ---------- That’s it. I hope you enjoyed the list and that some of the libs or projects inspired you. 
You can also check my other articles that have been released earlier this year or last year: - [10 Almost Unknown Tools Which Facilitate Android Apps Development](https://dev.to/mmbialas/10-almost-unknown-tools-which-facilitate-android-development-1e3o) - [25 Best Android Libraries, Projects, and Tools You Won’t Want to Miss Out in 2021](https://dev.to/mmbialas/25-best-android-libraries-projects-and-tools-you-won-t-want-to-miss-out-in-2021-3kik) Till next time!
mmbialas
1,191,990
Creating a todo CLI with Rust 🔥
Hey! in this article, we'll build a to-do CLI application with Rust. Local JSON files are used to...
0
2022-09-13T11:01:55
https://www.tronic247.com/creating-a-todo-cli-with-rust/
rust, beginners, tutorial, programming
Hey! In this article, we'll build a to-do CLI application with Rust. Local JSON files are used to store the data. Here's a preview of the app: ![todo app preview](https://raw.githubusercontent.com/Posandu/todo-app-rust/main/demo.gif) Let's get started! ## Getting started First, we create a new project with Cargo: ```bash cargo new todo ``` Then, we add the following dependencies to the `Cargo.toml` file: ```toml chrono = "0.4.22" colorize = "0.1.0" rand = "0.8.5" serde = { version = "1.0", features = ["derive"] } serde_json = "1.0.85" ``` Here's what each dependency does: - `chrono` is used to get the current date and time. - `colorize` is used to color the output. - `rand` is used to generate random IDs. - `serde` and `serde_json` are used to get the data from the JSON file. ## Creating the folder structure Our `src` folder will look like this: ```bash src ┣ app ┃ ┗ mod.rs # The app module ┣ structs ┃ ┗ mod.rs # The structs ┣ todo ┃ ┗ mod.rs # Todo related functions ┣ utils ┃ ┗ mod.rs # Utility functions ┗ main.rs # The main file ``` Now, in the `main.rs` file, I'll import all modules: ```rust mod utils; mod structs; mod todo; mod app; fn main() { // ... } ``` ## Creating the structs Before we start, let's create the structs that we'll use to store the data. 
In the `structs/mod.rs` file, we'll create the following structs: ```rust use serde::{Deserialize, Serialize}; #[derive(Serialize, Deserialize, Debug, Clone)] pub struct Todo { pub created_at: String, pub title: String, pub done: bool, pub id: u32, pub updated_at: String, } #[derive(Serialize, Deserialize, Debug)] pub struct ConfigFile { pub data: Vec<Todo>, } ``` ## Creating the utility functions In the `utils/mod.rs` file, we'll import the dependencies and declare the path of the data file used throughout this module (the exact file name here is an assumption, as the original code never shows this constant): ```rust use crate::structs; use chrono; use colorize::*; use rand::prelude::*; use serde_json::from_str; use serde_json::Result; use std::{fs, io::Write}; pub const DATA_FILE: &str = "C:\\.todobook\\data.json"; // file name assumed ``` The first function creates the global data file if it doesn't exist: ```rust pub fn init() { // Check if folder exists if !fs::metadata("C:\\.todobook").is_ok() { fs::create_dir("C:\\.todobook").unwrap(); // Create folder // Create file let mut file = fs::File::create(DATA_FILE).unwrap(); // Write to file file.write_all(b"{\"data\":[]}").unwrap(); println!("{} {}", "Created folder and file".green(), DATA_FILE); } // Check if file exists else if !fs::metadata(DATA_FILE).is_ok() { // Create file let mut file = fs::File::create(DATA_FILE).unwrap(); // Write to file file.write_all(b"{\"data\":[]}").unwrap(); println!("{} {}", "Created file".green(), DATA_FILE); } } ``` The next function is to read the arguments from the command line. Before creating a function, we define a struct called `Command` in the `structs` module: ```rust pub struct Command { pub command: String, pub arguments: String, } ``` Now, we can create the `get_args` function. 
```rust pub fn get_args() -> structs::Command { let args = std::env::args().collect::<Vec<String>>(); // Get arguments and collect them into a vector let command = args.get(1).unwrap_or(&"".to_string()).to_string(); // Get command or set it to an empty string let arguments = args.get(2).unwrap_or(&"".to_string()).to_string(); // Get arguments or set them to an empty string structs::Command { command, arguments } // Return the command and arguments } ``` The next function returns a timestamp. ```rust pub fn get_timestamp() -> String { let now = chrono::Local::now(); let timestamp = now.format("%m-%d %H:%M").to_string(); timestamp } ``` After that, we create a function to generate a random ID: ```rust pub fn get_id() -> u32 { // Generate a number between 1 and 1000 let mut rng = rand::thread_rng(); let id: u32 = rng.gen_range(1..1000); id + rng.gen_range(1..1000) } ``` The next function is to read the data from the JSON file: ```rust pub fn get_todos() -> Result<Vec<structs::Todo>> { let data = fs::read_to_string(DATA_FILE).unwrap(); let todos: structs::ConfigFile = from_str(&data)?; Ok(todos.data) } ``` The last function is to write the data to the JSON file: ```rust pub fn save_todos(todos: Vec<structs::Todo>) { let config_file = structs::ConfigFile { data: todos }; let json = serde_json::to_string(&config_file).unwrap(); let mut file = fs::File::create(DATA_FILE).unwrap(); file.write_all(json.as_bytes()).unwrap(); } ``` And, that's it for the utility functions. ## Creating the todo functions Now, we create the functions that will be used to add, remove, and list todos. In the `todo/mod.rs` file, import the dependencies. 
```rust use crate::structs::Todo; use crate::utils; use colorize::*; ``` The first function is to add a todo: ```rust pub fn add(title: String) { if title.len() < 1 { // Check if title is empty println!("{}", "No title provided".red()); return; } let mut todos = utils::get_todos().unwrap(); // Get todos let todo = Todo { created_at: utils::get_timestamp(), title, done: false, id: utils::get_id(), updated_at: utils::get_timestamp(), }; todos.push(todo); // Push todo to todos utils::save_todos(todos); // Save todos println!("{}", "Added todo".green()); } ``` The next function is to list todos: ```rust pub fn list() { let todos = utils::get_todos().unwrap(); if todos.len() == 0 { println!("{}", "No todos".red()); return; } println!( "{0: <5} | {1: <20} | {2: <20} | {3: <20} | {4: <20}", "ID", "Title", "Created at", "Updated at", "Done" ); println!(); for todo in todos { println!( "{0: <5} | {1: <20} | {2: <20} | {3: <20} | {4: <20}", todo.id, todo.title, todo.created_at, todo.updated_at, if todo.done { "Completed 😸".green() } else { "No 😿".red() } ); } } ``` We then create a function to mark a todo as done: ```rust pub fn done(id: String) { let mut todos = utils::get_todos().unwrap(); let id = id.parse::<u32>().unwrap_or(0); let exists = todos.iter().any(|todo| todo.id == id); if !exists { println!("{}", "Todo not found".red()); return; } for todo in &mut todos { if todo.id == id { todo.done = true; todo.updated_at = utils::get_timestamp(); } } utils::save_todos(todos); println!("{}", "Marked todo as done".green()); } ``` The next function is to remove a todo: ```rust pub fn remove(id: String) { let mut todos = utils::get_todos().unwrap(); let id = id.parse::<u32>().unwrap_or(0); let exists = todos.iter().any(|todo| todo.id == id); if !exists { println!("{}", "Todo not found".red()); return; } todos.retain(|todo| todo.id != id); utils::save_todos(todos); println!("{}", "Removed todo".green()); } ``` Now, we have all the functions we need to create a todo app. 
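The `done` and `remove` functions above hinge on `iter().any` and `retain`. As a standalone, std-only sketch of that same pattern (separate from the article's code, so it compiles without the external crates):

```rust
// Std-only sketch of the lookup/update/remove pattern used in done() and remove().
#[derive(Debug)]
struct Todo {
    id: u32,
    title: String,
    done: bool,
}

// Returns true if a todo with the given id existed and was marked done.
fn mark_done(todos: &mut Vec<Todo>, id: u32) -> bool {
    let exists = todos.iter().any(|t| t.id == id);
    for t in todos.iter_mut().filter(|t| t.id == id) {
        t.done = true;
    }
    exists
}

// Drops every todo whose id matches, keeping the rest in order.
fn remove(todos: &mut Vec<Todo>, id: u32) {
    todos.retain(|t| t.id != id);
}

fn main() {
    let mut todos = vec![
        Todo { id: 1, title: "write article".into(), done: false },
        Todo { id: 2, title: "publish".into(), done: false },
    ];
    assert!(mark_done(&mut todos, 1));
    remove(&mut todos, 2);
    println!("{todos:?}");
}
```

The real functions also persist the list via `save_todos`; this sketch only exercises the in-memory logic.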
Now, let's integrate everything and make the app work. ## Integrating the functions In the `app/mod.rs` file, import the dependencies. ```rust use crate::todo::*; use crate::utils; use colorize::*; ``` We export a start function that will be called in the `main.rs` file. ```rust pub fn start() { // ... } ``` We first check and create the data file if it doesn't exist: ```rust utils::init(); ``` We then get the command and arguments: ```rust let args = utils::get_args(); ``` We then match the command and call the appropriate function: ```rust match args.command.as_str() { "a" => add(args.arguments), "l" => list(), "d" => done(args.arguments), "r" => remove(args.arguments), "q" => std::process::exit(0), _ => { // show help } } ``` For the help text, which goes inside that fallback arm, we do this: ```rust println!("{}", " No command found - Showing help".black()); let help = format!( " {} {} {} ----- Help: Command | Arguments | Description {} text Add a new todo {} List all todos {} id Mark a todo as done {} id Delete a todo ", "Welcome to".grey(), "TodoBook".cyan(), "Simple todo app written in Rust".black(), "a".cyan(), "l".blue(), "d".green(), "r".red() ); println!("{help}"); ``` Now, in the `main.rs` file, add this function call: ```rust fn main() { app::start(); } ``` Now, we can run the app using `cargo run`. You should see something like this: ![image](https://user-images.githubusercontent.com/76736580/189867561-446dc806-c8b2-46bd-b2db-07ddca4625c9.png) ## Conclusion Thanks for reading. I hope you enjoyed this tutorial. If you have any questions, feel free to ask in the comments. You can also check out the source code [here](https://github.com/Posandu/todo-app-rust/tree/main).
posandu
1,192,273
Top 6 HTML Tags for SEO Every Developer Should Know
This post was originally published on Hackmamba Search engine optimization (SEO) is a vital part of...
0
2022-09-13T15:25:19
https://hackmamba.io/blog/2022/09/top-6-html-tags-for-seo-every-developer-should-know/
webdev, html, css, beginners
This post was originally published on [Hackmamba](https://hackmamba.io/blog/2022/09/top-6-html-tags-for-seo-every-developer-should-know/) Search engine optimization ([SEO](https://www.techtarget.com/whatis/definition/search-engine-optimization-SEO)) is a vital part of online marketing. It helps to make your website recognizable to search engines like Google, Yahoo, and Bing. Without SEO, your website will be relatively unknown. You will have fewer visitors, which translates to fewer customers. The main goal of SEO is to make your website rank higher in search engines. Thanks to SEO, users can easily find your website while looking for information related to your business. It is essential to use HTML tags correctly when optimizing a website, and they play an important role in increasing your website's ranking and visibility. This article will look at some of the most important HTML tags for SEO and how to use them. ## Prerequisite You'll need the following to follow along with this article: - A basic understanding of HTML - An IDE on your computer - you can use [Visual Studio Code](https://code.visualstudio.com/download) ## What Are the Important HTML Tags for SEO? HTML tags for SEO increase your website's visibility to users and search engines. These tags help you connect with your audience much more effectively. There are a variety of HTML tags you can use to improve the SEO of your website. Here are a few of them: ## Title Tags An essential element of any website is its title tag. Title tags inform the user and search engines of the website's name and purpose. The title tag is usually the clickable **blue-colored** link in a search engine result page. In HTML documents, the title tag is always between the head elements: ```html <head> <title>Your Title</title> </head> ``` A **search engine results page (SERP)** displays the website's name and other related data. The SERP is the page that appears after typing a question into Google or any other search engine. 
![Titles on search engine results page](https://paper-attachments.dropbox.com/s_E48B564267ACC7D3F5931BD8FA7EA963948F5B6F4FE2DFB1B5ADEE1AB5695965_1653089746571_Untitled+design.png) The titles are the **blue-colored** links in the example above. Google looks at the page's title to determine the page's relevance to the user's search query. So if you search "What is HTML?" for example, all content related to the definition of HTML will appear on the SERP. [Google](https://developers.google.com/search/blog/2021/08/update-to-generating-page-titles) sometimes rewrites your title for you rather than using the title you provide. It occurs when your page answers the question, but the title does not fit the inquiry. The new page title can be from your heading elements, i.e., `<h1>` - `<h6>` elements, anchor text, or a completely new title. **Note:** Google will use your title over 80% of the time. If you notice that Google changes your title for most queries, change it. ### How to Optimize Title Tags? [Title tag optimization](https://developers.google.com/search/docs/advanced/appearance/title-link) best practices are as follows: - Avoid using the same title for every web page on your website. - Keep your titles short. It's unclear exactly how many characters Google can show on the SERP as it varies among screen sizes. It's advisable to keep your page titles within 55-70 characters long. That way, they will fit most screen sizes. Lengthy titles that appear in the SERP are likely to be truncated at around 600 pixels. - Avoid stuffing your title with too many keywords. While descriptive keywords can be helpful in your title, repeating them is unnecessary. ## Meta Description Tag The [meta description tag](https://moz.com/learn/seo/meta-description#:~:text=Avoid%20double%20quotation%20marks%20in%20descriptions&text=To%20prevent%20this%20from%20happening,double%20quotes%20to%20prevent%20truncation.) summarizes your website's content on search engine results pages. 
It appears together with your page title in the SERP. Like the title tag, the meta description tag is always between the head elements in an HTML document: ```html <head> <meta name="description" content="page description"> </head> ``` ![Meta descriptions on search engine results page](https://paper-attachments.dropbox.com/s_E48B564267ACC7D3F5931BD8FA7EA963948F5B6F4FE2DFB1B5ADEE1AB5695965_1658622531101_Untitled+design.png) The underlined text in the example above shows the meta descriptions of each page. At times, Google may use a quote from your website instead of your meta description. It occurs when the quoted text matches a given query better than your meta description. Essentially, it will pick the best option to enhance your chances of getting clicks. ### How to Optimize Meta Description Tags? If you fail to write a good meta description or fail to write one at all, Google will do it for you to match the search query. But it is vital to keep the following in [mind](https://developers.google.com/search/docs/advanced/appearance/snippet#meta-descriptions): - Keep your meta description short and descriptive. The length of your meta description changes according to the size of the screen, just like the title. For optimal screen compatibility, your meta description should not exceed 150–160 characters. - Use the exact keywords from your title in your meta description. - Avoid using quotation marks in your meta description. Google sees double quotation marks as a request to cut the text off at that point in the SERP. When using non-alphanumeric characters, use an [HTML entity](https://www.w3schools.com/html/html_entities.asp) or avoid using non-alphanumeric characters. - Use a different meta description for each page on your website. **Note:** You can create titles and meta descriptions longer than the suggested lengths but to avoid having them get cut off, make sure your title and meta descriptions begin with your keywords. 
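The length guidance above is easy to enforce as a pre-publish check. Below is a small sketch; the 70/160 cut-offs approximate the character ranges discussed here and are not official search engine limits:

```python
# Rough pre-publish check for title and meta description length.
# The cut-offs approximate the 55-70 and 150-160 character guidance
# above; they are not official Google limits.
TITLE_MAX = 70
DESCRIPTION_MAX = 160

def check_lengths(title: str, description: str) -> list[str]:
    """Return human-readable warnings (empty list if everything fits)."""
    warnings = []
    if len(title) > TITLE_MAX:
        warnings.append(f"title is {len(title)} chars (max {TITLE_MAX})")
    if len(description) > DESCRIPTION_MAX:
        warnings.append(
            f"description is {len(description)} chars (max {DESCRIPTION_MAX})"
        )
    return warnings

print(check_lengths("What is HTML?", "A short intro to HTML."))  # []
```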
## Heading Tags (H1-H6) [Heading tags](https://www.w3schools.com/html/html_headings.asp) make it easier for readers and search engines to understand the contents of a web page. They help structure the pages in a website and show you how they're interconnected. HTML has six heading tags, ranging from `<h1>` to `<h6>`. The hierarchy goes from the most important to the least significant, with `<h1>` followed by `<h2>`, `<h3>`, `<h4>`, and so on. The heading tags in your HTML document look like this: ```html <h1>First heading</h1> <!-- Subject of the web page --> <h2></h2> <!-- Organize subsections of the web page --> <h3></h3> <!-- Organize subsections of the web page --> <h4></h4> <!-- Add additional information --> <h5></h5> <!-- Add additional information --> <h6></h6> <!-- Add additional information --> ``` The size of each heading decreases with its importance. In hierarchy, `<h1>` is bigger than `<h2>`, `<h2>` is bigger than `<h3>`, and so on up to `<h6>`. ### How to optimize Heading Tags? [Heading Tags optimization](https://victoriousseo.com/blog/h1-tag-seo-importance/) best practices are as follows: - Use heading tags in order. Go from `<h1>` to `<h2>` to `<h3>` and so on, on your web page. Search engines and online readers using a [screen reader](https://www.nomensa.com/blog/what-screen-reader) will find your page's content harder to understand if you skip heading levels. - Keep your heading tags short and descriptive. It's best to keep your headings between 20-70 characters long. - Use only one `<h1>` element per web page to avoid confusing search engines. ## Alt Attribute The [alt attribute](https://www.w3schools.com/tags/att_img_alt.asp), also known as alternative text, describes an image when it fails to load on a web page. 
In HTML documents, the alt attribute is always part of an image element: ```html <img src="image url" alt="image description"> ``` Alt attributes improve image comprehension for users of assistive technology like [screen readers](https://www.nomensa.com/blog/what-screen-reader). Also, they make your images accessible to search engines and provide context for those who want more information about your image. ### How to optimize Alt Attributes? [Alt attribute optimization](https://moz.com/learn/seo/alt-text) best practices are as follows: - Keep your alt texts [short and descriptive](https://www.washington.edu/doit/how-long-can-alt-attribute-be) - Never leave the alt attributes empty - Avoid using phrases. Don't use "image of" or "graphic of," for example, as the user is already aware that you are describing an image. ## Canonical Tag A [canonical tag](https://moz.com/learn/seo/canonicalization) tells search engines which web page to show in SERPs. When you want one of two web pages with similar or duplicate content to be the main page displayed on the SERP, you apply a canonical tag to that page. HTML documents always have the canonical tag between head elements: ```html <head> <link rel="canonical" href="website url"> <!-- The URL of the website you want as the main page --> </head> ``` ### How to Optimize Canonical Tags? 
[Canonical Tags optimization](https://developers.google.com/search/docs/advanced/crawling/consolidate-duplicate-urls) best practices are as follows: - Use one canonical tag per page - Always use [absolute URLs](https://developers.google.com/search/blog/2013/04/5-common-mistakes-with-relcanonical) - Do not [block indexation](https://developers.google.com/search/docs/advanced/crawling/consolidate-duplicate-urls) ## Robots Meta Tag [Robots meta tags](https://developers.google.com/search/docs/advanced/robots/robots_meta_tag) instruct search engines on what pages to [index](https://www.codehousegroup.com/insight-and-inspiration/tech-stream/what-is-web-page-indexing) and what not to index. In HTML documents, the robots tag is always between the head elements: ```html <head> <meta name="robots" content="noindex"> <!-- Do not show this page on the SERP --> <meta name="robots" content="noindex, follow"> <!-- Do not index the page but follow the links to other pages --> <meta name="robots" content="nofollow"> <!-- Index the page but do not follow the links to other pages --> <meta name="robots" content="none"> <!-- Do not follow or index this page --> </head> ``` **NOTE:** The `name` and `content` attributes are not **case-sensitive**. ### How to Optimize Robots Meta Tags? [Robots meta tag optimization](https://moz.com/learn/seo/robots-meta-directives) best practices are as follows: - Address all bots with robots and individual bots with [individual bot names](https://developers.google.com/search/docs/advanced/crawling/overview-google-crawlers) - Use the different parameters such as Noindex, Nofollow, etc. ## Conclusion This article has taught you the different types of HTML tags used for SEO and the best practices to follow. ## Resources You will find the following resources helpful: - [HTML tags in SEO](https://seranking.com/blog/html-tags-in-seo/) - [How to optimize title tags for better ranking](https://www.greengeeks.com/blog/heres-how-to-optimize-your-title-tags-for-better-ranking/)
cesscode
1,192,288
Let's Get Cyber-Physical: The Expanding Role of CPS
Cyber-physical systems are closely tied to the IoT and will increasingly use AI in every imaginable...
0
2022-09-13T16:01:54
https://www.electronicdesign.com/technologies/embedded-revolution/article/21250006/luos-lets-get-cyberphysical-the-expanding-role-of-cps
opensource, microservices, systems, luos
Cyber-physical systems are closely tied to the IoT and will increasingly use AI in every imaginable use case to operate more autonomously. ⏩ https://www.electronicdesign.com/technologies/embedded-revolution/article/21250006/luos-lets-get-cyberphysical-the-expanding-role-of-cps Nicolas Rabault wrote a new article on Electronic Design to define CPS and their applications in many fields.
emanuel_allely
1,192,314
SOLID : Dependency Inversion
When I searched about it, it turned out Dependency Inversion is accomplished by using Dependency...
19,756
2022-09-16T14:28:13
https://dev.to/kaziusan/solid-dependency-inversion-399h
typescript, architecture, programming
When I searched about it, it turned out **Dependency Inversion** is accomplished by using **Dependency Injection**. So I'm gonna use my own code about **Dependency Injection**. Please take a look at this article first, if you haven't read it yet {% link https://dev.to/kaziusan/-escape-from-crazy-boy-friend-explain-dependency-injection-so-easily-32fe %} ▼ Bad code from that article about **Dependency Injection** ```typescript class Woman { makeLover(){ const bf = new CrazyMan() bf.stayWith() } } class CrazyMan { stayWith() { console.log('I am dangerous man, stay with me') } } const woman = new Woman() woman.makeLover() ``` The image is like this: Woman depends on CrazyMan ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1tku3nzkll0szp1p0ck.png) We need to **invert** this direction, but how? We resolve it by using an interface; this is **Dependency Inversion** ```typescript class Woman { man: IMan; constructor(man: IMan) { this.man = man } makeLover(){ this.man.stayWith() } } // ⭐⭐ this interface is key !!!!!!! interface IMan { stayWith: () => void } class CrazyMan implements IMan { stayWith() { console.log('I am dangerous man, stay with me') } } class GoodMan implements IMan { stayWith() { console.log('I am good man, stay with me') } } const man = new GoodMan() const woman = new Woman(man) woman.makeLover() // I am good man, stay with me ``` ▼ Now CrazyMan and GoodMan depend on the **Man interface**; as you can see, the direction is now inverted ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ee75x45viqgzawx128yv.png) --- ref https://khalilstemmler.com/articles/tutorials/dependency-injection-inversion-explained/ [This is a good TypeScript sample, though it's in Japanese](https://www.membersedge.co.jp/blog/typescript-solid-dependency-inversion-principle/)
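A practical payoff of the inversion shown above is testability: because `Woman` depends only on the `IMan` interface, we can hand her a test double instead of a real class. This sketch reuses the article's classes in simplified form (the `FakeMan` test double is my addition, not from the article):

```typescript
interface IMan {
  stayWith: () => void
}

class Woman {
  constructor(private man: IMan) {}
  makeLover() {
    this.man.stayWith()
  }
}

// A test double that merely records whether it was called.
class FakeMan implements IMan {
  called = false
  stayWith() {
    this.called = true
  }
}

const fake = new FakeMan()
new Woman(fake).makeLover()
console.log(fake.called) // true
```

No real `GoodMan` or `CrazyMan` is needed to verify `Woman`'s behavior, which is exactly what "depend on abstractions, not concretions" buys you.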
kaziusan
1,192,553
Setting up WSL environment with VSCode, git, node.
Setting up an environment for cloud development is very important because you need to setup up...
0
2022-09-13T22:06:58
https://dev.to/smitgabani/setting-up-wsl-environment-with-vscode-git-node-docker-162k
Setting up an environment for cloud development is very important because you need to set up the environment and use environment variables while developing and deploying your application.

As we know, Linux is the operating system used for the cloud because it is accepted everywhere, and also because some cloud services are free if you use a Linux operating system.

Today I will help Windows users set up their Linux development environment with the help of Windows Subsystem for Linux (WSL).

**Installing Windows Subsystem for Linux**

You can take a look at the link below.

[https://docs.microsoft.com/en-us/windows/wsl/install](https://docs.microsoft.com/en-us/windows/wsl/install)

Run PowerShell as an administrator and run the command:

```
// PowerShell
wsl --install
```

Install a distro for WSL from the Microsoft Store. One option is Ubuntu.

**You need to restart your computer now.**

Once you restart your computer, you can confirm the installation by opening an Ubuntu bash terminal (search for Ubuntu in the Start menu). You then need to set up your Ubuntu operating system (selecting a username and password, and some settings like language, etc.).

Use this command to confirm the setup:

```
// PowerShell
wsl -l -v
```

If you see Ubuntu, you can move ahead with this article. Now you need to set up your Linux operating system. Run this set of commands to set up your environment.

Update and upgrade your Linux system:

```
// Linux - Ubuntu
sudo apt-get update && sudo apt-get upgrade
```

Check if git is installed:

```
// Linux - Ubuntu
git --version
```

**Installing node and npm**

If you are using Ubuntu, use this command to download Node.js v16.x:

```
// Linux - Ubuntu
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt-get install -y nodejs
```

You can check the link below to download other versions of Node for other distros.

[https://github.com/nodesource/distributions/blob/master/README.md](https://github.com/nodesource/distributions/blob/master/README.md)

To install jq, which is a JSON parser for curl and the terminal, use the command:

```
// Linux - Ubuntu
sudo apt-get install jq
```

See, that is very easy.

**Setting up VS Code**

Install the Remote - WSL extension from the Extensions tab.

Now you can open VS Code in the Linux operating system and file system by clicking the green area in the bottom-left part of VS Code.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2iwhjg31hgkkfuxk4t8n.png)

Then choose "New WSL Window". Once you open the new WSL window, you can find where you are by using `pwd` in your terminal.

**Exploring the WSL file system using File Explorer**

To open the same file location using the Windows File Explorer, take a look at the image below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zl9qzwni7zuv629e8x0x.png)

To get here, type the command given below in the address bar:

```
\\wsl$
```

Go to the location below to access your root dir:

```
\\wsl$\Ubuntu\home\yourusername
```

You can create a directory here to work with. In the open WSL window in VS Code, click "Open Folder" and select the folder you created.

This is all for now. I will edit this blog, or add a new one, with instructions for setting up Docker with WSL.
smitgabani
1,321,604
Reliable, Scalable, and Maintainable Applications
This is a reading note for Designing Data-Intensive Applications Overview For a developer...
0
2023-01-09T12:38:04
https://dev.to/atriiy/reliable-scalable-and-maintainable-applications-1fim
programming, architecture, distributedsystems
_This is a reading note for [Designing Data-Intensive Applications](https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/)_

## Overview

For a developer working in a commercial company, the most frequently encountered systems are __data-intensive systems__. Different companies have different businesses, but behind the different business models is a set of general capabilities: application code, in-memory caches, primary databases, message queues, and so on. A commercial company doesn't have enough resources to let developers implement all of them; therefore, we make architecture decisions according to the scale of the problem. A website with only 2,000 visits per day doesn't need to consider caching, but things become entirely different for a website with 2,000 visits per second.

Before jumping into the details of data-intensive systems, we need to know how to evaluate a system. Or, in other words, when we say a _good_ system, what exactly are we talking about? Generally, we evaluate a system from three aspects: reliability, scalability, and maintainability. We will discuss techniques, architectures, and algorithms to achieve these goals, but before that, we need to understand these terms clearly. A consistent and clear understanding of terms is the first step in discussing system design.

## Reliability

We often refer to this term when discussing system architecture, but people often misunderstand it. It's worth noticing that _reliability_ does not describe the system returning the right value that the user expected, or tolerating mistakes made by users; it means __continuing to work correctly, even when things go wrong__. Therefore, when we talk about reliability, we are actually talking about being __fault-tolerant__. A good paper called _[A Conceptual Framework for System Fault Tolerance](https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=11747)_ expounds on this topic, so we will focus on this paper in this section.

### Faults and failures

A precise understanding of these two terms is the first step in understanding system reliability. A __fault__ is usually defined as one component of the system deviating from its specification, whereas a __failure__ is when the system as a whole stops providing the required service to the user. A system may continue to provide its service even when encountering a fault. Such a system is called __fault tolerant__. It's impossible to reduce the probability of a fault to zero; therefore, it is usually best to design fault-tolerance mechanisms that prevent faults from causing failures.

### Dependencies and failure regions

If the correctness of one component's behavior requires the correct behavior of a second component, we say the first component depends on the second component. An actual system may have a set of possible dependencies forming a graph. The graph is _acyclic_ if it forms part of a tree, while it is _cyclic_ if some of the dependencies connect back to themselves; the second situation is better described as a directed cyclic graph.

When we design a fault-tolerant system, it is essential to identify the dependencies between components of the system. Dependencies may be static, or they may change. A fault that occurs in one component may propagate through the dependency graph, thus it is essential to understand failure regions. We define a failure region as a limitation of considering faults and failures to a portion of a system and its environment.

### Fault tolerance mechanisms

Based on the discussion above, we introduce three fault tolerance mechanisms. The first is __redundancy management__. It provides redundant resources for the system and automatically replaces a component when it fails, so that the system can continue providing service to the users. The second is __acceptance test techniques__. The components of the system test the information from other components before using it; this mechanism can work in a non-redundant system. The third is __comparison techniques__. Multiple processors are used to execute the same program, and the results are compared across processors.

### Achieve reliability

Based on the discussion of reliability and fault-tolerant systems, it's easy to understand that the first step in implementing a reliable system is to understand your system precisely. You need to know the system's requirements so that you will have enough information to identify the portions that may go wrong. Determine the appropriate fault containment regions to deal with faults, and make the time/space trade-offs.

## Scalability

Reliability describes whether a system can work reliably at this moment, but it does not mean a reliable system can still provide reliable service in the future. The number of users may increase, which puts the system under more pressure. Therefore, we use __scalability__ to describe a system's ability to cope with increased load.

We will discuss load and performance first. A good understanding of these two concepts gives us enough information to make trade-offs when developing the system's solution. After that, we will briefly talk about some approaches. It's hard to cover all the details of these approaches in one article; thus, we will only talk about the conceptual side.

### Describing load

Succinctly describing the load is the first step in improving the performance of our system. We use a few numbers called __load parameters__ for this requirement. The choice of load parameters depends on the architecture of your system. If the core function requires intensive reading and writing to the database, the ratio of that behavior is a good candidate. There is no universal measure of system load; you need to understand your system, or the function you're interested in, before choosing them. The book takes Twitter as an example; you can read it for inspiration.

### Describing performance

Generally, we want the best performance with the least amount of resources. Therefore, we have two ways to investigate the performance of our system when the load increases:

1. Keep the resources unchanged when you increase the load parameter and observe how the performance is affected.
2. Increase the resources to keep the performance unchanged, and observe how many resources you need to add.

Like the load, we need a few numbers to describe the performance of our system. For an online system, we usually use response time: the time between a client sending a request and receiving a response. It is vital to recognize that response time is not a single number but a _distribution_ of values. Therefore, looking at the average is misleading from a statistical point of view: a system with an unstable response time may have the same average as a stable system, but the resulting user experience is completely different.

We use __percentiles__ to bypass this fundamental issue with averages. The idea is to take the data during a period of time, sort it, discard the worst 1%, and observe the largest remaining value: that is the largest value that occurs within 99% of the response times. In practice, we often look at the 99.9th, 99th, 95th, and 90th percentiles, not just one of them. Why can't we use the 99th alone? Suppose the largest value within 99% of the responses is 400ms. It means 99% of the response times during this period are better than 400ms, but we don't know the distribution of these _better values_; maybe we have 100 response time records and 80 of them are 398ms. Therefore, we need to review several percentiles of the response time to recognize the distribution.

At the same time, it is necessary to know that what we talked about above only examines a single request. On modern websites, a web page may need to send hundreds of requests.
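To make the percentile idea concrete, here is a small sketch (my own illustration, not code from the book) using the nearest-rank method: sort the response times ascending and take the value at rank ⌈p/100 · n⌉.

```typescript
// Compute the p-th percentile (0 < p <= 100) of a list of response times (ms)
// using the nearest-rank method.
function percentile(responseTimesMs: number[], p: number): number {
  const sorted = [...responseTimesMs].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length)
  return sorted[Math.max(0, rank - 1)]
}

// 100 samples: 99 fast responses and one slow outlier
const samples = [...Array(99).fill(100), 2000]
console.log(percentile(samples, 50))  // 100  (median: the typical request)
console.log(percentile(samples, 99))  // 100
console.log(percentile(samples, 100)) // 2000 (the outlier only shows at the tail)
```

Note that the mean of these samples is 119ms, which hides the 2000ms response that one unlucky user actually experienced; the percentiles expose it.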
Therefore, even though you may have put a lot of effort into improving the percentile numbers, with hundreds of requests per page a user still experiences the worst case with high probability. If you want more information about this topic, I suggest you read [Why Percentiles Don’t Work the Way You Think](https://orangematter.solarwinds.com/2016/11/18/why-percentiles-dont-work-the-way-you-think/) and [Everything You Know About Latency Is Wrong](https://bravenewgeek.com/everything-you-know-about-latency-is-wrong/).

### Approaches for coping with load

Based on the discussion of load and performance, we can say that when we talk about the scalability of a system, we are talking about how to maintain good performance under increasing load parameters. Modern software development already has common methods for addressing this requirement: auto-scaling, load balancing, caching (including CDNs), and distributed systems. We will talk about them in the future because each topic needs much discussion and has tons of details.

Another thing is the trade-offs during architectural design. Although we have many ways to improve performance, we can't implement all of them because of limited resources. Therefore, a smart approach is to start with the simplest solution and refactor it until the current strategy, or high-performance requirements, force you to change it. But with the improvement of cloud service infrastructure, some approaches like distributed databases may become the default in the future.

## Maintainability

The majority of the cost of software development is in its ongoing maintenance, not the initial development. Although we cannot eliminate the pain of maintaining an old project, we should find ways to reduce it. To achieve this, we should pay attention to three design principles: operability, simplicity, and evolvability.

### Operability

Developing software does not only mean writing code; it also includes running it smoothly. For example, we need to monitor the system's health by collecting logs and metrics; we need CI/CD to release new versions smoothly; we need to keep the software up to date; we need to share knowledge about the software with team members who didn't participate in building it; and we need to establish good development practices. Most of these requirements can be addressed with automation tools. Good operability makes the team's life easier, so the team can focus on more valuable things.

### Simplicity

Managing complexity is the core topic of software development. As time goes by, more and more functions are added to the system, and the system may also accumulate a lot of technical debt. These factors make the system complex and difficult to understand. Two effective methods are abstraction and layering: they can hide the complexity behind a clean facade. This book will introduce some approaches to dividing a huge system into well-defined, reusable components. If you want to know how to do the same thing at the code level, a book worth reading is [Structure and Interpretation of Computer Programs](https://mitp-content-server.mit.edu/books/content/sectbyfn/books_pres_0/6515/sicp.zip/index.html).

### Evolvability

As we mentioned before, a system is not static. We don't only refactor the code and architecture to support more users; we also add new features and change existing functions. Therefore, evolvability is an important aspect when measuring a system. The __agile__ working patterns provide a good framework to deal with this problem, and some technical tools and patterns from the agile community are useful here. Test-driven development and refactoring are probably the most well-known methodologies.

## Summary

This article talked about some fundamental principles for measuring data-intensive applications. A good understanding of these principles is the first step to jumping into deep technical details. Although we will introduce more technologies to make applications reliable, scalable, and maintainable, we will never have a one-size-fits-all solution. Succinctly understanding your system is always the first step to addressing the problems and making the application better.
atriiy
1,192,688
Getting the Offer
Hello and welcome to the first post in my new series, After the Interview! If you've already been...
19,784
2022-09-14T12:21:09
https://corydorfner.com/getting-the-offer
career, learn, interview, offers
[Acing the Interview]: https://dev.to/dorf8839/series/12684
[contact page]: https://corydorfner.com/contact/
[survey by TopResume]: https://talentinc.com/press-2017-11-14
[survey by CareerBuilder]: https://www.careerbuilder.com/advice/these-5-simple-mistakes-could-be-costing-you-the-job

Hello and welcome to the first post in my new series, After the Interview! If you've already been following along with my prior series, [Acing the Interview], welcome back! If not, I would highly recommend reading through that series as a precursor to what will be a very informative series on what to do after the interview. Throughout this series, we'll be discussing topics like receiving and evaluating your offer letter, as well as handling the negotiation process afterwards. First things first though, let's get started on what to do after the interview, but before you get that offer letter.

***

>“One part at a time, one day at a time, we can accomplish any goal we set for ourselves.”
>&nbsp;&nbsp;&nbsp;&nbsp;-*Karen Casey*

Phew! You did it!😤 You made it through your interview! Ideally, the interviewer loved you and either scheduled a follow-up interview for you with some other individuals in the company, or has let you know that they'll be getting back to you shortly with an offer letter. If you feel like the interview might not have gone as well as you hoped though, don't fret. While the interview is important, how you handle the follow-up after the interview and potential rejection is just as critical. You have to remain present and responsive to the company, as there's a lot still to be done after the interview process, such as:

1. [Thank You Letters](#thank-you-letters)
2. [Handling Offers and Rejections](#handling-offers-and-rejections)

### Thank You Letters

Thank you letters sent directly to your recruiter and interviewers can make or break your chances of getting an offer letter. One [survey by TopResume] shows that 68% of recruiters and hiring managers say a quick thank you letter after the interview matters to them. The interview process doesn't end when you leave the building after your interview. The post-interview activities provide you with the wonderful opportunity to create a stronger relationship with the interviewers and hiring manager, keeping your candidacy at the top of their mind.

That survey also showed that 16% of interviewers outright reject an interviewee because they never sent any kind of thank you email or note after the interview. Another [survey by CareerBuilder] showed that 43% of interviewees don't send thank you notes after the interview.

![Man shaking head in disapproval](https://media.giphy.com/media/WRp58hy5gmfjpMzHAZ/giphy.gif)

This data shows that, while many companies expect a thank you letter after the interview, almost half of all candidates don't send one. Taking this into consideration, you can see how important this part of the process can be and how sending short, timely thank you letters after each interview and to each interviewer can be incredibly helpful. Not only can you reiterate why you're the right person for the job, it also provides you with the opportunity to mention something you may have forgotten during the interview, or even provide the correct solution to a question you may have gotten wrong during the interview. This shows the interviewers that you have the persistence and capabilities needed to do the job, even if you weren't able to prove that to them during the intense pressure of an interview. So be sure to get a business card or contact information from each interviewer and tuck that information away to follow up within 24 hours of the interview.

When writing your concise thank you letter, be sure to cover the following topics:

- Use short and clear subject lines for your follow-up email. Something like, "Thank you for the interview yesterday" or "[First Name], it was great meeting you"
- Thank each interviewer for their time by starting the email with the interviewer's first name and including some authentic part of the interview that you are grateful for involving that interviewer. Don't hesitate to mention something specific that you both discussed, such as a specific problem or skill that individual needs, and how it's one of your strong points, making you a great fit for the position
- Reiterate your interest in the company and why you are a great cultural fit, but don't come off as desperate. 1 or 2 quick sentences should be more than enough to do so
- Wrap up the email by asking if they have any other questions for you and what the next steps are in that company's interview process.
- Finally, be sure to include your full name, email, and phone number in the signature of the email so that they know how to reach out to you, if needed.

Below is a sample thank you letter that you're free to use in your post-interview communication:

> [Interviewer's First Name],
>
> Thank you for taking time out of your busy schedule to meet with me yesterday about the [insert job title] job opportunity. It was a pleasure learning more about the role and where you see the company over the next several years.
>
> It's clear to me by the projects you discussed that [insert company name] would be a very exciting place to work. I'm very interested in becoming an integral part of your team and contributing to its many successes in the future.
>
> I'm confident that my knowledge in [insert applicable skill] and experience with [insert applicable skill] will help bring about those successes with you and the rest of your team, as your new [insert job title].
>
> If you have any other questions for me, or need additional information, please don't hesitate to reach out. I'm looking forward to hearing back from you on the next steps.
>
> Once again, thank you for your time and have a great rest of the day.
>
> Sincerely,<br>
> John Smith<br>
> Phone: 123-456-7890<br>
> Email: email@gmail.com

As you can see, thank you letters take very little time to write but can have a major impact on your chances of landing that dream job. Be sure to write one up and send it out quickly after each interview to help separate you from the rest of the candidates and get the job offer you deserve.

### Handling Offers and Rejections

Even when it comes to handling offers and rejections from companies, how you handle yourself and how you respond is very important. If you have received an offer, but have interviewed with multiple companies, you might be waiting to hear back from them. If you want to wait before accepting or declining the offer extended to you, you can ask for an extension on the offer. Most offers from companies come with a deadline included, likely one to four weeks out from the offer date. While you should try to accept or decline the offer within that window, companies will typically try to accommodate an extension, if needed.

When it comes to declining an offer, it will be in your best interest to do so in a cordial manner and on good terms, to keep a line of communication open with the interviewers. It might turn out that you become interested in working for the company a few years in the future, or the contacts you made at this company might move to a different company that you're more excited to work for. With this in mind, provide a non-offensive and inarguable reason as to why you're declining the offer. For example, if you're declining an offer from a big company to go work at an environmentally friendly company, let them know why you're declining their offer and that you feel like making an environmental impact on the world is right for you at this current point in time. A big company can't argue with the fact that they likely won't suddenly become an environmentally friendly company in the next few years. Not only does this provide you with an inarguable reason as to why you're declining the offer, but it also helps build bridges to re-interview with the company in a few years when they do create plans to become environmentally friendly.

If you don't hear back from the interviewer immediately, it doesn't mean you're rejected and won't get the job. There are numerous reasons why the decision to hire you could be delayed. The company may have a couple more interviews to go through before making a decision. One of the interviewers might not have provided their feedback on you to the interview team yet, causing a delay in getting a holistic idea of you. It could also be that they're busy drafting up your offer letter and discussing the details of that offer letter with your future boss! Whatever the cause of the delay, try to stay calm. Spend some time outside enjoying nature, or being around friends and family, to get your mind off the interview. If there still is no response from the company after a few days, it's more than acceptable to follow up, politely, with your recruiter.

If you do end up being rejected by the company, it's not the end of the world. While being rejected is never a fun experience, it doesn't mean that you're not an excellent engineer or that you're not worthy of the position you interviewed for. Just as mock interviews teach you something about yourself and the interview process, so will you have learned a lot from this interview. It might be that you had an off day, or just don't "test well" with these types of interviews. Whatever the reasoning, it will be in your best interest to reflect on the interview process and try to identify what went wrong. Be sure to remain cordial and responsive by thanking the interviewer for their time, letting them know that you're disappointed with their decision but understand, and asking when you can reapply to the company. While some of the bigger companies won't provide feedback due to their own company policies, it never hurts to ask them something like, "For the next interview, is there anything that you believe would be beneficial for me to work on?". Once you find out that information, you can focus on improving upon it for the next interview.

There are plenty of other companies and excellent positions to interview for out in the world, and I know you'll find one that you absolutely love. It will take time, but stay persistent and you'll be sure to find it. If there is no other company that you want to work for though, it is possible for you to re-apply to the company and try again. Big tech companies like Facebook, Apple, and Google reject numerous candidates during every interview cycle. The hiring managers also realize that these interview processes aren't perfect and can sometimes reject good candidates because of it. When previously rejected individuals re-apply 6-12 months later, companies are typically eager to re-interview them and might even expedite the process based on your prior performance. Your initial interview results most likely won't have a major impact on your re-interview, so keep studying up and preparing yourself for that next interview. With a little bit of persistence, you may end up getting that offer letter from the company of your dreams.

![thumbs up gif](https://media.giphy.com/media/xUKrrEnN9I5lnrcSMv/giphy-downsized.gif)

***

If you enjoyed the post, be sure to follow me so that you don't miss the rest of this series, where I continue, in detail, on what to do After the Interview, including evaluating and negotiating your offer letter! The links to my social media accounts can be found on the [contact page] of my personal website. Please feel free to share any of your own experiences with the interviewing process, general questions and comments, or even other topics you would like me to write about. If this series of posts helps you land that dream job of yours, be sure to let me know as well. Thank you and I look forward to hearing from you!👋
dorf8839
1,192,820
Auto-expand menu using Angular Material
This post is going to be a bit longer than usual so bear with me 🙂 I was working on a task at work...
0
2022-09-14T14:31:25
https://dzhavat.github.io/2022/09/14/auto-expand-menu-using-angular-material.html
angular, material
This post is going to be a bit longer than usual so bear with me 🙂

I was working on a task at work where one of the requirements was to make a menu auto-expand whenever the user navigates to a sub-page that is part of a menu group. To give you a visual idea, take a look at the following video:

![Final demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z5sa5s3pwa3sjtdstzxf.gif)

Pages 3 and 4 are grouped under "Nested menu" and the menu auto-expands when the user navigates to one of those pages. Looks nice, doesn't it? 😎

In this post I'll show you how I built it. It's not hard to achieve but there's one disclaimer.

The disclaimer: The solution I'm going to share in this post is specific to our components architecture. If you want to achieve the same in your project, the final solution might be different. So let me tell you a bit more about our setup before diving into the code.

### Components architecture

The application I'm working on is made up of two parts:

1. Application specific components
2. Design System components

#### Design System components

As you might've guessed, the Design System consists of small components focused on particular UI needs. They are used in the application. As part of the Design System, we've got components like `nav-list` and `nav-list-item`, and an `expand-on-active-link` directive where the real magic happens 🪄

##### `nav-list-item` component

This component is a wrapper around `mat-list-item` from Material and has two requirements:

1. Should support internal links
2. Should support external links

The component class has a `link` Input and some logic that decides whether the link is internal or external. That's not important for this post but you can see it in the final GitHub repo.
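(Just to give you an idea of what such logic might look like, here's a purely hypothetical sketch rather than the repo's actual code: treat absolute http(s) URLs as external and everything else as an internal router link.)

```typescript
// Hypothetical sketch: an absolute http(s) URL is considered external,
// anything else (e.g. "/page-1") is treated as an internal router link.
function isExternalLink(link: string): boolean {
  return /^https?:\/\//.test(link)
}

console.log(isExternalLink('https://angular.io/')) // true
console.log(isExternalLink('/page-3'))             // false
```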
Here's its template at this point: ```html <!-- nav-list-item.component.html --> <a *ngIf="isExternalLink; else internalLink" mat-list-item mat-ripple [href]="link" [attr.target]="target" ><ng-container *ngTemplateOutlet="templateContent"></ng-container ></a> <ng-template #internalLink> <a mat-list-item mat-ripple [routerLink]="link" routerLinkActive="active" ><ng-container *ngTemplateOutlet="templateContent"></ng-container ></a> </ng-template> <ng-template #templateContent> <ng-content></ng-content> </ng-template> ``` ##### `nav-list` component `nav-list` is a wrapper around `mat-nav-list` from Material. The component has an `expandable` Input property that, when set to `true`, places a `mat-nav-list` (and its projected content) inside a `mat-expansion-panel`, otherwise it displays `mat-nav-list` directly. Here's its template at this point: ```html <!-- mat-nav-list.component.html --> <ng-container *ngIf="expandable; else navListTemplate"> <mat-expansion-panel class="mat-elevation-z0"> <mat-expansion-panel-header> <mat-panel-title>{% raw %}{{ title }}{% endraw %}</mat-panel-title> </mat-expansion-panel-header> <ng-container *ngTemplateOutlet="navListTemplate"></ng-container> </mat-expansion-panel> </ng-container> <ng-template #navListTemplate> <mat-nav-list><ng-content></ng-content></mat-nav-list> </ng-template> ``` We'll come back to the `expand-on-active-link` directive later. #### Application specific components The application specific components is where components from the Design System are used. ### `sidebar-nav` component The component simply puts `nav-list` and `nav-list-item` together. 
Its template is as follows: ```html <!-- sidebar-nav.component.html --> <nav-list> <nav-list-item link="/page-1">Page 1</nav-list-item> <nav-list-item link="/page-2">Page 2</nav-list-item> <nav-list-item link="https://angular.io/">Angular</nav-list-item> </nav-list> <nav-list [expandable]="true" [title]="'Nested menu'"> <nav-list-item link="/page-3">Page 3</nav-list-item> <nav-list-item link="/page-4">Page 4</nav-list-item> </nav-list> ``` From the code above, the first `nav-list` displays a list of links while the second `nav-list` displays an expandable list of links grouped under the "Nested menu" title. Demo time ⌚ ![Menu that must be expanded manually](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ys46lp8ytsh6r9q4xuc6.gif) The expandable menu must be opened manually. Any highlighted menu item is hidden until the menu is expanded which can be confusing for the user. We want to fix that by adding auto-expand capabilities. Let's see how. ### Making the menu auto-expand First let's define some requirements: 1. An expandable menu should automatically expand when the user navigates to a sub-page that is part the menu. 2. An already expanded menu should stay open when the user navigates to another top level page. There are probably a few solutions here. One could be to listen for the [`NavigationEnd`](https://angular.io/api/router/NavigationEnd) router event and somehow figure out which `nav-list` to expand based on routes. Another could be to listen for the [`isActiveChange`](<(https://angular.io/api/router/RouterLinkActive#properties)>) event on each [`routerLink`](https://angular.io/api/router/RouterLink) and expand the closest `nav-list`. This is the approach I used. So there are a few changes that needs to be made. #### Modifying the `nav-list-item` component Remember how this component supports internal and external links? 
Well, each internal link uses the `routerLink` directive, which conveniently has an `isActiveChange` Output property that emits `true` every time a link becomes active and `false` when it becomes inactive. For now we’ll simply forward the emitted value to another Output property on the `nav-list` component class. We’ll see why later. So the component class and its template now look like this: ```html <!-- nav-list-item.component.html --> <!-- ... --> <a [routerLink]="link" (isActiveChange)="isActive.emit($event)" ...>...</a> ``` ```ts // nav-list-item.component.ts @Component({ // ..., selector: 'nav-list-item', }) export class NavListItemComponent { // ... @Output() isActive = new EventEmitter<boolean>(); } ``` #### Modifying the `nav-list` component What we want to do here is query the template for all the projected `nav-list-item` components. We can use the `@ContentChildren` decorator for that. ```ts // nav-list.component.ts @Component({ // ... selector: 'nav-list', }) export class NavListComponent { // ... @ContentChildren(NavListItemComponent) navListItemComponents: QueryList<NavListItemComponent> | null = null; } ``` Once we have all `nav-list-item` components we're going to send them to a custom directive (shown below) that will listen for the `isActive` event on each link within a sub-menu and expand the related `mat-expansion-panel` component in case any of the links emit `true`. Let's modify the `nav-list` template first, then we'll look at the custom directive. ```html <!-- nav-list.component.html --> <!-- ... --> <mat-expansion-panel expandOnActiveLink [navListItemComponents]="navListItemComponents" ... > <!-- ... --> </mat-expansion-panel> ``` As you can see, a custom `expandOnActiveLink` directive is added only on the `mat-expansion-panel`. The directive has one Input called `navListItemComponents` which takes a list of `nav-list-item` components. #### `expand-on-active-link` directive Here's the power of Angular's directives. 
When you add a directive to a component, you can inject an instance of that component in the directive's `constructor`. We're going to use that to our advantage. The plan is to inject an instance of [`MatExpansionPanel`](https://material.angular.io/components/expansion/api#MatExpansionPanel) in the directive and use its `open` method to expand the panel whenever one of the projected `nav-list-item` components emits `true` from its `isActive` Output. Let's first see the directive, then we'll talk about the code: ```ts // expand-on-active-link.directive.ts @Directive({ selector: '[expandOnActiveLink]', exportAs: 'expandOnActiveLink', standalone: true, }) export class ExpandOnActiveLinkDirective implements AfterContentInit { @Input() navListItemComponents: QueryList<NavListItemComponent> | null = null; constructor(private panel: MatExpansionPanel) {} ngAfterContentInit(): void { const navListItems = this.navListItemComponents?.toArray(); if (navListItems) { from(navListItems) .pipe( mergeMap((item) => item.isActive), filter((isActive) => isActive) ) .subscribe(() => { // Looks like there's a bug in `mat-drawer` component // that prevents `mat-expansion-panel` from expanding // This littl' fella fixes it :) setTimeout(() => this.panel.open(), 0); }); } } } ``` A few things to note here. First, `navListItemComponents` are accessed in the [`ngAfterContentInit`](https://angular.io/api/core/AfterContentInit) lifecycle hook because [`ContentChildren`](https://angular.io/api/core/ContentChildren) queries are set right before it. Second, the [`from`](https://rxjs.dev/api/index/function/from) function takes an array of `nav-list-item` components and sends each component to the [`mergeMap`](https://rxjs.dev/api/index/function/mergeMap) operator. `mergeMap` picks up the `isActive` Output property of each component and merges their events into a single stream. The `filter` operator afterwards makes sure that only `true` events will continue down the stream.
At the end, the injected `panel` instance is used to open the `MatExpansionPanel` component. `setTimeout` is used because at the time of this writing, apparently there's a bug in Material that prevents `mat-expansion-panel` from expanding if it is placed inside a `mat-drawer` 🤷‍♂️. ### Final demo Wow, that was a lot! Here's a final [StackBlitz demo](https://stackblitz.com/github/dzhavat/angular-material-auto-expand-sidebar-menu?file=src/app/sidebar-nav.component.ts). Also a [GitHub repo](https://github.com/dzhavat/angular-material-auto-expand-sidebar-menu). ### Conclusion I hope you liked this post. It's very specific to a particular component setup but an interesting challenge nonetheless. Oh, but we're not done yet! This is the part where **you** come in. Do you have a suggestion for improving the solution? What can be done differently? Let me know on [Twitter](https://twitter.com/dzhavatushev).
dzhavat
1,193,158
Top 40 GIT interview questions and answers for SDET - Automation QA - DevOps - SDE - QAE?
Refer to the link below for frequently used GIT commands and git...
0
2022-09-14T13:26:48
https://dev.to/sidharth8891/top-40-git-interview-questions-and-answers-for-sdet-automation-qa-devops-sde-qae-2jei
Refer to the link below for frequently used GIT commands and git conflicts ************************** https://lnkd.in/eWzB45W2 ************************** Want to learn more about GIT and go through all posts related to GIT for SDET? Then check out [HERE](https://automationreinvented.blogspot.com/search/label/GIT) **Git merge conflicts** Version control systems are all about managing contributions between multiple distributed authors (usually developers). Sometimes multiple developers may try to edit the same content. If Developer A tries to edit code that Developer B is editing, a conflict may occur. To reduce the occurrence of conflicts, developers work in separate, isolated branches. The git merge command's primary responsibility is to combine separate branches and resolve any conflicting edits. Git merge conflicts generally arise when two people have changed the same lines in a file, or if one developer deleted a file while another developer was modifying it. In these cases, Git cannot automatically determine what is correct. Conflicts only affect the developer conducting the merge; the rest of the team is unaware of the conflict. Git will mark the file as being conflicted and halt the merging process. It is then the developers' responsibility to resolve the conflict. "Remember that if you are planning to move to an SDET role, then expertise in GIT is non-negotiable." #qa #automation #devops #interview #testautomation #softwaretesting #github #gitcommands #testing #testers #sdet #gitops #sourcecode #learningtech #learning #team #developer #people Follow @sidharth88 for more posts
sidharth8891
1,193,200
Using GitHub in Teaching Students
The articles were first published on the Habr portal: Using GitHub in teaching students -...
0
2022-09-14T14:45:38
https://dev.to/anstfoto/ispolzovaniie-github-v-obuchienii-studientov-3gg3
github, education, learning, team
*These articles were first published on the Habr portal:* - *[Using GitHub in teaching students](https://habr.com/ru/post/533940/) - https://habr.com/ru/post/533940/* - *[The fork-based variant](https://habr.com/ru/post/534198/) - https://habr.com/ru/post/534198/* - *[The team workflow variant](https://habr.com/ru/post/534292/) - https://habr.com/ru/post/534292/* - *[The team workflow variant with multiple repositories](https://habr.com/ru/post/536590/) - https://habr.com/ru/post/536590/* *** I use GitHub in my teaching practice... But first, let me introduce myself. My name is Andrey Starinin, and I teach programming, although my first degree is in biology. I am also one of the founders and hosts of the "IT за Edu" podcast. My stack of disciplines: - C++ - programming fundamentals - OOP basics - GUI applications (Qt) - C# - OOP - network programming - GUI applications (WPF) - application and database interaction (ADO.Net) - Databases - database design - SQLite - MySQL - Project management It looks like a lot, but we only manage a shallow dive into the individual technologies. After some time (I no longer remember exactly how long) I realized that students can, and even should, be introduced to version control systems almost immediately, from the very start of their studies. For teaching I chose GitHub, although I like Bitbucket too. No, I don't teach students the hard way right from the start; they don't begin with git in the CLI. First I introduce them to GitHub's web interface, then I tell them about GUI clients. Of those, I like GitKraken, but I don't force them to use what I like: they are free to choose their own tools. Step by step, it goes roughly like this: 1. I simply show them how to publish code 2. I ask them to publish their solutions and send me links to their repositories 3. I publish assignment texts and ask them to submit answers via pull requests 4. We try working in small teams on a single repository without branches 5. We try working as a small team on a single repository with separate branches 6.
We try working on a large project as a large team with multiple repositories and branches. I try to apply this gradual approach when studying topics. Sometimes topics end before we manage to move on to a large or small project, but that is not a big problem: after covering several topics we can combine the acquired knowledge into one big project. Not all students understand and accept everything right away, but that makes it all the more interesting and rewarding when it finally "clicks". I also like the "learn from your own mistakes" approach: during studies there is room to make mistakes and to see what they lead to. What do I like about GitHub for teaching? - Support for organization accounts, and within those accounts the ability to create teams with flexible access settings ![github organizations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n686z8dxourekvsfauia.png) ![github organizations teams](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oowirgqfgv7a9c6bvqvj.png) ![github organizations teams role](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2yq53v0c027wr81g33z3.png) - Markdown support. Assignments can be formatted more "nicely". ![github markdown](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n9ki7zmm54o47wbh960.png) - The fork system. Anyone can create a fork and then propose a pull request; you don't always need to add every student to a team. ![github forks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sq039vqmvffhqlua326h.png) - The ability to comment on sections of code during a review. Very convenient for pointing out the strong and weak spots in programs. ![github codereview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lb71ka611xptgnubzmu4.png) - The ability to assign any team member as a reviewer. Students should be able not only to write good programs but also to review other people's code. - The issues system. You can task other student teams with reviewing code and finding bugs, logging everything in issues.
![github issues](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xv10tamkp6ffh6ub8fqs.png) Why do I get students used to GitHub? - They build their own portfolio from the very start of their studies, not just toward the end. - They come to understand coding principles. When they start reviewing other people's code, they understand a lot. - They come to understand naming conventions. Until they step on the rake of inconsistent naming within a single team, they don't get it. Or at least not all of them do. - They learn how to work in a team, and how teams interact with each other. I fully realize that my methods are not the best and are far from perfect, and they are also somewhat removed from real-world practice, but I try to bring them closer to reality. ## The fork-based variant I'll start with the variant where you don't have to add students to an organization account, i.e. you can create assignment repositories in your own account. ### Approximate workflow - Create a repository named after the assignment. - In `README.md`, add the assignment text and detailed (desirable, but optional) instructions on what the students must do and how. Be sure to draw attention to creating a fork and, after completing the work (that is, filling the repository), creating a pull request back to your original repository. Example - [https://github.com/college-VIVT/TerminalEmulator](https://github.com/college-VIVT/TerminalEmulator) Announce the assignment and the repository link to the students in the appropriate place. ![readme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zteunf760hbfq3u5gnoo.png) - Wait for the assignment to be completed, or more precisely, for the pull request to be created. ![pull requests](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5gjm0of4dtyhvgpj7qut.png) - Review. Leave comments either on the assignment as a whole or on its individual parts.
![code review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tol67uks83t0ic90z5w9.png) ![code review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xmbsdhkvaccr5lmo9mwb.png) - In this situation you don't need to accept (merge) the pull request. If everything is fine, you can simply leave a comment in the code review. If everything is bad, you don't accept it. #### Pros and cons ##### Pros: - No organization account needed - You can send the assignment to any number of students, even from different groups or educational institutions ##### Cons: - You have to make sure nobody performs the merge - You have to explain what a fork and a pull request are (for some of my students this caused extra difficulties) - Difficulties with approving the requests. I want the repository to contain only the assignment and no solution code from students. **Possible additions:** create a separate branch for each student, but that means extra work when creating and later filling the repository. ## The team workflow variant I'll continue with the team workflow variant, but I'll consider the version without a large number of repositories and branches. I'll probably cover large-team work in a separate post. ### Approximate workflow - Create an organization account ![account organization](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zv45pq08uicoo5yz8nrl.png) - Add students to it. ![add students to organization](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucar8sfyie4069hjdu87.png) - Create a repository. Add the assignment text to README.md. Also pre-fill the repository with the necessary minimum (the files needed to complete the assignment). Create the required branches. I usually create a dev or develop branch ![create repo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31zd140lkangowm0y2ii.png) - Having received the assignment, students branch off from the latest commit, complete the assignment, and commit.
Assignments can be handed out either via issues or via some service with Kanban or Scrum ![create issues](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dqdur6fuoqwkagbjoqlq.png) - They create a pull request ![create pull request](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/620q9adscoif6c5y92jc.png) - Review. Leave comments either on the assignment as a whole or on its individual parts. ![code review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5bh9gkfer1hy1g3f9x4.png) ![code review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ziws90tfe8b30i7roa1b.png) #### Pros and cons ##### Pros: - A modeling variant that is closer to reality - You can assign students as code reviewers, even of the teacher's code. I like to plant deliberate errors in the code, both obvious and subtle, so that students find and fix them. ##### Cons: - You need to create a separate account for the organization - You have to explain how to work with branches and make sure they push to the right branch. **Possible additions:** link the repository to a Kanban or Scrum service so that handing out assignments is recorded as cards on boards. ## The team workflow variant with multiple repositories Now for the variant "closest" to real life, where implementing a single program gives rise to subprojects, with different teams working on them in different repositories. ### Approximate workflow Some of the steps repeat the previous example. - Create an organization account ![account organization](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uoiyvn85enq05vej2jjd.png) - Add students to it. ![add students to organization](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/524diqt8o75msll92ydq.png) - Create a repository. Add the assignment text to README.md. Also pre-fill the repository with the necessary minimum (the files needed to complete the assignment). Create the required branches.
I usually create a dev or develop branch ![create repo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bu2ip1bq5kcronn4zqk5.png) - Having received the assignment, students clone the repository to their local machines. ![clone repo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ewjdykawxk9cneuwkxrm.png) ![clone repo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yin9j5pdmmvhnyz5l96r.png) - As the solution is discussed, subprojects are identified. Teams are created for each subproject, and each subproject gets its own pre-filled repository. ![teams](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i7mwwk4d5b2mgtkumja3.png) ![teams](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/olyc44iqpz0jtifznekl.png) - Teams complete the assignments, commit, and push. Assignments can be handed out either via issues or via some service with Kanban or Scrum ![github projects](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/er66t56fi06usmjxtetk.png) - They create a pull request ![create pull request](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1d2x2ee798qr9bnk872.png) - Review. Leave comments either on the assignment as a whole or on its individual parts. ![code review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rsxi53u1d3awekrmntwk.png) ![code review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w1p8fxw5zqhwlww8968.png) - Releases are created. Finished DLLs or other artifacts are taken from the releases and plugged into the main project. ![realese](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hzydena4l48npmjir17a.png) - Each team maintains technical documentation. ![documentation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0sggujdvog8bdxwtp0dv.png) #### Pros and cons ##### Pros: - A modeling variant that is closer to reality - You can assign students as code reviewers, even of the teacher's code.
I like to plant deliberate errors in the code, both obvious and subtle, so that students find and fix them. - Each team works on its own subproject - Students practice cross-team interaction while developing one large project. ##### Cons: - You need to create a separate account for the organization - You have to explain how to work with branches and make sure they push to the right branch. - You have to explain what a release is and how versioning works. - You have to explain how technical documentation is written and why it is needed. ##### Possible additions: - link the repository to a Kanban or Scrum service so that handing out assignments is recorded as cards on boards - instead of creating separate repositories for each subproject, use git submodules
anstfoto
1,193,256
Let's create a React File Manager Chapter XII: Progress Bars, Skeltons And Overlays
We can now navigate between directories, but we don't have any feedback when we are loading the...
19,719
2022-09-14T15:51:42
https://dev.to/hassanzohdy/lets-create-a-react-file-manager-chapter-xii-progress-bars-skeltons-and-overlays-1ih3
react, typescript, mongez, javascript
--- title: Let's create a React File Manager Chapter XII: Progress Bars, Skeletons And Overlays published: true description: series: File Manager React Mongez tags: react, typescript, mongez, javascript cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fmnreiqss4xt7e9wuz1.png # Use a ratio of 100:42 for best results. --- We can now navigate between directories, but we don't have any feedback while we are loading the files, so let's add a loading progress bar. ## Updating Root Path We forgot to update the file manager root path when loading the file manager, so let's do it now. ```diff // FileManager.tsx // load the given directory path const load = useCallback( (path: string, isRoot = false) => { setIsLoading(true); + if (isRoot) { + fileManager.setRootPath(path); + } fileManager.load(path).then(node => { setCurrentDirectoryNode(node); setIsLoading(false); if (isRoot) { setRootDirectoryNode(node); } }); }, [fileManager], ); ``` ## Removing Modal A modal is good for displaying the file manager in an isolated layout, but what if we need it to take up the entire page? In that case we need to remove the modal; we can always make another wrapper component for the modal later.
```tsx // FileManager.tsx import { Grid } from "@mantine/core"; import BaseFileManager from "app/file-manager/utils/FileManager"; import { useCallback, useEffect, useRef, useState } from "react"; import Content from "../../Content"; import FileManagerContext from "../../contexts/FileManagerContext"; import { Node } from "../../types/FileManager.types"; import { BodyWrapper } from "./FileManager.styles"; import { FileManagerProps } from "./FileManager.types"; import LoadingProgressBar from "./LoadingProgressBar"; import Sidebar from "./Sidebar"; import Toolbar from "./Toolbar"; export default function FileManager({ rootPath }: FileManagerProps) { const [isLoading, setIsLoading] = useState(true); const [currentDirectoryNode, setCurrentDirectoryNode] = useState<Node>(); const [rootDirectoryNode, setRootDirectoryNode] = useState<Node>(); const { current: fileManager } = useRef(new BaseFileManager()); // load the given directory path const load = useCallback( (path: string, isRoot = false) => { setIsLoading(true); if (isRoot) { fileManager.setRootPath(path); } fileManager.load(path).then(node => { setCurrentDirectoryNode(node); setIsLoading(false); if (isRoot) { setRootDirectoryNode(node); } }); }, [fileManager], ); // load root directory useEffect(() => { if (!rootPath) return; load(rootPath, true); }, [rootPath, fileManager, load]); return ( <FileManagerContext.Provider value={fileManager}> <LoadingProgressBar /> <Toolbar /> <BodyWrapper> <Grid> <Grid.Col span={3}> <Sidebar rootDirectory={rootDirectoryNode} /> </Grid.Col> <Grid.Col span={9}> <Content /> </Grid.Col> </Grid> </BodyWrapper> </FileManagerContext.Provider> ); } FileManager.defaultProps = { rootPath: "/", }; ``` Don't forget to update the props types as well. 
```tsx // FileManager.types.ts import { Node } from "../../types/FileManager.types"; export type FileManagerProps = { /** * Root path to open in the file manager * * @default "/" */ rootPath?: string; /** * Callback for when a file/directory is selected */ onSelect?: (node: Node) => void; /** * Callback for when a file/directory is double clicked */ onDoubleClick?: (node: Node) => void; /** * Callback for when a file/directory is right clicked */ onRightClick?: (node: Node) => void; /** * Callback for when a file/directory is copied */ onCopy?: (node: Node) => void; /** * Callback for when a file/directory is cut */ onCut?: (node: Node) => void; /** * Callback for when a file/directory is pasted * The old node will contain the old path and the new node will contain the new path */ onPaste?: (node: Node, oldNode: Node) => void; /** * Callback for when a file/directory is deleted */ onDelete?: (node: Node) => void; /** * Callback for when a file/directory is renamed * The old node will contain the old path/name and the new node will contain the new path/name */ onRename?: (node: Node, oldNode: Node) => void; /** * Callback for when a directory is created */ onCreateDirectory?: (directory: Node) => void; /** * Callback for when file(s) is uploaded */ onUpload?: (files: Node[]) => void; /** * Callback for when a file is downloaded */ onDownload?: (node: Node) => void; }; ``` Now let's clean our home page and just render the file manager. ```tsx // HomePage.tsx import Helmet from "@mongez/react-helmet"; import FileManager from "app/file-manager/components/FileManager"; export default function HomePage() { return ( <> <Helmet title="home" appendAppName={false} /> <FileManager /> </> ); } ``` ## Adding Progress Bar Component As we mentioned before, we're going to use events as it's super powerful, so we can use it to listen when file manager is loading then we show the progress bar, once loading is done we hide it. 
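Before writing the component, here's the event flow in isolation. The `Events` class below is a hypothetical minimal pub/sub standing in for the real `FileManager` event API (only the `on(...)` returning an object with `unsubscribe()` is taken from this series; everything else is illustrative):

```typescript
// Minimal pub/sub: `on` returns a subscription object so listeners
// can be removed later, mirroring the `.unsubscribe()` pattern used in the series.
type Handler = () => void;

class Events {
  private handlers: Record<string, Handler[]> = {};

  on(event: string, handler: Handler): { unsubscribe: () => void } {
    if (!this.handlers[event]) this.handlers[event] = [];
    this.handlers[event].push(handler);
    return {
      unsubscribe: () => {
        this.handlers[event] = this.handlers[event].filter(h => h !== handler);
      },
    };
  }

  trigger(event: string): void {
    for (const handler of this.handlers[event] ?? []) handler();
  }
}

const fakeFileManager = new Events();

// The progress bar only has to react to two events:
let progressBarVisible = false;
const loadingEvent = fakeFileManager.on("loading", () => (progressBarVisible = true));
const loadEvent = fakeFileManager.on("load", () => (progressBarVisible = false));

fakeFileManager.trigger("loading"); // show the bar
fakeFileManager.trigger("load");    // hide it again

// Cleanup, as in a useEffect teardown: a removed listener never fires again.
loadingEvent.unsubscribe();
loadEvent.unsubscribe();
fakeFileManager.trigger("loading"); // progressBarVisible stays false
```

The component we are about to build is exactly this logic, with `setProgress` in place of the boolean flag.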
Create `components/LoadingProgressBar.tsx` file and add the following code: ```tsx // LoadingProgressBar.tsx import useFileManager from "../../hooks/useFileManager"; export default function LoadingProgressBar() { const fileManager = useFileManager(); return <div>LoadingProgressBar</div>; } ``` Nothing fancy here, we just need to get the file manager instance to listen to its events. Now let's import it in our File Manager component and add it to the body. ```tsx // FileManager.tsx import { Grid } from "@mantine/core"; import BaseFileManager from "app/file-manager/utils/FileManager"; import { useCallback, useEffect, useRef, useState } from "react"; import Content from "../../Content"; import FileManagerContext from "../../contexts/FileManagerContext"; import { Node } from "../../types/FileManager.types"; import { BodyWrapper } from "./FileManager.styles"; import { FileManagerProps } from "./FileManager.types"; import LoadingProgressBar from "./LoadingProgressBar"; import Sidebar from "./Sidebar"; import Toolbar from "./Toolbar"; export default function FileManager({ rootPath }: FileManagerProps) { const [isLoading, setIsLoading] = useState(true); const [currentDirectoryNode, setCurrentDirectoryNode] = useState<Node>(); const [rootDirectoryNode, setRootDirectoryNode] = useState<Node>(); const { current: fileManager } = useRef(new BaseFileManager()); // load the given directory path const load = useCallback( (path: string, isRoot = false) => { setIsLoading(true); if (isRoot) { fileManager.setRootPath(path); } fileManager.load(path).then(node => { setCurrentDirectoryNode(node); setIsLoading(false); if (isRoot) { setRootDirectoryNode(node); } }); }, [fileManager], ); // load root directory useEffect(() => { if (!rootPath) return; load(rootPath, true); }, [rootPath, fileManager, load]); return ( <FileManagerContext.Provider value={fileManager}> <LoadingProgressBar /> <Toolbar />
<BodyWrapper> <Grid> <Grid.Col span={3}> <Sidebar rootDirectory={rootDirectoryNode} /> </Grid.Col> <Grid.Col span={9}> <Content /> </Grid.Col> </Grid> </BodyWrapper> </FileManagerContext.Provider> ); } FileManager.defaultProps = { rootPath: "/", }; ``` > Sometimes I paste the entire component code, and other times I don't, so you can see the changes, but you can always check the full code in the github repo. Now let's use [Mantine Progress Bar](https://mantine.dev/core/progress/) and try it ```tsx // LoadingProgressBar.tsx import { Progress } from "@mantine/core"; import useFileManager from "../../hooks/useFileManager"; export default function LoadingProgressBar() { const fileManager = useFileManager(); return <Progress size="lg" value={50} striped animate />; } ``` It should look like this: ![Progress Bar](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/seu2va23bmhopez144pc.png) Now let's add the logic to show/hide the progress bar. ```tsx // LoadingProgressBar.tsx import { Progress } from "@mantine/core"; import { useEffect, useState } from "react"; import useFileManager from "../../hooks/useFileManager"; export default function LoadingProgressBar() { const fileManager = useFileManager(); const [progress, setProgress] = useState(0); useEffect(() => { // let's create an interval that will update progress every 100ms let interval: ReturnType<typeof setInterval>; // we'll listen for loading state const loadingEvent = fileManager.on("loading", () => { setProgress(5); interval = setInterval(() => { // we'll increase it by 2 every 100ms // if it's 100% or more we'll keep it at 100% setProgress(progress => { if (progress >= 100) { clearInterval(interval); return 100; } return progress + 2; }); }, 100); }); // now let's listen when the loading is finished const loadEvent = fileManager.on("load", () => { // clear the interval setProgress(100); setTimeout(() => { clearInterval(interval); // set progress to 0 setProgress(0); }, 300); }); // unsubscribe events on
unmount or when useEffect dependencies change return () => { loadingEvent.unsubscribe(); loadEvent.unsubscribe(); }; }, [fileManager]); if (progress === 0) return null; return <Progress size="lg" value={progress} striped animate />; } ``` The code looks a bit complicated, but it's not that hard: we create an interval that increases the progress by 2 every 100ms, and we listen to `loading` and `load` events to start and stop the interval. When the effect is unmounted or its dependencies change, we unsubscribe the events. Now to test it, we'll fake the loading by adding a `setTimeout` in our `list` function. ```tsx // file-manager-service.ts import FileManagerServiceInterface from "../types/FileManagerServiceInterface"; import fetchNode from "../utils/helpers"; export class FileManagerService implements FileManagerServiceInterface { /** * {@inheritDoc} */ public list(directoryPath: string): Promise<any> { return new Promise(resolve => { setTimeout(() => { resolve({ data: { node: fetchNode(directoryPath), }, }); }, 3000); }); } } ``` Now the progress bar should look like ![Progress Bar](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lnv2bu11tcs56gblxiyv.png) The sidebar is hidden, so let's add a [Skeleton](https://mantine.dev/core/skeleton/) to it. First we declare a loading state, then we'll listen for loading events.
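Before wiring up the sidebar, the interval arithmetic is worth pinning down. Written as a pure function (these names are illustrative only, they are not part of the component), one tick advances the bar by a fixed step and clamps at 100:

```typescript
// One interval tick: advance by `step`, never exceeding 100.
function nextProgress(progress: number, step = 2): number {
  return Math.min(progress + step, 100);
}

// Simulate the component: jump to 5 on "loading", then tick ten times.
let progress = 5;
for (let tick = 0; tick < 10; tick++) {
  progress = nextProgress(progress); // one "100ms" tick
}
// 5 + 10 * 2 = 25, so filling the whole bar takes roughly five seconds of ticking
```

With the clamp expressed as `Math.min`, the value can never briefly read 101 the way the inline `progress + 2` updater could.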
```tsx // Sidebar.tsx import { Card, Skeleton } from "@mantine/core"; import { IconFolder, IconHome2 } from "@tabler/icons"; import { useEffect, useMemo, useState } from "react"; import useFileManager from "../../../hooks/useFileManager"; import { Node } from "../../../types/FileManager.types"; import SidebarNode from "./SidebarNode"; export type SidebarProps = { rootDirectory?: Node; }; export default function Sidebar({ rootDirectory }: SidebarProps) { const rootChildren = useMemo(() => { return rootDirectory?.children?.filter(child => child.isDirectory); }, [rootDirectory]); const fileManager = useFileManager(); const [isLoading, setIsLoading] = useState(false); useEffect(() => { const loadingEvent = fileManager.on("loading", () => setIsLoading(true)); const loadEvent = fileManager.on("load", () => setIsLoading(false)); return () => { loadingEvent.unsubscribe(); loadEvent.unsubscribe(); }; }, [fileManager]); if (isLoading) { return ( <Card shadow={"sm"}> <Skeleton height={8} mt={6} radius="xl" /> <Skeleton height={12} mt={6} width="80%" radius="sm" /> <Skeleton height={8} mt={6} width="60%" radius="xl" /> <Skeleton height={8} mt={6} radius="xl" /> <Skeleton height={12} mt={6} width="80%" radius="sm" /> <Skeleton height={8} mt={6} width="60%" radius="xl" /> <Skeleton height={8} mt={6} radius="xl" /> <Skeleton height={12} mt={6} width="80%" radius="sm" /> <Skeleton height={8} mt={6} width="60%" radius="xl" /> <Skeleton height={8} mt={6} radius="xl" /> <Skeleton height={12} mt={6} width="80%" radius="sm" /> <Skeleton height={8} mt={6} width="60%" radius="xl" /> </Card> ); } if (!rootDirectory) return null; return ( <> <Card shadow="sm"> <SidebarNode node={rootDirectory} navProps={{ p: 0, }} icon={<IconHome2 size={16} color="#78a136" />} /> {rootChildren?.map(child => ( <SidebarNode navProps={{ p: 0, pl: 10, }} key={child.path} icon={<IconFolder size={16} fill="#31caf9" />} node={child} /> ))} </Card> </> ); } ``` Pretty neat, right? 
Now let's jump to the content part. ## Content Loading State We'll add a loading state to the content part, and we'll show an [Overlay](https://mantine.dev/core/loading-overlay/) when the content is loading. As we did in the sidebar, we'll add a loading state and listen for loading events; we can just **copy/paste** the code xD. ```tsx // Content.tsx import { Card } from "@mantine/core"; import { useEffect, useState } from "react"; import useFileManager from "../hooks/useFileManager"; export default function Content() { const fileManager = useFileManager(); const [isLoading, setIsLoading] = useState(false); useEffect(() => { const loadingEvent = fileManager.on("loading", () => setIsLoading(true)); const loadEvent = fileManager.on("load", () => setIsLoading(false)); return () => { loadingEvent.unsubscribe(); loadEvent.unsubscribe(); }; }, [fileManager]); return ( <> <Card shadow="sm"> <div>Content</div> </Card> </> ); } ``` There's a clear pattern here: we track a loading state and listen for loading events, so we can create a custom hook to handle this. Let's create a `hooks/useLoading` hook. ```tsx // hooks/useLoading.ts import { useEffect, useState } from "react"; import useFileManager from "./useFileManager"; export default function useLoading(): boolean { const fileManager = useFileManager(); const [isLoading, setIsLoading] = useState(false); useEffect(() => { const loadingEvent = fileManager.on("loading", () => setIsLoading(true)); const loadEvent = fileManager.on("load", () => setIsLoading(false)); return () => { loadingEvent.unsubscribe(); loadEvent.unsubscribe(); }; }, [fileManager]); return isLoading; } ``` Now let's use it in `Content.tsx` and `Sidebar.tsx`.
```tsx
// Sidebar.tsx
import { Card, Skeleton } from "@mantine/core";
import { IconFolder, IconHome2 } from "@tabler/icons";
import { useMemo } from "react";
import useLoading from "../../../hooks/useLoading";
import { Node } from "../../../types/FileManager.types";
import SidebarNode from "./SidebarNode";

export type SidebarProps = {
  rootDirectory?: Node;
};

export default function Sidebar({ rootDirectory }: SidebarProps) {
  const rootChildren = useMemo(() => {
    return rootDirectory?.children?.filter(child => child.isDirectory);
  }, [rootDirectory]);

  const isLoading = useLoading();

  if (isLoading) {
    return (
      <Card shadow={"sm"}>
        <Skeleton height={8} mt={6} radius="xl" />
        <Skeleton height={12} mt={6} width="80%" radius="sm" />
        <Skeleton height={8} mt={6} width="60%" radius="xl" />
        <Skeleton height={8} mt={6} radius="xl" />
        <Skeleton height={12} mt={6} width="80%" radius="sm" />
        <Skeleton height={8} mt={6} width="60%" radius="xl" />
        <Skeleton height={8} mt={6} radius="xl" />
        <Skeleton height={12} mt={6} width="80%" radius="sm" />
        <Skeleton height={8} mt={6} width="60%" radius="xl" />
        <Skeleton height={8} mt={6} radius="xl" />
        <Skeleton height={12} mt={6} width="80%" radius="sm" />
        <Skeleton height={8} mt={6} width="60%" radius="xl" />
      </Card>
    );
  }

  if (!rootDirectory) return null;
  ...
```

Same in `Content.tsx`.

```tsx
// Content.tsx
import { Card } from "@mantine/core";
import useLoading from "../hooks/useLoading";

export default function Content() {
  const isLoading = useLoading();

  return (
    <>
      <Card shadow="sm">
        <div>Content</div>
      </Card>
    </>
  );
}
```

Now let's create our overlay, but first we need to make a wrapper for the content, so we can position the overlay. Create a `Content.styles.tsx` file and add the following code.
```tsx
// Content.styles.tsx
import styled from "@emotion/styled";

export const ContentWrapper = styled.div`
  label: ContentWrapper;
  position: relative;
`;
```

Now let's import it:

```tsx
// Content.tsx
import { Card } from "@mantine/core";
import useLoading from "../hooks/useLoading";
import { ContentWrapper } from "./Content.styles";

export default function Content() {
  const isLoading = useLoading();

  return (
    <>
      <Card shadow="sm">
        <ContentWrapper>Content</ContentWrapper>
      </Card>
    </>
  );
}
```

Since the content height is small, let's set a fixed height and add `overflow: auto` to the content wrapper.

```tsx
// Content.styles.tsx
import styled from "@emotion/styled";

export const ContentWrapper = styled.div`
  label: ContentWrapper;
  position: relative;
  height: 300px;
  overflow: auto;
`;
```

Let's also create a `SidebarWrapper` and add `overflow: auto` to it.

```tsx
// Sidebar.styles.tsx
import styled from "@emotion/styled";

export const SidebarWrapper = styled.div`
  label: SidebarWrapper;
  overflow: auto;
  height: 300px;
  position: relative;
`;
```

Let's inject it into both Cards in the sidebar: the one shown while loading and the one with the actual content.
```tsx // Sidebar.tsx import { Card, Skeleton } from "@mantine/core"; import { IconFolder, IconHome2 } from "@tabler/icons"; import { useMemo } from "react"; import useLoading from "../../../hooks/useLoading"; import { Node } from "../../../types/FileManager.types"; import { SidebarWrapper } from "./Sidebar.styles"; import SidebarNode from "./SidebarNode"; export type SidebarProps = { rootDirectory?: Node; }; export default function Sidebar({ rootDirectory }: SidebarProps) { const rootChildren = useMemo(() => { return rootDirectory?.children?.filter(child => child.isDirectory); }, [rootDirectory]); const isLoading = useLoading(); if (isLoading) { return ( <Card shadow={"sm"}> <SidebarWrapper> <Skeleton height={8} mt={6} radius="xl" /> <Skeleton height={12} mt={6} width="80%" radius="sm" /> <Skeleton height={8} mt={6} width="60%" radius="xl" /> <Skeleton height={8} mt={6} radius="xl" /> <Skeleton height={12} mt={6} width="80%" radius="sm" /> <Skeleton height={8} mt={6} width="60%" radius="xl" /> <Skeleton height={8} mt={6} radius="xl" /> <Skeleton height={12} mt={6} width="80%" radius="sm" /> <Skeleton height={8} mt={6} width="60%" radius="xl" /> <Skeleton height={8} mt={6} radius="xl" /> <Skeleton height={12} mt={6} width="80%" radius="sm" /> <Skeleton height={8} mt={6} width="60%" radius="xl" /> </SidebarWrapper> </Card> ); } if (!rootDirectory) return null; return ( <Card shadow="sm"> <SidebarWrapper> <SidebarNode node={rootDirectory} navProps={{ p: 0, }} icon={<IconHome2 size={16} color="#78a136" />} /> {rootChildren?.map(child => ( <SidebarNode navProps={{ p: 0, pl: 10, }} key={child.path} icon={<IconFolder size={16} fill="#31caf9" />} node={child} /> ))} </SidebarWrapper> </Card> ); } ``` Now it looks like this: ![File Manager](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8ednhthtrmxuolntmiu.png) Back to Content Component, let's add the overlay. 
```tsx
// Content.tsx
import { Card, LoadingOverlay } from "@mantine/core";
import useLoading from "../hooks/useLoading";
import { ContentWrapper } from "./Content.styles";

export default function Content() {
  const isLoading = useLoading();

  return (
    <>
      <Card shadow="sm">
        <ContentWrapper>
          <LoadingOverlay visible={isLoading} overlayBlur={2} />
        </ContentWrapper>
      </Card>
    </>
  );
}
```

And our final look is:

![File Manager](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cn37eircpz1jv6do2sjp.png)

We're done with loaders, and we're making good progress. In the next chapter we'll make a stop and clean up our code; we'll also reorganize our files and structure.

## Article Repository

You can see chapter files in [Github Repository](https://github.com/hassanzohdy/file-manager-react/tree/12-Progress-Bars-Skeltons-And-Overlays)

> Don't forget the `main` branch has the latest updated code.

## Tell me where you are now

If you're following along with this series, tell me where you are now and what you're struggling with, and I'll try to help you as much as I can.

Salam.
hassanzohdy
1,193,951
How to merge two arrays?
Let’s say you have two arrays and want to merge them: const firstTeam = ['Olivia', 'Emma',...
0
2022-09-15T10:16:48
https://dev.to/jolamemushaj/how-to-merge-two-arrays-3eb0
javascript, webdev, 100daysofcode, beginners
Let’s say you have two arrays and want to merge them:

```javascript
const firstTeam = ['Olivia', 'Emma', 'Mia']
const secondTeam = ['Oliver', 'Liam', 'Noah']
```

One way to merge two arrays is to use concat() to concatenate the two arrays:

```javascript
const total = firstTeam.concat(secondTeam)
```

But since the 2015 edition of ECMAScript (ES2015), you can also use spread to unpack the arrays into a new array:

```javascript
const total = [...firstTeam, ...secondTeam]

console.log(total)
```

The result would be:

```javascript
// ['Olivia', 'Emma', 'Mia', 'Oliver', 'Liam', 'Noah']
```

There is also another way, in case you don't want to create a new array but modify one of the existing arrays:

```javascript
firstTeam.push(...secondTeam);

firstTeam;
// ['Olivia', 'Emma', 'Mia', 'Oliver', 'Liam', 'Noah']
```
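All of the approaches above keep duplicate entries. As a side note (not part of the original post), if the merged array should contain each name only once, the spread syntax combines nicely with a `Set`. Shown here with slightly different teams so there is an overlap:

```javascript
const firstTeam = ['Olivia', 'Emma', 'Mia'];
const secondTeam = ['Olivia', 'Liam', 'Noah'];

// A Set stores each value at most once, so spreading the merged array
// into a Set and back removes the duplicate 'Olivia'.
const unique = [...new Set([...firstTeam, ...secondTeam])];

console.log(unique);
// ['Olivia', 'Emma', 'Mia', 'Liam', 'Noah']
```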
jolamemushaj
1,193,960
Open Edge Developer Tools automatically by launching debug with Visual Studio 2022
During my daily activities, I use Visual Studio constantly to work with Blazor projects. Since Blazor...
16,758
2022-09-15T11:04:08
https://dev.to/kasuken/open-edge-developer-tools-automatically-by-launching-debug-with-visual-studio-2022-54og
programming, dotnet, blazor, productivity
During my daily activities, I use Visual Studio constantly to work with Blazor projects. Since Blazor is a frontend technology that exchanges data with APIs, the developer toolbar of the browser is a very important tool.

My flow to debug an application is: press F5, wait until the browser is loaded, press F12 to open the dev tools (and sometimes move the developer toolbar to the right position).

It may seem like a quick activity, but repeating it many times during working hours takes a lot of time and is annoying.

I found a trick to reduce this manual activity.

From the Visual Studio toolbar at the top, open the debug menu by clicking on the arrow next to the name of the project.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/laavpw5j4ywl50ug4jhv.png)

Click on "**Browse with**"; in the new window, click on "**Add...**" and insert the following values:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2dop7hdvjr0yyz5aulkz.png)

For debugging purposes, I use the Edge Dev channel, but this trick works with the other versions as well.

The path of my dev instance is: "**C:\Program Files (x86)\Microsoft\Edge Dev\Application\msedge.exe**"

As arguments, you can set: "**--auto-open-devtools-for-tabs**".

The friendly name is up to you; for instance, I use the name "**Edge Dev with Tools**"

Click Ok and save. I set this new browser definition as the default.

Now you can launch your debugging session as always and the developer tools will open automatically at startup.

On my screen, I use the developer tools in a separate window, as you can see in the screenshot below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ea2fn6klmmhb36n0neth.png)

Happy debugging!
kasuken
1,193,978
Add all the first elements and second elements to new list from a list of list using `list comprehension`
Question: Add all the first elements and second elements to new list from a list of list using list...
0
2022-09-15T11:32:23
https://dev.to/mu/python-interview-question-12-34-56-to-1-3-5-2-4-6-5fa8
python, interview, career
Question: Add all the first elements and second elements to a new list from a list of lists using a `list comprehension`

Ex:
Given: `[ [1,2], [3,4], [5,6] ]`
Expected: `[[1, 3, 5], [2, 4, 6]]`

Solution:

```python
a = [ [1,2], [3,4], [5,6] ]
final_list = [[x[0] for x in a], [x[1] for x in a]]
print(final_list)
```
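The comprehension above is hard-wired to two-element sublists. As a side note (not part of the original question), a more general sketch uses `zip` with unpacking, which transposes a list of equal-length lists of any width:

```python
a = [[1, 2], [3, 4], [5, 6]]

# zip(*a) unpacks the sublists and groups the first elements together,
# then the second elements, effectively transposing the list of lists.
final_list = [list(group) for group in zip(*a)]

print(final_list)  # [[1, 3, 5], [2, 4, 6]]
```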
mu
1,203,046
Test your PHP
Got these tests from recent interviews and thought it to share with everyone to once in a while...
0
2022-09-26T04:29:06
https://dev.to/jtwebguy/test-your-php-237d
php, phpskills, phptest
Got these tests from recent interviews and thought I'd share them, so you can measure your PHP skill level once in a while.

#1. Check if Parentheses, Brackets and Braces Are Balanced. It's better to use the stack method, but use your own if you must.

```
'{{}}' = Pass
'[]}{{}}' = Fail
'[[{]{{}}' = Fail
'[]()(' = 1x open parenthesis/bracket/curly brace
'[]({{{{{{{{{{{{{{{{{}}}}}}}}}})' = Too many open
```

#2. Unique names

```php
$names = unique_names(['Ava', 'Emma', 'Olivia'], ['Olivia', 'Sophia', 'Emma']);
// should print Emma, Olivia, Ava, Sophia

function unique_names(array $array1, array $array2) : array
{
    //todo: your code here
}
```

#3. Find the roots in any order

For example: `findRoots(4, 12, 8); //[-1,-2] or [-2,-1]`

root formula
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4l6mtddi9nd8m7hbwk0.png)

```php
/**
 * @return array An array of two elements containing roots in any order
 */
function findRoots($a, $b, $c)
{
    //todo: your code here
}
```

#4. Fibonacci Sequence

Write code that will print the Fibonacci sequence starting from 1 up to n numbers.

```
should print: 1 1 2 3 5 8 13....n
```

Post your code in the comments.
jtwebguy
1,210,177
Centralized Outbound Routing on AWS
Managing and securing external connectivity can be challenging and expensive when an organization's...
0
2022-12-15T19:31:09
https://dev.to/andreacfm/centralized-outbound-routing-on-aws-212o
aws, vpc
Managing and securing external connectivity can be challenging and expensive when an organization's workload is split between many isolated accounts. Let's consider a use case where an organization has dev, prod, and shared workload deployed on private subnets in 3 isolated aws accounts. Some of these workloads must be able to fetch data from the public internet, access aws resources using vpc endpoints, and maybe communicate with each other. To cover the above requirements we need to perform some steps in each account. **Deploy a nat gateway** As a best practice, we should deploy one NGW for each AZ to increase our system availability. Apart from adding some management overhead and complexity to our landing zone creation, the main downside here is represented by the cost of this solution. Considering the us-east-1 region each NGW will cost 0.045 USD per hour + 0.045 USD per GB of generated traffic. Without considering the traffic cost we will have a cost for each account of ~100 USD/month (considering 3 AZs). **Create the required VPC endpoints** to keep the communication with aws services internal without flowing through the NAT gateway Again, deploying multiple times the vpc endpoints introduces some overhead and is not cost-effective. Each endpoint deployed on 3 AZs costs ~22USD/month without considering the traffic. We will incur this fee for each account where we deploy the endpoints. **Create VPC peerings** when required to establish the internal communication between VPCs This aspect introduces little overhead with very few accounts and a pretty stable network topology. It may become a huge issue if the accounts involved in your network grow and the intra-connections requirements change more frequently than expected. We will introduce the benefits of using a centralized Transit Gateway over VPC peering shortly. 
**Manage the network security**

From a security point of view, we will need to replicate, in each account, whatever policies apply to outbound connections. This includes firewalls, proxies, etc... The security team will need to trace and act on several different accounts to ensure that the security policies are correctly applied.

Introducing a centralized outbound network account may alleviate some of the outlined issues. This account will act as the main organization network router and a centralized exit point for all the organization's outbound connections.

In 2018 AWS introduced Transit Gateways as a way to connect VPCs and on-premises networks through a central hub. Some of the advantages of using a TGW compared to VPC peering are as follows:

* A TGW allows attachments from many VPCs, Direct Connect and Site-to-Site VPNs at the same time, and can route traffic between all the attachments. A new VPC peering must be established between 2 points. Peering VPC A and C to B does not allow A to C communication.
* Reuse the same VPN connection for multiple VPCs.
* TGW supports up to thousands of attached networks
* TGW route table per attachment allows fine-grained routing configurations

In the following diagram, I have described the proposed architecture for creating a centralized outbound account, using a Transit Gateway as the main account network router plus TGW peering attachments from spoke accounts.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gz1xr3vhzzbrxowwep7w.jpg)

Some notes:

* The private subnets route table in the DEV and PROD (#A - #B) accounts routes any external traffic (0.0.0.0/0) through a local TGW (#PT - #DT).
* The TGW attachments route tables (#C and #D) will send any non-local request through the peering attachments between #PT/#DT and #OT. This will make traffic flow into the OUTBOUND account.
* Both the peering attachments from DEV and PROD use a custom TGW route table (#E) that routes all the traffic to the OUTBOUND VPC, which is directly attached to the TGW as well. At this point, the traffic directed to a local resource in the outbound VPC (a VPC endpoint, for example) is locally managed while the rest flows through the NAT Gateway.
* The TGW attachment between the OUTBOUND VPC and the TGW allows the traffic to flow back to the DEV and the PROD accounts.

Check this GitHub [repo](https://github.com/andreacfm/aws-centralized-outboud-demo) for a terraform sample deployment.

**The VPC Endpoints DNS resolution** deserves some more attention. When we create a VPC endpoint, AWS adds a dedicated ENI that receives a private IP address from the CIDR range of the subnet where the endpoint is deployed. The private IP DNS resolution (if enabled in the endpoint configuration) is managed by AWS behind the scenes using a hidden Route 53 private hosted zone.

Considering our example, deploying an endpoint in the OUTBOUND account private subnet will allow the endpoint private IP to be resolved only within the perimeter of that account, but it will not work in the PROD and DEV accounts. To solve this issue we can perform the following steps:

* Disable the private DNS resolution in the VPC endpoint configuration
* Create a private Route 53 hosted zone in the OUTBOUND account. For example for s3:

![private hosted zone](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vb374ll93gahsa1det14.png)

* Add an alias record that points to the VPC endpoint

![route53 alias](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1gi67nni14wx35sx7wxl.png)

* Associate the hosted zone with the PROD and DEV VPCs. Keep in mind that this cannot be done from the AWS console but must be performed using the aws CLI, APIs, or SDKs.
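As a sketch of those CLI calls (the hosted zone ID and VPC ID below are placeholders), the cross-account association takes two steps: the OUTBOUND account, which owns the zone, authorizes the association, and the spoke account then performs it:

```sh
# From the OUTBOUND account (owner of the private hosted zone):
aws route53 create-vpc-association-authorization \
  --hosted-zone-id Z0123456789EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0aaaabbbbcccc1111

# From the spoke account (e.g. PROD), for the same zone and VPC:
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z0123456789EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0aaaabbbbcccc1111
```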
That said, if you are working with terraform, this use case is [covered](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_vpc_association_authorization).

In this way, the service endpoint URL will be resolved in the PROD and DEV accounts and the request will be correctly routed to the private IP in the OUTBOUND VPC.

**What about internal routing between VPCs?**

As we said before, the Transit Gateway in the outbound account acts as the main network router. All the traffic originated by the organization's VPCs flows through it. As a consequence, we can use the TGW route tables to allow or deny connections between the attached VPCs.

In the above diagram, a connection originating from the PROD VPC flows through the peering attachment and is redirected to the DEV VPC (see the routings in the outbound account subnets routes tables). Adding a blackhole route in the TGW route table associated with the peering attachments can prevent this flow. For example:

|CIDR|Destination|
|----|-----------|
|10.10.0.0/16|VPC Outbound TGW Attachment|
|0.0.0.0/0|VPC Outbound TGW Attachment|
|172.18.0.0/16|blackhole|
|172.19.0.0/16|blackhole|

The clear benefit is the ability to manage the org network topology by acting on a single resource. Note that this configuration is done in the OUTBOUND account and does not need any action in the PROD or DEV account.
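In terraform, such a blackhole route is a one-resource sketch (the route table reference and CIDR below are placeholders, not taken from the linked repo):

```hcl
# Drop any traffic from the peering attachments headed for the DEV VPC CIDR.
resource "aws_ec2_transit_gateway_route" "blackhole_dev" {
  destination_cidr_block         = "172.18.0.0/16"
  blackhole                      = true
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.peering.id
}
```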
Let’s consider an organization with 3 accounts with workload deployed in private subnets that needs external connectivity, plus 3 VPC endpoints (s3, dynamo, and SSM for session manager).

**Without the outbound account, we have a total cost of ~498 USD/month**

NAT Gateway
Deploying 3 NGW in 3 AZs will cost ~300 USD/month (100 USD * 3 accounts)

VPC Endpoints
Deploying 1 endpoint in 3 AZs will cost ~198 USD/month (22 USD * 3 endpoints * 3 accounts)

**With the outbound account, we have a total of ~276 USD/month**

NAT Gateway
We will deploy only on the Outbound account in 3 AZs for a total of ~100 USD

VPC Endpoints
We will deploy only on the Outbound account in 3 AZs for a total of ~66 USD (22 * 3 endpoints)

TGW Attachments
We must calculate the cost of the 3 TGW attachments. As of today, this will cost 36.50 USD per attachment, with a total cost of ~110 USD/month

**Important**: We are not calculating data transfer, which may vary depending on many different aspects in both of the solutions. Data flowing through a TGW attachment will incur a cost of ~0.02 USD per GB.

As we can see, the savings will increase with more accounts joining the network. It is possible that a huge amount of data flowing through the TGWs will reduce the savings. That said, even without any savings, we will obtain a better architecture for the same price.

**Conclusion**

As we saw, introducing a network hub account has several benefits.

Working with a centralized network router gives better visibility of the organization network and simplifies the setup of spoke accounts, which will require less management overhead. The network routing is managed on a dedicated account that can be provisioned with the required permissions to make it available only to a dedicated team.

Since every network connection can be inspected, allowed, and logged in a single point, applying the organization's network security policies is easier and more effective.
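As a quick sanity check on the savings section above, the arithmetic works out as follows (prices as quoted in this post, traffic excluded):

```python
# Monthly prices as quoted in this post (us-east-1, 3 AZs, traffic excluded).
NGW_PER_ACCOUNT = 100      # ~0.045 USD/h * 730 h * 3 AZs
ENDPOINT_EACH = 22         # one interface endpoint deployed across 3 AZs
TGW_ATTACHMENT = 36.50     # ~0.05 USD/h * 730 h

accounts = 3
endpoints = 3

# Decentralized: every account runs its own NAT gateways and endpoints.
decentralized = accounts * NGW_PER_ACCOUNT + accounts * endpoints * ENDPOINT_EACH

# Centralized: one NAT gateway deployment, one set of endpoints, 3 TGW attachments.
centralized = NGW_PER_ACCOUNT + endpoints * ENDPOINT_EACH + accounts * TGW_ATTACHMENT

print(decentralized)  # 498
print(centralized)    # 275.5, i.e. ~276 USD/month
```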
In a future post, we will analyze how we can use tools like NACL, Security Groups, and the new VPC Firewall to control the traffic flowing through the OUTBOUND account.
andreacfm
1,210,194
It's Hacktoberfest!
Get a chance to win $30k worth of cash prizes, a team meeting with the CTO, and exclusive swag by...
0
2022-10-03T21:24:21
https://dev.to/singlestore/its-hacktoberfest-100a
database, hacktoberfest, opensource, github
Get a chance to win $30k worth of cash prizes, a team meeting with the CTO, and exclusive swag by building your application idea with SingleStore through Nov. 7th.

Sounds appealing, right? Here's what you need to know: https://singlestore.devpost.com/
drmartingit
1,210,292
Strong Parameters in Rails?
Ah, strong parameters in Rails The magic charm that allows developers write spell-bound code. Just...
0
2022-12-16T19:29:56
https://dev.to/hermitex/strong-parameters-4po1
webdev, ruby, beginners, rails
Ah, strong parameters in Rails

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xvebndbcv91l2kpmursh.gif)

The magic charm that allows developers to write spell-bound code. Just kidding, they're actually a pretty useful tool for protecting your application from malicious user input. But let's be real, they can still be a pain to deal with at times.

So, what are strong parameters exactly? Essentially, they're a way to specify which parameters are allowed to be passed through to your controller actions. This helps prevent attackers from injecting harmful data into your application.

Let's say we have a form that allows users to create a new post. We want to allow them to specify the title and body of the post, but we don't want them to be able to set the published attribute. Without strong parameters, anyone could just send a POST request with a published parameter set to true and voila, they've published a post without your permission.

Here's an example of how we can use strong parameters to only allow the title and body attributes to be passed through:

```ruby
def create
  params.require(:post).permit(:title, :body)
end
```

This will ensure that only the title and body attributes are allowed to be passed through to the create action. Any other attributes will be filtered out.

But wait, there's more! You can also specify which attributes are required by using the require method. For example:

```ruby
def create
  params.require(:post).permit(:title, :body).require(:title)
end
```

This will ensure that the title attribute is not only permitted, but also required. If the title attribute is not present, the controller action will raise an ActionController::ParameterMissing error.

Now, I know what you're thinking. "This is all well and good, but what if I have a bunch of nested attributes that I want to permit? Do I have to write out every single attribute individually?" Fear not, my fellow developer friend! Rails has introduced the permit!
method, which allows you to permit all attributes and nested attributes. Just be careful with this one, as it can potentially open up your application to malicious input if used improperly.

```ruby
def create
  params.require(:post).permit!
end
```

There you have it, a quick overview of strong parameters in Rails. They may seem like a nuisance at times, but trust me, they're worth the effort in the long run to keep your application secure.

Happy coding!
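For intuition only, here is a plain-Ruby sketch of what require/permit do conceptually — a hypothetical helper, not the real `ActionController::Parameters` implementation: keep only whitelisted keys, and raise when a required key is missing.

```ruby
# Hypothetical plain-Ruby sketch of require/permit semantics.
class ParameterMissing < StandardError; end

# Like params.require(:key): returns the value, raises when the key is absent.
def require_key(params, key)
  raise ParameterMissing, key.to_s unless params.key?(key)
  params[key]
end

# Like .permit(:a, :b): keeps only the whitelisted keys.
def permit_keys(params, *allowed)
  params.select { |key, _value| allowed.include?(key) }
end

raw = { post: { title: "Hello", body: "World", published: true } }
safe = permit_keys(require_key(raw, :post), :title, :body)
puts safe  # only :title and :body survive; :published is filtered out
```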
hermitex
1,210,337
Gesture Implementation for Cross Platform native Swift mobile apps
Introduction Hello everyone 👋 In our continuous series of SCADE articles, today we will...
0
2022-10-04T02:35:21
https://dev.to/scade/gesture-implementation-for-cross-platforms-mobile-apps-3be7
android, swift, ios, scade
---
title: Gesture Implementation for Cross Platform native Swift mobile apps
published: true
description:
tags: android, swift, ios, scade
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9i7cmfnt6jwyv9w9ht0f.png
# Use a ratio of 100:42 for best results.
---

## Introduction

Hello everyone 👋

In our continuous series of SCADE articles, today we will be developing a SCADE application that will demonstrate the gesture implementation for both Android/iOS platforms. This app will contain the use case of displaying the touch effect to buttons when pressed. It improves the user experience for the SCADE apps. The good part is, the same codebase is used for both Android & iOS platforms.

Swift + SCADE = Awesome App ✌

So let’s start 😎.

## Prerequisite

If you haven’t installed the SCADE IDE, [download](https://www.scade.io/download/) the SCADE IDE and [install](https://docs.scade.io/docs/installation) it on your macOS system. The only prerequisite for SCADE is the Swift language (at least basics). Also, please ensure the Android emulator or physical device is running if you want to run SCADE apps on an Android device.

## Getting Started

To keep it simple, let’s understand the concepts by creating a new SCADE project: `File -> New -> New Project`. Then select the SCADE option, enter the project name, and click Create. We are ready to code the SCADE app.

We will now create the user interface of the application. Now navigate to the _folder/sources/main.page_ and click on the `+` button on the top right of the editor and drag the widgets such as label, button, etc.

## Create Some UI

Let’s start by creating the Pin Code entry panel user interface. It will require a few buttons for the digits 0-9 and a few labels to display the entered digits.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7lfm1jf3pvdzjq5avja5.png)

Please choose the desired font family, size & color from the design palette for the labels.

1. **Label 1**: We will add the app heading and assign the text (“Please choose your personal 5-digit pincode”)
2. **RowView**: A RowView is required to hold the 5 labels, which will actually display the 5 digits entered by the user. It will contain 5 labels, equidistant from each other.
3. **Label 2**: Let’s add another label with the text (“Please choose a 5-digit pin code”).
4. **GridView**: Add a GridView widget that will contain the panel of digits. There will be 3 columns and 4 rows. We will add the buttons 0-9 and a backspace button, all equidistant from each other, as shown in the above image.

## Let’s dive into the Code

We have developed the UI for the application. Let’s now write the logic to fetch the pin digits entered by the user. We will initialize a variable `position` to 0 to get the position of the digit entered by the user. Then accordingly, we will store the input digit to the corresponding labels as per the value of the position. If the value of the position exceeds 5, we will initialize all digit labels to zero. The logic of the same is implemented in the below code snippet.
```swift var position = 0 func updatePin(val: Int) { if val == -1 { if position == 1 { self.digit_1.text = "*" position -= 1 } else if position == 2 { self.digit_2.text = "*" position -= 1 } else if position == 3 { self.digit_3.text = "*" position -= 1 } else if position == 4 { self.digit_4.text = "*" position -= 1 } else if position == 5 { self.digit_5.text = "*" position -= 1 } } else { position += 1 if position == 1 { self.digit_1.text = String(val) } else if position == 2 { self.digit_2.text = String(val) } else if position == 3 { self.digit_3.text = String(val) } else if position == 4 { self.digit_4.text = String(val) } else if position == 5 { self.digit_5.text = String(val) pinString = "\(self.digit_1.text)\(self.digit_2.text)\(self.digit_3.text)\(self.digit_4.text)\(self.digit_5.text)" self.callFinalMethod(pin: pinString) } else { position = 5 print("Exceeded") } } } ``` ## Let’s implement the touch gesture Let’s implement the touch effect to buttons whenever they are pressed. For this, let us first define the colors which will be visible upon clicking the buttons. ```swift // colors let colorDefaultGreen = SCDGraphicsRGB(red: 211, green: 211, blue: 211, alpha: 255) let whiteColor = SCDGraphicsRGB(red: 255, green: 255, blue: 255, alpha: 255) ``` As the next step, we will now use `SCDSvgPanGestureRecognizer` instance which will override the method `onPanAction()`, which provides the interface to implement the gesture handler logic. ```swift func getUpDownGestureForButton(btn: SCDWidgetsButton, val: Int) -> SCDSvgPanGestureRecognizer { // Create action* func onPanAction(recognizer: SCDSvgGestureRecognizer?) 
{ // depending on whether we are inside or outside of the button, // we set the button background a different color switch recognizer!.state { case .began: btn.backgroundColor = self.colorDefaultGreen self.updatePin(val: val) case .ended: btn.backgroundColor = self.whiteColor default: return } } // create recognizer let panGestureRecognizer = SCDSvgPanGestureRecognizer(onPanAction) // Configure gesture --> nothing to configure. Return it return panGestureRecognizer } ``` `SCDSvgGestureRecognizer` instance returns the state of the event, for example, it returns two states `began` and `ended`. We can implement the logic to add the touch effect here by changing the background colors of the buttons as soon as it changes the state from `began` to `ended`. Finally, let’s append the above method `getUpDownGestureForButton` to each of the buttons. ```swift // add gesture to button. It changes the background color // of the button when button is pressed (began) and finger is lifted // up (ended) self.b0.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b0, val: 0)) self.b1.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b1, val: 1)) self.b2.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b2, val: 2)) self.b3.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b3, val: 3)) self.b4.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b4, val: 4)) self.b5.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b5, val: 5)) self.b6.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b6, val: 6)) self.b7.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b7, val: 7)) self.b8.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b8, val: 8)) self.b9.drawing!.gestureRecognizers.append(getUpDownGestureForButton(btn: self.b9, val: 9)) self.b_cancel.drawing!.gestureRecognizers.append( 
  getUpDownGestureForButton(btn: self.b_cancel, val: -1))
```

The above code snippet appends a gesture recognizer to each button's `gestureRecognizers` collection, passing our gesture implementation method as a parameter.

## Run the App on iOS/Android

In order to run the app on iOS/Android devices, please make sure the physical device or simulator/emulator is up and running. SCADE apps can be run on SCADE emulators, iOS & Android devices as well as Apple and Android emulators. You can also [build](https://docs.scade.io/docs/build-file) the app for Android (APK/AAB) or iOS (IPA) to publish them to the respective app stores. You need to click on the App Name button and choose the device target accordingly.

### iOS

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b0ggxb3oyq1ytt6pwt3f.gif)

### Android

Now we are good to test the code on Android devices. Set the target to any Android device you like and test if it is working as expected.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/535038t9ok4rut4ynbae.gif)

Voila🎊! We have successfully implemented the touch effect for buttons using the gesture feature available in SCADE. This is one example of using a gesture-powered feature of the SCADE editor. You should definitely try the SCADE editor to build some cross-platform apps.

We will be releasing a series of articles related to the SCADE & swift-android compiler. Thank you for reading and see you in the next article!

Happy Coding 😊!
scade
1,210,389
Turnstile , alternative to CAPTCHA from cloudflare
Demo: https://demo.turnstile.workers.dev/ Turnstile is our smart CAPTCHA alternative. It...
0
2022-10-04T03:44:16
https://dev.to/chandrapenugonda/turnstile-alternative-to-captcha-from-cloudflare-4410
captcha, javascript, cloudflare, privacy
Demo: https://demo.turnstile.workers.dev/

Turnstile is Cloudflare's smart CAPTCHA alternative. It automatically chooses from a rotating suite of non-intrusive browser challenges based on telemetry and client behavior exhibited during a session.

> Less data collection, more privacy, same security

## Swap out your existing CAPTCHA in a few minutes

You can take advantage of Turnstile and stop bothering your visitors with a CAPTCHA even without being on the Cloudflare network. Cloudflare makes it as easy as possible to use its network, but doesn't want that to be a barrier to improving privacy and user experience.

To switch from a CAPTCHA service, all you need to do is:

1. Create a Cloudflare account, navigate to the `Turnstile` tab on the navigation bar, and get a sitekey and secret key.
2. Copy the Turnstile JavaScript from the dashboard and paste it over your old CAPTCHA JavaScript.
3. Update the server-side integration by replacing the old siteverify URL with Cloudflare's.

Source code for the demo example: https://github.com/cloudflare/turnstile-demo-workers

Reference: https://blog.cloudflare.com/turnstile-private-captcha-alternative/
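Step 3 above, the server-side check, is a simple POST to Cloudflare's siteverify endpoint. The sketch below shows one way to build that request in Node; only the endpoint URL and the `secret`/`response`/`remoteip` fields come from Cloudflare's documentation, while the helper name and structure are illustrative choices of this post.

```javascript
// Endpoint documented by Cloudflare; everything else here is an illustrative sketch.
const SITEVERIFY_URL = 'https://challenges.cloudflare.com/turnstile/v0/siteverify';

// Builds the form-encoded siteverify request from the widget token
// (the `cf-turnstile-response` field posted by the client).
function buildSiteverifyRequest(secret, token, remoteip) {
  const body = new URLSearchParams({ secret, response: token });
  if (remoteip) body.set('remoteip', remoteip);
  return { url: SITEVERIFY_URL, method: 'POST', body };
}

// In a request handler (Node 18+ with global fetch) you might then do:
//   const { url, method, body } =
//     buildSiteverifyRequest(SECRET_KEY, req.body['cf-turnstile-response']);
//   const outcome = await fetch(url, { method, body }).then((r) => r.json());
//   if (!outcome.success) { /* reject the submission */ }
```

The verification must happen server-side; the token from the widget is single-use and proves nothing until siteverify confirms it.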
chandrapenugonda
1,210,684
All about Game Developer
Hello! That the technology industry offers many acting possibilities you probably already know. But...
0
2022-10-04T10:58:07
https://dev.to/albericojr/all-about-game-developer-3gbj
gamedev, devops, programming
Hello! You probably already know that the technology industry offers many career possibilities. But today we will talk about one of the most promising professions of the moment: Game Developer. In recent years, the game market has established itself as the largest entertainment industry, with revenues that exceeded $174 billion, and it promises to grow even more in the coming years. Besides being promising, the numbers make it clear that there will be no shortage of opportunities for those who want to embark on this universe. So if your eyes light up at the idea of developing games, it's time to learn a bit more about this profession. Throughout this article, I will tell you some things you need to know to enter the area and succeed as a game developer. Come check it out!

## What does a Game Developer do?

As the name implies, the game developer is the professional who specializes in programming games. In other words, it is the developer who masters the programming languages needed to build an interactive experience and who writes all the code that makes a game work correctly. In practice, it is up to the game developer to define how the characters will move throughout the game and at what speed the game will run, in addition to performing a series of code reviews and tests to solve any bug that may arise along the way. If you have a favorite game that borders on perfection, you have a lot to thank this professional for. After all, it is the developer who makes a good part of the magic happen.

## Difference between Game Developer and Game Designer

Before we continue talking about the Game Developer career, we need to clear up a common point of confusion: the difference between these two professionals, who work in close collaboration but have very different responsibilities. While the game developer is responsible for writing the thousands of lines of code that make the game work, the game designer is the one who takes care of the creative part of all these processes.
It is the designer who helps define all the details that will be part of the game, including not only the aesthetics but also the behavior of the characters and the rules. The game designer provides the guidelines developers need to do their work.

## What does it take to become a game developer?

Now that you know the main mission of a game developer, you must be wondering what you need to master to work in the area. First of all, it is important to know that game development is for those who like to do thorough work. After all, any mistake in the code can compromise the performance of the game or cause rework and delays in the project. To succeed in the profession, it is also important to master the programming languages most used in this type of development, such as C, C++, C# and Java, and the key tools of the trade, such as Unity 3D, Unreal and DirectX. Finally, the ability to work well in a team will also make all the difference, as the game developer needs to collaborate with many other professionals.

## How to become a game developer?

As you have seen so far, the game developer career demands solid knowledge of programming. Now it's time to look at where you can start learning to enter the area. Today, many Brazilian universities and colleges already offer a degree in digital games. With an average duration of 2.5 years, the course focuses on the creation, development and testing of games of various types, and also covers project management. Other options for those who want to enter this market are Computer Science and Engineering degrees. In addition, you can find many other complementary courses that will contribute to your learning journey on the technical side. One thing is certain: with this market heating up, there is no shortage of alternatives for those who want to go professional in the area.

## How much does a Game Developer earn?
This is the question everyone wants answered: how much is the salary of a game developer? Well, the answer depends on a few variables, such as experience level and the company's location. But so as not to leave you in the dark, here is an estimate of how much a game developer earns in Brazil.

- **Junior Game Developer: R$ 4,000.** The junior professional is one who is starting out in the area and has the minimum experience necessary to perform programming tasks considered basic.
- **Full Game Developer: R$ 6,160.** The full (mid-level) developer is one who has been working in the area for at least five years and has experience in projects of various types.
- **Senior Game Developer: R$ 11,437.** Usually, senior-level programmers have more than eight years of experience in the area and very deep knowledge, as well as the ability to delegate tasks and develop new talent.

Keep in mind that the values presented can vary (up or down) depending on the company. Also remember that the game development area offers many job opportunities abroad.

If you liked the content, leave a comment and share it; this article may be useful for many other people.
albericojr
1,210,402
New Usefull JavaScript Tips For Everyone
*Old Ways * const getPrice = (item) =&gt; { if(item==200){return 200} else if...
0
2022-10-04T04:52:35
https://dev.to/shaon07/new-usefull-javascript-tips-for-everyone-1389
javascript, webdev, beginners, programming
## **Old Ways**

```
const getPrice = (item) => {
  if (item === 'food1') { return 100 }
  else if (item === 'food2') { return 200 }
  else if (item === 'food3') { return 400 }
  else if (item === 'food4') { return 500 }
  else { return 100 }
}

console.log(getPrice('food3')); // 400
```

## **New Ways**

```
const prices = {
  food1: 100,
  food2: 200,
  food3: 400,
  food4: 500
}

const getPrice = (item) => {
  return prices[item]
}

console.log(getPrice('food3')); // 400
```

Hope you like my new tricks :D

[Follow my GitHub account](https://github.com/shaon07)
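One refinement worth adding to the lookup version (my own addition, not from the original post): the old `else` branch supplied a default price, which a bare object lookup loses. Nullish coalescing (`??`) brings it back:

```javascript
const prices = {
  food1: 100,
  food2: 200,
  food3: 400,
  food4: 500
};

// ?? restores the default that the if/else chain's final `else` provided
const getPrice = (item) => prices[item] ?? 100;

console.log(getPrice('food3')); // 400
console.log(getPrice('pizza')); // 100 (unknown items fall back to the default)
```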
shaon07
1,210,441
How To find easy to rank keywords on Google - Quick SEO Hack
There are many ways to find low competition keywords that rank easily on Google, Bing, and other...
0
2022-10-04T06:53:12
https://dev.to/its__keziah/how-to-find-easy-to-rank-keywords-on-google-quick-seo-hack-311l
rankongoogle, seo, python, laravel
There are many ways to find low-competition keywords that rank easily on Google, Bing, and other search engines. Here I am discussing a technique to find low-competition, profitable keywords that will rank easily.

**1. KGR Keyword Research Method:**

The KGR keyword research method is an effective strategy for finding long-tail, easy-ranking keywords. It was invented by Doug Cunnington.

**What is the KGR?**

The Keyword Golden Ratio (KGR) is a strategy for discovering long-tail keywords which rank on top easily.

**How is KGR calculated?**

KGR = (Allintitle results) / (Monthly search volume)

where the monthly search volume is 250 or less. If the KGR is less than 0.25, it works well.

See the example below. My keyword is "Magellan outdoors folding camp cot". The monthly search volume of this keyword is 70. (I use SEMrush for search volume.)

Now calculate the KGR: (4 / 70) ≈ 0.06

The aim is to find a KGR score of 0.25 or lower. We got a keyword whose KGR value is less than 0.25, so it is a great KGR keyword.

**Importance of the KGR method:**

Good content with KGR keywords will rank easily without backlinks. It is a modern SEO strategy that's really working well, and it is most effective for a new website or blog. If you want to rank in a short time, it is a great approach. KGR keywords are often called easy-ranking keywords.

**Does it really work in 2022?**

KGR keywords are easy-ranking keywords, but not every keyword will work. If your website or blog is brand new, the KGR method is best for you. If you write 10 good pieces of content focused on KGR keywords, most of them will rank.

**How much time will it take to rank with KGR keywords?**

KGR keyword research is a modern-day SEO strategy that works for every niche. It will take 1-3 months to see the result. If you generate high-quality content focused on KGR keywords, you have a great chance to rank on top within a short time.
**Why go with such low search volume keywords?**

Generally, KGR keywords have low search volume, but these kinds of keywords rank easily without much effort. And if you rank for a single KGR keyword, you will also rank for other similar keywords, which brings a good amount of traffic to your website.
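The KGR arithmetic above is simple enough to script. A small sketch (the function names are mine, not part of the method itself):

```javascript
// KGR = allintitle results / monthly search volume (volume should be 250 or less)
function kgr(allintitle, monthlyVolume) {
  return allintitle / monthlyVolume;
}

// "Golden" per the method: volume within the 250 cap and a ratio under 0.25
function isGoldenKeyword(allintitle, monthlyVolume) {
  return monthlyVolume <= 250 && kgr(allintitle, monthlyVolume) < 0.25;
}

// The example above: 4 allintitle results / 70 searches per month
console.log(kgr(4, 70).toFixed(2)); // 0.06
console.log(isGoldenKeyword(4, 70)); // true
```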
its__keziah
1,210,576
Webinar: BDD Pitfalls and How to Avoid Them
When new products are launched, the disconnect between business professionals and engineers often...
0
2022-10-04T08:34:15
https://www.lambdatest.com/blog/bdd-pitfalls-and-how-to-avoid-them/
webinar, techtalks, bdd
When new products are launched, the disconnect between business professionals and engineers often results in wasted time and resources. A strategy for improving communication can help prevent bottlenecks to the project’s progress. When business managers understand the capabilities of the engineering team, and when engineers understand what the business requires from its software, both sides can work together to create applications with real business value. That’s where [behavior-driven development](https://www.lambdatest.com/blog/behaviour-driven-development-by-selenium-testing-with-gherkin/?utm_source=devto&utm_medium=organic&utm_campaign=oct04_kj&utm_term=kj&utm_content=blog) (BDD) comes in to achieve this business value. We’re sure you would have many questions about how BDD can help your teams become more engaged and focused on business outcomes without wasting time in endless meetings or unmaintainable test scripts. In our very first session of Voices of Community, we had a special guest [John Ferguson Smart](https://www.linkedin.com/in/john-ferguson-smart/), Founder at Serenity BDD, teamed up with [Manoj Kumar](https://www.linkedin.com/in/manoj9788/), VP of Developer Relations at LambdaTest, to discuss how to avoid BDD pitfalls and three simple steps to embed effective BDD practices into your teams. {% youtube 71vw17fRGxs %} If you missed the power-packed webinar, let us look at the event’s major highlights. ## About the Webinar The webinar starts with John highlighting the red flags when [implementing BDD](https://www.lambdatest.com/blog/implement-bdd-testing-for-quality-test-automation/?utm_source=devto&utm_medium=organic&utm_campaign=oct04_kj&utm_term=kj&utm_content=blog) and avoiding pitfalls. John claims that around 95% of software teams make use of BDD in the wrong way. John highlights the **agenda** for the webinar. This is as follows: * How you write **User Stories** can hold back your team (and the biggest single mistake teams make). 
* How almost everyone using Cucumber and BDD for [automation testing](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=oct04_kj&utm_term=kj&utm_content=blog) is doing it WRONG. * Three steps to embedding effective BDD and test automation practices into your teams. After this, John explains the major BDD pitfalls faced by software teams. ## Pitfall #1 — User Story Writing John explains that the trap of BDD comes from misunderstanding what User Stories are about. He insists developers ask themselves, “**Why are their User Stories holding them back?**” He further breaks down this question that developers should address while using BDD. They are: * Do you struggle to break down and organize your requirements? * Do stories take a long time to prepare? * Do user stories that are complete end up needing rework and fixes? According to John, the most significant indicator of dysfunctional user stories is using “**Given, When, and Then**” in the initial story descriptions. He explains that using them breaks the [agile development](https://www.lambdatest.com/blog/agile-development/?utm_source=devto&utm_medium=organic&utm_campaign=oct04_kj&utm_term=kj&utm_content=blog) flow and indicates that the team considers writing user stories as an old-school requirement document. This also shows that teams don’t know how to use “Given, When, and Then” while writing user stories. John gives an example of a standard dysfunctional user story error to prove his point. You can see it in the screenshot below. With the help of this example, John explains that when teams give definitive acceptance criteria, they provide an impression that they have done the work, but this practice cuts off the conversation. John suggests that acceptance criteria should be in bulleted points or given with examples, not in a definitive manner. 
## Pitfall #2 — Asking Product Owners or Business Analysts to Write User Stories in Gherkin

He says product owners or business analysts are generally not very good at writing user stories in Gherkin. They're not trained to express their business requirements in the Given-When-Then format, so by asking them to write in it, you're just putting an unnecessary burden on them and distracting them from writing down the actual requirements. It also makes them slower, as they take longer to write requirements of much poorer quality. And it cuts out the conversation: you end up not giving the team a chance to understand, discuss, and flesh out the details.

John then explains an old concept called **Card, Conversation, and Confirmation**, coined by Ron Jeffries. With the help of various examples, John teaches the audience how to write user stories in plain English.

John then highlights the most effective "**Conversation**" techniques that teams can implement. They are as follows:

* We can record the conversation in a tabular format; writing tables on a whiteboard or a virtual whiteboard works well.
* We can do example mapping, where we flesh out the business rules and come up with examples and counterexamples. Example mapping is an excellent technique because you quickly get much breadth and many edge cases.
* We can also do feature mapping. Feature mapping is all about understanding user journeys and user flows and mapping out variations of the flows.

John then explains how we can ensure the "**Confirmation**" aspect for correct executable specifications. He suggested four ways:

* Formalize the acceptance criteria.
* Ratify the acceptance criteria.
* Automate the acceptance criteria.
* Demonstrate the acceptance criteria.

## Pitfall #3 — BDD Test Scripts — Why isn't your test automation delivering?

John goes on to explain why BDD scripts might not deliver effectively.
He states the following reasons:

* Test scripts might be flaky or brittle.
* Teams struggle to finish automation within each sprint.
* Tests do not give clarity about progress or coverage.

John then explains what effective automation looks like. According to him, it should do the following:

* Give fast, actionable feedback if something goes wrong. You should know what went wrong, why, and how to fix it.
* It should also deliver a report on meaningful business-related progress by showing what you have delivered in business terms.
* It needs to be stable and trustworthy.
* It should be completed within the sprint.

John states that "**using Cucumber to automate test scripts misses the benefits of both BDD and Agile Test Automation.**" He explains this statement with an example. John then explains the BDD approach with the help of a flowchart.

Moving forward, John covers the essential tips for teams to make BDD work for them effectively. These are as follows:

* Find your BDD champions to grow your BDD practices organically and sustainably.
* Discover your BDD dialect and learn how to express your requirements in a way tailored to your domain.
* Build an [automation framework](https://www.lambdatest.com/blog/automation-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=oct04_kj&utm_term=kj&utm_content=blog) that scales. Test automation should get easier as it grows.

## Q&A Session:

Before wrapping up, John answered several questions the viewers raised. Here are some of the insightful questions from the session:

**How can we use BDD in the performance testing phase? Is there any advantage of using it for performance testing?**

For non-functional requirements, you can use BDD's discovery process to articulate them. The idea of concrete examples of non-functional requirements is essential. In the case of accessibility, you might come up with a concrete example of a user who is colorblind.
Your accessibility issue becomes a matter of which concrete use cases you have to deal with, and suddenly that becomes not non-functional but very functional indeed.

**Many people use Cucumber for test automation to check bugs after the code has already been implemented. Is that BDD? What are your thoughts?**

That's not BDD. That has nothing to do with BDD. That's simply using Cucumber as a bad test scripting tool, which will cause great pain if you do it. Then again, writing test automation after the fact in that way is not great in any case, so it's not the fault of Cucumber. It's just that that's not a great way to automate.

**What do you recommend using, literal strings or RegEx for your parameters?**

I don't use RegEx very much anymore. I generally use Cucumber Expressions, as they are more powerful and customizable. Still, the RegEx concept is what makes your step definitions triggerable, so I recommend understanding it.

**How do we deal with negative scenarios that we need to cover? Do we need a separate automation suite for this, or do we keep these corner scenarios for exploratory tests during the release?**

There are different approaches. My general approach is that you do not need to include negative scenarios or edge cases unless the business is interested in them. If the business is interested in negative scenarios, you can capture them as counterexamples.

## Hope You Enjoyed The Webinar!

We hope you liked the webinar. In case you missed it, please find the webinar recording above. Make sure to share this webinar with anyone who wants to learn more about **BDD pitfalls and how they can avoid them**. Stay tuned for more exciting [LambdaTest Webinars](https://www.lambdatest.com/webinar/?utm_source=devto&utm_medium=organic&utm_campaign=oct04_kj&utm_term=kj&utm_content=blog).
You can also subscribe to our newsletter [Coding Jag](https://www.lambdatest.com/newsletter/?utm_source=devto&utm_medium=organic&utm_campaign=oct04_kj&utm_term=kj&utm_content=blog) to stay on top of everything testing and more! That’s all for now, happy testing!
lambdatestteam
1,210,607
Learning Vue - fetching data and re-render
Introduction I am a "codenewbie", currently I am learning Vue and have some re-appearing...
0
2022-10-04T09:28:08
https://dev.to/viktoriabors/learning-vue-fetching-data-and-re-render-38i3
help, vue, beginners, codenewbie
## Introduction

I am a "codenewbie"; currently I am learning Vue and have some recurring problems I don't have the knowledge to fix.

## Fetching data and rendering it

I am using Vue 3 and the Composition API (`setup()` method). I can fetch data from a fake todo API and it's rendered on the page. The problem starts when I would like to add a new task. I can see in the console that it's pushed to the task list array, but the page doesn't re-render. Same problem when I want to edit, delete or mark a task as complete: I can see the changes in the console, but nothing updates on the page.

## How to re-render the page when the data set gets updated?

I am using the key attribute on the component, so theoretically it should work. But no... The problem is also that even though the task array is updated (a new task is added or an old task is changed), the original array is returned from `setup()`.

Here is a little snippet from my code

![The code where I have the problem](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pkocpgntdbm27oqrgt9k.png)

## Otherwise..

It is pretty nice to work with Vue, I just need to figure out some of the details :)

_Thanks in advance if you can give some advice_
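For what it's worth, the usual cause of this symptom is returning plain (non-reactive) data from `setup()`: Vue never finds out about the mutations, so it never re-renders. Wrapping the list with `ref()` and mutating through `taskList.value` normally fixes it. The toy sketch below is not Vue's actual implementation, just a plain-JS Proxy illustration of why a wrapper is needed for change tracking:

```javascript
// Toy stand-in for Vue's reactive(): a Proxy that notices property writes.
// In real Vue, the `set` trap would schedule a component re-render.
function toyReactive(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange(key, value); // the "please re-render" signal
      return true;
    },
  });
}

let renders = 0;
const tasks = toyReactive([], () => { renders += 1; });

tasks.push('write post'); // fires the set trap (new index, then length)
// renders is now > 0: the "framework" was notified.
// With a plain array, the push would succeed but nothing would be notified,
// which is exactly the "updates in the console but not on the page" symptom.
```

In the component itself the equivalent fix would be `const taskList = ref([])` inside `setup()`, mutating via `taskList.value.push(newTask)`, and returning `{ taskList }`.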
viktoriabors
1,210,654
Sending Emails with Firebase
Firebase is a Google-owned web and mobile application development platform that allows you to create...
0
2022-10-04T10:11:01
http://mailtrap.io/blog/sending-emails-with-firebase/
firebase, node, javascript, tutorial
Firebase is a Google-owned web and mobile application development platform that allows you to create high-quality apps and take your business to the next level. You can also use the platform to send emails and enhance the capabilities of your apps. In this article, we’re going to talk about how to send emails with Firebase and Nodemailer and how to test them with Mailtrap. ## What you’ll need - [Node.js](https://nodejs.org/en/) (version 6.0 or above) development environment - [Firebase](https://firebase.google.com/) - [Mailtrap](https://mailtrap.io/) - [Nodemailer](https://nodemailer.com/about/) ## What we’ll cover - Setting up Firebase - Setting up Mailtrap - Setting up [Nodemailer](https://mailtrap.io/blog/sending-emails-with-nodemailer/) - Creating a Nodemailer transporter - Creating a Firebase function - Building the email message - Deploying to Firebase Are you ready? Let’s get going… ## 1. Setting up Firebase ### Step 1.1: Create a Firebase project Go to your instance of the Firebase UI and [create a new project](https://console.firebase.google.com/). ### Step 1.2: Install Firebase CLI The Firebase CLI is a versatile utility that provides you with an easy way to manage, view, and deploy code and assets to your Firebase project. You can [install](https://firebase.google.com/docs/) the utility on your development environment using a method that matches your preferences and use cases. Regardless of how you install it, you’ll still get the same functionalities. For this tutorial, we’ll use the following command in our Windows Command Prompt to install the CLI globally: ``` npm install -g firebase-tools ``` Note that you’ll need to have Node.js installed on your system before running the above command. After installing it, you can create a new directory that will contain the code for this Firebase project. 
Then, go to the directory and run the following command to sign in to your Firebase account: ``` firebase login ``` Then, you’ll need to follow the ensuing prompts so that you can be authenticated into the Firebase platform. This way, you’ll be granted the right to access your Firebase projects locally, as your local machine will be connected to Firebase. ### Step 1.3: Initialize Firebase SDK for Cloud Functions [Cloud Functions](https://firebase.google.com/docs/functions) allow you to access Firebase events directly in your application. This way, you can easily integrate with the Firebase platform and accomplish a wide range of tasks. Initializing Firebase SDK for Cloud Functions will allow you to create an empty project that contains some dependencies you can use to build the rest of your application. After logging in successfully, you can initialize a Firebase project in your local directory by following these steps: - Run the firebase init functions command - Follow the ensuing prompts to associate the project on your local machine with an existing Firebase project - The command gives you two language options for writing Cloud Functions: TypeScript and JavaScript. For this tutorial, choose JavaScript. - If you want to use ESLint for catching bugs and enforcing style, you can accept that option. For this tutorial, just decline it. - If you want to install dependencies with npm, you can accept that option. For this tutorial, just accept it. 
After the installation is complete, the structure of your new directory will look something like this: ``` firebaseproject +- .firebaserc # Hidden file you can use to switch between | # projects easily | +- firebase.json # Has the properties for your project | +- functions/ # Directory that has all your functions code | +- .eslintrc.json # Optional file that has JavaScript # linting rules | +- package.json # npm package file that has Cloud Functions code | +- index.js # Main source file for your Cloud Functions code | +- node_modules/ # Directory that has dependencies ``` For the rest of this tutorial, we’ll only make use of the **functions** directory. And, we’ll use the **index.js** file to put all our code. ### Step 1.4: Install Firebase Admin SDK The Admin SDK allows you to access Firebase from privileged environments and carry out various tasks, such as sending Firebase Cloud Messaging messages programmatically. To install it, on the Command Prompt, navigate to your **functions** folder and run the following command: ``` npm install firebase-admin --save ``` Then, go to your index.js file and import and initialize the Admin SDK. Here is the code: ``` const admin = require("firebase-admin"); admin.initializeApp(); ``` ## 2. Setting up Mailtrap Mailtrap is a free email testing tool to view and share emails in a development environment. You can use it to inspect and debug your emails and avoid spamming your real customers. Setting it up is quick and easy. Just go to the [signup](https://mailtrap.io/register/signup) page and register for a free account. Then, go to the **SMTP settings** tab in your inbox and copy the details for host, port, username, and password. We’ll use these details in the next section of this tutorial. You can also use [Mailtrap Email API](https://mailtrap.io/blog/sending-emails-with-firebase/#What-youll-need:~:text=Mailtrap%20is%20a,good%20to%20go.) and send emails on production. 
After testing, you only need to complete a 5-minute domain verification, and you should be good to go. ## 3. Setting up Nodemailer Nodemailer is a simple module for [sending email with Node.js](https://mailtrap.io/blog/send-emails-with-nodejs/) applications. It comes with a wide range of features for allowing you to send emails fast, efficiently, and securely. To install Nodemailer, go to your **functions** directory and run the following command: ``` npm install nodemailer ``` Then, go to your index.js file and import it into your project. Here is the code: ``` const nodemailer = require('nodemailer'); ``` ## 4. Creating a Nodemailer transporter Next, create a reusable Nodemailer transporter object using your Mailtrap’s SMTP information. Here is the code: ``` let transporter = nodemailer.createTransport({ host: "smtp.mailtrap.io", port: 2525, auth: { user: "71b312d8f1a983", // generated by Mailtrap pass: "e7a8f2287183dd" // generated by Mailtrap } }); ``` In this case, **transporter** will be an object that can be used to send emails. Notice that’s the same code available in your Mailtrap inbox—if you select Nodemailer, under the **Integrations** section. ## 5. Creating a Firebase Cloud Function For this tutorial, we’ll create a Firebase HTTP Cloud Function, which will be triggered whenever its URL is executed in the browser. Here is its syntax: ``` exports.emailSender = functions.https.onRequest((req, res) => {...}); ``` As you can see on the code above, we named the function **emailSender** (you can call it any name). Then, we used the **functions** library with the **https** API and the **onRequest** event to register the function. The callback event handler function accepts two parameters: **req** and **res**. While the **req** object provides you with access to the properties of the initial HTTP request sent by the client, the **res** object allows you to send a response back to the client. ## 6. 
Building the email message Next, let’s use the Nodemailer module to build the email message inside the emailSender function. Here is the code: ``` exports.emailSender = functions.https.onRequest((req, res) => { const mailOptions = { from: 'from@example.com', //Adding sender's email to: req.query.dest, //Getting recipient's email by query string subject: 'Email Sent via Firebase', //Email subject html: '<b>Sending emails with Firebase is easy!</b>' //Email content in HTML }; return transporter.sendMail(mailOptions, (err, info) => { if(err){ return res.send(err.toString()); } return res.send('Email sent successfully'); }); }); ``` As you can see in the code above, we started by defining **mailOptions** using the following properties: **from, to, subject**, and **html**. More options are available on the [Nodemailer documentation](http://nodemailer.com/smtp/). Notice that we used **req.query.dest** to get the recipient’s email address by a query string. Next, we applied the previously created **transporter** object to send the email message using the **sendMail()** method. We passed two parameters to the method: the data that defines the mail content and a callback function that is executed once the message is delivered or fails. While **err** is the error object returned when the message fails, the **info** object includes the results of the sent message. In this case, our function will return the sent email message, if everything is successful, or an error message if the message is not delivered. ## 7. Deploying to Firebase Lastly, we’ll need to deploy the project’s function to Firebase. So, we’ll use the Firebase CLI again. To do this, navigate to your **functions** directory and run the following command: ``` firebase deploy ``` Once the upload process is completed successfully, you’ll get a function URL that you can use to trigger the Cloud Function. 
The URL will look like this:

```
https://us-central1-<project-id>.cloudfunctions.net/emailSender
```

Notice that the Firebase project ID and the function's name are in the URL. To execute the function, we just include the **dest** parameter in the URL and run it in a browser:

```
https://us-central1-<project-id>.cloudfunctions.net/emailSender?dest=test@example.com
```

That's how to send the email!

## Wrapping up

Here is the entire code for the **index.js** file:

```
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const nodemailer = require('nodemailer');

//Initializing Firebase Admin SDK
admin.initializeApp();

//Creating Nodemailer transporter using your Mailtrap SMTP details
let transporter = nodemailer.createTransport({
  host: "smtp.mailtrap.io",
  port: 2525,
  auth: {
    user: "71b312d8f1a983",
    pass: "e7a8f2287183dd"
  }
});

//Creating a Firebase Cloud Function
exports.emailSender = functions.https.onRequest((req, res) => {
  //Defining mailOptions
  const mailOptions = {
    from: 'alfo.opidi85@gmail.com', //Adding sender's email
    to: req.query.dest, //Getting recipient's email by query string
    subject: 'Email Sent via Firebase', //Email subject
    html: '<b>Sending emails with Firebase is easy!</b>' //Email content in HTML
  };

  //Returning result
  return transporter.sendMail(mailOptions, (err, info) => {
    if(err){
      return res.send(err.toString());
    }
    return res.send('Email sent successfully');
  });
});
```

## Conclusion

As we've illustrated in this tutorial, sending emails with Firebase, Nodemailer and Mailtrap is simple and straightforward. Mailtrap offers you a useful way to test your emails in a pre-production environment. This way, you can debug your email samples before distributing them to real users. If you want to go fancy and send emails from pure JavaScript (yes, it's possible!), we have [prepared a guide](https://mailtrap.io/blog/javascript-send-email/) for you.
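Before deploying a function like this publicly, note that `req.query.dest` goes straight into `sendMail()` unvalidated. A naive guard, our own sketch rather than part of the steps above, could look like:

```javascript
// Naive email shape check: not RFC 5322 validation, just a guard against
// empty or obviously malformed `dest` query values before calling sendMail().
function isPlausibleEmail(value) {
  return typeof value === 'string' && /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// Inside the handler you might then do:
// if (!isPlausibleEmail(req.query.dest)) {
//   return res.status(400).send('Invalid or missing dest parameter');
// }
```

`isPlausibleEmail` is deliberately loose; it only rejects missing or clearly broken values, and real address validation is best left to the mail server.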
And don’t forget that you can also send emails via [Mailtrap Email API](https://mailtrap.io/email-api/), which is among the quickest possible options. Happy sending emails! --- Thank you for reading our article on sending emails with [Firebase and Nodemailer](http://mailtrap.io/blog/sending-emails-with-firebase/) that was originally published on Mailtrap Blog.
sofiatarhonska
1,210,731
Connect to an OpenVPN server running on Synology DSM 7
Introduction This is the second part of the series "Configure OpenVPN on Synology DSM 7"....
20,026
2022-10-05T15:37:08
https://dev.to/dider/connect-to-an-openvpn-server-running-on-synology-dsm-7-5bal
synology, openvpn, openvpnconnect, security
### Introduction
This is the second part of the series "Configure OpenVPN on Synology DSM 7". In the [first part](https://dev.to/dider/configure-openvpn-server-on-synology-dsm-7-371n) we've set up an OpenVPN server on Synology DSM 7 and configured port forwarding and the firewall on our router and NAS. In this part we'll see how we can connect to that OpenVPN server using the OpenVPN Connect client in Windows 10 and iOS.

### The setup
The setup remains the same as what we've used in the [first part](https://dev.to/dider/configure-openvpn-server-on-synology-dsm-7-371n):

**NAS:** Synology DS920+, DSM 7.1-42661 Update 4
**OpenVPN server app:** VPN Server package (1.4.7-2901) by Synology Inc.
**Router:** Ubiquiti UniFi DreamMachine
**OpenVPN clients:**
- OpenVPN Connect 3.3.6.2752 on Windows 10
- OpenVPN Connect 3.3.2.5086 on iOS 16.0.2

The OpenVPN Connect client is an official client developed and maintained by OpenVPN Inc. It can be downloaded from here: [https://openvpn.net/client-connect-vpn-for-windows/](https://openvpn.net/client-connect-vpn-for-windows/)

> _There's another client called OpenVPN GUI. This is a community project and can also be used on Windows. It can be downloaded from here: [https://openvpn.net/community-downloads/](https://openvpn.net/community-downloads/)_

We'll use the official OpenVPN Connect client as the UX is nearly identical on both Windows and iOS.

### Exporting the configuration file
First, we have to export the configuration .ovpn file to be used with the clients. Clicking the `Export Configuration` button will export the configuration and initiate a file download. The exported file is a .zip file that contains a `VPNConfig.ovpn` file (the configuration file for the client) and a `README.txt` file (simple instructions on how to set up the OpenVPN connection for the client).

![Export Configuration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tgopozrt6ebdt9teebh7.png)

Here is what the .ovpn file looks like.
``` dev tun tls-client remote YOUR_SERVER_IP 1194 # The "float" tells OpenVPN to accept authenticated packets from any address, # not only the address which was specified in the --remote option. # This is useful when you are connecting to a peer which holds a dynamic address # such as a dial-in user or DHCP client. # (Please refer to the manual of OpenVPN for more information.) #float # If redirect-gateway is enabled, the client will redirect it's # default network gateway through the VPN. # It means the VPN connection will firstly connect to the VPN Server # and then to the internet. # (Please refer to the manual of OpenVPN for more information.) #redirect-gateway def1 # dhcp-option DNS: To set primary domain name server address. # Repeat this option to set secondary DNS server addresses. #dhcp-option DNS DNS_IP_ADDRESS pull # If you want to connect by Server's IPv6 address, you should use # "proto udp6" in UDP mode or "proto tcp6-client" in TCP mode proto udp script-security 2 reneg-sec 0 cipher AES-256-CBC auth SHA512 auth-user-pass comp-lzo <ca> -----BEGIN CERTIFICATE----- MIIF...hHwg== -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIF...GCc= -----END CERTIFICATE----- </ca> key-direction 1 <tls-auth> # # 2048 bit OpenVPN static key # -----BEGIN OpenVPN Static key V1----- c78b6...6c58c2 -----END OpenVPN Static key V1----- </tls-auth> verify-x509-name 'myhostname.synology.me' name ``` Let's talk about the configuration file a little. We basically have to change one thing in the above config file. At line #4, we have to replace `YOUR_SERVER_IP` with the DDNS hostname, `myhostname.synology.me`, which we've configured in the [first part](https://dev.to/dider/configure-openvpn-server-on-synology-dsm-7-371n). Or we can use the static IP address if we have one. The other directive of note is `redirect-gateway def1`. This is what determines whether we configure a split-tunnel or full-tunnel VPN. If we want full-tunneling then we have to uncomment the directive. 
This means that all connection requests, including the ones for websites on the public internet, will go through the VPN server. But we're only interested in accessing the Synology apps like DS Photo, DS Video, DS File etc. (which are only available within our home network and not exposed to the public internet). So, we'll leave this commented out. > Note that: - OpenVPN allows VPN server to issue an authentication certificate to the clients. - Each time VPN Server runs, it will automatically copy and use the certificate shown at `Control Panel` > `Security` > `Certificate`. This is the certificate which we got from Let's Encrypt while configuring DDNS using Synology provider. - If we want to use a third-party certificate, we have to import the certificate at `Control Panel` > `Security` > `Certificate` > `Add` and restart VPN Server. We'll explore this in the third part of this tutorial. - VPN Server will automatically restart each time the certificate file shown at `Control Panel` > `Security` > `Certificate` is modified. We will also have to export the new .opvn file to all clients. - More info on Certificates can be found here: [https://kb.synology.com/en-br/DSM/help/DSM/AdminCenter/connection_certificate?version=7](https://kb.synology.com/en-br/DSM/help/DSM/AdminCenter/connection_certificate?version=7) ### Let's check firewall settings on Windows 10 Since we'll be using Windows 10 as our client OS, it's a good idea to check its firewall settings before we try to connect. We need to check whether outgoing UDP requests are allowed on remote port 1194 in Windows Firewall. I've found that it works without having to add any additional rule. ### Connect using OpenVPN Connect in Windows 10 I've already installed the OpenVPN Connect 3.3.6.2752 client from the link mentioned above under 'The setup'. I've also disconnected from my home Wi-Fi network in Windows and switched to mobile hotspot so that I connect from 'outside' of my home network. 
When we first launch the app, it lets us import a config file via a URL or a file upload. We'll use the file upload option.

![The OpenVPN Connect client](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ow4aga02z8kya59u2spd.png)

![Select the .ovpn configuration file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nv9dy1tti6yfqqh4s9pw.png)

After selecting the .ovpn config file, we're prompted to enter the VPN Username and Password. This is the same `vpnuser` that we've configured in [part one](https://dev.to/dider/configure-openvpn-server-on-synology-dsm-7-371n).

![Enter VPN Username and Password](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0okbn9c62apbgwxgxrik.png)

We're also asked to assign a Certificate and Key for the client, but we'll skip them, because we're not concerned with Certificate Authentication in this part. We'll look at that in the third part.

> Note that we can also customize the profile name at the top.

After we've entered the Username and Password, let's click the big orange `CONNECT` button.

![Missing external certificate](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/azsiu35dro0k1yg1mgqh.png)

But we're presented with an info dialog that says that the external certificate is missing. It also says that we can still continue if our profile allows connection without a client certificate. It does, so we'll click `CONTINUE`.

> Note:
> - By default the OpenVPN server doesn't require a client certificate.
> - In the config file for the OpenVPN server, `openvpn.conf`, there is a directive, `verify-client-cert none`, which dictates that.
> - The config file is located here on the NAS: `usr/syno/etc/packages/VPNCenter/openvpn/openvpn.conf`.
> - In order to access that file, we have to SSH into the NAS.
> - It's possible to tell the client not to expect a client `Certificate and Key` because it's a bit annoying to skip it every time.
This can be done by adding this directive to the .ovpn file: `setenv CLIENT_CERT 0`.
> - It's documented here: [https://openvpn.net/faq/how-to-make-the-app-work-with-profiles-that-lack-a-client-certificate-key/](https://openvpn.net/faq/how-to-make-the-app-work-with-profiles-that-lack-a-client-certificate-key/)

Anyway, after clicking `CONTINUE`, we're hit with another roadblock. This time the connection failed, and the error message read "Peer certificate verification failure".

![Connection failed, Peer certificate verification failure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5jpul4oqz7pp4o5y3x4l.png)

The culprit is the last line in the `VPNConfig.ovpn` file above:

`verify-x509-name 'myhostname.synology.me' name`

This is the issue that I've mentioned in the [first part](https://dev.to/dider/configure-openvpn-server-on-synology-dsm-7-371n). That last line got added when we ticked the `Verify server CN` checkbox.

!['Verify server CN' checkbox ticked](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/derhwndaaoapjm69l9lp.png)

When the .ovpn file was exported, `myhostname.synology.me` was wrapped within single quotes (''). And because of this, the client couldn't connect when the .ovpn file was imported to it. It seems this issue only appeared in the OpenVPN Connect client starting with version 3.3.x.

Fortunately, after a little googling around I've found a fix, which was provided by a user called `DreamCypher` in this OpenVPN Support Forum topic: [https://forums.openvpn.net/viewtopic.php?p=106554#p106554](https://forums.openvpn.net/viewtopic.php?p=106554#p106554)

The fix is very simple. We just need to wrap `myhostname.synology.me` within double quotes (""):

`verify-x509-name "myhostname.synology.me" name`

So let's do that, import the updated .ovpn file to the client and try connecting again. It works!
![VPN connection works](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/udnvtz9499iabpp0s5zg.png)

### Connect using OpenVPN Connect in iOS
Let's search for the OpenVPN Connect client in the App Store and install it. The client UI is nearly identical to the Windows client's.

Now we have to import the `VPNConfig.ovpn` file. There's no need to change anything, just import the exact same file that we've imported to the Windows client. I've put it in my Synology NAS home directory and will now open it in the DS File app in iOS.

> DS File is a file manager app developed by Synology.

![Open the DS File app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkobhrk85xwk0mku5ffx.PNG)

Then tap the `...` menu and tap on Share.

![Tap on Share](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vtr0scd9pidme4720f2q.JPEG)

Tap the OpenVPN app icon to import the .ovpn file to it.

![Tap the OpenVPN app icon](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/myhz5a3zzqfu43c4wcqp.PNG)

The UI we're presented with next is already familiar to us by now. We can customize the profile name, enter the VPN Username and Password and tap `CONNECT`. We will leave the `Certificate and Key` field with the default value `None` as we're not going to use client-side Certificate Authentication. We'll look at how to do that in part three of this tutorial.

![Enter Username and Password](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iz0kx1bjhqzfhstj2dsi.PNG)

iOS now prompts us to allow the OpenVPN app to add a VPN configuration to the OS. We will allow it.

![Allow OpenVPN app to add a VPN configuration to iOS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndwzs4y03i7laxoqdef6.PNG)

We're asked to enter our iPhone passcode. Let's do that.

![Enter iPhone passcode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1f6u64xqcpmagkrpv0j9.PNG)

Et voilà! We're connected.
![VPN connection established](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ol1x2aaxxu8iimqotpp5.PNG)

If we go to `Settings` > `General` > `VPN & Device Management` > `VPN`, we can see the configuration added by the OpenVPN app.

![VPN configuration added by the OpenVPN app in iOS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/df77asemc477det1mwd3.PNG)

### Summary
So that's about it. Configuring the client is pretty straightforward (when it works, of course ;)). There are tons of very good tutorial videos and posts on OpenVPN all over the internet. And the OpenVPN docs are also very helpful. Hope this tutorial also comes in handy for some.
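To recap the client-side changes from this part, here are the edits made to the exported `VPNConfig.ovpn` file (using the example hostname from part one):

```
# Replace the YOUR_SERVER_IP placeholder with your DDNS hostname (or a static IP)
remote myhostname.synology.me 1194

# Wrap the hostname in double quotes instead of single quotes
# to fix the "Peer certificate verification failure" error
verify-x509-name "myhostname.synology.me" name

# Optional: tell the client not to expect a client Certificate and Key
setenv CLIENT_CERT 0
```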
dider
1,210,929
Why you shouldn't ignore .gitignore
If you already know some basic git commands, it is time to learn about .gitignore. When I started to...
0
2022-10-04T13:52:23
https://www.cristina-padilla.com/gitignore.html
webdev, beginners, tutorial, git
If you already know some [basic git commands](https://www.cristina-padilla.com/gitcommands.html), it is time to learn about **.gitignore**. When I started to learn Git, I read that .gitignore is basically a file that helps hide other files to keep a repository cleaner, meaning we are telling Git not to track every single file. Ok, so the concept is clear but my question back then was: in which context or what kind of files do I exactly want to hide? I had no idea but understood it much better while doing the following exercise: [how to start unit testing](https://how-to.dev/how-to-start-unit-testing-with-jasmine). I read the article because I wanted to learn about unit tests (pieces of code that help check whether your code is working as expected or not) and while going through the tutorial and comparing my code with the [author's solution](https://github.com/how-to-js/how-to-unit-test/tree/jasmine), I realized some extra files like node_modules or .DS_store automatically showed up in my repository. I didn't really know how they got there and what they meant but, as every beginner, I thought: as my unit test is working, all good. Fortunately, my dear mentor reviewed my code and gave me some useful suggestions: > use .gitignore to make those irrelevant files disappear and make your repository cleaner. Some of the files .gitignore can hide are system files (like .DS_Store), dependency caches (like node_modules or packages) or files with sensitive data. GitHub already offers the possibility to add .gitignore when creating a repository. ![Git ignore option on GitHub](https://community.codenewbie.org/remoteimages/uploads/articles/p5nneg50a36ojooq5abh.png) Gitignore is a text document that looks like [this](https://github.com/Mama-simba/unit-tests/blob/main/.gitignore). My mentor also showed me a very cool tool to create [gitignore templates](https://www.toptal.com/developers/gitignore). Find the video tutorial [here](https://docs.gitignore.io/). 
In my case, I did the unit-test exercise without having any .gitignore and I thought that I had to restart the whole exercise again. Fear not! There is also a way to create a .gitignore file and move all nonsense files into it. I used the following commands for it:

![Removing files commands](https://community.codenewbie.org/remoteimages/uploads/articles/o59ybmrqhatxyms6ma2k.png)

Thanks to this, my [unit test repository](https://github.com/Mama-simba/unit-tests) looks much better now. I hope this helps you understand .gitignore better and that you can start applying it in your projects.
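The exact commands differ per project; as a sketch, untracking the files mentioned above (assuming `node_modules` and `.DS_Store` are the offenders) could look like this:

```shell
# List the files/folders Git should ignore in a new .gitignore
printf 'node_modules/\n.DS_Store\n' > .gitignore

# Remove the already-tracked files from Git's index only
# (--cached keeps them on disk, it just stops tracking them)
git rm -r --cached node_modules .DS_Store

# Commit the cleanup
git add .gitignore
git commit -m "Stop tracking node_modules and .DS_Store"
```

The key flag is `--cached`: without it, `git rm` would delete the files from your working directory too.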
crispitipina
1,211,191
Using NDI in your Real Time Live Streaming Production Workflow
If you're a developer who's also creating live streams and content you know that it takes a lot of...
0
2022-11-07T20:18:33
https://dev.to/dolbyio/using-ndi-in-your-real-time-live-streaming-production-workflow-59m7
streaming, contentproduction, webdev
If you're a developer who's also creating live streams and content, you know that it takes a lot of effort to set up a solid content streaming workflow.

## NDI to the rescue!

NDI® (Network Device Interface) is a free protocol for Video over IP, developed by NewTek. The innovation is in the protocol, which makes it possible to stream video and media across networks with low latency from many device sources. These NDI device sources can be physical hardware or software based. This makes it possible to connect to any device, in any location, anywhere in the world – and transmit live video to wherever you are.

There is a suite of [NDI tools](https://www.ndi.tv/tools/) that work directly with NDI systems and sources on your network.

Combine NDI with [Dolby.io](https://Dolby.io) Real-time Streaming to deliver real time video for remote or interactive experiences. Dolby.io Real-time Streaming offers incredibly low latency streams, typically under a second. And even better, besides having the ability to white-label your own stream with their viewer, or [develop a complete streaming solution](https://docs.dolby.io/streaming-apis/docs/getting-started), that same stream can also be distributed through streaming services such as YouTube, Facebook and Twitch.

## Devices Everywhere

There are many low-to-moderate cost prosumer video devices, PTZ cameras and security systems that offer NDI support. If you do not have a camera that supports NDI, you can simply download one of many software-based solutions that stream video and audio over your network over NDI. In fact, that shiny new iPhone with the amazing and gorgeous camera might actually provide the best camera solution for live streaming content over NDI. Some of our customers have had great success with various apps that are available in the App Store.
[NDI HX Camera](https://apps.apple.com/us/app/ndi-hx-camera/id1477266080) by NewTek and [Stream Camera for NDI HX](https://apps.apple.com/us/app/stream-camera-for-ndi-hx/id1633326432) by fellow iOS developer Thomas Backes.

## Streaming Workflow

Everyone has their own opinion on what a good live streaming content workflow actually looks like. You decide. We've created a [quick guide](https://docs.dolby.io/streaming-apis/docs/using-ndi) to make it easy for you to integrate **_your_** workflow; you have multiple options to publish NDI out with your Dolby.io account. This [guide](https://docs.dolby.io/streaming-apis/docs/using-ndi) will walk you through two of these options and assumes NDI tools are already installed on your computer.

> Dolby.io recently acquired Millicast; you may note some references in the documentation.

## OBS WebRTC

Besides our web application, we also provide a forked version of OBS that's fine-tuned for advanced 4K streaming and other features of the Dolby.io platform. Download the [OBS WebRTC](https://github.com/CoSMoSoftware/OBS-studio-webrtc/releases) publisher. In OBS create your NDI scene and add your NDI source, which can be from a camera or a mobile app.

![Image of OBS NDI settings panel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78evnzga9brqzlc42g96.png)

You are now ready to start publishing using NDI with OBS WebRTC for a real time broadcast at scale. For the stream, OBS has the following settings:

- VP9
- 1920x1080
- Bitrate 4000Kbps
- FPS 30

You can adjust the OBS WebRTC settings as needed to deliver the best quality and experience. [Sign up](https://dashboard.dolby.io/signup) to get started and then choose streaming to navigate to the Real-time Streaming API section.
dzeitman
1,211,689
UUIDs Are Bad for Database Index Performance, enter UUID7!
UUIDs, Universal Unique Identifiers, are a specific form of identifier designed to be unique even...
20,041
2022-10-07T22:10:14
https://www.toomanyafterthoughts.com/uuids-are-bad-for-database-index-performance-uuid7/
database, performance, webdev, news
UUIDs, Universal Unique Identifiers, are a specific form of identifier designed to be unique even when generated on multiple machines. Compared to the autoincremented sequential identifiers commonly used in relational databases, generating them does not require centralized storage of the current state, i.e., the identifiers that have already been allocated. This is useful when a centralized system would pose a performance bottleneck or a single point of failure.

UUIDs are designed to be able to support very high allocation rates, up to 10 million per second per machine. Despite the fact that some types (e.g., UUID4) are not guaranteed to be unique by their method of generation, the chance of generating two conflicting UUIDs is very low. This is also due to the fact that UUIDs are 128 bits long.

UUIDs were formally defined as an [Internet Draft in 2002](https://datatracker.ietf.org/doc/rfc4122/), which was promoted to a Proposed Standard as [RFC 4122](https://www.rfc-editor.org/rfc/rfc4122.html). At the time of writing, 17 years later, it still has the status of a proposal.

## Representation & structure

UUIDs are just 128-bit numbers, or 16-byte long binary sequences, and can be stored as such for efficiency. In API calls, they are commonly transferred in a text format of hexadecimal sequences separated by dashes in the pattern of 8-4-4-4-12.
This mirrors the internal structure as defined by the RFC: ``` 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | time_low | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | time_mid | time_hi_and_version | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |clk_seq_hi_res | clk_seq_low | node (0-1) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | node (2-5) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ``` ## UUID versions Even though the layout looks like a single schema for UUID generation, there are multiple versions of how the individual fields are populated – UUID1 to UUID5. The version is actually encoded in the UUID itself and makes up the most significant 4 bits of the `time_hi_and_version` field. ### UUID1 - time-based ![The layout of UUID1 in the text representation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o177d1mys9mlz5su4fir.png) ### UUID2 – DCE security version ![The layout of UUID2 in the text representation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lkquba8donkcepuzkyoj.png) The specification of this version is available at [https://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01](https://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01) ### UUID3 – name-based, MD5 ![The layout of UUID3 in text text representation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5evbpw5xdp0pnslajw7l.png) ### UUID4 – random ![The layout of UUID4 in the text representation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s60l7fczf2ttwpuoyrzc.png) ### UUID5 – name-based, SHA1 ![The layout of UUID5 in the text representation ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ulwb49ab0svx5byo81sq.png) ## UUIDs and B-tree indices As described in the [post comparing random and sequential primary 
keys](https://dev.to/vdorot/choose-the-right-primary-key-to-save-a-large-amount-of-disk-io-1o1l), B-tree indexes are well suited for inserting keys when values are increasing and not so much when the values are randomly distributed.

Notice something about UUIDs? **None of these versions is designed to generate increasing values**, where the sort order is similar to how the UUIDs were generated in time. No, not even UUID1.

**UUID1 is based on time, but the timestamp is split up into multiple segments and they are in the wrong order** in the UUID structure. The least significant bits – the bits that have the highest change frequency – are located at the start of the UUID stream, as the `time_low` field.

## UUID1 vs UUID4

Commonly used UUID versions are 1 and 4 because they can be generated "out of thin air", without requiring some custom values (namespace, name). Considering that neither of them is increasing, there should be little difference in insert performance between UUID1 and UUID4.

To test this hypothesis, I extended the [test rig](https://github.com/vdorot/primary_key_io) developed for [comparing random and sequential integer IDs](https://dev.to/vdorot/choose-the-right-primary-key-to-save-a-large-amount-of-disk-io-1o1l). The script inserts equal-sized chunks of records into a table and tracks the I/O write volume of the DB engine generated by the inserts. The I/O measurement is done by running the DB engine in a Docker container and polling [Docker's stats API](https://docs.docker.com/engine/api/v1.41/#tag/Container/operation/ContainerStats).

For this comparison, I've used [SQLite in clustered mode](https://www.toomanyafterthoughts.com/primary-key-random-sequential-performance/#htoc-sqlite).

![Written bytes to disk (cumulative), SQLite, clustered index](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73zpisvybsevb89wxlyl.png)

Umm, nope. Hypothesis busted. UUID1 performs much better.
The reason is that while, overall, UUID1 isn't ordered by time, it is increasing in short periods until `time_low` rolls over and starts from 0 again. The time it takes `time_low` to roll over is about 7 minutes and 9 seconds. After this period, filled B-tree pages will be revisited again and page splitting will occur.

Inserting 1M records during the test took about 33 minutes, so this would mean only 4 rollovers. In order to show how it would perform on a longer timescale, I created a modified version of UUID for which time passes 100× faster – `uuid1_fastrollover`.

![Written bytes to disk (cumulative), SQLite, clustered index](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mtfwj3lz1yhb6cxf7dlf.png)

## Designing efficient UUIDs

What if a new UUID version could be designed that would take the randomness of UUID4 and combine it with a timestamp prefix? This would make the UUID increase overall, but not locally – due to the random postfix. The random part ensures uniqueness when a high generation rate is necessary and also makes the UUIDs hard to predict – it's not possible to guess the previous or next UUID.

It's fairly simple to devise a custom UUID scheme, but fortunately, there is a new Internet-Draft (at the time of writing) defining new pseudo-sequential UUID versions that aim to solve exactly this issue: [draft-peabody-dispatch-new-uuid-format-04](https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-04). The current state and progress can be viewed at [IETF Datatracker](https://datatracker.ietf.org/doc/draft-peabody-dispatch-new-uuid-format/).

### [UUID 6](https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format#section-5.1) – field-compatible version of UUID1

The original field structure is kept, but the timestamp fields are shuffled around so that the whole timestamp is in the correct order – starting with the most significant bits and ending with the least significant ones.
This way, consecutively generated UUIDs are always increasing in value.

![The layout of UUID6 in its text representation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bg58jhfi7cmuq5mqnr5z.png)

### [UUID 7](https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format#section-5.2) – time-ordered

![The layout of UUID7 in its text representation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfrnjfsvlwmiw9hq04gz.png)

This is the go-to version for use in newly developed systems, where forward compatibility is not necessary. Notice that UUID1 uses a Gregorian epoch timestamp, while UUID7 is defined to use the Unix epoch timestamp.

### [UUID 8](https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format#section-5.3) – custom, vendor-specific

![The layout of UUID8 in its text representation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oyu0o72jijopakvn2fxf.png)

I have included this version just for completeness – it is in the draft, but here the ordering will clearly depend on the custom data, so this version is not going to be tested.

## Testing & comparison

Implementation notes – which Python library was used for UUID6 and UUID7? I've found a library on PyPI that appears to implement UUID7 – uuid7, but upon further inspection of (the only) version 0.1.0, it does not seem to match the structure defined in the draft at the time of writing. Confusingly enough, there is a package named [uuid6](https://pypi.org/project/uuid6/) that implements both UUID6 and UUID7 and seems to follow the draft. I used it for the test.

### InnoDB – clustered index

![UUID versions tested on MariaDB clustered index](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8pvrut1eub60m6dh4zd8.png)

UUID6 and UUID7 perform equally well, as expected.
![total written bytes - UUID1_fastrollover, UUID4, UUID6, UUID7 compared](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3u7wm4yazict2bwonp3b.png) ### PostgreSQL – non-clustered index ![UUID versions compared on PostgreSQL non-clustered index](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x54bi79cc9t721fwm5hm.png) When using Postgres’s non-clustered index, there is a similar overhead for UUID4 over UUID6/7. The I/O is an order of magnitude lower than for InnoDB though. ![Total written bytes compared on PostgreSQL - UUID_fastrollover, UUID4, UUID6, UUID7](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ep54pvnr77oynthpn9mk.png) ## Conclusion When comparing UUID1 and UUID4 from the original RFC, UUID1 performs much better even though the time fields are in the wrong order, this is because the low bits still take about 7 minutes to roll over, and during this time the UUID value is increasing. UUID6 and UUID7 from the draft take this further and keep the ordering of generated values over the full timeline. Even though the versions are not yet standardized, there are existing libraries, at least for Python, that follow the draft. When designing a new database, UUID7 is the go-to option and would perform well in most database engines. If you’re working on a system that already generates UUID1 or UUID4 but you can still choose a DB engine that supports non-clustered indexes like PostgreSQL, then it is a viable option. There is a performance hit, but much less pronounced compared to InnoDB’s clustered index. Microsoft SQL Server also [supports both index types](https://learn.microsoft.com/en-us/sql/relational-databases/indexes/clustered-and-nonclustered-indexes-described?view=sql-server-ver16). 
## Alternative solutions & related studies

[Universally Unique Lexicographically Sortable Identifier](https://github.com/oklog/ulid)

[UUIDs are Popular, but Bad for Performance — Let’s Discuss](https://www.percona.com/blog/2019/11/22/uuids-are-popular-but-bad-for-performance-lets-discuss/) – Percona Database Performance Blog

[Illustrating Primary Key models in InnoDB and their impact on disk usage](https://www.percona.com/blog/2015/04/03/illustrating-primary-key-models-in-innodb-and-their-impact-on-disk-usage/) – Percona Database Performance Blog. I dig the straightforward graphical view of InnoDB page allocation.

MySQL 8 can convert UUID1 to binary and shuffle the time fields to the “correct” order:

[UUID_TO_BIN(string_uuid, swap_flag)](https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_uuid-to-bin)

[BIN_TO_UUID(binary_uuid, swap_flag)](https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_bin-to-uuid)
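To make the time-ordered layout concrete, here is a minimal, standard-library-only sketch of a UUID7-style generator following the draft layout (a 48-bit Unix millisecond timestamp in the most significant bits, then the version field, 12 random bits, the variant bits, and 62 random bits). This is an illustration of the structure, not the `uuid6` library's actual implementation:

```python
import os
import time
import uuid

def uuid7_sketch() -> uuid.UUID:
    # 48-bit Unix timestamp in milliseconds -> most significant bits (80..127)
    unix_ts_ms = time.time_ns() // 1_000_000
    value = (unix_ts_ms & 0xFFFFFFFFFFFF) << 80
    # Version field 0b0111 (= 7) in bits 76..79
    value |= 0x7 << 76
    # 12 random bits (rand_a) in bits 64..75
    value |= (int.from_bytes(os.urandom(2), "big") % 4096) << 64
    # Variant bits 0b10 in bits 62..63
    value |= 0x2 << 62
    # 62 random bits (rand_b) in bits 0..61
    value |= int.from_bytes(os.urandom(8), "big") >> 2
    return uuid.UUID(int=value)

# Values generated in different milliseconds sort in generation order:
a = uuid7_sketch()
time.sleep(0.005)
b = uuid7_sketch()
assert a < b
assert a.version == 7
```

Because the most significant bits hold the timestamp, consecutive values are increasing overall while the random tail keeps them unpredictable — exactly the property that keeps B-tree page splits localized.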
vdorot
1,211,955
Process Analytics  - September 2022 News
Welcome to the Process Analytics monthly news 👋. Our monthly reminder: The goal of the Process...
20,050
2022-10-06T07:38:56
https://medium.com/@process-analytics/process-analytics-september-2022-news-d5d3a69830c6
news, typescript, analytics, visualization
Welcome to the Process Analytics monthly news 👋. Our monthly reminder: The goal of the Process Analytics project is to provide a means to rapidly display meaningful Process Analytics components in your web pages using BPMN 2.0 notation and Open Source libraries. Goodbye summer 🏖️, welcome colorful autumn 🍂! For the Process Analytics team, fall activities and news start to color the project calendar 🔥. From preparing for Hacktoberfest to improving the integration of bpmn-visualization in projects, September was a busy month. Let’s see what’s what 📚.

## Events

### Participation in Hacktoberfest 2022 as maintainers

This year again, the Process Analytics team is participating in [Hacktoberfest 2022](https://hacktoberfest.com/), as we did in Hacktoberfest [2021](https://medium.com/@process-analytics/hacktoberfest-2021-with-process-analytics-44eecc238ead) and [2020](https://dev.to/marcin_michniewicz/hacktoberfest-challenge-398g).

So what is Hacktoberfest 🤔? It is the biggest open source event globally, organized by DigitalOcean 🌐. During the month of October, all open source enthusiasts (called _contributors_) are encouraged to participate by contributing to open source projects. On the other side, project _maintainers_ are invited to prepare their repositories to receive valuable contributions from the community.

The Process Analytics team worked hard in September 💪 to create different contribution categories. You can find more details about our participation in [Hacktoberfest 2022 with Process Analytics](https://medium.com/@process-analytics/hacktoberfest-2022-with-process-analytics-add185b50721).

#### Promoting Hacktoberfest with Process Analytics

We have also developed a small application that can be easily customized and used by Hacktoberfest maintainers to promote their own projects. The application shows a simple process explaining how to participate.
It is available as a [live demo](https://cdn.statically.io/gh/process-analytics/bpmn-visualization-examples/62a27db/demo/hacktoberfest-custom-themes/index.html) of the [bpmn-visualization-examples](https://github.com/process-analytics/bpmn-visualization-examples/) repository.

![Hacktoberfest 2022 theme with Process Analytics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbcmkae4fpdpechjmqkz.gif)

### Participation in ICPM, the leading Process Mining conference

Our demo proposal submitted to the [International Conference on Process Mining (ICPM) 2022](https://icpmconference.org/2022/) has been accepted 🥳. What is ICPM❓ It is a leading conference for process analytics researchers, practitioners and vendors. During a one-week event, the latest scientific results, tools and trends are shared and discussed by community leaders. As part of the conference structure, a demo session is organized to showcase innovative process mining tools and applications. The Process Analytics team is going to present a live demo. Join us during the week of October 23-28, 2022 in Bolzano 🔥.

## We are on Discord

We have set up a [Discord](https://discord.com/invite/HafnBYsRXd) server for Process Analytics. You can now join and get in touch with us. Different channels are available for different purposes:

* To get our latest news, visit the **📣 news** channel.
* Use the ❓ **questions-answers** channel if you have any question related to the project usage.
* We welcome any feedback and suggestions. Just ping us on the **feedback** channel and we’ll be more than happy to read/listen.
* Finally, two channels are dedicated to contributors: 💻 **contributing** and **hacktoberfest**.

## bpmn-visualization JS/TS library

In September, we released 2 versions of the bpmn-visualization library: [0.26.1](https://github.com/process-analytics/bpmn-visualization-js/releases/tag/v0.26.1) & [0.26.2](https://github.com/process-analytics/bpmn-visualization-js/releases/tag/v0.26.2).
These 2 releases improve the integration of _bpmn-visualization_ in projects. ### Simplified bundler and framework configuration to integrate bpmn-visualization It is now easier to use _bpmn-visualization_ in Parcel, Webpack and Angular projects 🔥. Previously, it was necessary to add a special configuration to the bundler to integrate _bpmn-visualization_. Starting with version 0.26.2, this is not needed anymore. ### Clarified TypeScript support In the past, the minimum version of TypeScript required by _bpmn-visualization_ to work was not clearly indicated. We’ve fixed that too: 👀 _bpmn-visualization_ currently requires TypeScript 4.5. This requirement is enforced by automatic tests and there is no plan to increase the minimum TypeScript requirement in the upcoming _bpmn-visualization_ versions. ### Plans for the future Here are the topics we will be addressing in the coming months: * Development of Process Analytics/Mining use cases using the bpmn-visualization libraries * Simplification of integration in TypeScript projects ## That’s all folks! We hope you enjoyed this September project news and are looking forward to what the rest of the fall will bring 👋. In the meantime, stay on top of the latest news and releases by following us: * Website: [https://process-analytics.dev](https://process-analytics.dev/?utm_source=dev.to&utm_medium=display&utm_campaign=news) * Twitter: [@ProcessAnalyti1](https://twitter.com/ProcessAnalyti1) * GitHub: [https://github.com/process-analytics](https://github.com/process-analytics) * Discord: [Join our server!](https://discord.com/invite/HafnBYsRXd) *Cover photo by [Lukasz Szmigiel](https://unsplash.com/photos/ps2daRcXYes?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [ Unsplash](https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)*
assynour
1,212,007
Prerendering in Angular - Part IV
The last piece of our quest is to adapt our Angular Prerender Builder to create multiple folders, per...
19,852
2022-10-05T17:05:57
https://garage.sekrab.com/posts/prerendering-in-angular-part-iv
angular, webdev, javascript, tutorial
The last piece of our quest is to adapt our Angular Prerender Builder to create multiple folders, per language, and add the index files into them. Hosting those files depends on the server we are using, and we covered different hosts back in the [Twisting Angular Localization](https://garage.sekrab.com/posts/using-angular-app_base_href-token-to-serve-multilingual-apps-and-hosting-on-netlify) post. If you are going to make the best use of this article, I suggest you read the above post first.

> At this point, I understand how reliant I have become on older posts; to reduce the dependency as much as possible, I will use some hard-coded values.

Find the new builder in the [StackBlitz](https://stackblitz.com/edit/angular-express-prerender) project under the `prerender-builder/prerender-multi` folder.

## Custom index file

As we previously did with the [express prerender function](https://garage.sekrab.com/posts/prerendering-in-angular-part-ii), we have custom `index.[lang].html` files created via a builder we spoke of in our [Replacing i18n series](https://garage.sekrab.com/posts/pre-generating-multiple-index-files-using-angular-builders-and-gulp-tasks-to-serve-a-multilingual-angular-app). In our multilingual prerenderer, we need to loop through the supported languages, and pass the right index file. We can pass the supported languages as part of the schema, or we can pass them from a `writeindex` builder (which we built in the article linked above).
I am not going to use the index builder in our [StackBlitz](https://stackblitz.com/edit/angular-express-prerender) to simplify matters, but if you do, it would be fetched the same way the browser target and the server target are fetched, like this:

```ts
// optional
// index.ts, inside execute function
// let schema have options.indexTarget to point to writeindex target
// get writeindex target to read supported languages and other options
const indexTarget = targetFromTargetString(options.indexTarget);
const indexOptions = (await context.getTargetOptions(indexTarget)) as any;
```

In our StackBlitz, let's add the needed options directly to the schema. The following elements are needed to figure out where the index file is.

```ts
// new options to add to schema
interface Options {
  destination: string; // this is where the index files are
  languages: string[]; // the supported languages, in 2-char codes
  localePath: string; // this is where the locale scripts sit inside the browser target
}
```

Remember we still need to create a server build to generate the `main.js`. Whether with `ngExpressEngine` or not, it must at least export the `AppServerModule` and the `renderModule` function.

## Language files

This part is specific to our chosen way of localization, which is to embed a JavaScript file that runs on both client and server, with the translation keys declared inside a `cr.resources` variable. Here is a [shortcut to the English locale](https://stackblitz.com/edit/angular-runtime-translation?file=src%2Flocale%2Fcr-en.js) used. This is embedded in `index.en.url.html`. For example:

```html
<!-- host/index/index.en.url.html -->
<!DOCTYPE html>
<html lang="en">
<head>
  <base href="/en/" />
  <!-- this file has the resources and embedded in the index file -->
  <script src="locale/cr-en.js" defer></script>
  ...
</html>
```

The Angular `renderModule` function has no context, so we need to impose one.
In our case, the resources are defined in a global variable: `cr`. In `worker.ts` we add the following `global` definition:

```js
// to worker.ts, add the cr resources group like we did previously
// wait for execution to populate
// if you set noImplicitAny in tsconfig, this can be casted to any
// (<any>global).cr = {};
global.cr = {};
```

In the `PreRender` function we import the script and populate the resources:

```ts
// in worker.ts, we need to import the file, which assigns the global.cr[language],
// then we need to populate our server global.cr.resources
await import(localePath);
global['cr'].resources = global['cr'][language];
```

And finally we construct output paths containing the language:

```ts
// worker.ts: correct route and language, like ./client/en/route/index.html
const outputFolderPath = path.join(clientPath, language, route);
```

We change the model passed to `PreRender` to contain the missing information: `language` and `localePath`.

```ts
// worker.ts
// change the render options, and change the PreRender signature
export interface RenderOptions {
  indexFile: string; // this is now the full path of the index file
  // ...
  // add language, and the locale script path
  language: string;
  localePath: string;
}
```

## The main loop

Now back to our `index.ts`, we need to loop through the languages, and prepare two paths: the index path, and the locale path.

```ts
// index.ts
// change the renderer, pass 'options' from execute function
async function _renderUniversal(
  routes: string[],
  context: BuilderContext,
  clientPath: string,
  serverPath: string,
  // passing options
  options: IOptions
): Promise<BuilderOutput> {
  // ... the changes are as follows
  try {
    // loop through options languages
    for (const lang of options.languages) {
      // create path to pass to worker for example: './index/index.lang.url.html'
      const indexFile = path.resolve(context.workspaceRoot, getIndexOutputFile(options, lang));

      // check existence of locale else skip silently, client/locale/cr-en.js for example
      const langLocalePath = path.join(clientPath, `${options.localePath}/cr-${lang}.js`);

      if (!fs.existsSync(langLocalePath)) {
        context.logger.error(`Skipping locale ${lang}`);
        continue;
      }
      // then the results map
      const results = (await Promise.all(
        routes.map((route) => {
          const options: RenderOptions = {
            // ...
            // adding these
            localePath: langLocalePath,
            language: lang
          };
          // ...
        })
      )) as RenderResult[];
      // ...
    }
    // ...
  }
  return { success: true };
}

// the index output file can be as simple or as complicated as the project setup needs
function getIndexOutputFile(options: any, lang: string): string {
  // index/index.lang.url.html as an example
  return `${options.destination}/index.${lang}.url.html`;
  // could be client/en/index.html, like in Surge host
}
```

Back to `angular.json`, we assign the required params in a new target:

```json
"prerender-multi": {
  "builder": "./prerender-builder:prerender-multi",
  "options": {
    "browserTarget": "cr:build:production",
    "serverTarget": "cr:server:production",
    "routes": ["/projects", "/projects/1"],
    "languages": ["en", "ar", "tr"],
    // where are the index files?
    "destination": "./host/index",
    // where in browser target do the locale scripts sit?
    "localePath": "locale"
  }
}
```

Running `ng run cr:prerender-multi` creates language folders under the browser target. Notice that I did not group them under a `static` folder as I did in Part II of this series, to simplify matters, and to be more realistic: we use the Angular builder when we don't have SSR in mind (otherwise use Express to prerender), to host on hosts like Netlify, which favors static files over routing rules. This is why it is important to place the static files directly under the public hosting folder (in our case, the `client` folder).

### Prerendering the home index.html

A note about creating a prerendered version of the homepage itself, like `en/index.html`. If we generate a static `index.html`, things will work as expected, but there is a price to pay. All non-static pages will load the `index.html` first before it kicks off JavaScript to hydrate. That is bad!

- For SEO, the server version is that static physical file of the index, no matter what route is requested
- For user experience, the site may flicker the static index before it reroutes.

In hosts like Netlify, or Firebase where we deliberately create the language subfolders, I would avoid generating the root statically.

```js
// if you have this, avoid prerendering root
// in firebase host
"i18n": {
  "root": "/"
},
"rewrites": [
  {
    "source": "/ar{,/**}",
    "destination": "/ar/index.html"
  },
  {
    "source": "**",
    "destination": "/en/index.html"
  }
]

// in netlify
# [[redirects]]
from = "/en/*"
to = "/en/index.html"
status = 200
```

But if we use `index.[lang].html` on root, and serve it as a rewrite, or as the case is with Surge the `/en/200.html` file, serving a static root is not a problem.

```js
// if you have this, prerendering root is ok
// in firebase host
"rewrites": [
  {
    "source": "/ar{,/**}",
    "destination": "/index.ar.html"
  },
  {
    "source": "**",
    "destination": "/index.en.html"
  }
]

// in netlify
[[redirects]]
from = "/en/*"
to = "/index.en.html"
status = 200
```

I hope I covered all corners of this beast. If you have questions about creating a single build per multiple languages, hosting on different hosts, or prerendering for different languages, or you have an idea or suggestion, please hit the comment box, or the twitter link to let me know.
Let me know if there are other subjects in Angular you would like to see ripped apart 😁 #### RESOURCES - [StackBlitz project](https://stackblitz.com/edit/angular-express-prerender) #### RELATED POSTS - [Alternative way to localize in Angular](https://garage.sekrab.com/posts/alternative-way-to-localize-in-angular)
ayyash
1,212,180
CLI application for working with disposable email service
Here's my latest Go project - Mail.tm CLI, a CLI application for working with Mail.tm disposable...
0
2022-10-05T21:03:42
https://dev.to/abgeo/cli-application-for-working-with-disposable-email-service-1n31
go, cli, cobra, pterm
Here's my latest Go project: Mail.tm CLI, a CLI application for working with the Mail.tm disposable email service.

Disposable email is a free email service that allows receiving emails at a temporary address that self-destructs after a specific time elapses. Mail.tm CLI is a command-line application that works with one such service, Mail.tm. It allows you to manage a disposable mailbox and receive emails from the CLI console.

Stars, issues, and pull requests are welcome!

https://github.com/ABGEO/mailtm
abgeo
1,212,194
What is Android? How can you create an app for it?
Android is a mobile operating system and a platform for mobile apps. It's created by Google, and the...
0
2022-10-05T22:02:23
https://dev.to/tiztechy/how-to-create-android-apps-without-coding-3k7o
android, javascript, beginners, programming
Android is a mobile operating system and a platform for mobile apps. It's created by Google, and the open-source code is available to download free of charge from the Android Open Source Project (AOSP). Android apps run on handheld electronic devices such as smartphones and tablets, connect to the internet, and can be used for communication, computation, and sensing. Android apps are created using a user interface development kit (UI kit) in a programming language such as C++ or Java. Creating Android apps requires knowledge of computer programming languages; however, it's possible to create apps without any coding skills.

Check out: [How to create an android app without coding](https://www.iztechy.com/how-to-create-an-app-on-android-without-coding/)

Android is an open-source platform, which means it's available free of charge to download and use by anyone. Apps can be programmed and customized easily through the Android SDK. Apps are also feasible when creating HTML5 games or when creating web applications using Node.js or Python. Essentially, Android apps are programmable and customizable. Anyone with an internet connection and a device running Android can create apps for the operating system without prior experience in computer programming languages.

It's easy to create Android apps without prior coding experience. First, you need an Android device, either a smartphone or a tablet, to work on. After that, you must download various development kits from the Google Play store to work on your project. Each kit has unique features that make developing an app easier; for example, some kits include an IDE (short for integrated development environment) for code management and profiling. To develop an app, you must have access to the Google Play store and an internet connection. Following these steps will help you create Android apps without prior experience in computer programming languages.
A lot of things can be done with Android without needing to know coding; developers have built some pretty interesting applications without any prior programming experience. For example, self-driving cars run on Google's Android system; these cars navigate using maps, sensors for detecting movement and changing conditions, and their artificial intelligence software. Consequently, developers don't need any coding knowledge to create these apps. Additionally, many medical devices use Android in clinical settings; these include electrocardiogram devices used in diagnosing heart conditions and stress measurement scales used in weight loss programs. Because of all the possibilities with Android, anyone can create useful applications without needing any programming knowledge whatsoever.

It's possible to create Android apps without needing any prior coding experience by relying on tools provided by Google or by third-party developers. These tools include a code editor (such as emEditor), a compiler (such as emcompiler), and other components that support developing mobile applications using Java language syntax files (a project file, such as LEMP, for organizing source code) or C++ language syntax files (for organizing source code). All components are necessary for working with the Java Studio IDE software or the Eclipse IDE software. These programs let you design your application interfaces with custom layouts using XML files that define your application's user interface (UI). You can then export those designs into .xml files that define your application's source code, with the appropriate syntax highlighting and language features for compiling into binary code that runs on your device when you're ready to release it for public use. These tools make it easy to create mobile applications without requiring any prior programming experience or knowledge of computer languages.
It's possible to create Android apps without needing any prior coding experience; just download various development kits from the Google Play store and tools from third parties like emcompiler or emEditor. It's also possible to create Android apps without needing any computer programming knowledge; just follow the steps outlined above!
tiztechy
1,212,316
Learn The Code Used For Apps on Android and iOS Platforms
The most challenging aspect of app development is convincing users to make the most of their...
0
2022-10-06T03:43:13
https://dev.to/ferry/learn-the-code-used-for-apps-on-android-and-ios-platforms-1kcd
android, ios, mobile, startup
The most challenging aspect of app development is convincing users to stay, so that developers make the most of their development time and money. According to Statista, 25% of consumers leave apps after their first use, while 71% of mobile users worldwide admit to deleting apps three months after downloading them.

Why does this happen? One of the key factors is the user experience of your app. According to the site Whispir, over 44% of errors were reported by active users. Reinforced by a survey from Blancco, 58% of iOS apps crash due to insufficient testing. It's terrible, isn't it? How do you prevent that from happening in your application? Know the programming language you are using, so the occurrence of these errors can be minimized from the start of your development. If there are no errors or crashes, the user will have a great experience with your application.

Furthermore, this article will give you new insights into which tech stacks are safe and solid for developing applications.

## Developing apps for the iOS platform

The iOS platform is a proprietary platform created by Apple. It is available for phone devices (iPhone) and tablet devices (iPad). You can develop apps for the iOS platform and then target the same app to iPhones and iPads. Two programming languages are used in developing iOS platform applications: Objective-C and Swift. Here is the explanation.

## Objective-C

Objective-C was the first language supported by Apple for developing iOS mobile apps. It is a stable and mature language, having been in widespread use for several years. However, since Apple introduced Swift, Objective-C's popularity for new iOS mobile development has plummeted.

## Swift

Swift is a more accessible, straightforward, and concise language compared to Objective-C. Swift and Objective-C can coexist: libraries and utilities written in Objective-C can be used from Swift.
**Read Also**: [How to Creating a Social Media App Super Simple](https://www.emveep.com/blog/create-social-media-app/)

## Developing mobile apps for the Android platform

Android is an open-source platform mainly developed and promoted by Google. Android devices come in many form factors, from phones to tablets, as different manufacturers with multiple models address different user preferences. Applications can be built for Android devices using the native Android SDK with Java and Kotlin, or with cross-platform technologies that build on top of the SDK but target Android.

## Java

Java has been the default language for writing Android applications since the Android platform was introduced in 2008. Java is a modern and purely object-oriented language (compared to C++) and was quickly adopted by the Android platform. As a result, it is the most widely used language for Android application development.

## Kotlin

Last year, Google declared that the "language of choice for Android app developers is Kotlin." Java and Kotlin both compile to bytecode and offer their own trade-offs, which makes them great for developers who want to migrate to the newer language over time or who enjoy the flexibility of choosing one over the other.

**Read Also**: [App Development Cost: Is It Cheaper to Outsource or In-House?](https://www.emveep.com/blog/app-development-cost/)

## Developing mobile apps for both the iOS and Android platforms

What if you want to build on two platforms at once? Fortunately, some technologies allow you to write in one language or framework and target applications for both platforms. The following is a list of tech stacks that support building applications on both platforms.

## Dart and Flutter

You use Google's Flutter framework to write mobile apps for iOS and Android. To use Flutter, you need to learn the Dart programming language.
One of Flutter's unique abilities is that it comes with its own library of UI widgets based on Google's Material Design, as well as iOS-like UI widgets.

## JavaScript and React Native

React Native uses JavaScript as the programming language to write mobile apps. No HTML is used in writing React Native apps. The code is interpreted at runtime and executed using a "bridge" to access the device's native SDK capabilities. In addition, React Native apps use the platform's native UI libraries to render UI components, so the UI is completely native.

## C# and Xamarin

C# is an object-oriented programming language developed by Microsoft. The Xamarin framework (acquired by Microsoft) allows you to program in C# against the .NET framework. The .NET framework is implemented on the iOS platform using an open-source implementation called Mono. C# code is cross-compiled and runs natively on iOS or Android devices. This allows for seamless execution, which is very similar to native development. In addition, there are extensions named Xamarin.iOS and Xamarin.Android that you can use to access native iOS and Android capabilities from C#. For iOS, you need Xcode on a Mac machine to create installable iOS apps.

## The Bottom Line

The point is to understand every product or piece of software that you develop. Of course, in development it is challenging to avoid bugs and errors. However, if you know the profile of the programming language you are using, this will minimize the mistakes and bugs that arise. So, what application are you planning to make? Let's complete your development process.
**More Resources**: [7 Secrets You'll Get from Developing Custom Apps](https://www.emveep.com/blog/7-secret-you-get-from-developing-custom-apps/) [App Development Cost: Is It Cheaper to Outsource or In-House?](https://www.emveep.com/blog/app-development-cost/) [How to Creating a Social Media App Super Simple](https://www.emveep.com/blog/create-social-media-app/)
ferry
1,212,322
Image search engine with React JS - React Query 🔋
This time we will make an image search engine with the help of Unsplash API and React Query, with...
0
2022-10-07T12:28:13
https://dev.to/franklin030601/image-search-engine-with-react-js-react-query-39
react, javascript, tutorial, beginners
This time we will make an image search engine with the help of the [Unsplash API](https://unsplash.com/) and **React Query**: with very few lines of code, React Query will noticeably improve your application's performance!

> 🚨 Note: This post requires you to know the basics of **React with TypeScript (basic hooks)**.

> Any kind of feedback is welcome, thanks and I hope you enjoy the article. 🤗

&nbsp;

## Table of Contents.

> 📌 [Technologies to be used.](#1)
> 📌 [Creating the project.](#2)
> 📌 [First steps.](#3)
> 📌 [Creating the form.](#4)
>> 📌 [Handling the form submit event.](#5)
>
> 📌 [Creating the cards and doing the image search.](#6)
>> 📌 [Making the request to the API.](#7)
>
> 📌 [Conclusion.](#8)
>> 📌 [Demo of the application.](#9)
>> 📌 [Source code.](#10)

&nbsp;

## 💧 Technologies to be used. <a id="1"></a>

- ▶️ React JS (v 18)
- ▶️ Vite JS
- ▶️ TypeScript
- ▶️ React Query
- ▶️ Axios
- ▶️ [Unsplash API](https://unsplash.com/)
- ▶️ Vanilla CSS (You can find the styles in the repository at the end of this post)

## 💧 Creating the project. <a id="2"></a>

We will name the project: **`search-images`** (optional, you can name it whatever you like).

```bash
npm init vite@latest
```

We create the project with Vite JS and select React with TypeScript.

Then we run the following command to navigate to the directory just created.

```bash
cd search-images
```

Then we install the dependencies.

```bash
npm install
```

Then we open the project in a code editor (in my case VS Code).

```bash
code .
```

## 💧 First steps. <a id="3"></a>

We create the following folders:

- **src/components**
- **src/interfaces**
- **src/hooks**
- **src/utils**

Inside the **src/App.tsx** file we delete everything and create a component that displays a `hello world`.
```tsx
const App = () => {
  return (
    <div>Hello world</div>
  )
}
export default App
```

Then we create, inside the folder **src/components**, the file **Title.tsx** and add the following code, which only shows a simple title.

```tsx
export const Title = () => {
  return (
    <>
      <h1>Search Image</h1>
      <hr />
    </>
  )
}
```

Inside that same folder we are going to create a **Loading.tsx** file and add the following component, which will act as a loading indicator while the information is being fetched.

```tsx
export const Loading = () => {
  return (
    <div className="loading">
      <div className="spinner"></div>
      <span>Loading...</span>
    </div>
  )
}
```

Next, let's define the API response interface: inside the folder **src/interfaces** we create a file `index.ts` and add the following interfaces.

```ts
export interface ResponseAPI {
  results: Result[];
}

export interface Result {
  id: string;
  description: null | string;
  alt_description: null | string;
  urls: Urls;
  likes: number;
}

export interface Urls {
  small: string;
}
```

The API returns more information, but I only need this for the moment.

Once we have the title, let's place it in the **src/App.tsx** file.

```tsx
import { Title } from './components/Title';

const App = () => {
  return (
    <div>
      <Title />
    </div>
  )
}
export default App
```

and it would look something like this 👀 (you can check the styles in the code on GitHub, the link is at the end of this article).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c6btiy0lf5t8g21do1k8.png)

## 💧 Creating the form. <a id="4"></a>

Inside the folder **src/components** we create the file `Form.tsx` and add the following form.

```tsx
export const Form = () => {
  return (
    <form>
      <input type="text" placeholder="Example: superman" />
      <button>Search</button>
    </form>
  )
}
```

Now let's place it in **src/App.tsx**.
```tsx
import { Title } from './components/Title';
import { Form } from './components/Form';

const App = () => {
  return (
    <div>
      <Title />
      <Form/>
    </div>
  )
}
export default App
```

And it should look something like this 👀.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7nv7ugv6864je25pbp9c.png)

### 💧 Handling the form submit event. <a id="5"></a>

We are going to pass a function named **handleSubmit** to the onSubmit event of the form.

```tsx
export const Form = () => {
  return (
    <form onSubmit={handleSubmit}>
      <input type="text" placeholder="Example: superman" />
      <button>Search</button>
    </form>
  )
}
```

This function will receive the event, which gives us everything we need to recover the data of each input inside the form (in this case, just one input).

```ts
const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {

}
```

First, we prevent the default behavior.

```ts
const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault()
}
```

Then we create a variable (target) and cast the event's target property, so that it helps us with autocompletion.

```ts
const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault()
  const target = e.target as HTMLFormElement;
}
```

Now we are going to use the fromEntries function of the Object instance, passing it a new instance of FormData, which in turn receives the target property of the event. This returns each one of the values inside our form, which we can then destructure, although autocompletion doesn't help us when destructuring each input value.

```ts
const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault()
  const target = e.target as HTMLFormElement;
  const { form } = Object.fromEntries(new FormData(target))
}
```

By the way, note that I destructure a property called **form**. Where do I get that from?
Well, that depends on the **value** you have given to the **name** attribute of your input.

```html
<input type="text" placeholder="Example: superman" name="form" />
```

Now that we have the value of the input, let's validate it: if its length is 0, we do nothing. If that condition is not met, we have our keyword to search images with. We also reset the form and return the focus to it.

```ts
const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault()
  const target = e.target as HTMLFormElement;
  const { form } = Object.fromEntries(new FormData(target))

  if (form.toString().trim().length === 0) return

  target.reset()
  target.focus()
}
```

Now we'll use a state to hold that input value. We create the state and set it to the value of our input.

```tsx
const [query, setQuery] = useState('')

const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault()
  const target = e.target as HTMLFormElement;
  const { form } = Object.fromEntries(new FormData(target))

  if (form.toString().trim().length === 0) return

  setQuery(form.toString())
  target.reset()
  target.focus()
}
```

All good, but now the problem is that all of this lives in the **Form.tsx** component, and we need to share the **query** state to communicate which image we are going to look for. So the best thing to do is to move this code into a custom hook first.

Inside the folder **src/hooks** we create a file **index.tsx** and add the following function:

```tsx
export const useFormQuery = () => {

}
```

We move the **handleSubmit** function inside the hook, along with the state, and return the value of the state (**query**) and the **handleSubmit** function.
```tsx import { useState } from 'react'; export const useFormQuery = () => { const [query, setQuery] = useState('') const handleSubmit = async (e: React.FormEvent<HTMLFormElement>) => { e.preventDefault() const target = e.target as HTMLFormElement; const { form } = Object.fromEntries(new FormData(target)) if (form.toString().trim().length === 0) return setQuery(form.toString()) target.reset() target.focus() } return { query, handleSubmit } } ``` Then let's call the hook in the parent component of **Form.tsx** which is **src/App.tsx** and pass to Form.tsx the function **handleSubmit**. ```tsx import { Title } from './components/Title'; import { Form } from './components/Form'; import { useFormQuery } from "./hooks"; const App = () => { const { handleSubmit, query } = useFormQuery() return ( <div> <Title /> <Form handleSubmit={handleSubmit} /> </div> ) } export default App ``` and to the **Form.tsx** component we add the following interface. ```tsx interface IForm { handleSubmit: (e: React.FormEvent<HTMLFormElement>) => void } export const Form = ({ handleSubmit }: IForm) => { return ( <form onSubmit={handleSubmit}> <input type="text" name="form" placeholder="Example: superman" /> <button>Search</button> </form> ) } ``` ## 💧 Creating the cards and doing the image search. <a id="6"></a> Go to the **src/components** folder and create 2 new files. 1 - **Card.tsx** Here we will only make a component that receives as props the information of the image. The interface Result has already been defined before. ```tsx import { Result } from "../interface" interface ICard { res: Result } export const Card = ({ res }: ICard) => { return ( <div> <img src={res.urls.small} alt={res.alt_description || 'photo'} loading="lazy" /> <div className="hidden"> <h4>{res.description}</h4> <b>{res.likes} ❤️</b> </div> </div> ) } ``` 2 - **GridResults.tsx** For the moment we are only going to make the shell of this component. This component will receive the query (image to search) by props. 
This is where the request to the API will be made and the cards displayed.

```tsx
import { Card } from './Card';

interface IGridResults {
  query: string
}

export const GridResults = ({ query }: IGridResults) => {
  return (
    <>
      <p className='no-results'>
        Results with: <b>{query}</b>
      </p>

      <div className='grid'>
        {/* TODO: map to data and show cards */}
      </div>
    </>
  )
}
```

Now let's use our **GridResults.tsx** in **src/App.tsx**. We will display it conditionally: if the value of the **query** state (the image to search for) has a length greater than 0, the component is displayed and shows the results that match the search.

```tsx
import { Title } from './components/Title';
import { Form } from './components/Form';
import { GridResults } from './components/GridResults';

import { useFormQuery } from "./hooks";

const App = () => {

  const { handleSubmit, query } = useFormQuery()

  return (
    <div>
      <Title />
      <Form handleSubmit={handleSubmit} />
      {query.length > 0 && <GridResults query={query} />}
    </div>
  )
}

export default App
```

### 💧 Making the request to the API. <a id="7"></a>

To make the request we will take a better approach than the typical fetch inside a useEffect: we will use axios and [react query](https://tanstack.com/query/v4/?from=reactQueryV3&original=https://react-query-v3.tanstack.com/).

React Query makes it easy to fetch, cache and manage data, and it is what the React team recommends instead of doing a simple fetch request inside a useEffect.

Now let's go to the terminal and install these dependencies:

```
npm install @tanstack/react-query axios
```

After installing the dependencies, we need to wrap our app with the React Query provider. To do this, we go to the highest point of our app, which is the **src/main.tsx** file. First we create the React Query client, then we wrap the App component with the **QueryClientProvider** and pass our **queryClient** in its **client** prop.
```tsx
import React from 'react'
import ReactDOM from 'react-dom/client'
import { QueryClient, QueryClientProvider } from '@tanstack/react-query'
import App from './App'
import './index.css'

const queryClient = new QueryClient()

ReactDOM.createRoot(document.getElementById('root') as HTMLElement).render(
  <React.StrictMode>
    <QueryClientProvider client={queryClient}>
      <App />
    </QueryClientProvider>
  </React.StrictMode>
)
```

Now in the **GridResults.tsx** component ...

We will use a react-query hook, **useQuery**, which receives 3 parameters, but for the moment we will only use the first two.

- The first parameter is the queryKey, an array of values (with values as simple as a string or as complex as an object) used to identify the data stored in the cache. In this case we send an array with the value of the query.

```ts
useQuery([query])
```

- The second parameter is the queryFn, the function that makes the request and returns a promise resolved with the data or an error. For this we are going to create our own function: in the **src/utils** folder we create the **index.ts** file and add a function. This function is asynchronous, receives a query of type string, and returns a promise of type **ResponseAPI**.

```ts
export const getImages = async (query: string): Promise<ResponseAPI> => {

}
```

We build the URL. It is worth mentioning that we need an API key to use this API; just create an [Unsplash account](https://unsplash.com/join), create an app and get the access key.

```ts
export const getImages = async (query: string): Promise<ResponseAPI> => {
  const url = `https://api.unsplash.com/search/photos?query=${query}&client_id=${ACCESS_KEY}`
}
```

Then we do a try/catch in case something goes wrong. Inside the **try** we make the request with the help of axios: we do a get with the url, destructure the data property and return it. In the **catch** we simply throw an error with the message.
```ts
import axios, { AxiosError } from 'axios';
import { ResponseAPI } from "../interface"

const ACCESS_KEY = import.meta.env.VITE_API_KEY as string

export const getImages = async (query: string): Promise<ResponseAPI> => {
  const url = `https://api.unsplash.com/search/photos?query=${query}&client_id=${ACCESS_KEY}`

  try {
    const { data } = await axios.get(url)
    return data
  } catch (error) {
    throw new Error((error as AxiosError).message)
  }
}
```

Now let's use our **getImages** function by sending it to the hook. But since this function receives a parameter, we need to send it in the following way: we create a new function that returns getImages called with the query that arrives via props.

❌ Don't do it that way.

```ts
useQuery([query], getImages(query))
```

✅ Do it like this.

```ts
useQuery([query], () => getImages(query))
```

And to get typing we specify that the data is of type ResponseAPI.

```ts
useQuery<ResponseAPI>([query], () => getImages(query))
```

Finally, we destructure what we need from the hook:

- **data**: The data returned by our **getImages** function.
- **isLoading**: boolean value that tells us when a request is being made.
- **error**: the error message, if there is one; undefined by default.
- **isError**: boolean value that indicates if there is an error.

```ts
const { data, isLoading, error, isError } = useQuery<ResponseAPI>([query], () => getImages(query))
```

Then it would look like this.

```tsx
import { useQuery } from '@tanstack/react-query';

import { Card } from './Card';

import { getImages } from "../utils"
import { ResponseAPI } from '../interface';

interface IGridResults {
  query: string
}

export const GridResults = ({ query }: IGridResults) => {

  const { data, isLoading, error, isError } = useQuery<ResponseAPI>([query], () => getImages(query))

  return (
    <>
      <p className='no-results'>
        Results with: <b>{query}</b>
      </p>

      <div className='grid'>
        {/* TODO: map to data and show cards */}
      </div>
    </>
  )
}
```

Now that we have the data, let's render a few things here.
1 - First a condition: if isLoading is true, we show the **Loading.tsx** component.

2 - Second, once the loading has finished, we check whether there is an error, and if there is, we show it.

3 - Then we add a condition inside the p element: if there are no search results, we display one text or the other.

4 - Finally, we map over the data to show the images.

```tsx
import { useQuery } from '@tanstack/react-query';
import { AxiosError } from 'axios';

import { Card } from './Card';
import { Loading } from './Loading';

import { getImages } from "../utils"
import { ResponseAPI } from '../interface';

interface IGridResults {
  query: string
}

export const GridResults = ({ query }: IGridResults) => {

  const { data, isLoading, error, isError } = useQuery<ResponseAPI>([query], () => getImages(query))

  if (isLoading) return <Loading />

  if (isError) return <p>{(error as AxiosError).message}</p>

  return (
    <>
      <p className='no-results'>
        {data && data.results.length === 0 ? 'No results with: ' : 'Results with: '}
        <b>{query}</b>
      </p>

      <div className='grid'>
        {data?.results.map(res => (<Card key={res.id} res={res} />))}
      </div>
    </>
  )
}
```

And that's it, we could leave it like that and it would look very nice.

Showing the loading:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v2odohrc3m8co6t75itm.png)

Showing search results:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3r4d8x4qy7ve8o61i0sq.png)

But I would like to block the form while the loading is active. For this, the **Form.tsx** component must receive another prop, **isLoading**, and place it in the **disabled** property of both the input and the button.
```tsx
interface IForm {
  handleSubmit: (e: React.FormEvent<HTMLFormElement>) => void
  isLoading: boolean
}

export const Form = ({ handleSubmit, isLoading }: IForm) => {
  return (
    <form onSubmit={handleSubmit}>
      <input type="text" name="form" disabled={isLoading} placeholder="Example: superman" />
      <button disabled={isLoading}>Search</button>
    </form>
  )
}
```

In our **useFormQuery** hook we create a new state that starts with the value false.

```ts
const [isLoading, setIsLoading] = useState(false)
```

And a function to update this state:

```ts
const handleLoading = (loading: boolean) => setIsLoading(loading)
```

And we return the value of **isLoading** and the **handleLoading** function.

```tsx
import { useState } from 'react';

export const useFormQuery = () => {

  const [query, setQuery] = useState('')
  const [isLoading, setIsLoading] = useState(false)

  const handleSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault()
    const target = e.target as HTMLFormElement;
    const { form } = Object.fromEntries(new FormData(target))

    if (form.toString().trim().length === 0) return

    setQuery(form.toString())
    target.reset()
    target.focus()
  }

  const handleLoading = (loading: boolean) => setIsLoading(loading)

  return {
    query,
    isLoading,
    handleSubmit,
    handleLoading
  }
}
```

In **src/App.tsx** we destructure **isLoading**, **handleSubmit** and **handleLoading** from the hook. We send **isLoading** to the **Form** component and **handleLoading** to the **GridResults** component.
```tsx
import { Title } from './components/Title';
import { Form } from './components/Form';
import { GridResults } from './components/GridResults';

import { useFormQuery } from "./hooks";

const App = () => {

  const { handleLoading, handleSubmit, isLoading, query } = useFormQuery()

  return (
    <div>
      <Title />
      <Form handleSubmit={handleSubmit} isLoading={isLoading} />
      {query.length > 0 && <GridResults query={query} handleLoading={handleLoading} />}
    </div>
  )
}

export default App
```

In the **GridResults.tsx** component we receive and destructure the new **handleLoading** prop. Inside the component, before the conditions, we add a useEffect that calls **handleLoading** with the **isLoading** value given to us by the **useQuery** hook. Since this effect must run every time the **isLoading** value changes, we add it as a dependency of the useEffect.

```tsx
import { useEffect } from 'react';

import { AxiosError } from 'axios';
import { useQuery } from '@tanstack/react-query';

import { Card } from './Card';
import { Loading } from './Loading';

import { getImages } from "../utils"
import { ResponseAPI } from '../interface';

interface IGridResults {
  handleLoading: (e: boolean) => void
  query: string
}

export const GridResults = ({ query, handleLoading }: IGridResults) => {

  const { data, isLoading, error, isError } = useQuery<ResponseAPI>([query], () => getImages(query))

  useEffect(() => handleLoading(isLoading), [isLoading])

  if (isLoading) return <Loading />

  if (isError) return <p>{(error as AxiosError).message}</p>

  return (
    <>
      <p className='no-results'>
        {data && data.results.length === 0 ? 'No results with: ' : 'Results with: '}
        <b>{query}</b>
      </p>

      <div className='grid'>
        {data?.results.map(res => (<Card key={res.id} res={res} />))}
      </div>
    </>
  )
}
```

And that's it: this way the form is blocked while the request is running.

## 💧 Conclusion.
<a id="8"></a>

I hope you liked this post and that it helped you understand a new approach to making requests with **react-query**, and that it grows your interest in this widely used and very useful library, which can bring noticeable improvements to your app's performance. 🤗

If you know a different or better way to build this application, feel free to comment. 🙌

> **I invite you to check my portfolio in case you are interested in contacting me for a project! [Franklin Martinez Lucas](https://franklin-dev.netlify.app/)**

> 🔵 Don't forget to follow me on twitter too: [**@Frankomtz361**](https://twitter.com/Frankomtz361)

{% embed https://twitter.com/Frankomtz361/status/1575330172819193857?s=20&t=AVAVQ1am_AkFb2TyMo9S-A %}

### 💧 Demo of the application. <a id="9"></a>

[https://search-image-unsplash.netlify.app](https://search-image-unsplash.netlify.app/)

### 💧 Source code. <a id="10"></a>

{% embed https://github.com/Franklin361/search-images %}
franklin030601
1,212,452
Mapping Records to arrays
I needed this functionality to parse a configuration regardless of whether it was supplied as an...
0
2022-10-06T08:14:05
https://dev.to/brense/ever-needed-to-map-a-record-to-an-array-ml1
typescript, webdev
I needed this functionality to parse a configuration regardless of whether it was supplied as an array or a Record. This function converts the Record into an array of the same type:

```typescript
function mapRecordToArray<T, R extends Omit<T, K>, K extends keyof T>(record: Record<string, R>, recordKeyName: K) {
  return Object.keys(record).map(key => ({
    ...record[key as keyof typeof record],
    [recordKeyName]: key
  })) as T[]
}
```

It can be used like this:

```typescript
type Field = {
  name: string,
  label: string,
}

type Collection = {
  name: string,
  label: string,
  fields: Record<string, Omit<Field, 'name'>> | Field[]
}

const collections: Collection[] = []

// ...

collections.forEach(collection => {
  const fields = Array.isArray(collection.fields) ? collection.fields : mapRecordToArray(collection.fields, 'name')
})

// Function typings will look like this:
// function mapRecordToArray<Field, Omit<Field, "name">, "name">(record: Record<string, Omit<Field, "name">>, recordKeyName: "name"): Field[]
// Notice the return type: Field[]
```
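To make the runtime behavior concrete, here is a small self-contained sketch with hypothetical sample data (the `fields` record and its keys are invented for illustration); it repeats the helper so it can run on its own:

```typescript
// Same helper as above, repeated so the snippet is self-contained.
function mapRecordToArray<T, R extends Omit<T, K>, K extends keyof T>(record: Record<string, R>, recordKeyName: K) {
  return Object.keys(record).map(key => ({
    ...record[key as keyof typeof record],
    [recordKeyName]: key
  })) as T[]
}

type Field = {
  name: string,
  label: string,
}

// Hypothetical sample data: each record key becomes the `name` of a field.
const fields: Record<string, Omit<Field, 'name'>> = {
  title: { label: 'Title' },
  body: { label: 'Body' },
}

const asArray = mapRecordToArray<Field, Omit<Field, 'name'>, 'name'>(fields, 'name')
// asArray: [{ label: 'Title', name: 'title' }, { label: 'Body', name: 'body' }]
```

Note that the type arguments are given explicitly here; in the `forEach` above they are inferred from the context of `collection.fields`.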
brense
1,212,479
CSS defaults styles
A personal CSS project to embed in your webpage that sets a bunch of styles on your HTML elements. You...
0
2022-10-06T09:14:34
https://dev.to/ptibat/css-defaults-styles-7f
A personal CSS project to embed in your webpage that sets a bunch of styles on your HTML elements. You can get it on Github:

{% embed https://github.com/ptibat/css_defaults %}
ptibat
1,214,713
How to create a video streaming app using React and Vime
Video streaming has transformed the way video media is delivered to us online as it allows users to...
0
2022-12-20T12:41:42
https://dev.to/gbadeboife/how-to-create-a-video-streaming-app-using-react-and-vime-4fb3
javascript, react, webdev, tutorial
Video streaming has transformed the way video media is delivered to us online, as it allows users to watch videos without having to download them. This is highly convenient, as it saves us the time spent downloading a video and the storage space required for downloaded content. It is a key resource for information sharing in today's world, serving educational, entertainment, professional, and other functions.

## Vime

[Vime](https://vimejs.com/) is a simple framework that provides a flexible, extensible media player and can be used with a variety of JavaScript frameworks like React, Vue, Angular, and Svelte. This project will utilize a sample clip from the Vime [documentation](https://vimejs.com/getting-started/player).

## Prerequisites for this project

- Knowledge of CSS, React and React hooks
- Latest version of Node.js installed

This project's source code can be found on [Github](https://github.com/Gbadeboife/React-video-streaming-app-), and a live demo of the app is hosted on [Vercel](https://react-video-streaming-app.vercel.app/).

## Setting up the project

First, we create a new React app with the following line of code:

```
npx create-react-app react-streaming-app
```

After that, run the following line of code in the terminal to include [Vime](https://vimejs.com/) in the project:

```
npm i @vime/core @vime/react
```

Running `npm start` will launch the project on the local development server at http://localhost:3000/ in our browser.

## Constructing our components

Now that we've installed our app's dependencies, we create a `components` folder to store the two components that will be used in this project, `Home` and `Video`.

![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k78jmftpyh1oi5q5g3tr.png)

First of all, we create the `Home` component, which gives access to all the videos available in our app.
```jsx
//components/Home.js
import React from "react";
import { Link } from "react-router-dom";

function Home(){
  return(
    <div className="home">
      <Link to="/Video">
        <img src="https://media.vimejs.com/poster.png"/>
        <p>Video 1</p>
      </Link>
    </div>
  )
}

export default Home
```

To prevent the site from refreshing when the video link is clicked, we utilize React Router. In `Home.js`, React Router's `Link` is imported; `Link` creates an anchor tag that leads to the page specified in its `to` prop, in this case `/Video`, which holds the video to be used.

```jsx
//components/Video.js
import React from "react";
import { Player, DefaultUi, Video } from "@vime/react";

function VideoPlayer() {
  return (
    <div className="player">
      {/* this inserts our video into the app */}
      <Player>
        <Video crossOrigin="" poster="https://media.vimejs.com/poster.png">
          {/* specify the location of the video to be used */}
          <source data-src="https://media.vimejs.com/720p.mp4" type="video/mp4" />
          <track
            default
            kind="subtitles"
            src="https://media.vimejs.com/subs/english.vtt"
            srcLang="en"
            label="English"
          />
        </Video>
        {/* this loads the default UI of the vime framework */}
        <DefaultUi></DefaultUi>
      </Player>
    </div>
  );
}

export default VideoPlayer
```

In the `VideoPlayer` component above, we import the dependencies needed to display the video from `@vime/react` and then create the body for the video. [Providers](https://vimejs.com/getting-started/providers) load the video to be used into the app, and Vime supports several, including YouTube, Vimeo, and Dash. The `poster` prop of `Video` fetches a thumbnail to be displayed while the video is loading, `source` contains the link to the clip to be loaded, and `DefaultUi` ensures that the default Vime interface is used for the video; note that a few extra dependencies can be imported to create a custom user interface for the video.

Now that our components are defined, they are imported into `App.js`, where they are displayed.
## Importing defined components into the `App` component

```jsx
//App.js
import React from "react";
import { BrowserRouter as Router, Route, Switch } from "react-router-dom";
import VideoPlayer from "./components/Video";
import "./App.css"
import Home from "./components/Home";

function App(){
  return(
    <div className="player">
      <Router>
        <Switch>
          {/* displays only on the home page */}
          <Route exact path="/"><Home/></Route>
          {/* displays while the path of the app is `/Video` */}
          <Route path="/Video"><VideoPlayer/></Route>
        </Switch>
      </Router>
    </div>
  )
}

export default App
```

In the `App` component, `Router` and `Route` are both imported, and a `Router` component is required to house the Routes being used. `Route` renders the component it contains when its `path` matches the URL of the page. The `/` path of the first Route would match the initial page and every other page of the app, making its component render on every page; because of this, we include `exact` in that `Route` so that it only appears on the initial page. The second Route has a `path` of `/Video`, which can be visited when the `Link` in the `Home` component is clicked.

## Styling the app

```css
.home{
  width: 30%;
  text-align: center;
  border: 1px solid black;
  margin: auto;
  padding-top: 10px;
  background-color: rgb(173, 212, 207);
  border-radius: 5px;
  font-weight: 700;
  box-shadow: #73ffe0 0px 0px 20px 10px;
}
.home p{
  font-size: 20px;
}
.home img{
  width: 95%;
}
span{
  font-size: 30px;
  position: absolute;
  top: 0;
  z-index: 5;
  left: 0;
  color: white;
  padding: 30px 100px;
}
a{
  text-decoration: none;
}
.player{
  padding-top: 10vh;
}
.video{
  width: 70%;
  margin: auto;
}
.message{
  border: 1px solid black;
  font-size: 40px;
}
.message button{
  padding: 15px 30px;
}
```

Let's have a peek at the finished product now that our app has been styled.

![](https://i.imgur.com/jPFFQKj.gif)

## Conclusion

This article demonstrates how a video streaming app can be created using React and the Vime framework, which provides several useful features.
Feel free to expand the app by adding new videos and features. ## Resources - [Vime documentation](https://vimejs.com/)
gbadeboife
1,248,560
Introduction to Cuelang
I bet a phrase is crossing your mind right now: "Yet another programming...
0
2022-11-08T19:22:09
https://dev.to/eminetto/introducao-a-cuelang-2bgc
cuelang, cue
---
title: Introduction to Cuelang
published: true
description:
tags: cuelang, cue
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2022-11-08 19:20 +0000
---

I bet a phrase is crossing your mind right now:

> "Yet another programming language"?

Easy, easy, stick with me and it will make sense :)

Unlike other languages such as Go or Rust, which are "general purpose", [CUE](https://cuelang.org) has some very specific purposes. Its name is actually an acronym that stands for "Configure Unify Execute" and, according to the official documentation:

> While the language is not a general-purpose programming language, it has many applications, such as data validation and modeling, configuration, querying, code generation, and even scripting.

It is described as a "superset of JSON" and strongly inspired by Go. Or, as I like to think of it:

> "imagine that Go and JSON had a torrid romance and the fruit of that union was CUE" :D

In this post I will present two scenarios where the language can be used, but the [official documentation](https://cuelang.org/docs/) has more examples and a good amount of important information worth consulting.

## Validating data

The first scenario where CUE stands out is data validation. It has [native support](https://cuelang.org/docs/integrations/) for validating YAML, JSON, Protobuf, among others. As a case study, I will use some example [configuration files](https://doc.traefik.io/traefik/user-guides/crd-acme/) from the Traefik project, an API Gateway.
The following YAML defines a valid route for Traefik:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`your.example.com`) && PathPrefix(`/notls`)
      kind: Rule
      services:
        - name: whoami
          port: 80
```

With this information we can define a new route in the API Gateway, but if something is wrong we can cause some problems. That is why it is important to have an easy way to detect problems in configuration files like this one. And this is where CUE shows its strength.

The first step is to have the language installed on the machine. Since I am using macOS, it was enough to run:

```bash
brew install cue-lang/tap/cue
```

The official [documentation](https://cuelang.org/docs/install/) shows how to install it on other operating systems.

Now we can use the `cue` command to turn this YAML into a `schema` in the CUE language:

```bash
cue import traefik-simple.yaml
```

This creates a file called `traefik-simple.cue` with the content:

```go
apiVersion: "traefik.containo.us/v1alpha1"
kind:       "IngressRoute"
metadata: {
	name:      "simpleingressroute"
	namespace: "default"
}
spec: {
	entryPoints: [
		"web",
	]
	routes: [{
		match: "Host(`your.example.com`) && PathPrefix(`/notls`)"
		kind:  "Rule"
		services: [{
			name: "whoami"
			port: 80
		}]
	}]
}
```

It is a literal translation from YAML to CUE, but let's edit it to create some validation rules. The final content of `traefik-simple.cue` looks like this:

```go
apiVersion: "traefik.containo.us/v1alpha1"
kind:       "IngressRoute"
metadata: {
	name:      string
	namespace: string
}
spec: {
	entryPoints: [
		"web",
	]
	routes: [{
		match: string
		kind:  "Rule"
		services: [{
			name: string
			port: >0 & <= 65535
		}]
	}]
}
```

Some items stayed exactly the same, such as `apiVersion: "traefik.containo.us/v1alpha1"` and `kind: "IngressRoute"`.
This means these are the exact values expected in every file validated by this `schema`. Any different value will be considered an error. Other pieces of information were changed, such as:

```go
metadata: {
	name:      string
	namespace: string
}
```

In this snippet we define that the content of `name`, for example, can be any valid `string`. In the snippet `port: >0 & <= 65535` we perform an important validation by defining that this field only accepts a number greater than 0 and less than or equal to 65535.

Now we can validate whether the YAML content conforms to the `schema` using the command:

```bash
cue vet traefik-simple.cue traefik-simple.yaml
```

If everything is correct, nothing is printed on the command line. To demonstrate how it works, I changed `traefik-simple.yaml`, setting the value of `port` to `0`. Running the command again shows the error:

```bash
cue vet traefik-simple.cue traefik-simple.yaml
spec.routes.0.services.0.port: invalid value 0 (out of bound >0):
    ./traefik-simple.cue:16:10
    ./traefik-simple.yaml:14:18
```

If we change one of the expected values, for example `kind: IngressRoute` to something different such as `kind: Ingressroute`, the result is a validation error:

```bash
cue vet traefik-simple.cue traefik-simple.yaml
kind: conflicting values "IngressRoute" and "Ingressroute":
    ./traefik-simple.cue:2:13
    ./traefik-simple.yaml:2:8
```

This makes it very easy to find errors in a Traefik route configuration. The same can be applied to other formats such as JSON, Protobuf, Kubernetes files, etc.

I see a very clear use case for this validation power: adding a step to CI/CD pipelines that uses CUE to validate configurations at `build` time, avoiding problems when deploying and running applications. Another scenario is adding the commands to a Git `hook`, to validate configurations while still in the development environment.
Another interesting feature of CUE is the ability to create `packages`, which contain a set of `schemas` and can be shared between projects, just like a Go `package`. The [official documentation](https://cuelang.org/docs/concepts/packages/#packages) shows how to use this feature, as well as how to use some of the language's [built-in](https://cuelang.org/docs/concepts/packages/#builtin-packages) `packages`, such as `strings`, `lists`, `regex`, etc. We will use a `package` in the next example.

## Configuring applications

Another scenario where CUE can be used is as an application configuration language. Those who know me know that I have no fondness for YAML (to say the least), so any other option catches my attention. But CUE has some interesting advantages, such as:

- being based on JSON makes it much simpler to read and write (in my opinion)
- it solves some JSON problems, such as the lack of comments (an advantage YAML has over JSON)
- being a full language, it is possible to use `if`, `loop`, built-in packages, type inheritance, etc.

For this example, the first step was to create a package to store our configuration. To do that, I created a directory called `config` and, inside it, a file called `config.cue` with the content:

```go
package config

db: {
	user:     "db_user"
	password: "password"
	host:     "127.0.0.1"
	port:     3306
}

metric: {
	host: "http://localhost"
	port: 9091
}

langs: [
	"pt_br",
	"en",
	"es",
]
```

The next step was to create the application that reads the configuration:

```go
package main

import (
	"fmt"

	"cuelang.org/go/cue"
	"cuelang.org/go/cue/load"
)

type Config struct {
	DB struct {
		User     string
		Password string
		Host     string
		Port     int
	}
	Metric struct {
		Host string
		Port int
	}
	Langs []string
}

// LoadConfig loads the Cue config files, starting in the dirname directory.
func LoadConfig(dirname string) (*Config, error) {
	cueConfig := &load.Config{
		Dir: dirname,
	}
	buildInstances := load.Instances([]string{}, cueConfig)
	runtimeInstances := cue.Build(buildInstances)
	instance := runtimeInstances[0]

	var config Config
	err := instance.Value().Decode(&config)
	if err != nil {
		return nil, err
	}
	return &config, nil
}

func main() {
	c, err := LoadConfig("config/")
	if err != nil {
		panic("error reading config")
	}
	//the struct was filled with the values
	fmt.Println(c.DB.Host)
}
```

One advantage of CUE's `package` concept is that we can break our configuration into smaller files, each with its own responsibility. For example, inside the `config` directory I split `config.cue` into separate files:

*config/db.cue*

```go
package config

db: {
	user:     "db_user"
	password: "password"
	host:     "127.0.0.1"
	port:     3306
}
```

*config/metric.cue*

```go
package config

metric: {
	host: "http://localhost"
	port: 9091
}
```

*config/lang.cue*

```go
package config

langs: [
	"pt_br",
	"en",
	"es",
]
```

And nothing needed to change in `main.go` for the configurations to be loaded. This gives us a better separation of the configuration contents, with no impact on the application code.

## Conclusion

In this post I have only "scratched the surface" of what is possible with CUE. It has been [getting attention](https://twitter.com/kelseyhightower/status/1329620139382243328?s=61&t=mVll7YR0fRVtNeZLEVwKnA) and being adopted by important projects such as [Istio](https://istio.io/), which uses it to generate OpenAPI schemas and Kubernetes CRDs, and [Dagger](https://docs.dagger.io/1215/what-is-cue/). It looks like a tool that can be very useful for a number of projects, especially because of its data validation power. And as a replacement for YAML, to my personal joy :D

Originally published at [https://eltonminetto.dev](https://eltonminetto.dev/post/2022-11-08-intro-cuelang/) on 2022-11-08.
eminetto
1,250,304
Designing a Mechatronic Popcorn System
For the 2019 WorldSkills UK Mechatronic Team Selection competition in March, I wanted to design a...
0
2022-11-09T23:17:30
https://dev.to/calumk/designing-a-mechatronic-popcorn-mps-1b08
mechatronics, worldskills, robotics, plc
For the 2019 WorldSkills UK Mechatronic Team Selection competition in March, I wanted to design a modular production line that produced popcorn (and drinks).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t8br76bcm2rtgohgcqrl.jpeg)

The system needed to be capable of dispensing and weighing popcorn kernels, transferring them into a popcorn popper, and then popping them.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tjzah88kbfppiyr0pyg5.gif)

The system (as seen in the GIF) comprised 5 tasks, spread over 3 days (14 hours of build time). The competitors build up the system, starting with the simplest subsystem and then moving on to the more complex features as the days progress.

The system ejects a pot from a magazine, checks if it is the right way up using an analog IR height sensor, then transfers it to the filling area on station 2. It then fills the pot with either popcorn or liquid, depending on the choice selected from the HMI. The pot is transferred back to the 1st station, and then on to the 3rd station. At this point the system allows you to pump the popcorn down to the popcorn popper, and then the popcorn popper is turned on using a networked plug socket (240v).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w7dm5bnle3ug3buhbk3b.jpeg)

The system included a popcorn popper and a pneumatic transfer system.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/refltifix9sbihfkstw7.jpg)

It included a peristaltic pump for transferring liquid to the pots in precise amounts.

Finally, a solution was needed to allow the 24v system to control the 240v popper. A standard relay could not be used, because the competition rules prohibit 240v work.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h0nk60csmhfvp5d1me51.jpeg)

If you made it this far, you probably deserve a video of the finished system.
This video was shot by my 2019 competitors Jack Dakin and Danny Slater. Please excuse the mess; it was a long week.

{% embed https://twitter.com/Calumk/status/1156528806640926720?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1156528806640926720%7Ctwgr%5Eac39dc36e999e12e2dd16a029276f2dd6b81e7f9%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fcdn.embedly.com%2Fwidgets%2Fmedia.html%3Ftype%3Dtext2Fhtmlkey%3Da19fcc184b9711e1b4764040d3dc5c07schema%3Dtwitterurl%3Dhttps3A%2F%2Ftwitter.com%2Fcalumk%2Fstatus%2F1156528806640926720image%3Dhttps3A%2F%2Fi.embed.ly%2F1%2Fimage3Furl3Dhttps253A252F252Fpbs.twimg.com252Fext_tw_video_thumb252F1156528531918262272252Fpu252Fimg252Fjbw70mhFLsgU2ufc.jpg26key3Da19fcc184b9711e1b4764040d3dc5c07 %}

---

Bonus video: Coco the cat, unimpressed that our living room was full of equipment.

{% embed https://twitter.com/Calumk/status/1156529785767649280?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1156529785767649280%7Ctwgr%5E15ef47fdb8d88e0d2b995614f5cd77176759205a%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fcdn.embedly.com%2Fwidgets%2Fmedia.html%3Ftype%3Dtext2Fhtmlkey%3Da19fcc184b9711e1b4764040d3dc5c07schema%3Dtwitterurl%3Dhttps3A%2F%2Ftwitter.com%2Fcalumk%2Fstatus%2F1156529785767649280image%3Dhttps3A%2F%2Fi.embed.ly%2F1%2Fimage3Furl3Dhttps253A252F252Fpbs.twimg.com252Fext_tw_video_thumb252F1156529557245169665252Fpu252Fimg252FzlKv-mkc5rjDV8Yn.jpg26key3Da19fcc184b9711e1b4764040d3dc5c07 %}
calumk
1,251,674
Yarn Workspace Scripts Refactor - A Case Study
It happens to all of us - you implement a solution only to realize later on that it’s not robust...
0
2022-11-11T10:18:16
https://dev.to/mbarzeev/yarn-workspace-scripts-refactor-a-case-study-2f25
yarn, tutorial, webdev, testing
It happens to all of us - you implement a solution only to realize later on that it's not robust enough and could probably use a good refactor. The same happened to me, and I thought this was a good opportunity to share the reasons that made me refactor my code and how I practically did it. A case study, if you will.

The solution I'm talking about is a script I wrote a while ago to assist me in generating a unified unit test coverage report for all the packages under a single monorepo. You can read more about it in detail [here](https://dev.to/mbarzeev/aggregating-unit-test-coverage-for-all-monorepos-packages-20c6), but let me TL;DR how it works for you: The script went over each package in the monorepo and executed `yarn test --coverage --silent`. After all the reports were generated, each in its own package location, the script copied the reports into a pre-created directory called `.nyc_output` at the project's root. Once done, the script executed an [`nyc`](https://www.npmjs.com/package/nyc) CLI command to generate a report from them all.

And it worked well :)

"So what's wrong with it?" you might ask. Well… it did not scale well. Actually, it did not work well at all. It starts with the fact that the script was looking for the packages in a specific, hard-coded location - "packages". What if I have packages in another location as well? Another issue was that the script does not search for **all** packages (including nested ones), and so it will ignore nested packages' reports altogether. On top of these, I also hate the fact that the script "knows" what command to run in order to generate the reports. What if we're not using "yarn"? What if we're not using Jest?

All these make the script very limited, and so it was time to boost it up a little, and luckily enough we have just the tools to make it happen. Let's start!

***

All the code can be found under my [Pedalboard monorepo at Github](https://github.com/mbarzeev/pedalboard).
## Generating the coverage using the native Yarn workspace API

I'm starting by inspecting the npm script which launches the coverage report aggregation:

```json
"coverage:combined": "pedalboard-scripts aggregatePackagesCoverage && nyc report --reporter lcov"
```

My first mission is to generate a coverage report in each package without needing the `pedalboard-scripts`. For that I will use a Yarn Workspaces feature which runs a command on each managed package/workspace within a monorepo - [foreach](https://yarnpkg.com/cli/workspaces/foreach). I will remove all the `coverage` directories from each package to make sure I'm not seeing old results, and change the script to the following:

```json
"coverage:combined": "yarn workspaces foreach -pvA run test --coverage --silent"
```

Just to recap what's going on, params-wise: "p" is for running in parallel, "v" is for verbose output, and "A" is for running on all the workspaces. When I run this script now from the project's root, a "coverage" directory is created in each package. Awesome!

## Wait… this can be simpler

As you can see, I dropped the other part of the script above - the part where the unified report gets generated. At this point I thought it was time to modify the `pedalboard-scripts aggregatePackagesCoverage` script, but hold on… do I really need that script now? Let's go over it step by step:

A part of what my old script did was to create the `.nyc_output` directory, but I don't need the script for that, do I? I can create this directory with a simple command:

```bash
mkdir -p .nyc_output
```

And so I add this command to follow the initial coverage generation:

```json
"coverage:combined": "yarn workspaces foreach -pvA run test --coverage --silent && mkdir -p .nyc_output"
```

Ok, now that we have this directory created, we need to collect all the `coverage-final.json` files from each package into it, and rename them so they won't overwrite each other.
My first go at this was naive - I thought I could do it, again, with `yarn workspaces foreach`, but I gave up when I realized that there is no easy way to extract the package name in each run (yo, Yarn people, that's a good feature right there ;)) in order to rename each file when copied. I know there is probably a way, but looking at the length of the script at hand I got a little sick…

## The collectFiles script

The solution I chose was to introduce another script to my scripts package, called "collectFiles", and what this script does is collect files according to a glob pattern and copy them to a target directory. Here is what the script looks like:

```javascript
const yargs = require('yargs/yargs');
const glob = require('glob');
const fs = require('fs');
const path = require('path');

const GREEN = '\x1b[32m%s\x1b[0m';

async function collectFiles({pattern, target}) {
    if (!pattern || !target) throw new Error('Missing either pattern or target params');

    console.log(GREEN, `Collecting files... into ${target}`);

    glob(pattern, {}, (err, files) => {
        if (err) throw err;

        files.forEach((file, index) => {
            fs.copyFileSync(file, path.resolve(target, `${index}-${path.basename(file)}`));
        });
    });

    console.log(GREEN, `Done.`);
}

const args = yargs(process.argv.slice(2)).argv;

collectFiles(args);
```

I'm using the [glob](https://www.npmjs.com/package/glob) package here to make things easier for me - it searches the pattern and returns a list of files, which I can then traverse and copy to the desired destination. As you can see, this script gets 2 arguments - `pattern` and `target`. Since all these files have the same name, I append the index as a prefix to the name just to make sure they do not overwrite each other in the target directory. The report generator does not mind.

## Split for flexibility & readability

Nobody likes long script commands in their package.json, and I'm no different.
I decided to split the big script into 3 new scripts:

- `coverage:all` - generates the reports for each workspace (package)
- `coverage:collect` - collects the `coverage-final.json` files into a single dir
- `coverage:combined` - calls the scripts above and generates the report at the end

```json
"coverage:all": "yarn workspaces foreach -pvR run test --coverage --silent",
"coverage:collect": "mkdir -p .nyc_output && pedalboard-scripts collectFiles --pattern='packages/**/coverage-final.json' --target='.nyc_output'",
"coverage:combined": "yarn coverage:all && yarn coverage:collect && nyc report --reporter lcov"
```

And… that's it. When I run the `yarn coverage:combined` script, the reports get generated like they used to, but now I don't have to worry about whether I forgot to include some nested workspace, and I have the power to change how the reports for each package are generated with ease.

I hope you find this useful. As always, if you have questions or other ideas on how to make this better, please share them with the rest of us in the comments below :)

As mentioned, all the code can be found under my [Pedalboard monorepo at Github](https://github.com/mbarzeev/pedalboard).

*Hey! If you liked what you've just read check out <a href="https://twitter.com/mattibarzeev?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-show-count="false">@mattibarzeev</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> on Twitter* :beers:

<small><small><small>Photo by <a href="https://unsplash.com/@raimondklavins?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Raimond Klavins</a> on <a href="https://unsplash.com/s/photos/generate?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></small></small></small>
mbarzeev
1,252,422
Bootcamp is just around the corner, less than 24 hours away. I need your advice, I'm kind of nervous 😂 please! I've only done the prework. Nuts!
A post by americanoame
0
2022-11-11T07:32:19
https://dev.to/americanoame/bootcamp-is-here-around-the-counter-less-than-24-hours-4dhe
americanoame
1,252,457
Installation of Embulk on Ubuntu 20.04
Environment Ubuntu 20.04.3 LTS Instruction (1) Check Java version $ java...
0
2022-11-11T09:04:35
https://dev.to/tomoyk/installation-of-embulk-on-ubuntu-2004-100e
ubuntu, embulk
## Environment

- Ubuntu 20.04.3 LTS

## Instructions

(1) Check the Java version

```
$ java -version

Command 'java' not found, but can be installed with:

sudo apt install default-jre              # version 2:1.11-72, or
sudo apt install openjdk-11-jre-headless  # version 11.0.17+8-1ubuntu2~20.04
sudo apt install openjdk-13-jre-headless  # version 13.0.7+5-0ubuntu1~20.04
sudo apt install openjdk-16-jre-headless  # version 16.0.1+9-1~20.04
sudo apt install openjdk-17-jre-headless  # version 17.0.5+8-2ubuntu1~20.04
sudo apt install openjdk-8-jre-headless   # version 8u352-ga-1~20.04
```

Java is not installed.

(2) Install Java 8

Embulk only operates with Java 8.

```
sudo apt install openjdk-8-jre-headless
```

(3) Check the Java version

```
$ java -version
openjdk version "1.8.0_352"
OpenJDK Runtime Environment (build 1.8.0_352-8u352-ga-1~20.04-b08)
OpenJDK 64-Bit Server VM (build 25.352-b08, mixed mode)
```

(4) Install Embulk

```
curl --create-dirs -o ~/.embulk/bin/embulk -L "https://dl.embulk.org/embulk-latest.jar"
chmod +x ~/.embulk/bin/embulk
echo 'export PATH="$HOME/.embulk/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```

Ref: [Embulk](https://www.embulk.org/)

(5) Check the embulk command

```
which embulk
```
tomoyk
1,252,817
Object Oriented Programming (OOP) Concepts
The high-level programming languages are broadly categorized into two categories:...
0
2022-11-11T15:41:59
https://dev.to/sagary2j/high-level-object-oriented-programmingoop-concepts-f0b
python, devops, programming, oop
The high-level programming languages are broadly divided into two categories:

- Procedure-oriented programming (POP) languages.
- Object-oriented programming (OOP) languages.

## Procedure-Oriented Programming Language

In the procedure-oriented approach, the problem is viewed as a sequence of things to be done, such as reading, calculating, and printing. Procedure-oriented programming consists of writing a list of instructions or actions for the computer to follow and organizing these instructions into groups known as functions.

![POP](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gfjth7y2cu7wyn942kdn.png)

### Characteristics of procedure-oriented programming:

1. Emphasis is on doing things (algorithms).
2. Large programs are divided into smaller programs known as functions.
3. Most of the functions share global data.
4. Data moves openly around the system from function to function.
5. Functions transform data from one form to another.
6. Employs a top-down approach in program design.

### Disadvantages of procedure-oriented programming languages:

1. Global data access.
2. It does not model real-world problems very well.
3. No data hiding.

## Object-Oriented Programming Language

"Object-oriented programming is an approach that provides a way of modularizing programs by creating partitioned memory areas for both data and functions that can be used as templates for creating copies of such modules on demand."

![OOP](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3j8w4ui3wq6pro9n2vc.png)

### Characteristics of object-oriented programming:

1. Emphasis is on data rather than procedures.
2. Programs are divided into what are known as objects.
3. Data structures are designed such that they characterize the objects.
4. Functions that operate on the data of an object are tied together in the data structure.
5. Data is hidden and cannot be accessed by external functions.
6. Objects may communicate with each other through functions.
7. New data and functions can be easily added.
8. Follows a bottom-up approach in program design.

## Procedure-Oriented Programming (POP) Vs Object-Oriented Programming (OOP)

| Procedure-Oriented Programming | Object-Oriented Programming |
| --- | --- |
| Program is divided into small parts called functions. | Program is divided into parts called objects. |
| Importance is not given to data but to functions, as well as the sequence of actions to be done. | Importance is given to the data rather than procedures or functions, because it works closer to the real world. |
| Top-down approach. | Bottom-up approach. |
| It does not have any access specifiers. | OOP has access specifiers named Public, Private, Protected, etc. |
| Data can move freely from function to function in the system. | Objects can move and communicate with each other through member functions. |
| Adding new data and functions in POP is not so easy. | OOP provides an easy way to add new data and functions. |
| Most functions use global data for sharing that can be accessed freely from function to function in the system. | In OOP, data cannot move easily from function to function; it can be kept public or private so we can control the access of data. |
| It does not have any proper way of hiding data, so it is less secure. | OOP provides data hiding, so it is more secure. |
| Overloading is not possible. | In OOP, overloading is possible in the form of function overloading and operator overloading. |
| Examples of procedure-oriented programming are C, VB, FORTRAN, and Pascal. | Examples of object-oriented programming are C++, Java, VB.NET, and C#.NET. |

## Object-Oriented Programming Principles

- Encapsulation
- Data abstraction
- Polymorphism
- Inheritance
- Dynamic binding
- Message passing

### Encapsulation

Wrapping data and functions together as a single unit is known as encapsulation.
By default, data is not accessible to the outside world; it is only accessible through the functions that are wrapped in the class. This prevention of direct access to data by the program is called data hiding or information hiding.

### Data abstraction

Abstraction refers to the act of representing essential features without including the background details or explanations. Classes use the concept of abstraction and are defined as a list of attributes, such as size, weight, and cost, and functions to operate on these attributes. They encapsulate all the essential properties of the objects that are to be created. The attributes are called data members because they hold data, and the functions that operate on these data are called member functions. Since classes use the concept of data abstraction, they are called abstract data types (ADTs).

### Polymorphism

Polymorphism comes from the Greek words "poly" and "morphism": "poly" means many and "morphism" means form, i.e., many forms. Polymorphism means the ability to take more than one form. For example, an operation may exhibit different behavior in different instances. The behavior depends on the type of data used in the operation. Different ways to achieve polymorphism are:

1. Function overloading
2. Operator overloading

### Inheritance

Inheritance is the process by which one object can acquire the properties of another. Inheritance is the most promising concept of OOP, which helps realize the goal of constructing software from reusable parts, rather than hand-coding every system from scratch. Inheritance not only supports reuse across systems but also directly facilitates extensibility within a system. Inheritance coupled with polymorphism and dynamic binding minimizes the amount of existing code to be modified while enhancing a system.

When a class child inherits from a class parent, the class child is referred to as a derived class (subclass) and the class parent as a base class (superclass). In this case, the class child has two parts: a **derived part** and an **incremental part**. The derived part is inherited from the class parent. The incremental part is the new code written specifically for the class child.

### Dynamic binding

Binding refers to the linking of a procedure call to the code to be executed in response to the call. Dynamic binding (or late binding) means the code associated with a given procedure call is not known until the time of the call at run time.

### Message passing

An object-oriented program consists of a set of objects that communicate with each other. Objects communicate by sending and receiving information. A message for an object is a request for the execution of a procedure, and therefore invokes the function that is called for the object and generates a result.

## Key Takeaways:

- **OOP:** Object-Oriented Programming. A programming paradigm or approach used to analyze and solve problems, based on the representation of real-world objects in the system.
- **Class:** one of the building blocks of Object-Oriented Programming that acts as a "blueprint" where the data and the actions of the objects are defined.
- **Instance:** a concrete object that is created from the class "blueprint".
- **Method:** an "action" defined in the class that the instances of the class can perform. It is very similar to a function but closely related to instances, such that instances can call methods and methods can act on the individual data of the instances.
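The principles above can be sketched in a few lines of Python. This is a minimal illustration, not code from the article, and the `Animal`, `Dog`, and `Cat` classes are hypothetical names chosen just for the example; it shows encapsulation (data kept behind methods), inheritance (derived classes reusing a base class), polymorphism and dynamic binding (the overridden `speak` chosen at run time), and message passing (calling a method on an object):

```python
class Animal:
    """Base class: encapsulates a name and exposes it only through methods."""

    def __init__(self, name):
        self._name = name  # leading underscore: hidden by convention (data hiding)

    def get_name(self):
        return self._name

    def speak(self):
        # Generic behavior; derived classes override this (polymorphism)
        return f"{self._name} makes a sound"


class Dog(Animal):
    """Derived class: the derived part comes from Animal, speak() is the incremental part."""

    def speak(self):
        return f"{self._name} says woof"


class Cat(Animal):
    def speak(self):
        return f"{self._name} says meow"


# Message passing: we send each object the same speak() request;
# which code runs is decided at run time (dynamic binding).
animals = [Dog("Rex"), Cat("Mia")]
for animal in animals:
    print(animal.speak())
```

Running it prints `Rex says woof` followed by `Mia says meow`: the same `speak()` message produces different behavior depending on the object that receives it.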
sagary2j
1,252,827
Cumulocity Web Development Tutorial - Part 1: Start your journey
Starting your journey as a web developer in Cumulocity can be quite overwhelming in the beginning....
0
2022-11-24T11:03:28
https://tech.forums.softwareag.com/t/cumulocity-web-development-tutorial-part-1-start-your-journey/259613
iot, tutorial, webdeveloper, programming
---
title: Cumulocity Web Development Tutorial - Part 1: Start your journey
published: true
date: 2022-11-11 13:42:29 UTC
tags: iot, tutorial, webdeveloper, programming
canonical_url: https://tech.forums.softwareag.com/t/cumulocity-web-development-tutorial-part-1-start-your-journey/259613
---

Starting your journey as a web developer in Cumulocity can be quite overwhelming in the beginning. There are a lot of concepts to discover, and sometimes you simply get crushed by the amount of information available in the documentation.

![overwhelming](https://aws1.discourse-cdn.com/techcommunity/original/3X/f/2/f28a7bb057d24d6a7f9a005354aa6e5bddb7e678.gif)

To make your journey easier I have created this tutorial series. It targets web developers who want to learn how to implement custom applications, widgets and plugins for Cumulocity. The various articles of this series will teach fundamental concepts about web development in the context of Cumulocity and point out best practices. As a prerequisite, it is assumed that you already have an understanding of [Angular](https://angular.io/docs) and [Typescript](https://www.typescriptlang.org/docs/). Furthermore, you need to have a Cumulocity tenant and basic knowledge about [Cumulocity and its data model](https://cumulocity.com/guides/concepts/domain-model/).

## What topics can you expect?

The tutorial series is divided into multiple articles. Each article focuses on a specific topic accompanied by code samples. These code samples are shared in [this public github repository](https://github.com/SoftwareAG/cumulocity-web-development-tutorial), which allows you to review the code for an individual article.

There are a couple of articles planned. This first article discusses general concepts about web development in Cumulocity. In addition, you will set up your development environment and create your first custom web application.
In the next article you will learn how to extend the custom web application and create your first component. The third article demonstrates how you can use Cumulocity's API to query data and display it in your web component. This web component will be converted into a Cumulocity widget in the fourth article. You will learn about extending one of Cumulocity's default applications as well. In the following articles you will extend this widget and get insight into the upcoming Microfrontend Framework.

Sounds interesting? Then let's get started!

![overwhelming](https://aws1.discourse-cdn.com/techcommunity/original/3X/c/a/cabee51d1a50cbee168a51e7dd79a19e949d4579.gif)

## Let's get started

You first need to install [Node.js](https://nodejs.org/en/) to have not only a proper javascript runtime, but also the necessary package manager (npm) to install additional required tools and dependencies. Make sure to install Node.js `14.x.x`. Installing any later version might cause issues when trying to run or build a Cumulocity web application.

Once you have finished the setup of Node.js, you can install the [@c8y/cli](https://www.npmjs.com/package/@c8y/cli) (command line interface) tool, which is provided by Cumulocity. The `@c8y/cli` supports you in creating, building and deploying your Cumulocity plugins and web applications. You can install the command line tools globally by running

```
npm install -g @c8y/cli
```

in your command prompt/terminal. This will install the latest official version, which is `1014.0.204` at the time of writing this article. As new versions of Cumulocity are released frequently, you also need to update the cli tools regularly to have the latest version.

Instead of installing the cli tools globally, I rather recommend installing them on demand using the `npx` command. `npx` is part of `npm` and allows you to run commands without having the package installed globally first.
Furthermore, you can specify which version you want to have installed. Let's create an empty custom web application based on `@c8y/cli@next` (1016.6.0 at the time of writing):

```
npx @c8y/cli@next new
```

Running this command will install the `@c8y/cli` locally in a cache and execute the `new` command. You will be guided by a wizard to set up your application. The wizard asks you for the name of the application, which version to scaffold the project from, and which template should be used:

[![image](https://aws1.discourse-cdn.com/techcommunity/original/3X/2/e/2e5ff6f0e333c650953a618a5756c6a193349bf7.png)](https://aws1.discourse-cdn.com/techcommunity/original/3X/2/e/2e5ff6f0e333c650953a618a5756c6a193349bf7.png "image")

You can choose any name, e.g. `my-c8y-application`. Select version `latest`, in this case `1014.0.x`. Use the `application` template to scaffold the project from. The different templates are described in later articles. More information on creating new applications can be found in the [official documentation](https://cumulocity.com/guides/web/development-tools/#the-new-command).

Once the Cumulocity project has been generated by the `@c8y/cli`, you must install the dependencies. Change directory in the command prompt to your new project and install all necessary dependencies:

```
cd my-c8y-application
npm install
```

Once the installation is finished, you can start the Cumulocity web application:

```
npx c8ycli server -u <<C8Y-URL>>
```

Make sure to replace `<<C8Y-URL>>` with the URL of your Cumulocity instance. The `server` command will spin up a local web server and deploy the Cumulocity web application. The `-u` parameter specifies the Cumulocity instance to which all API requests should be proxied. This means data is actually pulled from the configured Cumulocity instance. The same applies to authentication.
The application can be accessed in the browser via the URL: `http://localhost:9000/apps/my-c8y-application/`. In case you chose a different application name, you will see your application name instead of `my-c8y-application` in the URL. You will be greeted by the login screen.

[![image](https://aws1.discourse-cdn.com/techcommunity/optimized/3X/c/6/c6aeae564583b47747b394d90e32e8245ba2edac_2_541x500.png)](https://aws1.discourse-cdn.com/techcommunity/original/3X/c/6/c6aeae564583b47747b394d90e32e8245ba2edac.png "image")

Provide the tenant id and user credentials of your configured Cumulocity instance. The tenant id can be found in your configured Cumulocity tenant by clicking on your user at the top right of the header:

![image](https://aws1.discourse-cdn.com/techcommunity/original/3X/8/a/8aecf81f00eea695b257437529dba1e18bc644d8.png)

When you have successfully signed in to your local Cumulocity instance, you will see a blank application:

[![image](https://aws1.discourse-cdn.com/techcommunity/optimized/3X/a/b/abf93cd3740263d3a5a2f446f2af00e346ba71a1_2_690x346.png)](https://aws1.discourse-cdn.com/techcommunity/original/3X/a/b/abf93cd3740263d3a5a2f446f2af00e346ba71a1.png "image")

Great, you have set up your development environment and created your first Cumulocity web application. In the next articles, you will create plugins and widgets to bring some life to your web application.

To stop your application, you can simply enter `[CTRL] + [C]` inside the terminal.

![alive](https://aws1.discourse-cdn.com/techcommunity/original/3X/b/7/b750f2737167b86c4949ea4479696a3f2edc65e7.gif)

## Conclusion & Next steps

That's it for part 1 of this tutorial series. You can find the sample application in the [github repository](https://github.com/SoftwareAG/cumulocity-web-development-tutorial/tree/main/part-01). In part 2 you will extend the empty application with a custom component. Furthermore, you will get to know some tips and best practices about c8y web development.
Feel free to leave a comment if there is a specific topic you would like to see covered in this tutorial series.

[Read full topic](https://tech.forums.softwareag.com/t/cumulocity-web-development-tutorial-part-1-start-your-journey/259613)
techcomm_sag
1,253,442
How to think like a programmer: The human part
Welcome to the inaugural post of this series, designed to help you think like a programmer! I got the...
0
2022-11-11T23:57:55
https://blog.nikfp.com/how-to-think-like-a-programmer-the-human-part
softskills, beginners, motivation
---
title: How to think like a programmer: The human part
published: true
date: 2022-11-11 22:31:52 UTC
tags: softskills, beginners, motivation
canonical_url: https://blog.nikfp.com/how-to-think-like-a-programmer-the-human-part
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5t0qksijf3u9j4jemq0l.jpg
---

Welcome to the inaugural post of this series, designed to help you think like a programmer! I got the idea for this series after a comment that I posted on a blog post got a good reaction, and someone asked me to express it in article form. So here we are.

This series is designed for people who are early in their programming journeys; however, intermediate developers will also find it useful to help fill in some gaps and understand what might be valuable to learn next. None of the topics will be exhaustive, but instead will be designed to get you thinking in the right direction. My goal is to get you excited about the concepts I present, and inspire you to seek further knowledge on your own. So let's get started!

### Why the human part first?

When you think about programming and how to think like a programmer, your first instinct will be to look at the concrete skills required to work with code and make things run. However, in this post we're going to take a step back and look at an often overlooked aspect of programming: the human part. Why? Because at the center of every program is a human being who took the time and energy to put it together. In fact, that human is a requirement for the program to come into existence in the first place! So logically, it makes sense to consider the person writing the code when you are thinking about writing code. It goes without saying that we are all human, but our own humanity also influences the way we approach technical work and how deeply we can think and understand things.
Without considering the living, breathing, thinking part of programming (us), we limit ourselves to just a subset of our creative potential, and ultimately limit our own success. So through the following sections I would like to share some things I have found to be true. These ideas apply much more broadly than just programming, but in the world of code they are particularly true in my experience. ### First off, You gotta wanna! Perhaps the most overlooked part of being a proficient developer is that you have to enjoy the process of building things and understanding things. First and foremost, your role is to bring something to life out of nothing. You will be thinking in concepts, many of which will be very foreign when you start. You will be challenged to make things that don't exist, without a blueprint, and you will need to be creative to do so. You will be pushed to gain a deeper understanding of everything you touch, so that you can make it better and become better for yourself in the process. And to do any of this well, you _must_ find joy in it. Finding enjoyment in the work you are doing is critical, because without it you won't have the drive, curiosity, and perseverance required to get through the tough times of writing code. And there will be tough spots. Programming is hard. The mental anguish you can experience while looking for that missing semicolon, the heartbreak of a timeout error, or the sheer panic of an unexplained stack overflow can all be staggering, especially when you have a deadline to hit or a critical bug to fix. You'll need the grit to push through the crazy times. It can also go the other way. Sometimes you just have to write a lot of boilerplate. It can feel repetitive, slow, and not engaging. You'll want to go do other things. It can get boring, but you need to have the will to push through the slow times. Sometimes your code won't work the way you expect (or at all) and you won't know why. 
You can spend hours changing one small thing here, another small thing there, and still getting strange results. You can feel confused for days on end. You have to have the patience to work through the problem. Sometimes a concept will simply be over your head. You have to have the humility to accept that you don't know everything, start small, and fill in your knowledge gaps the best you can. I really don't want to scare anyone off (in fact quite the opposite), but this is important to think about, because in programming you are going to be frustrated a lot. It's the nature of the beast. You are stitching together technical concepts, each with their own limitations, into a system that can do the thing it's required to do. You might have days on end of no productivity. You might have a particularly hard problem that you can't solve. You might have crippling imposter syndrome. You might have all of these, all at once. You are not alone! The flip side is you might have incredibly productive periods, where everything clicks and things just flow together. Problems are easier to solve, bugs are less frequent, and you are less inclined to turn away and do other things. The number one thing that will keep you going is enjoying what you are doing. From my experience, when I expect code to work that isn't, or I expect code to behave differently than it is, I become almost obsessive until I can understand why. (to the frustration of my wife at times!) I'll roll the problem around in my mind - sometimes for days - and turn to Google to see what others have to say. At some point, something will click or some information will surface, and I will be able to solve it or at least move in the right direction. I then have a moment of joy that I was able to overcome the problem, and go looking for another one. And I write code because I love to write code. I find the creativity within technical limitations fascinating. I feel that all programmers should approach it like this. 
The money is secondary, and I should point out that I don't get paid to write code. (Someday maybe) In fact, all of the best developers I follow would continue to write code even if they weren't paid at all. In their spare time, this is exactly what most of them are doing with open source, because it's fun to them. So if you want to be great, do it for the joy of it! ### Remember that you are human. Like it or not, you have basic needs that you must attend to for peak performance. Sitting in front of a computer for 18 hours straight is not meeting those needs. You need rest. You need variety. You need healthy food and physical activity. This is for everything, not just programming, but it's particularly important for people that work with code. An example of this for me is that I frequently need to get up from my desk and just go for a walk. Sometimes it's just back and forth inside. Sometimes I go outside. Stepping back from time to time allows me to reset, and come back stronger. I also find that the more I focus on eating healthy, getting enough sleep, and getting some exercise, the better I feel AND the better I perform when I'm working, or writing, or coding. It's all interconnected. You are the sum of all of your parts, body and mind. Taking care of each aspect in turn improves the whole. So take the time to be good to yourself. Don't expect to learn everything in a day or build everything in a day. Don't expect good results from long hours of coding and nothing else. Don't get overwhelmed and give up, but instead give yourself the space and the time to do things right, at a pace that is natural to the way your mind works, as much as you can. Feed your mind and your body, then exercise both, then get enough rest for both. Find the balance that works and is sustainable, and you can set yourself up for continuous growth and success. ### The most complex things are just big collections of simple things..... 
If I asked you what your definition of a computer program is, what would your answer be? For me, a program is a series of instructions designed to take information in, work with it, and give information back out. That's it. Simple? In theory, yes. But when you look at what that might mean in practice it can quickly become very complicated. The inbound information can be very complex, or there could be huge amounts of it, and the outbound information might need to be equally complex, and there also might be huge amounts of it. Some things might be incomplete, or malicious, or utter nonsense. You might need to efficiently restructure information that seems very foreign to you. There might be restrictions on what information can come in and when, and what information can go out and when. There might be specific reasons for code to behave a certain way, and then some parameters change and it should behave a different way. Pretty soon, the simple, "Info in, Info out" program is actually a complex monster. How do you build something like that? You come at the problem like a programmer! The fundamental method of solving a problem with code is to break it down into simpler problems and solve them one by one. If the simpler problems are still too complex, break them down into simpler problems. Repeat until you are able to solve each problem, one at a time. By doing this exercise you are able to build systems far larger than what you can hold in your mind at any given moment. You can account for changing conditions, and change the behavior of your code. You can create flexibility and resilience. This makes your programs more powerful, and makes you feel powerful! ### But make sure you understand the big picture. The essence of programming is that you are solving complex problems by using a collection of tools and concepts, to solve collections of simple problems, building up toward a complete solution. 
But before you can do this, you have to be able to back up and just _look_ at the whole problem. This takes a great deal of patience, because your first instinct will be to start writing code and see what you can come up with. However, taking the time to really understand the problem before you work toward a solution means you are approaching it with a level of thought and maturity that allows you to develop a complete solution. In other words, you have to think about things from the perspective of how to build it, but _also_ from the perspective of the people using it, and _also_ from the perspective of how it fits into a larger system. Take the standard Grep program as an example. For those who don't know, grep is a program designed to search for a matching pattern of characters based on an input. And for many people, that is enough of an explanation. They might even start implementing their own version based on this. They might be shortsighted in doing so, however. A second view is that Grep is a program _to help people find what they are looking for._ The important subtlety here is the people it needs to work for. Now, I don't know if you have ever used Grep before, but if you have it's pretty clear that you can't use it without first reading a manual or instructions of some sort to get you started. It's not the most intuitive. It's likely that the people that built it were originally building something strictly for like minded people, so the layperson wasn't accounted for, and as a result they built a powerful tool that can sometimes be very confusing to use. Where all this is going is toward this point: Before you can start breaking the larger problem down, you need to understand the problem. Not just from the viewpoint of the person writing the code, but from the viewpoint of the person using it, and how it fits into the ecosystem it will exist in. 
Doing this will help you build programs that people actually want to use, and that perform the way they need to perform. ### Everybody learns in their own way There are two very important aspects that this section needs to cover. The first is that you need to understand what methods of learning are most effective for you. It could be through books, through activities, through watching videos, or many other methods. In programming you will always be learning, so getting a grasp on how to personalize your learning path to fit you best will supercharge your growth. Then, as much as possible, you need to tailor your learning environment and your learning plan to fit your learning style. This means that you need to be aware of distractions while you are learning, such as outside influences and your own habits. Do you have a quiet place to think and learn? Do you tend to procrastinate? Do you have all the things you will need close at hand? Do you keep getting notifications? Etc, etc. For me as an example, I have found that I do best with a quick tutorial, and then just diving in and actively working with things. This is how I learned to write code: by getting some basics down, then getting myself into problems and getting myself back out of them. It's an unstructured approach that tends to work best with the way my mind works. For others though, a more structured approach with tutorials or books might be best. I have zero opinion on which method is better, because I don't believe one is better than another. The important thing is that you keep learning in a way that fits you. I have also found that I do best with mellow instrumental music as a backdrop. This isolates me from the distractions around me and allows me to get into a state of flow. I do my best work when I have long stretches of time without phone calls, notifications, or people interrupting me. 
This seems to be true for most people, but some people love having the energy of others around them while they work. It helps them to maintain their own energy levels and stay productive. The end goal is always the same no matter how you go about it. You want to retain the technical knowledge you need to do the work (like how a language works and what syntax to use), but you also need to retain the higher level principles, concepts and patterns that allow you to do the work in the first place. Setting yourself up for success and knowing what is most effective for you is critical to achieving these two milestones quickly. So take the time to understand what works for you, from methods to locations to how loud the music is when you work. Doing so will help you immensely. The second learning-related thing you need to keep in mind is that _not everyone else learns the way you do!_ It's all too easy to look at a learning resource you have available and think that it doesn't make sense to you or it doesn't fit the way you would like. You might even be tempted to bring that to the attention of the person that created the resource. But before you do, make sure you consider that the resource you are using might have been designed for someone that learns in a different style, or might have been designed to target as broad an audience as possible. Things like presentation, pacing, ordering of subjects, and the depth of examples might have had a lot of thought put into them, and might be targeted to the group they connect with on purpose, and you might happen to be outside that group. I encourage you to have discussions with the creators of these resources, but keep it positive and connect with the mindset of genuinely wanting to help. They are creating for a reason: They want to be helpful themselves. With this in mind, any assistance you can provide is likely welcome and often needed for them to improve. 
Another layer to this is any time you need to teach something to someone else. Try to be aware of how they learn, how fast they integrate new information, and the signs of when they are becoming overloaded. It's not always obvious, and people generally want to do well and aren't as willing to admit when they have run out of steam as you might think. Try to adjust your teaching style, pacing, and break points to the audience you have, if possible. You will learn a lot about how other people think and about communication in general this way, and it will help you to improve overall. In short, be human about learning and teaching, and empathize with the people teaching you and learning from you. ### Onward! In this post we covered just a few of the human parts of writing code. There are many more and I encourage you to continuously work on this aspect of being a programmer, and being a human in general. Understanding how to be healthy and productive will help you to live happier and achieve more. Stay tuned for the next post in the series, where we'll dive in to the deceptive world of variables. You might be surprised at what you learn.
nikfp
1,253,471
Running graphic apps in Docker: AWS WorkSpaces
How to run GUI apps using X.Org and Docker in Linux or Windows
0
2022-12-29T12:19:18
https://dev.to/cloudx/running-graphic-apps-in-docker-aws-workspaces-1jj3
tutorial, docker, aws, security
---
title: 'Running graphic apps in Docker: AWS WorkSpaces'
published: true
description: How to run GUI apps using X.Org and Docker in Linux or Windows
tags: 'tutorial, docker, aws, security'
cover_image: 'https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/navarroaxel/assets/gui-apps-docker-cover.jpg'
id: 1253471
---

As a Linux user, I need to connect to an Amazon WorkSpace, but the client app can only be run in specific old, but maintained, versions of Ubuntu. I tried the latest Ubuntu version but it didn't work, so I created a virtual machine with Ubuntu Focal 20.04, but this setup used 14GB of disk space 😵 just to open a single windowed application. This didn't make any sense, and it also had issues when sharing the clipboard from the host to the VM and then to the virtual desktop hosted in Amazon. 🙄

Then, I remembered that we can run graphic applications using Docker. Let's see how to connect a container to the host's [X Window System](https://en.wikipedia.org/wiki/X_Window_System).

## Creating the Docker image

The following Dockerfile builds the image using Ubuntu Focal 20.04 and installs the Amazon WorkSpaces client:

```Dockerfile
FROM ubuntu:focal

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y libusb-1.0-0 wget xauth && \
    wget -q -O - https://workspaces-client-linux-public-key.s3-us-west-2.amazonaws.com/ADB332E7.asc | apt-key add - && \
    echo "deb [arch=amd64] https://d3nt0h4h6pmmc4.cloudfront.net/ubuntu focal main" > /etc/apt/sources.list.d/amazon-workspaces-clients.list && \
    apt remove -y wget && \
    apt update && \
    apt install -y workspacesclient && \
    touch /root/.Xauthority && \
    rm -rf /var/lib/apt/lists/*
```

💡 `libusb-1.0-0` is a dependency of Amazon WorkSpaces. The `DEBIAN_FRONTEND=noninteractive` environment variable is used to skip Ubuntu's prompt asking for our time zone.

Build the image with the following command:

```bash
docker build -t workspace . 
```

The Docker image's size is 1.2GB, a reduction of 91% compared to the 14GB of the virtual machine.

## Starting the container

Now, we can run the container!

```bash
docker run --name ws --network=host -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -it workspace
```

We need to share the `.X11-unix/` directory where the connection [socket](https://en.wikipedia.org/wiki/Unix_domain_socket) is located; `DISPLAY` is the name of the host's display. The `--network=host` argument makes the container use the host's network, so the container does not get its own IP address. This avoids the container being network isolated. 🪛

## Starting the GUI application

Before starting the app we must authenticate the container's X Window System with the host. Get the host's authorization entry using the `xauth` command:

```bash
xauth list
```

Copy the output of the command and run this in the container (replacing the placeholders with the values from the clipboard):

```bash
xauth add <display_name> <protocol_name> <hex_key>
```

Now you are able to start GUI applications inside the container! 
🎉

```bash
/opt/workspacesclient/workspacesclient
```

## Alternative: use VNC instead of X11

We need to make a few changes to run a VNC server in the container:

```Dockerfile
FROM ubuntu:focal

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y libusb-1.0-0 gpg wget x11vnc xvfb && \
    wget -q -O - https://workspaces-client-linux-public-key.s3-us-west-2.amazonaws.com/ADB332E7.asc | apt-key add - && \
    echo "deb [arch=amd64] https://d3nt0h4h6pmmc4.cloudfront.net/ubuntu focal main" > /etc/apt/sources.list.d/amazon-workspaces-clients.list && \
    apt remove -y wget && \
    apt update && \
    apt install -y workspacesclient && \
    rm -rf /var/lib/apt/lists/*

RUN echo "exec /opt/workspacesclient/workspacesclient" > ~/.xinitrc && \
    chmod +x ~/.xinitrc

CMD ["x11vnc", "-create", "-forever"]
```

The `x11vnc` command will start an X11 session in the container, launching the application specified in the `~/.xinitrc` file.

```bash
docker build -t workspace .
```

Then, we can run the container:

```bash
docker run --name ws -it workspace
```

But how can we know the IP address of the container? Just check it with the following command:

```bash
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ws
```

Now you can connect to the VNC server. 🔌

## Conclusion

We can run graphic applications that only work in specific Linux distros using Docker 🐋, by connecting directly to our X Window System. This uses less memory and disk space than a VM. You can get the same result with Windows as the host using [Xming](https://sourceforge.net/projects/xming/). `xauth` could be used in conjunction with SSH to run remote GUI apps 😉 on our local machine, but that's for another post.

Alternatively, you can start a VNC server in the container, which allows more flexibility if you use macOS, or if you can't connect using X11 because you're using [Wayland](https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)). But this doesn't feel like a native window on your OS.
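As a small convenience on top of the `xauth list` / `xauth add` steps above, the copy-paste can be scripted. The sketch below is illustrative only and not part of the original setup: the `make_xauth_add` helper name and the sample cookie line are hypothetical, and in practice you would feed the function the real first line of `xauth list` output.

```shell
#!/bin/sh
# Hypothetical helper: convert one line of `xauth list` output into the
# matching `xauth add` command that can be pasted into the container.
make_xauth_add() {
    # $1 is a full entry line: "<display_name> <protocol_name> <hex_key>"
    set -- $1    # rely on default IFS to split the entry into its three fields
    printf 'xauth add %s %s %s\n' "$1" "$2" "$3"
}

# Example with a fake cookie value (a real one comes from `xauth list`):
make_xauth_add 'myhost/unix:0  MIT-MAGIC-COOKIE-1  deadbeef01'
# prints: xauth add myhost/unix:0 MIT-MAGIC-COOKIE-1 deadbeef01
```

You would still run the printed command inside the container's shell, since the container keeps its own `.Xauthority` file.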
navarroaxel
1,253,684
Protecting MySQL Beyond mysql_secure_installation: A Guide
MySQL is a complex beast to tame. Part of that is because the RDBMS has a lot of settings and...
0
2022-11-13T08:00:00
https://breachdirectory.com/blog/mysql_secure_installation/
mysql, database, security, webdev
<!-- wp:paragraph --> <p>MySQL is a complex beast to tame. Part of that is because the RDBMS has a lot of settings and parameters that can be configured to make it able to perform at the very best of its ability, but another side of the reason is that its protection and security are more than clicking a couple of buttons on the screen.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>One of the most popular ways to secure MySQL (and MariaDB, for that matter) is by using mysql_secure_installation or mariadb_secure_installation. Both commands are shell scripts unique to either MySQL or MariaDB and as of MariaDB 10.4.6, both of them are symlinks as well – those shell scripts allow us to improve the security of our MySQL-based databases by letting us set a strong password for initial (“root”) accounts, letting us make root accounts accessible only locally, letting us remove anonymous accounts, and letting us remove the initial test database.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Setting a password for initial accounts is important because a strong password protects us from data breaches both now and in the future, anonymous accounts are dangerous because users can use those accounts to connect to our databases without specifying a password, and removing the test database is important because it can, by default, be accessed by anonymous users too. Both <code>mysql_secure_installation</code> and <code>mariadb_secure_installation</code> can be run by simply executing the commands in the terminal with the syntax <code>mysql_secure_installation</code> or <code>mariadb_secure_installation</code> respectively.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>However, protecting MySQL is so much more than just using mysql_secure_installation. 
Bear with us and we will explain what you need to do to be safe.</p> <!-- /wp:paragraph --> <!-- wp:heading --> <h2>mysql_secure_installation: Best Practices</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>As with most things related to the web, the security of MySQL begins with a couple of best practices. These include running <code>mysql_secure_installation</code> as per the advice above, setting up strong passwords for all user accounts, keeping an eye on the applications running on the server and making sure they’re safe, keeping the server itself locked up and safe, and regularly assessing the security of our database instances. However, best practices alone won’t get us very far – we also need to secure access to our databases and take proper care of the privileges within them.</p> <!-- /wp:paragraph --> <!-- wp:heading --> <h2>Access Control and User Privileges</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>The way mysql_secure_installation begins to secure our infrastructure is by letting us specify a safe password for the root account in our database. That’s a good start, but we should also keep in mind that all accounts – no matter if they’re initial or not – need to have privileges. To ensure the security of your MySQL instance and the security of the applications behind it, consider granting only necessary privileges to all users; evaluate the needs and capabilities of your project and choose accordingly. A good practice is to use a set of privileges ranging from the lowest to the highest levels of security. 
Everything then could be arranged as follows:</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><li>The set of privileges concerning the lowest level of security should contain pretty much all of the privileges that can be specified – either specifying <code>ALL</code> or <code>CREATE</code>, <code>DELETE</code>, <code>DROP</code>, <code>EXECUTE</code>, <code>INSERT</code>, <code>SELECT</code>, <code>SHOW DATABASES</code>, and <code>UPDATE</code> will do. If needed, specify the <code>GRANT OPTION</code> privilege. Such a security level is intended to be allocated to the accounts that we trust the most and that manage our databases at a day-to-day level. The people in charge of such accounts could include experienced database administrators, database-minded software engineers, or even security engineers dealing with databases – it all depends on the company. However, bear in mind that these accounts can cause as much destruction as they can cause peace of mind.</li><li>The set of privileges concerning the medium level of security could include the <code>CREATE</code>, <code>DELETE</code>, <code>DROP</code>, <code>INSERT</code>, <code>SELECT</code>, <code>UPDATE</code>, and <code>SHOW DATABASES</code> privileges – such a set of privileges would allow people to create databases and tables, delete data, drop databases and tables, and also insert, select, and update data within tables. That’s more than enough for any unsophisticated maintenance operation that pertains to updates, data insertion or deletion, or other operations – however, for the highest level of security, we’d still have to move up a notch.</li><li>The set of privileges with the highest level of security should contain only the privileges that allow users to administrate MySQL at the simplest level possible. 
These might include only CRUD (Create, Read, Update, Delete) privileges: in most cases, simply granting <code>INSERT</code>, <code>SELECT</code>, <code>UPDATE</code>, and <code>DELETE</code> will be enough. In some cases, granting even fewer privileges (e.g. only <code>SELECT</code>) would be feasible (always think what the account is intended for: if the account only runs <code>SELECT</code> queries, only the <code>SELECT</code> privilege will do.)</li></ul> <!-- /wp:list --> <!-- wp:paragraph --> <p>These kinds of privileges will help us when completing all kinds of operations – from basic maintenance and essential functions to preventing high-profile security breaches. Employ a firewall and you should be good to go!</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Before you hit the ground running, though, it may be a good idea to be mindful of the security plugins offered by MySQL.</p> <!-- /wp:paragraph --> <!-- wp:heading --> <h2>Security Plugins</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>As far as security plugins are concerned, they fall in one of the following categories:</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><li>Plugins that help us authenticate with MySQL;</li><li>Plugins that validate passwords and ensure their security;</li><li>Plugins that help keep our connections safe (e.g. protect MySQL from bruteforce attempts by delaying server responses after many failed logins);</li><li>Keyring plugins that enable MySQL to store potentially sensitive information so that it can be accessed at a later date;</li><li>Enterprise plugins – MySQL also offers two enterprise-grade plugins and those usually either enable users to pursue audit-related activities or secure them from attacks targeting their MySQL infrastructure. 
Two such plugins that come to mind include the MySQL Enterprise Audit and MySQL Enterprise Firewall plugins.</li></ul> <!-- /wp:list --> <!-- wp:paragraph --> <p>Authentication plugins let us implement multiple different authentication methods into MySQL; password validation plugins require the passwords of accounts to adhere to certain security policies; connection-control plugins help us fend off bruteforce attacks; keyring plugins can communicate with multiple different cloud providers (AWS, HashiCorp, Oracle Cloud Infrastructure) to store keyring data.&nbsp; Keyring data can also be stored in a local encrypted file (keyrings store sensitive data in a file stored locally or in a cloud for later retrieval.)</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Enterprise plugins, however, come at a steep price (licenses can cost upwards of $5000), but they can be immensely useful for database-minded companies that need to either:</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><li>Monitor and block malicious activity targeting their databases (MySQL Enterprise Firewall)</li><li>Meet regulatory necessities and demands and achieve security compliance as a result (think ISO 27001 and the like) by regularly auditing their databases and securely storing audit files (MySQL Enterprise Audit)</li><li>Provide multiple sophisticated and advanced ways to back up our data and keep it safe from prying eyes (MySQL Enterprise Backup.)</li></ul> <!-- /wp:list --> <!-- wp:heading --> <h2>Securing Data with Data Breach Search Engines</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Some of the advice above will help you secure your MySQL instances, some of it will keep your applications safe from nefarious parties as well; however, your infrastructure won’t be fully secure unless you take care of the security of your organization and employees as well. 
The <a href="https://breachdirectory.com/search" target="_blank" rel="noopener">BreachDirectory data breach search engine</a> and <a href="https://breachdirectory.com/" target="_blank" rel="noopener">BreachDirectory API</a> can keep your organization safe by providing reliable and quick access to leaked data found in data breaches and providing up-to-date information on how to protect yourself, your loved ones, and your employees from the growing threat of data breaches.</p> <!-- /wp:paragraph --> <!-- wp:heading --> <h2>Summary</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>mysql_secure_installation is a decent first step if we want to secure our MySQL infrastructure and the data behind it; however, if we are serious about the security of our systems, we need to employ a couple of key additional measures including taking care of secure access, assigning only those privileges that are necessary, and using security plugins to further the security posture of our databases.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>For those who are even more into security, <a href="https://breachdirectory.com/search" target="_blank" rel="noopener">data breach search engines like the one provided by BreachDirectory</a> will help in protecting their key online assets from identity theft and data breaches – <a href="https://breachdirectory.com/" target="_blank" rel="noopener">the BreachDirectory API</a> will ensure that their organization doesn’t become the next victim of identity theft. We hope that you’ve enjoyed reading this article, come back to <a href="https://breachdirectory.com/blog" target="_blank" rel="noopener">the BreachDirectory blog</a> to learn more about the newest developments in the cybersecurity space, and until next time.</p> <!-- /wp:paragraph -->
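<!-- wp:paragraph --> <p>To make the tiered privilege approach described earlier more concrete, the statements below sketch what each set could look like in practice. These are illustrative placeholders only: the <code>appdb</code> schema, the user names, and the <code>localhost</code> host are assumptions rather than recommendations, the users are assumed to already exist (MySQL 8 no longer creates users via <code>GRANT</code>), and note that <code>SHOW DATABASES</code> is a global privilege, so it has to be granted on <code>*.*</code>:</p> <!-- /wp:paragraph --> <!-- wp:code --> <pre><code>-- Lowest level of security: trusted administrators (placeholder names)
GRANT ALL PRIVILEGES ON appdb.* TO 'admin_user'@'localhost' WITH GRANT OPTION;

-- Medium level of security: day-to-day maintenance
GRANT CREATE, DELETE, DROP, INSERT, SELECT, UPDATE ON appdb.* TO 'maintainer'@'localhost';
GRANT SHOW DATABASES ON *.* TO 'maintainer'@'localhost';

-- Highest level of security: CRUD only (or even just SELECT)
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_user'@'localhost';</code></pre> <!-- /wp:code -->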
breachdirectory
1,253,716
What is the best website for practicing C++ problems?
Fundamentally, C++ is a programming language used for algorithmic problem solving and bit...
0
2022-11-12T04:20:08
https://dev.to/ridhisingla001/what-is-the-best-website-for-practicing-c-problems-21kg
cpp, programming, productivity, beginners
Fundamentally, C++ is a programming language used for algorithmic problem solving and low-level development of operating systems. HTML, by contrast, is a markup language used for the front-end development of a web page. So, for software development, you need to learn all the required algorithms and the corresponding data structures. OOP will also play a key role in your code. Most students just go to YouTube and start learning the language, but trust me, if YouTube were enough there would be far more capable programmers out there. That is one way of learning the language, but it is not enough if you don't know what to do next: you need to find the right resource and then start your learning. Here are my tried and tested suggestions:- **GFG Documentation:** Don't prefer it if you are a beginner; go for an online course instead, because the documentation is fantastic only for people who already have some idea of the basics. • Simply don't fall for cheap resources out there, because there is always a reason why they are cheap. Pick the best one, follow it, and if you really put your mind to it you are going to learn. • Learning to program can be hard for some, even with relatively simple programming languages. C++ is one of the "bread and butter" coding languages, and there are plenty of programming sites that can help you learn C++ for free. Let's explore why you'd want to learn C++ programming and find out where you can learn it online. There are plenty of other simple programming languages for beginners to learn. Why pick C++ specifically? • C++ is a powerful programming language that follows a "trust the programmer" philosophy. This design increases the chance for mistakes to appear during compilation; however, it also gives the developer greater flexibility in how they code. Because of this, most projects use at least a bit of C++ code, or its cousin C. Learning C++ is also valuable in another way - as C++ is very similar to C, you can understand and (for the most part) write code in C too. To be taught C++ rather than poring over websites, try Udemy. Udemy is different from studying from a website, as you'll have an instructor who will help answer your questions and guide you. This is a great choice if you find yourself staring in confusion at walls of code and need someone to walk you through them. • You can browse **Udemy's** list of courses to find the one most suitable for you. To check whether C++ is right for you, we suggest the C++ Tutorial for Complete Beginners course. It's free and will teach you how to program using C++. To try something more in-depth, we also suggest Beginning C++ Programming - From Beginner to Beyond. Over 70,000 people have taken this course, which holds a 4.5/5 rating at the time of writing, and it is taught by someone with 25+ years of C++ experience. It covers everything you need to know about C++, from comments and variables to input/output streams. There's even a section dedicated to setting up C++ and understanding the compiler errors you'll encounter during your coding journey. **Conclusion -** Well, if you are looking for a job in the C++ domain, that's good. You probably already know the basic things: OOP concepts (inheritance, polymorphism, data hiding, overloading, etc.), data structures, and algorithms, which is good. Try to become familiar with the practical use of those things; also get to know the Standard Template Library, namespaces, design patterns, and SOLID principles. As a fresher starting at a company, fundamental knowledge is adequate, and the remaining things you will learn on the job. Being familiar with the Qt framework, a GUI design toolkit based on C++, will give you more options in the industry. Qt, C++, and embedded Linux/Unix are an ideal combination for working in the industry. Thanks and regards.
ridhisingla001
1,253,991
Deploy Promtail as a Sidecar to your Main App.
Hello, in this tutorial the goal is to describe the steps needed to deploy Promtail as a Sidecar...
0
2022-11-12T11:48:27
https://dev.to/tvelmachos/deploy-promtail-as-a-sidecar-to-you-main-app-2fk5
devops, kubernetes, observabillity, grafana
Hello! In this tutorial the goal is to describe the steps needed to deploy Promtail as a sidecar container to your app, in order to ship only the logs you need to a log management system — in our case [Grafana Loki](https://grafana.com/oss/loki/). Before we start, I would like to explain the reasoning behind the two Kubernetes objects we'll use: a [Configmap](https://t-velmachos.notion.site/1df9773802444753973d8c15d4047a61) and an [emptyDir Volume](https://kubernetes.io/docs/concepts/storage/volumes/). The emptyDir Volume creates a shared temporary space between the containers, in which the app logs will reside, and the Configmap stores the configuration Promtail needs in order to know which files to monitor and where to ship the logs — in our case, the Loki URL. So, let's dive in…

Step 1. Create the ConfigMap to store the configuration (promtail.yaml) for Promtail.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-sidecar-config-map
data:
  promtail.yaml: |
    server:
      http_listen_port: 9080
      grpc_listen_port: 0
      log_level: "debug"
    positions:
      filename: /tmp/positions.yaml
    clients: # Specify target
      - url: http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push
    scrape_configs:
      - job_name: "<app-name>"
        static_configs:
          - targets:
              - localhost
            labels:
              app: "storage-service"
              environment: "<environment-name>"
              __path__: /app/logs/*.log # Any file .log in the EmptyDir Volume.
```

Step 2. Make the necessary changes in the Deployment Manifest. 
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <app-service>
  labels:
    app: <app-service>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <app-service>
  template:
    metadata:
      labels:
        app: <app-service>
    spec:
      containers:
        - name: <app-service>
          image: <your-name>/<app-service>
          imagePullPolicy: Always
          ports:
            - containerPort: <app-port>
          readinessProbe:
            exec:
              command: ["<your health-check>"]
            initialDelaySeconds: 5
          livenessProbe:
            exec:
              command: ["<your health-check>"]
            initialDelaySeconds: 10
          env:
            - name: <ENV-VAR-1>
              valueFrom:
                configMapKeyRef:
                  name: <app-service>-config-map
                  key: appName
            - name: <ENV-VAR-2>
              valueFrom:
                secretKeyRef:
                  name: <app-service>-secret
                  key: <secret-key>
          volumeMounts:
            - name: shared-logs # shared space monitored with Promtail
              mountPath: /app/logs
        # Sidecar Container Promtail
        - name: promtail
          image: grafana/promtail:master
          args:
            - "-config.file=/etc/promtail/promtail.yaml" # Found in the ConfigMap
          volumeMounts:
            - name: config
              mountPath: /etc/promtail
            - name: shared-logs # shared space
              mountPath: /app/logs
      imagePullSecrets:
        - name: <registry-secret> # if needed
      volumes:
        - name: config
          configMap:
            name: promtail-sidecar-config-map
        - name: shared-logs # shared space monitored with Promtail
          emptyDir:
            sizeLimit: 500Mi
```

I hope you liked the tutorial — if you did, give a thumbs up! Follow me on [Twitter](https://twitter.com/TVelmachos), and you can also subscribe to my [Newsletter](https://dashboard.mailerlite.com/forms/167581/67759331736553243/share) so you don't miss any of the upcoming tutorials.

#### Media Attribution

I would like to thank [Clark Tibbs](https://unsplash.com/@clarktibbs) for designing the awesome [photo](https://unsplash.com/photos/oqStl2L5oxI) I am using in my posts.
tvelmachos
1,254,159
4 Tips for How to Work with Critical Feedback
What to do if you get negative feedback at work? How to make sure your career stays on track and your...
0
2022-11-12T14:28:54
https://dev.to/dadyasasha/4-tips-for-how-to-work-with-critical-feedback-353c
career, softskills, feedback, management
What should you do if you get negative feedback at work? How can you make sure your career stays on track and your manager is actually happy with you? Getting critical (negative) feedback might be painful, but if you approach it constructively it can be beneficial for you, your manager, and the entire organization. In this video I share my approach to working with bad feedback from your manager. Working with feedback is one of the most important soft skills a software engineer or tester can have. {% embed https://youtu.be/QhqN22nv0qM %}
dadyasasha
1,254,375
Free Tailwind CSS Site Templates For Your Next Project
I love Tailwind CSS. A lot of people love Tailwind CSS. So I made a site to share my Tailwind CSS...
0
2022-11-12T20:20:37
https://dev.to/wes_walke/free-tailwind-css-site-templates-for-your-your-next-project-58h3
tailwindcss, webdev, design
I love Tailwind CSS. A lot of people love Tailwind CSS. So I made a site to share the Tailwind CSS site templates I've made over the years. Use them on your next project, to learn, or to show off to your friends and pretend you made them. Doesn't matter to me. Enjoy :) https://tailwindsites.com/ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7cb34pr526rr1850s7ra.png)
wes_walke
1,254,875
【Laravel】How to change redirect URL in authentication
When you are making app with Laravel and using automatically generated files made by "php artisan...
0
2022-11-13T07:00:16
https://dev.to/aquacat/laravel-how-to-change-redirect-url-in-authentication-236n
laravel
When you are making an app with Laravel and using the files automatically generated by `php artisan make:auth`, you might want to know how to change their redirect URL — app>Http>Controllers>Auth>LoginController.php, for example. There is a line of code as below.

```
protected $redirectTo = '/home';
```

When the user has logged in correctly, the page is redirected to '**/home**'. Change '/home' to whatever path you want to redirect to.
aquacat
1,255,318
How to get current date in JavaScript
There are 2 ways you can get current date in JavaScript. You can just create a new Date object...
11,096
2022-11-15T13:48:00
https://dev.to/coderslang/how-to-get-current-date-in-javascript-57je
codenewbie, javascript, webdev, beginners
There are 2 ways you can get the current date in JavaScript. You can just create a new `Date` object without any arguments, or you can use the function `Date.now`. <!--more--> So both `new Date()` and `Date.now()` can be used to get the current date in JS. Let's log the results to the console.

```js
console.log(new Date());
console.log(Date.now());
```

In the first case, we'll see a date + time in the UTC timezone, and with `Date.now` we'll get the number of milliseconds passed since Jan 1, 1970.

```text
2022-01-13T15:19:32.557Z
1642087172563
```

Both results represent the current date in JavaScript and can be easily compared. A date object can be converted to milliseconds since Jan 1, 1970 by using the `getTime` method.

```js
console.log(new Date().getTime()); // 1642087361849
```

And you can convert milliseconds returned by `Date.now()` to a Date object, although it's pretty redundant. If you just need the current date, it's better to call `new Date()` without arguments.

```js
console.log(new Date(Date.now())); // 2022-01-13T15:24:55.969Z
```
coderslang
1,257,381
JavaScript Events
JavaScript Events: An event is a signal that something happened. All DOM nodes generate...
0
2022-11-15T08:15:18
https://dev.to/sadiqshah786/javascript-events-22hn
javascript, beginners, programming, webdev
## JavaScript Events: _An event is a signal that something happened._ All DOM nodes generate such signals. # Types of Events There are many events in JavaScript, but this article discusses only some of them. ### - Mouse Events 1. click: fired when the mouse clicks on an element (touchscreen devices generate it on a tap). 2. contextmenu: fired when the mouse right-clicks on an element. 3. mouseover: fired when the mouse moves over an element. 4. mouseout: fired when the mouse moves out of an element. 5. mousedown: fired when a mouse button is pressed over an element. 6. mouseup: fired when a mouse button is released over an element. ### - Keyboard Events 1. keydown and keyup – fired when a keyboard key is pressed and released. ### - Form Element Events 1. submit: fired when the visitor submits a `<form>` 2. focus: fired when the visitor focuses on an element, e.g. on an `<input>` ### - Document Events: 1. DOMContentLoaded: fired when the HTML is loaded and processed, and the DOM is fully built. ### - CSS events: 1. transitionend: fired when a CSS animation finishes.
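All of these events are handled the same way, with `addEventListener`. A minimal sketch of the pattern — in a browser you would call it on a DOM element (e.g. `document.querySelector('button')`); here a bare `EventTarget` is used so the snippet is self-contained:

```javascript
// EventTarget is the base interface DOM elements implement
// (available in browsers, and globally in Node.js 15+).
const button = new EventTarget();

let clicks = 0;
button.addEventListener('click', () => {
  clicks += 1; // runs every time a 'click' event reaches the target
});

// In a browser the user clicks; here we dispatch the event manually.
button.dispatchEvent(new Event('click'));
console.log(clicks); // 1
```

The same shape works for every event type listed above — only the event name string changes.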
sadiqshah786
1,255,611
Tired of doing all the initial setup that a fullstack typescript project requires?
Start your next project with the dna architecture, and be good to go with the best open-source tools...
0
2022-11-13T18:41:28
https://dev.to/cesarsalesgomes/tired-of-doing-all-the-initial-setup-that-a-fullstack-typescript-project-requires-2b5p
react, directus, nest, typesafety
Start your next project with the **dna** architecture, and be good to go with the best open-source tools of each layer, ensuring a typesafe environment from the start: Repo: https://github.com/cesarsalesgomes/dna Backend: Nestjs / Directus Frontend: React / Tailwind
cesarsalesgomes
1,256,355
Where to implement Password Encryption in Node.js
Security is one of the most essential features of any application. We mainly implement this through...
0
2022-11-14T09:03:51
https://dev.to/jane49cloud/where-to-impliment-password-encryption-in-nodejs-4e7k
javascript, node
Security is one of the most essential features of any application. We mainly implement it by hashing passwords. Today I will focus on implementing this in a MERN app. The basic structure of a neat Node application has the following basic folders and files ###### connection (database) ###### models (defines properties of models) ###### controllers (handlers of CRUD operations) ###### routes (defines paths || URL mapping) #### Encryption in controllers Assuming you are already connected to the database and have created a users model, install bcrypt `npm i bcryptjs` ####### backend/controllers/users.js

```javascript
const User = require('../models/user') // path to your user model
const bcrypt = require("bcryptjs")

const registerUser = async (req, res) => {
  const { name, email, password } = req.body

  // first check if the user exists
  const userExists = await User.findOne({ email });
  if (userExists) {
    console.log("Email address already exists")
    return res.status(400).json({ message: "Email address already exists" });
  }

  // Encrypt password before you save
  const salt = await bcrypt.genSalt(10);
  const hashedPassword = await bcrypt.hash(password, salt);

  // create user
  try {
    const user = await User.create({ name, password: hashedPassword, email });
    res.json(user);
    console.log(user, "User created successfully...")
  } catch (error) {
    console.log(error, `User not created...`)
  }
};

module.exports = {
  registerUser,
};
```

import the registerUser controller, route your user register path, and test your application. I used Postman to register a user and got the following JSON response:

```json
{
    "name": "Luke Graham",
    "email": "graham@gmail.com",
    "password": "$2a$10$rQrhLmwoEGaCMq9qR9cLaOvVBtA/FvoSSmt1tGj0gq7tFyscDS5jK",
    "photo": "https://i.ibb.co/4pDNDk1/avatar.png",
    "phone": "+245",
    "bio": "bio should be at most 250 characters",
    "_id": "6371eb2668c624ddb1bd72d9",
    "createdAt": "2022-11-14T07:15:50.078Z",
    "updatedAt": "2022-11-14T07:15:50.078Z",
    "__v": 0
}
```

The password was hashed successfully. 
Do not give a `maxLength` to the password in the model, because the hashed password is a long string. Validate the length in the frontend instead. The above method is valid. However, there are more controllers that require the password: **registerUser, loginUser, updatePassword, forgotPassword**. For each of these handlers you would need to configure encryption, so it is easier to perform the encryption in the model file. Use a `schema.pre()` hook that hashes the password before saving. This is how it is implemented: ####### backend/models/users.js

```javascript
const mongoose = require("mongoose");
const bcrypt = require("bcryptjs");

const userSchema = new mongoose.Schema({
    name: { type: String, required: true },
    email: {
      type: String,
      required: true,
      unique: true,
      trim: true,
      match: [
        /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/,
        "Please enter a valid email address",
      ],
    },
    password: {
      type: String,
      required: true,
      minLength: [6, "password must be at least 6 characters"],
    },
    photo: { type: String, required: true, default: "https://i.ibb.co/4pDNDk1/avatar.png" }, // person avatar
    phone: { type: String, default: "+245" },
    bio: {
      type: String,
      maxLength: [250, "bio must be at most 250 characters"],
      default: "bio should be at most 250 characters"
    }
  },
  {
    timestamps: true
  }
);

// Encrypt password before saving to database
userSchema.pre("save", async function (next) {
  if (!this.isModified("password")) {
    return next()
  }
  const salt = await bcrypt.genSalt(10);
  const hashedPassword = await bcrypt.hash(this.password, salt); // this.password points to the password field on this document
  this.password = hashedPassword
  next()
})

module.exports = mongoose.model("User", userSchema);
```

Just a few adjustments in the model, and the password is hashed before the document is saved. 
There are a few changes in the controller file ####### backend/controllers/users.js

```javascript
const User = require('../models/user') // path to your user model

const registerUser = async (req, res) => {
  const { name, email, password } = req.body

  // first check if the user exists
  const userExists = await User.findOne({ email });
  if (userExists) {
    console.log("Email address already exists")
    return res.status(400).json({ message: "Email address already exists" });
  }

  // create user — the pre("save") hook hashes the password for us
  try {
    const user = await User.create({ name, password, email });
    res.json(user);
    console.log(user, "User created successfully...")
  } catch (error) {
    console.log(error, `User not created...`)
  }
};

module.exports = {
  registerUser,
};
```

route your application and test. My response:

```json
{
    "name": "Mercy",
    "email": "mercy@gmail.com",
    "password": "$2a$10$iqzkaO1wTco.3z1KK8Ij9u9sy2DtLViRwL5lvgeHDcQ31wPfCo9jK",
    "photo": "https://i.ibb.co/4pDNDk1/avatar.png",
    "phone": "+245",
    "bio": "bio should be at most 250 characters",
    "_id": "6371fe23225d25b4e3e8220c",
    "createdAt": "2022-11-14T08:36:51.311Z",
    "updatedAt": "2022-11-14T08:36:51.311Z",
    "__v": 0
}
```

The password was successfully hashed. I hope this helps you understand encryption and how to use it better. Was this article helpful? Need clarification? Leave a comment
jane49cloud
1,256,503
Hello
My name is Graciela, I am a beginner and I want to learn.
0
2022-11-14T15:17:52
https://dev.to/gracesl/hola-266j
My name is Graciela, I am a beginner and I want to learn.
gracesl
1,256,534
Maximum call stack size exceeded
I faced this problem with NextJs recently and the solution was quite simple. I had a component named...
0
2022-11-14T16:48:13
https://dev.to/theindianappguy/maximum-call-stack-size-exceeded-5f2n
I faced this problem with NextJs recently, and the solution was quite simple. I had a component named PeopleInfo and a page with the same name, so when I tried importing the component into the page, this error happened.
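The error itself comes from unbounded recursion — with a name collision like this, the import presumably resolves to the page itself, so the page keeps rendering itself until the engine's stack limit is hit. A minimal sketch of the mechanism (the name is just illustrative):

```javascript
// A function that unconditionally calls itself — the same shape as a
// component that accidentally renders itself via a self-import.
function PeopleInfo() {
  return PeopleInfo();
}

let message = '';
try {
  PeopleInfo();
} catch (err) {
  // In V8 (Node, Chrome): "Maximum call stack size exceeded"
  message = err.message;
}
console.log(message);
```

So when you see this error in a React/Next project, checking your import paths for a file importing itself (directly or via a same-named sibling) is a good first step.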
theindianappguy
1,256,770
Why is this here, and how? Thoughts on Pull Request templates
Well, every developer will sooner or later run into the following situations: coming back from...
0
2022-11-15T03:37:56
https://dev.to/wesleynepo/porque-isso-ta-aqui-e-como-pensamentos-sobre-modelos-de-pull-request-3enf
Well, every developer will sooner or later run into the following situations: - Coming back from vacation and having to catch up on the projects; - Handling an incident right after a release; - Reviewing a new change without being able to talk to the author; All of these situations have one thing in common: they are solved with good attached documentation. But how and where do you do that without creating a blocking, bureaucratic process? ## From developer to developer... Documentation is closely related to DX [(Developer Experience)](https://developerexperience.io/practices/good-developer-experience), and since developers are the ones producing the code, keeping documentation alive is how they keep communication aligned within the team. But... I wrote the code, and whoever reviews it will read it — should I really document more? The answer is yes. Whoever reviews the PR, or has to revisit it, won't always have the full context of the change: which needs it addressed, or how to reproduce and test it. That's why documenting is essential — after all, nobody wants to be paged in the middle of the night, or to wait a whole day to clear up a question about why something was done a certain way, just because the PRs lacked context. And let's be honest, our garbage collector is very good: it wipes our memory beautifully. ## PR TEMPLATES PR templates thus emerge as a way to keep documentation alive and communication flowing within the team — guaranteeing agreed-upon minimum requirements, recording the decisions made and the expected results, and so making changes easier to understand and trace. 
**Benefits** - Reduces the need for synchronous communication - Makes the changes and their reasons explicit - Makes testing and reviewing easier - Creates a development standard **How to adopt it?** Always start simple and evolve the template together with the team as adoption happens. There's no point in having a complete template covering every possible scenario if the team couldn't adapt to using it and fit it to their reality. **Ideally, start with:** - A brief summary of the change - Step-by-step instructions on how to test it - Important/critical points of the implementation/decision - A reference to the ticket/card in your project management tool ![Pull request template showing fields and examples](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqsopt215csbrk73rq4z.png) Then, during retrospectives, check with the team whether there are suggestions for new sections, checklists, or other team-specific needs. This can evolve into a scenario with two or more distinct templates — for example, one with the particulars of a BUG PR and another for a FEAT PR — but nothing stops you from starting with just the summary and growing from there; that's up to the team. Well, that's my take on why PR templates matter inside a project. Do you already use templates like these? How does it work for you?
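Those starting points can be captured in a minimal template file. On GitHub the conventional path is `.github/PULL_REQUEST_TEMPLATE.md` (a sketch — adapt the sections to your team and platform):

```markdown
## Summary
<!-- What does this change do, and why? -->

## How to test
<!-- Step by step: environment, commands, expected result -->

## Important points
<!-- Critical decisions, trade-offs, anything reviewers should look at closely -->

## Ticket
<!-- Link to the card/ticket in your project management tool -->
```

Once this file is in the repository, the platform pre-fills every new PR description with it, so the minimum context is requested automatically instead of by reminder.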
wesleynepo
1,257,441
Hey, check this amazing tool!
Looking for a tool that will help you with everything in one place? Well, the solution is...
0
2022-11-15T10:41:32
https://dev.to/random_toolkit/hey-check-this-amazing-tool-5nk
socialmedia, webdev, devops, randomtools
Looking for a tool that will help you with everything in one place? Well, the solution is here. Random tool (https://randomtools.io/reddit-comment-search/) is software that makes your daily life easier. It comes with developer tools — an SQL formatter, CSS beautifier, URL encoder and decoder, and a Lorem Ipsum generator — that can help you with website development. In addition, you'll find tools like basic image correction, grayscale image, AutoGamma correction, and a lot more to make editing easier. You'll also find some cool social media tools and math tools to help you with more challenging mathematical problems. Random tools also help you with website analytics.
random_toolkit
1,257,866
Introduction to Data Analysis
What is Data analysis? Data analysis is a process of inspecting, cleansing, transforming, and...
0
2022-11-15T14:51:08
https://dev.to/manawariqbal/introduction-to-data-analysis-3alp
machinelearning, datascience, beginners, tutorial
**What is Data analysis?** Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. **Tools used in Data Analysis:** - Auto-managed closed tools: Qwiklabs, Tableau, Looker, Zoho Analytics - Programming languages used: Python, R, Julia **Why Python for Data Analysis?** - Very simple and intuitive to learn - Correct language - Powerful libraries - Free and open source - Amazing community, docs, and conferences **When to choose the R language?** - When R Studio is needed - When dealing with advanced statistical methods - When extreme performance is needed **The Data analysis Process:** 1. Data extraction: SQL, scraping, file formats (CSV, JSON, XML), consulting APIs, buying data, distributed databases 2. Data cleaning: missing values and empty data, data imputation, incorrect types, incorrect or invalid values, outliers and non-relevant data, statistical sanitization 3. Data wrangling: hierarchical data, handling categorical data, reshaping and transforming structures, indexing data for quick access, merging, combining and joining data 4. Analysis: exploration, building statistical models, visualization and representations, correlation vs. causation analysis, hypothesis testing, statistical analysis, reporting 5. Actions: building machine learning models, feature engineering, moving ML into production, building ETL pipelines, live dashboards and reporting, decision making and real-life tests **The Python ecosystem** — the libraries we can use: 
- pandas: the cornerstone of our data analysis job with Python - matplotlib: the foundational library for visualizations; other libraries we'll use are built on top of matplotlib - numpy: the numeric library that serves as the foundation of all calculations in Python - seaborn: a statistical visualization tool built on top of matplotlib - statsmodels: a library with many advanced statistical functions - scipy: advanced scientific computing, including functions for optimization, linear algebra, image processing and much more - scikit-learn: the most popular machine learning library for Python (not deep learning)
manawariqbal
1,257,909
All about Kafka
This post gives a brief overview of Kafka and is suitable for a beginner audience. Moving ahead I...
0
2022-11-15T16:10:24
https://dev.to/poojave/all-about-kafka-part-1-18im
kafka
This post gives a brief overview of Kafka and is suitable for a beginner audience. Moving ahead, I will be sharing more knowledge on top of it. ## What should you expect from this post? Background - Why is Kafka needed? What it takes to understand Kafka. Downsides of using Kafka. How Kafka works. Best practices. How can you become more familiar with Kafka? ## 1. Background - Why is Kafka needed? "Realtime", "Ordering", "Persistence", "Scalability", and "Distributed" are the core requirements behind Kafka's use cases. A few examples of such use cases: - Financial transactions in stock exchanges - Building logging/analytics systems - Chat applications - [Flash sales](https://www.youtube.com/watch?v=DdHh7CNFVpI) ## 2. What it takes to understand Kafka? Kafka was first developed at LinkedIn and later donated to Apache as an open source project. How did we operate before Kafka? We somehow managed with messaging queues alone. How are messaging queues different from Kafka? - No flexibility in the data retention period or persistence. - With Kafka, clients can read messages at their own convenience, unlike with a queue. This interface helps to understand Kafka. [![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vx17qhr5i01ozq7d3eg7.png)](https://softwaremill.com/kafka-visualisation/) [Basic demo](https://drive.google.com/file/d/1i5LMwzdCgbJr7pJpWVsvRyWn-mJ2Q3gM/view?usp=share_link) [Data on local set up](https://drive.google.com/file/d/10NcRPgymcHCPOJeD7r9tcxOweerHnBgq/view?usp=share_link) Use [this](https://dev.to/subhransu/realtime-chat-app-using-kafka-springboot-reactjs-and-websockets-lc) as a reference. ## 3. Downsides of using Kafka - Difficult to manage in production. - Difficult to manage while migrating from on-premise to cloud, or from one cloud provider to another. - You need experts, like a Kafka developer or, in some cases, a database developer. Hence, choose wisely — use Kafka only when you see a specific need for it and the proper scale. ## 4. How does Kafka work? Kafka comes in two flavors: 1. 
[FREE] Self-managed - Apache Kafka 2. [PAID] Fully managed - [Confluent Kafka](https://www.confluent.io/) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b64kqq5bvmd2bpqignnj.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/75q2w8r3vrybqnnz4176.png) Here Consumer Group 1 has two consumers, A and B, and the partitions are distributed amongst them. Consumer Group 2 has only one consumer, which subscribes to all the partitions. In order to make Kafka highly available, partitions are distributed across brokers and replicated copies of the data are maintained. This brings in the leader and follower concept, which essentially means all updates are first received by the leader server and then by the rest of the followers. If the leader server goes down, a new leader is elected from among the follower brokers. Please note that this is one of the responsibilities of ZooKeeper. [Kafka Simulation](https://drive.google.com/file/d/1A5GThgChu78dKem_wIz09M8EYBNESTq3/view?usp=share_link) ## 5. Best practices 1. Replication factor 2. Partition count ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x6fman0m7mtsjmgqkja4.png) A higher replication factor and a huge partition count will demand more CPU and memory. Be conscious in choosing them. 3. Retention period ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/twhoia9teh4xto2qyv5a.png) The retention period can vary from minutes to hours to days, and in some cases it can even be infinite — but that costs extra disk on the brokers. 4. Clean-up policy ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c76pcmdb8e7s0gi2sbnm.png) Disable things like automatic topic creation. You can also add policies like automatic topic deletion if a topic has seen no data in the last 30 days. 5. 
Compression ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f127gwyxzlo4smjdfw8z.png) Compression and decompression cost extra CPU cycles at both the consumer and producer end. ## 6. KIP-500 (Kafka Improvement Proposal) 1. Still on the road to completion. 2. Available as of Kafka version 2.8.0. 3. One of the servers acts as the metadata management house, the role ZooKeeper used to play. 4. Removes the overhead of maintaining the same deployment in two places: the cluster and ZooKeeper. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ssnj4evzcpbuiqa5hno.png) ## 7. How can you become more familiar with Kafka? Read about Kafka via the [official docs](https://kafka.apache.org/documentation/#gettingStarted) only. You can contribute to issue solving via [this](https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-14358?filter=allopenissues) Jira dashboard. Keep yourself up to date by following the [Kafka summit](https://www.kafka-summit.org/), [videos](https://kafka.apache.org/videos) and the awesome [community](https://www.confluent.io/community/ask-the-community/). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqr5steippzk8z3jvjb1.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nlx8tsuk71vn0aowa27b.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hi82dse0firgdumk95ic.png) Use [Kadeck](https://www.kadeck.com/download) for free to maintain your data/topics and visualise them in a GUI.
poojave
1,257,938
DynamoDB 101
This post is written considering you're a fresher to DynamoDB and it only explains the basic concepts...
0
2022-11-15T17:04:03
https://dev.to/ckmonish2000/intro-to-dynamodb-with-node-keb
aws, serverless, node, database
This post is written assuming you're new to DynamoDB, and it only explains the basic concepts required to perform CRUD operations. The language used in this post is Node.js, but the underlying concepts are the same no matter which language you choose. ## What is DynamoDB? - DynamoDB is a fully managed NoSQL database-as-a-service provided by AWS. - DynamoDB is super fast — you get responses in single-digit milliseconds. - It's infinitely scalable on demand. - The data values in DynamoDB are stored on SSDs, which makes it fast, reliable, and extremely secure. ## How is the Data Stored? - The items in DynamoDB are stored in tables. - Each item in a table contains 2 things: a key (which is again of 2 types) and a value, which is in document format. - A table can store an unlimited number of items, but each item must not exceed 400 KB in size. - Your tables have 2 types of keys: a primary key, which is mandatory (think of it like the primary key in SQL), and a sort key, which is optional (think of it as the foreign key in SQL). - If you have a primary key alone, then each value must be unique; but if you're pairing it with a sort key, the combination of your primary and sort key must be unique (this helps you filter values efficiently). ## Setup DynamoDB Please follow the [previous post](https://dev.to/ckmonish2000/lambda-function-with-dynamodb-node-36i6) to set up an IAM user with the required permissions. - Search for DynamoDB in your AWS console and click on create table. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5c3q8imkw8heu6x79sap.png) - To follow along, create a table named Todo with a partition key called ID, and hit create table ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5l3xgk45vfz3b9sns0b2.png) ## Server Setup Now let's set up our Node environment: - first create a package.json by running `yarn init -y` - add `"type": "module"` in package.json ![module support npm](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k8w664466emr4nosr7xs.png) - now install the dependencies ``` yarn add express dotenv aws-sdk nodemon ``` - now add the scripts section in package.json ``` "scripts": { "start": "nodemon index" } ``` - create 3 files in your project directory: index.js, Database.js and .env. - In .env, add your IAM user secret, access key, and your AWS account region. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/37000uj9tvbodaig84wr.png) - Now set up your express server ![express.js server setup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w7l91lx4kfo92n75a3y.png) - Go into the Database.js file and let's set up our DynamoDB client, which allows us to interact with our tables. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5a97t6u1htb6geopreoj.png) The `aws.config.update` call authorizes the aws-sdk to access your tables in DynamoDB. The line `const client = new aws.DynamoDB.DocumentClient();` initializes the DynamoDB client. Finally, the constant Table_Names defines the name of the DynamoDB table we just created. - now create a variable called DB and export it. ``` let DB = {}; export default DB; ``` ## CRUD Operations Before we start coding, let's understand a few methods provided by the DynamoDB client. - `put`: this method is used to create a new entry in the DB or edit an existing entry. - `scan`: this method is used to get all the entries of a given table. 
- `get`: this method is used to get an element by its partition key, or by a combination of partition and sort key. - `delete`: this method deletes a given element. All the above methods use a callback; in order to convert them into promises we chain a method called `promise`. All the above methods accept an object as an argument in which we have to pass the TableName attribute. ``` const params = { TableName: Table_Names, } ``` ## <u>Add item to table</u> ``` addItem: async (item) => { const params = { TableName: Table_Names, Item: item, } try { return await client.put(params).promise() } catch (err) { return (err) } }, ``` - In the above code the item parameter can be any JSON object of your choice. ## <u>Get all items from the table</u> ``` getAllItems: async () => { const params = { TableName: Table_Names, } try { return await client.scan(params).promise() } catch (err) { return (err) } } ``` ## <u>Get item by id</u> ``` getMemberByID: async (id) => { const params = { TableName: Table_Names, Key: { ID: id, } } try { return await client.get(params).promise() } catch (err) { return (err) } }, ``` - In the above function the parameter id is a string, which needs to be the ID of the item you want to fetch. - We use the Key attribute to match the partition key ID with the parameter id (if you have a sort key, that also goes here). 
## <u>Delete item by id</u>

```
deleteItem: async (id) => {
    const params = {
        TableName: Table_Names,
        Key: {
            ID: id
        }
    }
    try {
        return await client.delete(params).promise()
    } catch (err) {
        return (err)
    }
}
```

Finally, your Database.js should look like this:

```
import aws from "aws-sdk";

aws.config.update({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    region: process.env.AWS_REGION
})

const client = new aws.DynamoDB.DocumentClient();

const Table_Names = "Todo"

const DB = {
    addItem: async (item) => {
        const params = {
            TableName: Table_Names,
            Item: item,
        }
        try {
            return await client.put(params).promise()
        } catch (err) {
            return (err)
        }
    },
    getMemberByID: async (id) => {
        const params = {
            TableName: Table_Names,
            Key: {
                ID: id,
            }
        }
        try {
            return await client.get(params).promise()
        } catch (err) {
            return (err)
        }
    },
    deleteItem: async (id) => {
        const params = {
            TableName: Table_Names,
            Key: {
                ID: id
            }
        }
        try {
            return await client.delete(params).promise()
        } catch (err) {
            return (err)
        }
    },
    getAllItems: async () => {
        const params = {
            TableName: Table_Names,
        }
        try {
            return await client.scan(params).promise()
        } catch (err) {
            return (err)
        }
    }
}

export default DB;
```

## Create routes

Go to your index.js file and create routes as shown below:

```
import express from "express";
import env from "dotenv";
import DB from "./Database.js"

env.config();

const app = express();
app.use(express.json())

app.get("/", async (req, res) => {
    const data = await DB.getAllItems();
    res.json(data)
})

app.post("/", async (req, res) => {
    const data = await DB.addItem(req.body)
    res.json(data)
})

app.get("/:id", async (req, res) => {
    const data = await DB.getMemberByID(req.params.id)
    res.json(data)
})

app.delete("/:id", async (req, res) => {
    const data = await DB.deleteItem(req.params.id)
    res.json(data)
})

app.listen(3000, () => {
    console.log("listening on 3000")
})
```

Now you can test the APIs using Postman, and they should work fine.
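A side note on the `.promise()` chaining used throughout Database.js: the aws-sdk (v2) methods are callback-based, and `.promise()` wraps them so they can be awaited. Here is a minimal sketch of that wrapping idea in plain Node — the `fakePut` function is made up for illustration, so no AWS credentials are required:

```javascript
// A callback-style function, similar in shape to the DocumentClient methods.
function fakePut(params, callback) {
  // Pretend to store the item and report success asynchronously.
  setImmediate(() => callback(null, { stored: params.Item }));
}

// Wrapping it in a Promise so it can be awaited,
// which is essentially what `.promise()` does for us.
function putAsPromise(params) {
  return new Promise((resolve, reject) => {
    fakePut(params, (err, data) => (err ? reject(err) : resolve(data)));
  });
}

async function main() {
  const result = await putAsPromise({ TableName: "Todo", Item: { ID: "1" } });
  console.log(result.stored.ID); // logs "1"
}

main();
```

This is why every DB helper above can simply `return await client.put(params).promise()` inside a try/catch instead of nesting callbacks.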
I will try to cover more topics related to DynamoDB while building projects with Lambda and DynamoDB. Thanks for your time.
ckmonish2000
1,258,276
Today is the beginning of my coding journey 🙌
Today marks the first day of my coding bootcamp and to be honest I'm so excited but at the same time...
0
2022-11-15T20:12:18
https://dev.to/paschalcodes/today-is-the-beginning-of-my-coding-journey-3c6o
beginners, webdev, javascript, programming
Today marks the first day of my coding bootcamp and to be honest I'm so excited but at the same time I'm **scared** of the unknown and what awaits me in the deep dark internet world of learning how to code 😭 lol So if there are any tips or advice you guys can give to someone coming into this field from a complete beginner's perspective, then all would be appreciated!! **_Thanks and happy coding_**
paschalcodes
1,258,526
What editor theme do you use ? 🧑‍🎨🎨
I use Bearded Theme along with Bearded Icons . What is your theme of choice ?
0
2022-11-16T00:34:06
https://dev.to/fadhilsaheer/what-editor-theme-do-you-use--10h3
vscode, javascript, programming, productivity
I use [Bearded Theme](https://marketplace.visualstudio.com/items?itemName=BeardedBear.beardedtheme) along with [Bearded Icons](https://marketplace.visualstudio.com/items?itemName=BeardedBear.beardedicons). What is your theme of choice?
fadhilsaheer
1,258,595
Data Indexing, Replication, and Sharding: Basic Concepts
A database is a collection of information that is structured for easy access. It mainly runs in a...
20,359
2022-11-16T03:24:29
https://pragyasapkota.medium.com/data-indexing-replication-and-sharding-basic-concepts-7376db7f245a
database, indexing, replication, sharding
A database is a collection of information that is structured for easy access. It mainly runs in a computer system and is controlled by a database management system (DBMS). Let's see some concepts of the database here — Indexing, Replication, and Sharding respectively. ## Indexing The database can have a large amount of data with up to millions of records. In the time of need, disorganized data with no index is very hard to retrieve, and the whole database would have to be iterated through one record at a time. And if it's old data, then that would be an absolute nightmare. The solution to this complication is an INDEX. A database index is a kind of data structure that helps with fast retrieval of the information held in the database. We use indexes to look up data, and they are assigned at the time the information is stored. When the data is too large to search iteratively, we use database indexing. This is a core necessity for a [relational database](https://dev.to/pragyasapkota/relational-database-43l4) and is offered on [non-relational databases](https://dev.to/pragyasapkota/non-relational-database-387p) as well. Lookup time is very well optimized when the data is indexed. ## Replication Replication means making copies of data. In databases, the term replication comes up when we learn about scaling. We can duplicate our database so that if the main database gets overloaded and crashes at some point, a duplicated database handles the load, and we can avoid system failure. This creates redundancy in the system, which maintains high availability. ![Replication](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3lrb1a69d9zlrhd1pqcv.jpeg) We can have data replication done both synchronously and asynchronously. When the synchronous way is chosen, the replicated database updates in sync with the changes in the main database.
You can allocate a time interval within which your main database and the replica database are synchronized and updated. One other thing to ensure is that if the write operation to the replica fails somehow, the write operation to the main database also fails. This falls under the Atomicity feature we discussed earlier in the article — [Relational Database](https://dev.to/pragyasapkota/relational-database-43l4). However, a problem can occur with replication when the data is too large, and the only concern is to make the system more available, not to improve [latency and throughput](https://dev.to/pragyasapkota/latency-and-throughput-340h). And thus, we chunk the data down, which leads us to Sharding. ## Sharding Data sharding means breaking a huge database into smaller databases so that latency and throughput are maintained alongside database replication. You can choose how you want your data to be broken up. There are two ways to shard your data — horizontal and vertical sharding. In horizontal sharding, the rows of the same table are stored in multiple database nodes, whereas in vertical sharding, different tables and columns are stored in separate databases. ![Sharding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4575qgaylnx1ebi3drk3.jpeg) {% embed https://github.com/aygarp-modsiw/System-Design-Concepts %} **_I hope this article was helpful to you._** **_Please don’t forget to follow me!!!_** **_Any kind of feedback or comment is welcome!!!_** **_Thank you for your time and support!!!!_** **_Keep Reading!! Keep Learning!!!_**
pragyasapkota
1,258,611
memo, 2022-04-07
Eclipse window ⇨ other ⇨ search ⇨ containing text: what you want to search for + working set: search scope ctrl + D:...
0
2022-11-16T04:25:27
https://dev.to/sunj/memo-2022-04-07-1kkk
java, eclipse
Eclipse - Window ⇨ Other ⇨ Search ⇨ Containing text: the text you want to search for + Working set: the search scope - Ctrl + D: delete the current line - Revert: restore to the original state - For conflicts & errors ⇨ revert or overwrite
sunj
1,258,951
what is array map() in JavaScript
map() method creates a new array with the results of calling a function for every array element. A...
0
2022-11-16T09:18:35
https://dev.to/wizdomtek/what-is-array-map-in-javascript-39ne
javascript, webdev, beginners, programming
The **map()** method creates a new array with the results of calling a function for every array element. A function that is executed for every element in the array is passed into **map()**, and it receives these parameters:
* current element
* the index of the current element
* array that map() is being called on

An example of using map() would be to square every element in an array

```javascript
let numbers = [2, 4, 6, 8, 10];

// function to return the square of a number
function square(number) {
  return number * number;
}

// apply square() function to each item of the numbers list
let square_numbers = numbers.map(square);

console.log(square_numbers);

// Output: [ 4, 16, 36, 64, 100 ]
```

Another example is converting an array of data into JSX.

```javascript
const food = [
  { id: 0, name: 'orange', color: 'orange' },
  { id: 1, name: 'banana', color: 'yellow' }
];

food.map(item => {
  return (
    <div key={item.id}>
      <div>
        {item.name} - {item.color}
      </div>
    </div>
  );
});
```

## Summary of map()

Use **map()** when something needs to be done on every element in an array.
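To round this out, here is a small sketch showing the second callback parameter (the index) in action, since the examples above only use the current element:

```javascript
const letters = ["a", "b", "c"];

// The callback's second parameter is the element's index in the array.
const labeled = letters.map((letter, index) => `${index}: ${letter}`);

console.log(labeled); // Output: [ '0: a', '1: b', '2: c' ]
```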
wizdomtek
1,258,982
The Perks of Combining Angular With ASP.Net Core
It's excellent to be aware of the unique advantages of these great techs in their respective fields,...
0
2022-11-16T10:17:33
https://dev.to/rachgrey/the-perks-of-combining-angular-with-aspnet-core-332n
angular, aspnet, webdev, programming
It's excellent to be aware of the unique advantages of these great techs in their respective fields, Angular for the front end and ASP.NET Core for the back end, but don't you want your business apps to shine? This blog will discuss how merging Angular with ASP.NET Core can give you the best of both worlds. You may benefit most from choosing the Angular with ASP.NET Core combination for the back end of your web app. ## Benefits of Angular with ASP.NET Core Combination ### Independent Codebase Your Angular code is free and independent of the .NET code while developing Angular apps with ASP.NET Core as the backend. It can be hosted in a different repository if you incorporate it later. Since you have control over the server side, this capability is also helpful if you create a mobile version in the future. ### Affordable rollout and cost-efficiency Despite Windows having a sizable market share on desktop computers, Linux remains the undisputed leader in the enterprise server and cloud sectors. .NET Core is accessible and functional practically everywhere, as you may already be aware. Running .NET Core web apps on Linux computers is advised while using open-source Angular. ### Speedy Development The Angular and .NET Core combination is a terrific choice if you want to quickly build a successful .NET Core web app. The abundance of free and paid libraries offered by both frameworks can significantly speed up the development process. You should remember that Angular was developed using TypeScript, a statically typed language similar to C#. Because of this similarity, there are fewer mistakes, and you can use the same classes with only slight modifications. ### Plasticity of UI Stack Additionally, you can use Angular in conjunction with the native ASP.NET MVC UI, thanks to a few additions from different libraries and the adaptability of the most recent ASP.NET Razor version.
This dramatically shifts functionality from the client to the server if you prefer back-end programming to front-end development. When paired with server-side rendering, it offers the chance to attain the best possible performance for the first page load. ## .NET Core for Backend: Why Is It a Saviour? In terms of statistics, according to the Stack Overflow annual survey, 77.2% of respondents named .NET Core as their preferred non-web framework. Additionally, 3,700 companies and more than 60,000 developers have contributed to the .NET Core community. Without a doubt, .NET Core has a promising future. Even future investments in .NET are expected to be concentrated on .NET Core. Overall, the launch of the open-source, cross-platform .NET Core has fundamentally changed Microsoft's approach to application development. ### Benefits - Simple and Reliable Maintenance - High Performance - Cross-platform - Open-source ## Angular JS Frontend Will Be the Key to Your Success The Angular framework was created to make it simpler to design user interfaces and overcome the limitations of other technologies in a single, cohesive package. Today, millions of people support Angular projects. Many companies distribute courses, provide training, and create Angular libraries. In the upcoming years, Angular is anticipated to gain more popularity thanks to its distinctive upgrades and core features like: - Two-way Data Binding - Quick Development - Community - Component-based Structure ## Conclusion Of all the certainties, there isn't a magic elixir that fits every firm, regardless of its needs and objectives. But in 99% of cases, the [Angular with ASP.NET Core](https://www.bacancytechnology.com/blog/angular-with-asp-net-core) stack works successfully. Angular will enhance your app with its modular structure, templating system, and asynchronous nature, and .NET Core will boost its performance and security.
Combining the two has significant advantages for organizations and their particular goals.
rachgrey
1,259,215
BE A 10X BY UTILISING A WIKI.
As hard as it is to debug and solve issues, one of the more common mistakes we make as developers is...
0
2022-11-16T12:15:12
https://dev.to/fortunembulazi/be-a-10x-by-utilising-a-wiki-2h2n
webdev, javascript, programming, productivity
As hard as it is to debug and solve issues, one of the more common mistakes we make as developers is to solve a problem and forget about it, only to find a similar problem in our next challenge or project. I **myself** am no stranger to this issue, and it wasn't until I started tracking all the issues I faced on a wiki, along with how I solved them, that this changed. This helps me remember what I did by documenting it in some way, but mostly it helps with solving issues way faster, because most issues are recurring. So by keeping track of what you solved and how, you don't have to think the next time you face a similar issue, since you have a wiki to remind you. **A wiki will help you be more productive; you may not realise it now, but you'll thank yourself for doing it in the future.**
fortunembulazi
1,259,539
Quick tip: Using Deno and npm to persist and query data in SingleStoreDB
Abstract This short article will show how to install and use Deno, a modern runtime for...
0
2022-11-16T16:08:44
https://dev.to/singlestore/quick-tip-using-deno-and-npm-to-persist-and-query-data-in-singlestoredb-4co2
singlestoredb, deno, npm, javascript
## Abstract

This short article will show how to install and use [Deno](https://deno.land/), a modern runtime for JavaScript and TypeScript. We'll use Deno to run a small program to connect to SingleStoreDB and perform some simple database operations.

## Create a SingleStoreDB Cloud account

A [previous article](https://dev.to/veryfatboy/quick-tip-using-dbt-with-singlestoredb-161g) showed the steps required to create a free SingleStoreDB Cloud account. We'll use **Deno Demo Group** as our Workspace Group Name and **deno-demo** as our Workspace Name. We'll make a note of our **password** and **host** name.

## Install Deno

Installation of Deno is straightforward on a Linux platform:

```shell
curl -fsSL https://deno.land/install.sh | sh
```

Once installed, we also need the following:

```
export DENO_INSTALL="/path/to/.deno"
export PATH="$DENO_INSTALL/bin:$PATH"
```

We'll replace `/path/to/` with the actual path to the installation directory. We can check if the installation was successful by running the following command:

```shell
deno --version
```

## Create and Read operations

We'll use an [example](https://github.com/denoland/manual/blob/main/node/how_to_with_npm/mysql2.md) from GitHub and create a small JavaScript file, `s2_test.js`, as follows:

```javascript
import mysql from "npm:mysql2@^2.3.3/promise";

const connection = await mysql.createConnection({
  host: "<host>",
  user: "admin",
  password: "<password>",
});

await connection.query("DROP DATABASE IF EXISTS denos");
await connection.query("CREATE DATABASE denos");
await connection.query("USE denos");

await connection.query(
  "CREATE TABLE dinosaurs (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) NOT NULL, description VARCHAR(255))",
);

await connection.query(
  "INSERT INTO dinosaurs (id, name, description) VALUES (1, 'Aardonyx', 'An early stage in the evolution of sauropods.'), (2, 'Abelisaurus', 'Abels lizard has been reconstructed from a single skull.'), (3, 'Deno', 'The fastest dinosaur that ever lived.')",
);

const [results, fields] = await connection.query("SELECT * FROM dinosaurs ORDER BY id");
console.log(results);

const [result, field] = await connection.query(
  "SELECT description FROM dinosaurs WHERE name = 'Deno'",
);
console.log(result);

await connection.end();
```

We'll replace the `<host>` and `<password>` with the values from our SingleStoreDB Cloud account. After running our program:

```shell
deno run s2_test.js
```

the output should be as follows:

```json
[
  { id: 1, name: "Aardonyx", description: "An early stage in the evolution of sauropods." },
  { id: 2, name: "Abelisaurus", description: "Abels lizard has been reconstructed from a single skull." },
  { id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." }
]
[ { description: "The fastest dinosaur that ever lived." } ]
```

## Summary

In this short article, we have quickly tested Deno with SingleStoreDB.
veryfatboy
1,260,224
Reflect: PR2 of Release 0.3
The issue that I was not able to fix... This week I made a contribution to Telescope, and...
0
2022-11-17T05:29:26
https://dev.to/liutng/reflect-to-pr2-of-release-03-4j9c
## The issue that I was not able to fix... This week I made a contribution to Telescope, and this is my first time creating a Pull Request for Telescope, which means I had zero knowledge of this project regarding its system structure design. That caused me to underestimate my first attempt on issue [#3639](https://github.com/Seneca-CDOT/telescope/issues/3639), which later proved to be infeasible both for me and my co-contributor Piotr, given our limited knowledge of this project. When we were first assigned this issue, we thought changing it would be as straightforward as it would have been if we had designed it ourselves. After some reverse engineering of the frontend's HTTP requests, we located the code that we needed to change in the backend. However, it is not easy to change any of the data structures, since this project uses many layers of data, which requires us to not only change the data field of `posts` in Redis but also add a column to the database. After a bit of poking around, we remeasured the workload of this issue and soon found out that we might not be able to create a Pull Request to fix this issue before the due date of Release 0.3, which is Nov. 18th, and both of us switched to another issue. ## The second issue I chose and how I fixed it The second issue I chose was [#3615](https://github.com/Seneca-CDOT/telescope/issues/3615). From my previous involvement with this project, I realized the part that I needed to modify was posts.js in the Posts Service. Here is how I modified the code. 1. As per the suggested solution made by David, I added a query param `expand` to the request `/post/` to use this extra query param to control whether the returned data should contain detailed information (author, title, and publishDate) about the feed. 2.
I used an if-else statement to check whether `expand` equals 1; if it does, I call `Post.byId(id)` on each post in the post array to query their extra information from Redis, otherwise the original data is returned as before. The thing I was struggling with in this step was that `Post.byId(id)` executes asynchronously, so I needed to call it inside an asynchronous function. After a few Google searches, I realized I could use `await Promise.all()` on an array of these asynchronous calls to wait for all of them to finish. 3. Return the data that is created inside the if-else statement. ## Post-fixing jobs After finishing coding, I updated the documentation for the posts and also wrote some tests for my code. However, for the scenario where the request query param has `expand=1`, although this API works pretty well in actuality, the test still fails. This issue is still confusing me as this reflection is being written, but I will find the culprit that causes this test failure. **UPDATE**: I have already found the cause of this problem and fixed it. The problem appeared to be in the feed data of each post. If I wrote the feed content directly to the post, it would fail, since the feed object would not be recognized by `post::ensureFeed()`; the feed object would become null, so the whole test would fail. To fix it, I simply replaced the post's feed object with the feed id, so that `post::ensureFeed()` will always fetch the corresponding feed from Redis. ## Thanks to our fellow developers In the end, I want to shout out to Tue, who helped us solve the issue in which the posts don't show even though the database is already well set up. We wouldn't have been able to make the whole service run without your help.
liutng
1,260,516
What the CRUD Active Record
The following will be a tutorial on CRUD methods used in ruby Active Record to...
0
2022-11-17T09:14:49
https://dev.to/cedsengine/what-the-crud-active-record-1cd2
programming, ruby, database, help
_The following will be a tutorial on the CRUD methods used in Ruby Active Record to manipulate (read/write) data. Learn more about Active Record here: [Active Record documentation](https://guides.rubyonrails.org/active_record_basics.html)_

**CRUD?** What is it, what does it mean? If you aren't familiar already with CRUD, I can help explain the concepts briefly. CRUD is an acronym for CREATE, READ, UPDATE and DELETE.

**Why CRUD?** CRUD is used heavily in programming as a way to communicate with stored data. This stored data usually comes from APIs and/or databases. To access or manipulate the information we use CRUD.

**CRUD examples** Below I will demonstrate the usage of the CRUD methods and the output of these invoked methods. I recommend referring to the Active Record documentation for any clarifications! [CRUD methods](https://guides.rubyonrails.org/active_record_basics.html#crud-reading-and-writing-data)

---
#Create
In Active Record, we can create objects using either the .new method or the .create method. We use the .create method to create and save a new object. Normally in Active Record, the .create method is invoked on a Ruby class to create instances of said class. This method takes in an argument in the form of a hash of attributes.

```
# .create method
adele_album = Album.create(song1: "hello", song2: "someone like you")
# or
adele_song = Song.create(title: "hello") # attributes are passed as a hash (assuming a title attribute)

# .new method
album = Album.new
album.song1 = "hello"
album.song2 = "someone like you"
album.save
```

When creating a new object with the .new method, it is required to use the save method for the object to persist in the database, unlike the .create method, which creates and saves the object all together.

---
#Read
Active Record makes retrieving data from the database fairly easy; think of it as a getter method, since it returns records based on the method invoked. For example:

```
# the .all method will return all instances of Album.
Album.all
=> [song1: "hello", song2: "someone like you"]
```

```
# the .find method returns based on the id passed in as an argument.
Album.find(2)
=> [song2: "someone like you"]
```

Keep in mind that when using Active Record, IDs are automatically generated by Active Record when objects are created with the .create method or when saved after using the .new method.

```
# the first method returns the first object/instance
# in the class and the last method returns the last
# object/instance
Album.first
=> [song1: "hello"]

Album.last
=> [song2: "someone like you"]
```

---
#Update
Before we are able to update an object, it must first be read. This will give us the exact object we want to update. Updating can be done with the .save method or the .update method. The .save method is best used for objects not already present in the database; the .update method is best used on an object present in the database that needs to be altered.

```
# for objects not already in the database we use .save
song = Album.find_by(song1: "hello")
song.song1 = "Rolling in the deep"
song.save

# or
song = Album.find_by(song1: "hello")
song.update(song1: "Rolling in the deep")
```

Check the database to make sure changes were made.

---
#Delete
When deleting objects, you can read the object or pass in the object as an argument, depending on the delete method used. Below will be several examples of the delete methods.

```
# when using the .destroy method, read the object first.
first_song = Album.first
first_song.destroy

# the code above can be shortened
Album.first.destroy

# we can be specific with the key and values of a class
# instance we want to destroy by using the .destroy_by method
# this method takes in an argument of what you want deleted
# however destroy_by will delete all instances that match
# the argument passed in; in this example all song1 keys with
# the value "hello" will be deleted.
Album.destroy_by(song1: "hello")

# to delete all instances in a class use the .destroy_all
# method
Album.destroy_all
# all instances in the Album class will be deleted.
```
cedsengine
1,260,519
Basics of Z-transform With Graphical Representation
Laplace transformation analyzes linear time-invariant (LTI) systems that operate continuously....
0
2022-11-17T09:58:55
https://dev.to/kellygreene/basics-of-z-transform-with-graphical-representation-234h
ztransform, graphicalrepresentation, basics, signalsandsystems
[Laplace transformation](https://byjus.com/maths/laplace-transform/) analyzes linear time-invariant (LTI) systems that operate in continuous time. Similarly, the z-transform is used to analyze discrete-time LTI systems. A mathematical expression in a complex-valued variable called z, the z-transform is mostly used as a numerical tool to convert from the time domain to the frequency domain. The z-transform of any discrete-time signal x(n), denoted by X(z), is as follows: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmcimf59r0accawnehja.png) As the summation index n runs from −∞ to +∞, the z-transform is an infinite power series. But it is helpful for z-values where the sum is finite (bounded). In this context, "region of convergence" (ROC) refers to the set of z-values for which the sum has a finite bound. ## What is Z-transform? There are many applications for the [z-transform in MATLAB](https://www.theengineeringprojects.com/2022/09/introduction-to-z-transform-in-signal-and-systems-with-matlab.html) in studying discrete signals and systems. We are familiar with continuous or analog signals in the time domain. However, digital processing is the foundation for contemporary communication and systems. As a result, we are compelled to convert our analog signals to digital signals. The first step is to convert the analog signal into a digital representation by taking samples at a rate greater than the Nyquist sampling rate. The passage of time between samples is discrete. Each sample happens at t = nTs, where Ts stands for the sampling time. Following sampling, we must quantize the data to be stored, analyzed, or sent, assigning each sample to one of M possible levels. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uu8czirbdwpva1tmzsks.jpg) ## Definition Let's say that the sequence is as follows: y[n] = y0, y1, y2,.....
The sequence, in this case, contains samples of an analog signal at each location. This sequence's z-transform is defined as follows: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0m9lm92fguuloo85evxs.png) A correct definition of Y(z) as a function of z requires the infinite series to converge. Just as s is a complex variable in the Laplace transform, z is also a complex variable, but unlike n it is continuous, making the two transformations analogous. On the other hand, not all sequences or z-values result in the z-transform converging. The region of convergence (ROC) is the set of z-values at which the z-transform converges. We shall now see the transforms of various well-known signals. **Unit impulse** This short yet crucial sequence can be stated as follows: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnpug28q05kxo0co6kk6.png) By applying the z-transform definition, we get: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ndon1ubdalzstw4htum.png) In this instance, the entire z-plane serves as the ROC. Based on the definition, we know that Z(δ[n−k]) = z^(−k). **Unit step** Another typical sequence is this one. The definition of a unit step is: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jw2y3qp4wufdpwkpeiiz.png) Using the z-transform, we may observe the following: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p3chcgeodyimekx3r4us.png) This geometric series converges only when |z^(−1)| < 1, i.e. |z| > 1. This is its ROC.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9o1d4odyz0ecmis9175.png) **Geometric sequence** The geometric sequence is given by: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1pmf56y06mgurcz1h49q.png) I'll use the definition once more: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f377sef4mx8n27tzgq28.png) This series converges if |az^(−1)| < 1. So, the ROC is |z| > |a|. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl2p8r4n2b9zogts8ox6.png) ## Z-transform Plot ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/syd6ryd8yl9il0zakaum.jpg) In the preceding illustration, we can see the z-transform diagram together with the region of convergence (ROC). The z-transform is composed of real and imaginary parts. The complex z-plane is a figure that plots the imaginary component against the real one. The circle above has a radius of 1, hence the term "unit circle." A function's ROC and its poles and zeros can be shown on the complex z-plane. z is a complex variable that is represented in polar form as: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lp7xsd9u6ghr6jg8ar0.png) Where; r = the circle's radius ω = a given sequence's angular frequency ## Z-transform Properties **1. Linearity** As defined by the linearity property, if ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t48cmkz5mtp4yctii09b.png) and ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hb299w6g283r95hi7jww.png) then ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1jl5cgne4tqyorfu95ka.png) Given the above, it follows that the Z-Transform of a linear combination of two signals is equal to the linear combination of the Z-Transforms of the two individual signals. **2.
Time shifting** According to the time-shifting property, if ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50fpusadbrslhug5v61v.png) then ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ckdp3hlzp7z3l6u8wxfi.png) In other words, multiplying a z-transform by z^(−k) is equivalent to shifting the sequence by k samples. **3. Scaling** This property states that if ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v0j9qqsle95ym0uvvtsr.png) then ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ceflvztmtl3ku6h83ihp.png) Scaling in the z-domain corresponds to multiplying the sequence by a^n in the time domain. **4. Time reversal Property** According to the time reversal property, if ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pgnngtwqcymjb3txvs9x.png) then ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2p22iozx9glvgvwrhyg1.png) It means that time-reversing (folding) a given sequence is equivalent to replacing z with z^(−1) in the z-domain. ## Merits of Z-transform - The z-transform helps calculate the [discrete Fourier transform (DFT)](https://www.allaboutcircuits.com/technical-articles/an-introduction-to-the-discrete-fourier-transform/). - Numerous digital filters are analyzed and designed using the z-transform. - The z-transform is used for various tasks, including linear filtering, finding linear convolution, and cross-correlating different sequences. - You can use the z-transform to categorize systems as stable, causal, unstable, or anti-causal. ## Conclusion The z-transform is beneficial for studying signals that have been discretized in time. Consequently, you get a series of numbers in the time domain.
By applying the z transform, we may examine these sequences' stability, frequency response, and other properties in the frequency domain (also known as the z domain). As a result, applying z transforms on continuous signals is equivalent to applying Laplace transforms.
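The geometric-sequence transform derived above is easy to sanity-check numerically. Below is a minimal sketch (the values of `a`, the evaluation point `z`, and the truncation length `N` are arbitrary choices for illustration) that compares a truncated sum against the closed form X(z) = 1/(1 − az⁻¹) at a point inside the ROC, and also verifies the time-shifting property:

```python
import numpy as np

def ztransform(x, z):
    """Truncated one-sided z-transform: sum over n of x[n] * z**(-n)."""
    n = np.arange(len(x))
    return np.sum(x * z ** (-n))

a = 0.5
N = 200                 # truncation length; terms decay like (a/z)^n
x = a ** np.arange(N)   # x[n] = a^n * u[n]

z = 1.5 + 0.5j          # any point with |z| > |a| lies in the ROC
closed_form = 1.0 / (1.0 - a / z)   # X(z) = 1 / (1 - a z^-1)
numeric = ztransform(x, z)
print(abs(numeric - closed_form))   # tiny truncation error

# Time-shifting property: Z{x[n-k]} = z^-k * X(z) for a delay of k samples
k = 3
x_delayed = np.concatenate([np.zeros(k), x])
lhs = ztransform(x_delayed, z)
rhs = z ** (-k) * numeric
print(abs(lhs - rhs))               # agrees up to floating-point rounding
```

Any `z` inside the ROC should give the same agreement; picking |z| < |a| instead makes the truncated sum grow with N rather than settle on the closed form, which illustrates why the ROC matters.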
kellygreene
1,260,524
Dedicated features of Oracle E-business Suite automated testing solutions
Oracle E-business Suite serves as a dedicated solution that is adopted by business organizations for...
0
2022-11-17T09:08:23
https://newshunt360.com/dedicated-features-of-oracle-e-business-suite-automated-testing-solutions/
Oracle E-Business Suite serves as a dedicated solution adopted by business organizations for various operations. Providing the necessary storage space, network facilities, and other services, Oracle E-Business Suite is used by businesses all around the world. Proper implementation of Oracle E-Business Suite and its successive upgrades requires continuous and rigorous testing. Manual testing can prove to be a time-consuming process that drains an organization's resources; a certified automated testing solution can therefore deliver what is needed. Proper deployment of Oracle E-Business Suite and its application updates can all be made possible through automated testing solutions. Automated tools help with the hassle-free implementation of Oracle EBS solutions and upgrades without disturbing the normal business workflow.

The [EBS test](http://www.opkey.com/Oracle-EBS-Automation) automation can help in reducing the inconvenience, cost, and effort for business organizations concerned with Oracle EBS upgrades. The various features of the Oracle EBS testing platform are as follows:

- Better test coverage: The automated testing solution can deliver better test coverage and help with parallel testing of various aspects. Thousands of tests can be run at the same time without any lag. These solutions come with all the necessary testing scripts, which can be identified and used to carry out the testing process without errors. Every aspect of an Oracle EBS solution can be analysed for overall test coverage.
- Cloud-based automation solution: One of the essential aspects of Oracle EBS testing is that it is a fast, cloud-based solution that businesses can adopt without any disruption to their normal workflow. The availability of testing scripts enables hassle-free automation of the testing processes.
- Hassle-free migration: Migration from a traditional enterprise-based solution to a cloud-based solution can also be made possible through Oracle EBS testing. Overall business transformation becomes possible, helping generate workflows and easing the adoption of the Oracle cloud-based platform.

Apart from the dedicated features mentioned above, EBS testing offers multi-browser support and can even generate a dedicated impact assessment report. End-to-end integration, optimal test coverage, and a single platform that enables EBS implementation and upgrades are all possible. Speedy testing, reduced cost and effort, and seamless implementation of Oracle cloud-based solutions for an organization can all come from a single solution.

Opkey, a reliable organization, delivers the best Oracle EBS automated testing solutions. The solution comes with pre-built migration testing templates that help with hassle-free migration to a cloud-based platform. All the necessary testing combinations can be implemented and used for migration to the Oracle cloud. The company is known for making available the best testing solutions and services that can help with comprehensive testing.
rohitbhandari102
1,260,844
A Beginner's Guide to Content Management Systems
Content management systems (CMS) are web applications that allow users to organise and manage their...
0
2022-11-17T14:16:07
https://dev.to/hr21don/a-beginners-guide-to-content-management-systems-34h6
opensource, beginners, webdev, tutorial
Content management systems (CMS) are web applications that allow users to organise and manage their digital content. A CMS provides the user with a GUI (Graphical User Interface) to manage the website, so users do not need knowledge of databases or programming to build one. In this post, we will define what a content management system is and cover the different types of CMS available. We will also look at the advantages and disadvantages of adopting a content management system.

## What is a Content Management System (CMS)?

A CMS is made up of two parts:

- The front-end is the Content Management Application (CMA), which provides a simple interface for non-technical users to add, manage and remove web content from a website.
- The back-end is the Content Delivery Application (CDA), which delivers content requested by users from the server.

## How does a Content Management System work?

A CMS works by allowing you to access your website's database using a simple GUI, which is often accessed via a web browser. You can access a variety of content management features from this interface, including:

- Create and publish new pages on your website
- Update or delete existing content and pages
- Use pre-set categories, themes or templates to organise the layout of your pages
- Ensure consistent presentation of content across your whole website
- Manage your website's structure and navigation, including menus and sitemaps
- Manage editorial workflows and authorship permission levels
- Store and retrieve different types of content (e.g. text, images, podcasts, videos) in your database

👉 See how to [choose the best CMS](https://www.nibusinessinfo.co.uk/content/choose-best-cms-your-business) for your company.

## What are the different types of Content Management Systems?

There are three broad types of CMS software: open-source, proprietary, and Software-as-a-Service (SaaS) CMS, including cloud-based solutions.
**Open-Source CMS**

An open-source CMS contains precisely what the name suggests: source code that is open to public view and free for anyone to use, with constraints based on the licence type, the most prevalent of which are GPL and Apache.

**Examples of Content Management Systems:**

- WordPress (Cloud-Based CMS)
- HubSpot (Cloud-Based CMS)
- Joomla (On-Premise CMS)
- Drupal (On-Premise CMS)
- Wix (Cloud-Based CMS)

👉 See a full list of [Open-Source CMS](https://en.wikipedia.org/wiki/List_of_content_management_systems#Open_source_software).

**Proprietary CMS**

A proprietary CMS, as the name implies, is software that is the legal property of the business, group, or individual who built it.

**Examples of Content Management Systems:**

- Kentico
- Microsoft Sharepoint
- IBM Enterprise Content Management
- Pulse CMS
- Sitecore
- Shopify

👉 See a full list of [Proprietary CMS](https://en.wikipedia.org/wiki/List_of_content_management_systems#Proprietary_software)

**Software as a Service (SaaS) CMS**

A SaaS CMS is a pre-built content management system that operates entirely in the cloud. It is usually accessible online without the need for any installation, upgrading, or maintenance.

**Examples of Content Management Systems:**

- Adobe Business Catalyst
- Contentful
- Huddle
- Microsoft 365
- Oracle Content Management
- Webflow

👉 See a full list of [Software as a Service CMS](https://en.wikipedia.org/wiki/List_of_content_management_systems#Software_as_a_service_(SaaS))

## Advantages of Content Management Systems?

CMS has key advantages over static HTML websites, including:

**Quick Deployment**
A CMS is the quickest way to speed up the development of websites.

**Less Backend Coding**
No programming knowledge is required because the majority of them have drag-and-drop editors.

**Ease of Maintenance**
The main goal of using a CMS is time management. A CMS provides the functionality to create, manage or modify the content of the site.
**Convenient for non-technical users**
Anyone can use a CMS for basic operations like writing, publishing and adding media.

**SEO-Friendly Features**
CMS plugins are available that directly support SEO optimisation strategies and will increase traffic to your website.

**Improved Security**
CMS platforms offer extra plugins and tools to improve the security of your website. Using a permission-based system, the site's author can manage who has access to the site.

## Disadvantages of Content Management Systems?

Despite their many advantages, there are a few common issues to take into account before selecting a CMS. For example:

**Plugins and widgets are required**
Most of the functionality users need must be provided by plugins and widgets.

**Hidden cost of plugins and widgets**
Many plugins and widgets are expensive and can cost hundreds of dollars.

**Slow page performance**
The page performance of a CMS-created web page is noticeably slower than that of many custom development solutions.

**Cost of maintenance**
CMS systems must be kept up to date on a regular basis, which can impact organisations differently depending on their size, site traffic and reputation.

**Not highly scalable**
This applies particularly to open-source and freeware systems. Most can only support a certain number of users; as content and traffic grow, you'll need to tweak the CMS or switch to something more powerful.

**Limitations in functional requirements**
A CMS may fail to meet your functional needs if you have a larger project with various procedures, workflows, and stakeholders.

## Importance of CMS to your business

If you need a CMS, carefully consider the following benefits and prioritise your CMS requirements to find the right CMS that will help you meet your business needs.

1. Streamline your regular web processes.
2. Update your website remotely, as and when necessary.
3. Ensure the website has a consistent 'look and feel'.
4. Customise your website to match your specific business needs.
5. Reduce website maintenance costs.
6. Store archived content, either for future use or reference.
7. Use dynamic marketing to improve sales or user satisfaction.
8. Optimise your website and content for search engines or mobile use.
hr21don
1,260,896
Top 10 Bootstrap Themes
Website is always the front face to your business. Every user who gets to know about you goes through...
0
2022-11-17T13:45:41
https://www.lambdatest.com/blog/top-10-bootstrap-themes/
webdev, tutorial, testing
Your website is always the front face of your business. Every user who gets to know about you goes through your website as the first line of enquiry, so you must make sure that it looks its best. Themes add structure to your website. Nearly every CMS, like WordPress, Drupal, Joomla, etc., is built upon the idea of changeable, plug-and-play theming. But what about plain websites that use just vanilla HTML, CSS and JS? For them, Bootstrap created a marketplace of Bootstrap-based themes. These themes are built using the latest HTML, CSS, and JavaScript packages aimed at helping webmasters with styling, UI components, and layouts, which can then be used to enhance a web project under development. In short, they are website templates we can adapt and build upon.

Now the question comes: which are the best themes out of the thousands on the marketplace? Here we have made some of that effort for you. These are the top 10 Bootstrap themes that we explored!

## 1. Wingman — Landing Page & App Template

Powered by Bootstrap 4, Wingman is a collection of styled pages and components. This theme contains 10 unique landing pages and coming-soon pages, in both photographic and illustration styles.

![](https://cdn-images-1.medium.com/max/3170/0*m9jNQHWfLeiBh9eq.png)

## 2. Material Kit PRO — Bootstrap 4 Material Design UI

Material Kit PRO is a great design by the Creative Tim team. This theme contains an ample number of parts built to fit together and look amazing. Multiple options are put together to customize pixel-perfect pages.

![](https://cdn-images-1.medium.com/max/3164/0*OKEPtM7OqzspnO3N.png)

***Check this out: [Selenium online](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=nov17_sd&utm_term=sd&utm_content=webpage) Testing — Test on Selenium Grid Cloud of 3000+ Desktop & Mobile Browsers.***

## 3. Beagle — Responsive Admin Template

Beagle is a beautiful admin template with a clean and fresh concept, containing thousands of useful features such as responsive web design, optimized CSS animation on mobile, two sidebars, a fixed top bar, SASS files and much more.

![](https://cdn-images-1.medium.com/max/3144/0*ri4V_NWPOgng4lh4.png)

## 4. Marketing

The Marketing theme is built by the Bootstrap team. It is designed to make building beautiful product, landing and corporate sites easier than ever. With the Bootstrap Marketing theme it's easy to build a site for any brand or style.

![](https://cdn-images-1.medium.com/max/3150/0*pXnx2r_FB5BcJFif.png)

![](https://cdn-images-1.medium.com/max/2000/0*Q5-zOgmc4fKgs-VK.jpg)

## 5. Boomerang — Bootstrap 4 Business & Corporate Theme

Boomerang is a clean, modern and responsive Bootstrap 4 template built using modern tooling such as Gulp, Pug, Sass, HTML5 and CSS3. Its package consists of two kinds of templates: a simple HTML/CSS template and a Gulp/Pug/Sass template. If you want to showcase your company portfolio, blogs, etc. in a professional manner, this is the best-suited theme for you.

![](https://cdn-images-1.medium.com/max/3152/0*7M6XwYHONBJQl7As.png)

![](https://cdn-images-1.medium.com/max/2000/0*yJmWOOft_vvqgwHN.png)

## 6. Now UI Kit PRO — Premium Bootstrap 4 Web Kit

Now UI Kit PRO is a premium Bootstrap 4 kit and another great example built by the Creative Tim team. With over 1000 individual components, this theme gives you the freedom of choosing and mixing: you can find as many as 1000 components, 34 sections, and 11 example pages in a single theme.

![](https://cdn-images-1.medium.com/max/3152/0*Utn6Er0HdPqg2SfU.png)

## 7. Application

The Application theme is another amazing theme by the team who built Bootstrap itself. It keeps your web apps simple with rich timelines, profiles, notifications, messaging, lightboxes, etc., and was designed as its own extended version of Bootstrap.

![](https://cdn-images-1.medium.com/max/3162/0*BhtHOpYNmEj6j95h.png)

***Check this out: [Selenium Automation](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=nov17_sd&utm_term=sd&utm_content=webpage) testing Cloud — Test on Selenium Grid Cloud of 3000+ Desktop & Mobile Browsers.***

## 8. Touche — Cafe & Restaurant Theme

Touche is a fully responsive HTML5 template with a beautiful menu supporting unlimited categories, an image gallery, a functional reservation form, a contact form, and a newsletter form powered by Mailchimp. It is built ideally for restaurants, cafes, bakeries, bars and other food-related websites.

![](https://cdn-images-1.medium.com/max/3154/0*6wcTLSyeKdfoE_sc.png)

## 9. Milo — Magazine/Blog Theme

Milo is for bloggers interested in writing blogs and articles. It's a clean blog theme designed to optimize your reading experience as much as possible. It offers 6 homepage variants, Google Fonts, a quick-start document, demo pages, etc.

![](https://cdn-images-1.medium.com/max/3140/0*vSHbWRm79odSMQpa.png)

![](https://cdn-images-1.medium.com/max/2000/0*yQmIyfpZ2WTGpJcL.jpg)

## 10. Spark — Responsive Admin Template

Spark is a premium theme developed with the latest and trusted technologies such as HTML5, CSS3, jQuery, and NPM. It includes over 25 responsive and customizable pages. It can be used for any type of web application, such as admin dashboards, file management systems, project management systems, leaderboards and much more.

![](https://cdn-images-1.medium.com/max/3146/0*iDmz0hRb0qG6HxDZ.png)

If you are going on a venture to create a new web project, this list may ease your choice of themes. If you have anything else storming up in your mind, don't forget to share it with us.
***Check this out: [Selenium Testing](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=nov17_sd&utm_term=sd&utm_content=webpage) Automation Cloud - Test on Selenium Grid Cloud of 3000+ Desktop & Mobile Browsers.***
surajkumaar
1,260,927
Laravel - Cashier - Stripe Subscriptions
I created a Laravel app using Cashier and Stripe. It uses "teams" as the subscription approach. I...
0
2022-11-17T15:03:59
https://dev.to/bkl256/laravel-cashier-stripe-subscriptions-3h6j
laravel, stripe, cashier, subscription
I created a Laravel app using Cashier and Stripe. It uses "teams" as the subscription approach. I have it working exactly as I want in test mode. However, once I switched to production with actual payments, I get no response (a null response) when checking for a subscription: the subscribed call returns null. I added additional logging, and subscriptions return null as well. I have updated the keys to production keys. Any ideas why the production site returns nulls where the same code reports subscribed in Stripe's test mode?
bkl256