| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
1,892,631
How I configured YubiKey to sign and decrypt emails on Mac
Originally published on my blog As you see on my "resume" page, I uploaded my public key for...
0
2024-06-18T15:29:32
https://pabis.eu/blog/2024-06-18-YubiKey-OpenGPG-Mail-Sign-Decrypt.html
yubikey, gpg
_Originally published [on my blog](https://pabis.eu/blog/2024-06-18-YubiKey-OpenGPG-Mail-Sign-Decrypt.html)_

As you can see on my ["resume"](https://pabis.eu/cv.html) page, I uploaded my public key for encrypting emails back when I was starting my personal website. However, moving a private key from one place to another is never a good option. My YubiKey cost me 55€ and I barely use it for FIDO/U2F. Why not keep the private key in there and have it with me at all times?

Required software (for Mac)
---------------------------

We need some software on the Mac to make this work. The process should be similar on Linux. Assuming you have [brew](https://brew.sh) installed, we will install the following packages:

```bash
$ brew install gnupg pinentry-mac ykman
```

I will also use [Thunderbird](https://thunderbird.net), as it integrates with GnuPG directly (with a special cheat code you need to enter in the settings).

The first thing to do is to configure GnuPG to ask for the PIN code using a GUI. We will use `pinentry-mac` as our input program and restart the GPG agent:

```bash
$ mkdir -p ~/.gnupg
$ touch ~/.gnupg/gpg-agent.conf
$ PINENTRY=$(which pinentry-mac)
$ echo "pinentry-program $PINENTRY" >> ~/.gnupg/gpg-agent.conf
$ gpgconf --kill gpg-agent
$ gpg-agent --daemon
```

Setting up PIN on YubiKey
-------------------------

Now we can set our PIN codes for OpenPGP on the YubiKey. You should do this rather than keep using the factory default codes. The default Admin PIN is `12345678` and the default PIN is `123456`. **Don't forget your new ones!**

```bash
$ ykman openpgp access change-admin-pin -a 12345678  # Enter your new 8-digit Admin PIN
$ ykman openpgp access change-pin -P 123456          # Enter your new 6-digit PIN
```

Generating key pair on YubiKey
------------------------------

Now, using GnuPG, we can interface with the YubiKey's OpenPGP implementation to create a new key pair that will be safely stored on it.

```bash
$ gpg --card-edit
```

You will enter the `gpg/card` prompt.
Issue the following commands to the YubiKey and enter the (non-admin) PIN code when asked. Do not back up the key, as it is safer that way. Set an expiration date you like, enter your name, email, and comment, and finish by giving the Admin PIN when asked.

```bash
gpg/card> admin
gpg/card> generate
Make off-card backup of encryption key? (Y/n) n
Key is valid for? (0) 2y
Real name: John Doe
Email address: john@example.com
Comment: my yubi pgp key
```

If you want to use a different key size, type `key-attr` before the `generate` command and select the key type you want; verify with the Admin PIN.

Next, export the public key to a file using GPG. Assuming that you don't have any other keys in your keyring for the same e-mail address, you can run the following. This will be the key you share with others. Otherwise, run `list` in the card-edit mode of GPG and copy the key ID to export.

```bash
$ gpg --armor --export john@example.com > mypublickey.asc
$ gpg --import mypublickey.asc
$ gpg --list-keys
```

Look for the key ID you have just imported and save it; we will need it later. It will look something like:

```
pub   rsa4096 2020-05-04 [SC] [expired: 2024-05-03]
      01234ABCDEF01234...        <- this is the key ID
uid   [ultimate] My Name <johndoe@example.com>
```

Cross-signing with an old key
-----------------------------

This step is optional. If you have a previous key pair that you used for GPG, you can copy the new public key over to your old keyring and sign it with the old private key. This way, you keep an unbroken chain of trust.

```bash
$ gpg --import mypublickey.asc
$ gpg --list-keys   # Find your old and new key IDs
$ gpg --default-key OLDKEYID --sign-key NEWKEYID
$ gpg --armor --export NEWKEYID > mypublickey-signed.asc
```

Configuring Thunderbird
-----------------------

By default, Thunderbird uses its own internal GnuPG storage. We need to ask it to connect to the external GPG agent so that it can reach our YubiKey. Open Thunderbird and add your e-mail account if you haven't already.
Click the three-lines menu and select `Settings`. Scroll down to the bottom of the `General` page and click `Config Editor...`.

![Thunderbird Config Editor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17lc5velc5pkan7fy17f.jpg)

Search for `mail.openpgp.allow_external_gnupg` and double-click it to set it to `true`. Quit Thunderbird with ⌘Q and open it again. Right-click the account name in the left panel and select `Settings`. Open the `End-to-End Encryption` section and click `Add key`. Select `External through GnuPG` and enter the key ID. It won't show any confirmation of whether the ID is correct; you will just have to try using it.

![External through GnuPG](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t49puk2t6b6m4ynuwqw7.jpg)

Try sending an encrypted, signed e-mail to yourself: select `Encrypt` and `OpenPGP` at the top. If you see a warning that there's no key to encrypt this email, simply click `Resolve` and import the public key you exported previously (`mypublickey.asc`).

![Resolve unknown public key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8uikuvarfeyxijdi8rc2.jpg)

When you try to send or open an encrypted e-mail, the PIN dialog should pop up. Type the 6-digit PIN you set for the key.

![PIN dialog](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u5brsdei48ugll7aslve.jpg)

The problem with this setup is that Thunderbird sometimes freezes when trying to decrypt or sign an email, and I then have to "Force Quit" it in Activity Monitor. With the touch policy enabled (`ykman openpgp keys set-touch`) it is even more problematic, but it sometimes works. Once you have entered the PIN for both decryption and signing, and you don't disconnect the key, kill the GPG agent, or quit Thunderbird, it works smoothly.
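The freezing issue can sometimes be cleared without force-quitting Thunderbird: restarting the GPG agent from a terminal and re-probing the card may be enough. A minimal sketch, assuming the GnuPG tools installed earlier are on your `PATH`:

```bash
# Kill the stuck agent; GnuPG spawns a fresh one on demand
gpgconf --kill gpg-agent || true
# Re-probe the YubiKey so the new agent picks the card up again
gpg --card-status >/dev/null 2>&1 || echo "card not detected; reinsert the YubiKey"
```

If the PIN dialog still refuses to appear, unplugging and reinserting the YubiKey before running `gpg --card-status` again often helps.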
ppabis
1,892,633
🤑Bitcoin Price: Top Reasons Why $65K Support Can End Crypto Crash?
📊 The Bitcoin Fear and Greed Index remains high at 74, suggesting early stages of greed. Despite...
0
2024-06-18T15:28:36
https://dev.to/irmakork/bitcoin-price-top-reasons-why-65k-support-can-end-crypto-crash-1ajn
📊 The Bitcoin Fear and Greed Index remains high at 74, suggesting early stages of greed. Despite recent losses, some investors see BTC as a good buy at current levels. However, cautious optimism is advised as the market could see further sell-offs.

🔍 Over the past 7 days, Bitcoin has declined by 2.5%, adding to a 4.5% drop over the past two weeks. Support at $65,000 is crucial for a bullish outlook, potentially pushing BTC to $70,000.

📉 Analysts note that retail investors have not yet significantly impacted the market. Long-term holders provide a solid price base, but Bitcoin needs to breach key resistance levels at $68,000 and $70,000 for a bullish trend.

📈 Bitcoin is in a falling wedge pattern, indicating potential short-term recovery if $65,000 support holds. The RSI is at 40, suggesting a selling bias. If selling continues, BTC may drop to $63,000 or $60,000. Bulls face resistance at $68,000 and $70,000 before targeting new highs.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4zjsq540l3lw6arigml.png)
irmakork
1,892,632
🔥🔥🔥Top 5 Altcoins To Buy If Targeting 2X Recovery This Week
📈 Investors are scouting the market for altcoins to buy after Bitcoin's volatility caused a downturn....
0
2024-06-18T15:27:21
https://dev.to/irmakork/top-5-altcoins-to-buy-if-targeting-2x-recovery-this-week-1d8e
📈 Investors are scouting the market for altcoins to buy after Bitcoin's volatility caused a downturn. Despite the losses, some altcoins show recovery signals. Here are five altcoins that could rally by 100% this week.

**Notcoin (NOT)** 🚀 Notcoin, a play-to-earn token, launched in mid-May and has shown strong performance with a $1.6 billion valuation. Trading at $0.0158, NOT saw a 3% increase in 24-hour volume. Increased activity on the TON blockchain, with USDT supply exceeding $580 million, boosts its prospects.

**JasmyCoin (JASMY)** 💹 JasmyCoin allows users to own and monetize their data. After a pump linked to Apple rumors, JASMY found support at the 0.5 Fibonacci level and targets 2X gains. Trading at $0.0338, with a 0.4% increase in 24-hour volume, JASMY is a strong buy for potential 100% gains.

**Shiba Inu (SHIB)** 🐕 Shiba Inu, known as the Dogecoin ‘killer,’ could rebound soon with bullish news and increased burn rates. Real-life adoption is rising, with SHIB accepted by DevourGO. Trading at $0.0000181, SHIB's 24-hour volume spiked by 148%, signaling strong interest.

**DOG•GO•TO•THE•MOON (Runes) (DOG)** 🌕 DOG•GO•TO•THE•MOON (Runes), a meme coin on the Bitcoin network, has a $613 million market cap and trades at $0.006089. With a 45% increase in 24-hour volume, it aims to surpass the $1 billion mark.

**Pepe (PEPE)** 🐸 Pepe, a top meme coin, is gearing up for another rally. Despite a recent 10.1% weekly drop, its 24-hour trading volume surged by 75.2% to $1 billion. PEPE trades at $0.00001071, showing increased market activity.

📉 While these altcoins have declined recently, they offer strong potential for attractive returns based on current trends and indicators.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdxgu5eb5nn24oa1nnr8.png)
irmakork
1,892,845
How to Use Tiptap's Collaboration Feature with Rails Action Cable
In this post, we’ll walk through setting up Tiptap’s collaboration feature with Rails Action Cable...
0
2024-06-19T15:34:44
https://www.geekyhub.in/post/implementing-a-google-doc-notion-like-collborative-editor-in-rails-react-tiptap/
rails, react, actioncable, tiptap
---
title: How to Use Tiptap's Collaboration Feature with Rails Action Cable
published: true
date: 2024-06-18 15:25:00 UTC
tags: #rails #reactjs #actioncable #tiptap
canonical_url: https://www.geekyhub.in/post/implementing-a-google-doc-notion-like-collborative-editor-in-rails-react-tiptap/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5n7n3xnvlnekqa3ldtty.jpg
---

In this post, we’ll walk through setting up Tiptap’s collaboration feature with Rails Action Cable and React. Tiptap is a powerful headless editor built on ProseMirror, and when combined with Y.js it allows for real-time collaborative editing. We’ll use the Mantine component library, but it’s not mandatory for this setup. If you prefer to dive directly into the code, check out the example on [GitHub](https://github.com/vikas-0/collab_demo).

{% embed https://www.youtube-nocookie.com/embed/HXpudWU5FxQ %}

### Prerequisites

Ensure you have the following installed:

- Ruby on Rails
- Redis
- Node.js and Yarn
- Your preferred method of React setup with Rails

### Step 1: Setting Up Mantine

First, we’ll set up Mantine for styling. Follow the [Mantine guide for Vite](https://mantine.dev/guides/vite/) to install the necessary packages. (The same method worked for me using esbuild in my setup. You can do it your own way, or choose not to use Mantine.)

```bash
yarn add @mantine/core @mantine/hooks @mantine/tiptap @tabler/icons-react @tiptap/react @tiptap/extension-link @tiptap/starter-kit @tiptap/extension-placeholder @tiptap/extension-collaboration-cursor @tiptap/extension-collaboration yjs y-prosemirror
yarn add --dev postcss postcss-preset-mantine postcss-simple-vars
```

> Note: This setup includes both Mantine and Tiptap. If you do not require Mantine, skip installing the Mantine-related dependencies.

### Step 2: Install Rails Dependencies

```bash
bundle add redis y-rb_actioncable y-rb
```

Here we are installing the Y.js adapter for Ruby and Action Cable.
### Step 3: Configure Tiptap with Collaboration

In the Tiptap setup, configure the StarterKit with `history: false`, as the Collaboration extension comes with its own history management. Additionally, we’ll add a random color generator for collaboration cursors.

```javascript
function getRandomColor() {
  const colors = ["#ff901f", "#ff2975", "#f222ff", "#8c1eff"];
  // Multiply by colors.length (not length - 1) so the last color can be picked too
  const selectedIndex = Math.floor(Math.random() * colors.length);
  return colors[selectedIndex];
}

const editor = useEditor({
  extensions: [
    StarterKit.configure({ history: false }),
    Underline,
    Link,
    Superscript,
    SubScript,
    Highlight,
    TextAlign.configure({ types: ['heading', 'paragraph'] }),
    Placeholder.configure({ placeholder: 'This is placeholder' }),
    Collaboration.configure({
      document: doc // Configure Y.Doc for collaboration
    }),
    CollaborationCursor.configure({
      provider,
      user: { name: "Vikas", color: getRandomColor() }
    })
  ]
});
```

Next comes the code to connect to the WebSocket provided by Action Cable. Don’t worry about the channel creation for now; we will create it later. Assuming the channel name will be `SyncChannel`, we add the following code. (The id is hardcoded here, as this is just a demo; we won’t be using proper auth in the backend either, to keep things simple.)

```javascript
// ... other imports
import { createConsumer } from "@rails/actioncable";
import { WebsocketProvider } from "@y-rb/actioncable";

const consumer = createConsumer();
const doc = new Y.Doc();
const provider = new WebsocketProvider(
  doc,
  consumer,
  "SyncChannel",
  { id: 1 }
);
// ... other code
```

You can see the full frontend code in [App.jsx](https://github.com/vikas-0/collab_demo/blob/main/app/javascript/App.jsx). It keeps everything in a single file, which is not great but good enough for this case.

### Step 4: Set Up Rails Action Cable

Create a new channel named `SyncChannel` at `app/channels/sync_channel.rb`.
```ruby
# frozen_string_literal: true

class SyncChannel < ApplicationCable::Channel
  include Y::Actioncable::Sync

  def subscribed
    # initiate sync & subscribe to updates, with optional persistence mechanism
    sync_for(session) { |id, update| save_doc(id, update) }
  end

  def receive(message)
    # broadcast update to all connected clients on all servers
    sync_to(session, message)
  end

  def doc
    @doc ||= load { |id| load_doc(id) }
  end

  private

  def session
    @session ||= Session.new(params[:id])
  end

  def load_doc(id)
    data = REDIS.get(id)
    data = data.unpack("C*") unless data.nil?
    data
  end

  def save_doc(id, state)
    REDIS.set(id, state.pack("C*"))
  end
end
```

This assumes Redis is initialized as the `REDIS` constant; replace it with your own Redis variable name. We also created a `Session` model for the `sync_for` method. You can check the documentation for `sync_for` [here](https://y-crdt.github.io/yrb-actioncable/Y/Actioncable/Sync.html#sync_for-instance_method).

```ruby
# frozen_string_literal: true

class Session
  attr_reader :id

  def initialize(id)
    @id = id
  end

  def to_s
    "sessions:#{id}"
  end
end
```

And finally, `ApplicationCable::Connection` will look as follows:

```ruby
module ApplicationCable
  class Connection < ActionCable::Connection::Base
    identified_by :id

    def connect
      self.id = SecureRandom.uuid
    end
  end
end
```

### Step 5: Add Styles for the Collaboration Cursor (Optional)

Everything should be working by now. At this point the cursor was looking odd, so some [CSS](https://github.com/vikas-0/collab_demo/blob/main/app/javascript/App.css) can be added to make it look good.

Finally, you can run your Rails server, and it should be good to go once you add all the missing pieces, especially authorization.

### Conclusion

By following these steps, you should have a real-time collaborative editor up and running using Tiptap, Y.js, and Rails Action Cable. While we used Mantine for styling in this demo, you can customize the styling as per your requirements.
This setup provides a robust foundation for building collaborative applications with rich text editing capabilities.
vikas
1,892,582
Generative AI as a Career: Unlocking Future Tech Opportunities
In the rapidly evolving landscape of technology, artificial intelligence (AI) continues to push...
0
2024-06-18T15:24:45
https://dev.to/ms1034/generative-ai-as-a-career-exploring-opportunities-in-the-future-of-technology-4g7g
ai, machinelearning, deeplearning, generativeai
In the rapidly evolving landscape of technology, artificial intelligence (AI) continues to push boundaries and redefine possibilities across various industries. One of the most promising and intriguing areas within AI is generative AI. This subset of artificial intelligence focuses on creating new content, whether it's images, text, music, or even entire virtual environments, using advanced algorithms and deep learning techniques. As generative AI becomes more sophisticated, its applications are expanding, and so are the career opportunities associated with it.

## What is Generative AI?

Generative AI refers to a class of algorithms that enables machines to generate new content autonomously. Unlike traditional AI systems that rely on rules and predefined data, generative AI learns patterns from large datasets and can then produce original outputs that mimic the style and characteristics of the input data. This technology leverages techniques such as neural networks, reinforcement learning, and deep learning to achieve its capabilities.

![Kid image generated using Gen AI models](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73tg0bpsb043lrq8pjip.jpg)

## Why Generative AI?

### High Demand & Growth

#### Market Size

Estimates vary slightly, but according to Fortune Business Insights, the global generative AI market was valued at roughly USD 43.87 billion in 2023.

![Market Size of Gen AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y0n69q12iwgs9ead5m6y.png)

#### Projected Growth

The future looks bright!
The market is expected to reach a staggering USD 967.65 billion by 2032, with a compound annual growth rate (CAGR) of around 39.6% during the forecast period (2024-2032). [Source: Fortune Business Insights](https://www.fortunebusinessinsights.com/generative-ai-market-107837)

![Gen AI Market Share in different fields](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/256oumtiqd7wgu0xuzji.png)

#### Innovation and Impact

As generative AI continues to mature, its impact on industries and society is expected to grow significantly. Generative AI is revolutionizing how we create, design, and solve problems. By choosing this career path, you'll be right at the forefront of technological advancement, with the potential to shape the future of AI.

#### Diverse & Rewarding Opportunities

The beauty of generative AI is the wide range of specializations available. Whether your passion lies in creative design, technical problem-solving, or ethical considerations, there's a niche waiting to be explored. This field offers the chance to combine your unique skills and interests with cutting-edge technology.

#### Competitive Salaries & Benefits

The demand for skilled generative AI professionals is outpacing supply, leading to attractive salaries and benefits packages. As this field matures, we can expect compensation to remain highly competitive.

![ziprecruiter.com Generative AI Engineer Salary](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivviuzmlghara9k9v0x7.png)

#### Future-Proof Your Career

As AI continues to integrate into every aspect of our lives, skills in generative AI will be highly sought after. Choosing this path sets you up for a long and rewarding career in a rapidly growing field.

### Career Paths in Generative AI

The field of generative AI offers diverse career paths for individuals with a range of skills and interests:

1. **Research and Development**: Researchers in generative AI focus on advancing the underlying algorithms and models.
They explore new architectures, improve training techniques, and aim to enhance the quality and efficiency of generative outputs. Careers in this domain often require a strong background in mathematics, computer science, and deep learning.

2. **Software Development**: Software engineers specializing in generative AI build practical applications and systems that leverage generative models. They may work on creating platforms for content generation, integrating AI with existing software solutions, or developing APIs that allow other applications to utilize generative capabilities.

3. **Creative Industries**: Generative AI is increasingly being used in creative fields such as art, music, and design. Artists and designers can use generative algorithms to explore new ideas, generate unique visual or auditory content, and even collaborate with AI systems to create hybrid works that blend human creativity with machine intelligence.

4. **Gaming and Virtual Reality**: In the gaming and virtual reality sectors, generative AI is pivotal for creating immersive experiences. Game developers use AI to generate realistic environments, populate game worlds with intelligent characters, and enhance gameplay through procedural generation techniques.

5. **Healthcare and Biotechnology**: Generative AI has applications in medical imaging analysis, drug discovery, and personalized medicine. Professionals in these fields utilize AI to generate synthetic data for training medical models, simulate biological processes, and accelerate research and development efforts.

6. **Ethics and Policy**: As generative AI technologies advance, there is a growing need for experts in ethics, policy-making, and regulation. Professionals in this domain work on establishing guidelines for the ethical use of AI, addressing concerns related to biases in generative models, and ensuring that AI deployments comply with legal and societal norms.
### Skills Required

Entering a career in generative AI typically requires a blend of technical skills, domain knowledge, and creativity:

- **Programming Languages**: Proficiency in languages such as Python, C++, and Java is crucial for implementing and optimizing AI algorithms.
- **Machine Learning and Deep Learning**: A strong foundation in machine learning concepts, neural networks, and frameworks like TensorFlow or PyTorch is essential.
- **Domain Expertise**: Depending on the chosen career path, knowledge of specific domains such as art, healthcare, or gaming may be advantageous.
- **Creativity**: Especially for roles in creative industries, an ability to think innovatively and explore unconventional approaches is highly valuable.
- **Problem-Solving Skills**: Generative AI often involves tackling complex problems, so strong analytical and problem-solving abilities are essential.

In conclusion, generative AI presents exciting career prospects for those passionate about technology, creativity, and innovation. Whether you envision yourself as a researcher pushing the boundaries of AI capabilities, a developer building practical applications, or an artist exploring new forms of expression, the field of generative AI offers a diverse range of pathways to explore and opportunities to make a meaningful impact on the world. As AI continues to evolve, so too will the possibilities for those who choose to embark on a career in generative AI.
ms1034
1,892,630
Dockerizing Your Application: A Beginner's Guide
Welcome back to our blog series on becoming a Certified Kubernetes Administrator. This is post number...
0
2024-06-18T15:23:38
https://dev.to/jensen1806/dockerizing-your-application-a-beginners-guide-36h6
docker, containers, dockerhub, kubernetes
Welcome back to our blog series on becoming a Certified Kubernetes Administrator. This is post number two in our comprehensive series designed to cover all concepts, demos, and hands-on exercises for the Kubernetes certification exam, based on the latest 2024 curriculum.

In our previous post, we discussed container fundamentals: why they are necessary, how they work, and how they differ from virtual machines. If you are already familiar with the basics of containers, feel free to skip that post. However, if you are new to the concept, I highly recommend starting there.

![Docker animated image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w76vpnc02oayxmeprdt4.jpeg)

In this post, we'll dive into a hands-on demo of how to dockerize an application from scratch. This guide is intended for beginners and covers everything from writing a Dockerfile, through running the commands to build your container, to finally hosting your application.

## Prerequisites

Before we begin, make sure you have Docker installed on your computer. Docker provides detailed installation instructions for various operating systems [here](https://docs.docker.com/engine/install/). If you encounter any issues installing Docker locally, you can use Docker's sandbox environment at [Play with Docker](https://labs.play-with-docker.com/).

## Step 1: Setting Up the Environment

First, we need a sample application to dockerize. We will use a simple to-do list application available on [GitHub](https://github.com/docker/getting-started-app.git).

**1. Clone the Repository**

```bash
git clone https://github.com/docker/getting-started-app.git
cd getting-started-app
```

**2. Create a Dockerfile**

Create a new file named `Dockerfile` in the root of your project directory.
```bash
touch Dockerfile
```

Open the Dockerfile in a text editor and add the following content:

```dockerfile
# Use an official Node runtime as a parent image
FROM node:18-alpine

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in package.json
RUN yarn install --production

# Make port 3000 available to the world outside this container
EXPOSE 3000

# Define environment variable
ENV NAME World

# Run the app when the container launches
CMD ["node", "src/index.js"]
```

## Step 2: Building the Docker Image

With the Dockerfile in place, we can now build our Docker image.

```bash
docker build -t day02-todo .
```

This command builds the Docker image and tags it as `day02-todo`.

## Step 3: Running the Docker Container

Now that we have our Docker image, we can run it as a container.

```bash
docker run -d -p 3000:3000 day02-todo
```

This runs the container in detached mode and maps port 3000 on the host to port 3000 in the container.

## Step 4: Pushing the Image to Docker Hub

To make our Docker image available to others, we need to push it to Docker Hub.

**1. Log in to Docker Hub**

```bash
docker login
```

**2. Tag the Image**

```bash
docker tag day02-todo <your-dockerhub-username>/day02-todo:latest
```

**3. Push the Image**

```bash
docker push <your-dockerhub-username>/day02-todo:latest
```

## Step 5: Pulling and Running the Image on Another Machine

To verify that everything works correctly, you can pull the Docker image on another machine and run it.

```bash
docker pull <your-dockerhub-username>/day02-todo:latest
docker run -d -p 3000:3000 <your-dockerhub-username>/day02-todo:latest
```

## Conclusion

In this post, we covered the steps to dockerize a simple application, build a Docker image, run a container, push the image to Docker Hub, and finally pull and run the image on another machine.
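One practical addition: before pushing the image, or after pulling it on another machine, it is worth smoke-testing that the container actually serves traffic. This is only a sketch: `wait_for_http` is an illustrative helper (not part of the tutorial), the port and image name come from the steps above, and `curl` is assumed to be installed.

```bash
# Illustrative helper: poll a URL until it responds or the retry budget runs out.
wait_for_http() {
  url="$1"
  tries="${2:-10}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0  # the app answered
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1  # gave up
}

# Usage, after `docker run -d -p 3000:3000 day02-todo`:
# wait_for_http http://localhost:3000 15 && echo "todo app is up"
```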
In the next post, we'll delve into more advanced Docker concepts and Kubernetes integration. Stay tuned and happy learning! Feel free to leave comments or questions below. Until next time!
jensen1806
1,892,601
Is it safe to use encryption with plausible deniability?
I wanted to try to hide some of my files in a way that it wouldn't even be visible that something is...
0
2024-06-18T15:19:01
https://dev.to/lauralaura/is-it-safe-to-use-encryption-with-plausible-deniability-l7m
encryption, linux, security
I wanted to try to hide some of my files in a way that it wouldn't even be visible that something is hidden. I found a [tutorial for Cryptsetup LUKS](https://www.blunix.com/blog/plausible-encryption-deniability-on-linux-with-cryptsetup-luks.html) and tried to implement it. The problem is that I read in several other [threads](https://forums.whonix.org/t/veracrypt-plausible-deniability-encryption-for-whonix-users/7247/6) that a person might be detained and arrested if signs of such encryption are found on their computer. Does anyone know if it's actually a serious issue?
lauralaura
1,892,600
How Discord Built `Access!` - An Authorization Management Portal
Managing access permissions is a critical aspect of application security, especially for large...
0
2024-06-18T15:15:18
https://www.permit.io/blog/how-discord-built-access-an-authorization-management-portal
authorization, iam, discord, webdev
Managing access permissions is a critical aspect of application security, especially for large platforms like Discord. In a great blog post, “[Access: A New Portal For Managing Internal Authorization](https://discord.com/blog/access-a-new-portal-for-managing-internal-authorization),” Elisa Guerrant, Security Engineer at Discord, details how they built “Access!” - a secure, accessible internal authorization portal.

This blog will discuss the importance of creating permission management systems that balance security and accessibility for all application users. It will also introduce "Permit: Elements," a set of prebuilt, embeddable UI components designed to further streamline and enhance permission delegation, demonstrating how to improve the accessibility of your own authorization management system. Let’s dive in!

## Authorization: Security First, Accessibility Second

Building and managing a system that can handle access permissions is a must for basically any application these days, especially one as large as Discord. With applications growing ever more complex and the reality of microservice-based architectures, ensuring that the right users have appropriate access to the right resources at the right times is a bigger challenge than ever.

When building authorization, especially considering it’s a feature [mostly built in-house that requires a very large amount of effort to develop](https://www.permit.io/blog/roll-your-own-rbac), developers tend to focus on building authorization that provides the required level of security and restrictions their app needs. This approach is great, but it comes with a catch: it often overlooks the importance of developer and user experience. To understand why these experiences matter in authorization, let’s talk about who uses our app.

### Who Uses Our Application?

Every modern application consists of [several levels of users](https://www.permit.io/blog/best-practices-for-effective-user-permissions-and-access-delegation).
It’s never as straightforward as end-users and developers.

**End-users** interact with our application in multiple areas, each of which must be considered when designing our authorization layer. This includes [what users can see in the application](https://www.permit.io/blog/generate-personalized-frontend-experiences-with-user-attributes-and-feature-flags), which actions they can perform, and what data they can access.

**App developers** need a level of user permissions management that allows them to create and handle new policies, requests, and processes. This obviously doesn't mean their access is unlimited; it requires monitoring and management as well. Who decides what developers can access and what level of control they have over our application's authorization layer? More often than not, a specific person or group will be directly in charge (albeit reluctantly) of managing the application’s user and role management.

Depending on our application, there are more possible levels here. **Internal users** who are members of our organization might need different levels of access, and some may need to manage and delegate access to end-users. **Organizational stakeholders**, such as DevOps, RevOps, and AppSec teams, require access to specific parts of the application, as well as the ability to manage user roles and delegate access management to internal users.

### How Do We Handle All These Users?

The prospect of creating all-powerful superusers is alluring to many developers. What can be more secure than directly calling all the shots yourself? And while that’s true to some extent, this approach backfires very quickly. If you, as a developer, are the person directly in charge of all user roles and permission management in your organization, you will very quickly feel the strain this puts on you and the entire R&D team as you turn into a bottleneck for the entire business operation.
Only one person/group can manage authorization for every user of your application? Great - it’s their full-time job now. The opposite approach, delegating all power away from developers to other stakeholders, is often equally dangerous, as it can create a slow-moving, inefficient bureaucracy or an unstable, risk-filled environment.

What this means is that our authorization layer needs not only to provide a solid, secure basis for determining who has access to what within your application, but it must also be approachable and easy to understand.

This is emphasized even further in Elisa’s blog, where she mentions the fact that [74% of all breaches involve the human element, with privilege misuse and stolen user credentials being two of the primary threats](https://www.verizon.com/business/resources/Tcdc/reports/2023-data-breach-investigations-report-dbir.pdf). This is the direct result of developers’ tendency to focus on authorization being secure (which is great) while neglecting the need for it to be accessible (which is bad). As Elisa mentions in her article:

> “Policies designed to manage permissions tend to cause headaches for end-users and leave the decisions about “who can access what” in the hands of people who don’t have much information about the resources – often IT or Security”.

At Permit.io, we encounter this all the time. Many developers come to us because their homebrewed authorization solutions [lack both developer and user experience](https://www.permit.io/blog/devsecops-is-nothing-without-devex).

Let’s see what Discord did to help solve these issues:

## What Did Discord Do?

To address these concerns, the folks at Discord built an internal portal for staff to manage user permissions **for their internal users, organizational stakeholders, and developers**.
Focusing on **workforce identity** (we’ll get to customer identity in a sec with Permit.io), they created it with the goals of being secure, transparent, and easy to use, and eventually [made the tool publicly available and free to use](https://github.com/discord/access).

### What was their goal?

Discord uses Okta as its authentication solution for SSO. As they grew, they wanted the ability to further customize access controls for their employees. These were the goals they set for themselves:

- The tool needed **to be security-focused and enforce the principle of least privilege**.
- It needed to have **an intuitive user experience that didn’t require staff to have deep knowledge of the access control tool or the systems being managed**. This would help ensure the tool’s adoption within the company.
- **The tool needed to be “self-serve” and allow delegation within their internal policies**. As we mentioned previously, having IT or Security teams exclusively manage permissions would create a bottleneck and slow down the whole operation.
- **Finally, they wanted a system that was transparent and discoverable.** Users would be able to see what access they or their teammates have, what resources are controlled by the system, and what permissions they had in the past but have since expired. They should also be able to request access to resources freely, empowering them to troubleshoot their own permissions and solve issues through access requests.

After what Elisa describes as “weeks of development time and dozens of cups of coffee”, the Discord team ended up building “Access” - an RBAC-based solution for managing their internal user access control.

### What features does this tool provide?
Let’s see what features Discord’s tool ended up providing and how they align with the [Best Practices for Effective User Permissions and Access Delegation](https://www.permit.io/blog/best-practices-for-effective-user-permissions-and-access-delegation):

- **Delegated control:** Each group or app has a set of designated owners who control membership and access related to that resource. This follows the best practice of having a primary admin or owner who manages permissions, preventing conflicts such as two admins being able to remove each other, and ensuring that those with the most context handle permissions.
- **Time-bounded access:** This feature mitigates the risk of accumulating unnecessary permissions over time by allowing access to be set for predetermined periods.
- **Access requests:** Access requests enable users to discover and request necessary permissions, which are then reviewed by appropriate owners. This delegation supports a cascading model of permissions, ensuring that those with the best understanding of the needs grant permissions, balancing the load across different levels of the organization.
- **Audit logs:** Every user, group, and role in Access has a page viewable by all employees that shows its complete membership and ownership history. This transparency supports the generation of audit logs, which are crucial for tracking changes and understanding permission alterations at every level of the system.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2z28skbcza4ysnpwt8j.png)

- **Using classic policy models for ‘high-level’ authorization policies (e.g., [RBAC](https://www.permit.io/blog/an-introduction-to-role-based-access-control), [ABAC](https://www.permit.io/blog/what-is-attribute-based-access-control)):** Access was built based on an RBAC model, which helped ensure consistency and ease of understanding across the system, lowering the cognitive load on system managers and users.
- **Building an easy-to-use UI:** The intuitive, user-friendly interface built by the Discord team makes permission management accessible and straightforward, so that all users, regardless of technical expertise, can effectively manage and understand their permissions.

## Implementing Accessible Authorization Elements

The tool built by Discord provides a masterful solution to the problem of effectively handling user permissions and access delegation. As mentioned before, we at [Permit.io](http://Permit.io) encounter this problem with our users all the time, and it spans beyond internal users.

[Permit.io](http://Permit.io) aims to provide a comprehensive solution for managing user permissions **for all application users, from end-users to application developers,** in a single, unified interface. When we initially launched Permit, our goal was to provide developers with the building blocks needed (SDKs, gateway plugins, data updates) to integrate permissions into an application. This included a no-code UI for creating and managing permissions with RBAC, ABAC, and ReBAC support.

![Permit.io’s permission management UI with RBAC, ABAC, and ReBAC policies, all together in one interface.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9oi8543y496uurpewc7n.png)

Permit.io’s permission management UI with RBAC, ABAC, and ReBAC policies, all together in one interface.
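Returning to the time-bounded access feature described earlier: as a toy illustration (this is not Discord's or Permit.io's actual code, and the names here are made up), an expiring grant can be modeled as a role plus an expiry timestamp that is checked on every access decision:

```python
from datetime import datetime, timedelta

# Hypothetical in-memory store of grants: (user, role) -> expiry time.
# A real system would persist this and audit every change.
grants = {}

def grant(user, role, ttl_hours):
    """Give `user` the `role` for a limited time window."""
    grants[(user, role)] = datetime.utcnow() + timedelta(hours=ttl_hours)

def has_role(user, role):
    """A grant only counts if it exists and has not expired yet."""
    expiry = grants.get((user, role))
    return expiry is not None and datetime.utcnow() < expiry

grant("elisa", "deploy-admin", ttl_hours=8)
print(has_role("elisa", "deploy-admin"))   # True while the grant is live
print(has_role("elisa", "billing-admin"))  # False: never granted
```

Because expiry is part of the grant itself, permissions clean themselves up: nobody has to remember to revoke access after the on-call shift ends.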
To further improve accessibility, we introduced “Permit Elements,” a set of prebuilt and embeddable UI components that provide fully functional access control. These components allow you to safely delegate permissions to your end users.

Permit Elements enable crucial permission management functionalities (such as User Management, Audit Logs, Access Requests, and Process Approval) to propagate through your user stack—from developers to internal users and end users, ensuring efficient and secure access management without bottlenecks.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eakecl99ooa4thy8dco3.png)

By incorporating Permit Elements, you can enhance the accessibility and usability of your permission management system, ensuring that it is not only secure but also user-friendly and efficient.

## Conclusion

Building an accessible permission management system is crucial for maintaining both security and efficiency in modern applications. Discord's innovative approach with its "Access" portal showcases how balancing security with user-friendly features can streamline operations and enhance overall security. By incorporating intuitive design and robust delegation capabilities, Discord has set a high standard for internal authorization management.

To further enhance your permission management system, we covered “Permit Elements” - a set of prebuilt, embeddable UI components that offer fully functional access control. These components enable you to delegate permissions safely and effectively to your end users. By adopting such solutions, you can ensure that your authorization processes are not only secure but also accessible and efficient for all users.
imdburnot
1,892,599
Box<T> Type in Rust Allows for Heap Allocation
In the world of systems programming, managing memory efficiently and safely is a critical challenge....
0
2024-06-18T15:15:16
https://dev.to/ashsajal/box-type-in-rust-allows-for-heap-allocation-kfj
rust, coding, programming, tutorial
In the world of systems programming, managing memory efficiently and safely is a critical challenge. Rust, with its unique approach to memory management, stands out for providing powerful tools to handle this complexity. One such tool is the `Box<T>` type, which facilitates heap allocation in a manner that integrates seamlessly with Rust’s ownership and borrowing rules. This article delves into how `Box<T>` works, its benefits, and its use cases in Rust programming.

#### Understanding `Box<T>`

The `Box<T>` type is one of Rust’s smart pointers, providing a way to allocate values on the heap rather than the stack. When you create a `Box<T>`, you allocate space for the value `T` on the heap and the box itself on the stack. The box contains a pointer to the heap-allocated value. This mechanism allows you to store data that may not fit on the stack or needs to live beyond the current scope.

Here’s a simple example of how to use `Box<T>`:

```rust
let b = Box::new(5);
println!("b = {}", b);
```

In this example, `Box::new(5)` allocates an integer `5` on the heap and `b` becomes the owner of this heap-allocated value. When `b` goes out of scope, Rust automatically deallocates the memory on the heap, ensuring no memory leaks.

#### Benefits of Using `Box<T>`

1. **Heap Allocation**: The primary benefit of `Box<T>` is its ability to allocate memory on the heap. This is particularly useful for large data structures or when you need to pass data around without copying it.
2. **Ownership and Safety**: `Box<T>` integrates with Rust’s ownership system, providing guarantees that the heap-allocated memory is properly cleaned up when no longer needed. This avoids common pitfalls like dangling pointers and memory leaks.
3. **Dynamic Sized Types (DSTs)**: `Box<T>` can store types whose size is not known at compile time, such as trait objects. This makes it possible to handle polymorphic data in a type-safe manner.
4. **Recursive Data Structures**: Creating recursive data structures like linked lists or trees is straightforward with `Box<T>`. Since Rust needs to know the size of each type at compile time, and recursive types’ sizes cannot be determined at compile time, `Box<T>` provides a way to overcome this limitation by wrapping recursive elements in a heap-allocated box.

#### Use Cases for `Box<T>`

1. **Storing Large Data Structures**: When dealing with large data structures that would exceed the stack size, `Box<T>` allows these structures to be stored on the heap, avoiding stack overflow issues.
2. **Passing Data without Cloning**: `Box<T>` enables you to pass data to functions or across threads without the need to clone the data, improving performance by avoiding unnecessary copies.
3. **Trait Objects**: Using `Box<dyn Trait>` allows you to work with trait objects, enabling dynamic dispatch and polymorphism. This is useful for scenarios where you need to store and operate on different types that implement the same trait.
4. **Implementing Recursive Data Structures**: For example, a binary tree can be implemented using `Box<T>` to manage the nodes:

```rust
enum BinaryTree {
    Empty,
    NonEmpty(Box<TreeNode>),
}

struct TreeNode {
    value: i32,
    left: BinaryTree,
    right: BinaryTree,
}
```

### Don’t Misuse `Box<T>` in Rust: Avoid These Common Pitfalls

Rust's `Box<T>` type is a powerful tool for heap allocation, but misusing it can lead to suboptimal performance and bugs. Understanding its proper use is crucial for writing efficient and safe Rust code. Now I will highlight common mistakes developers make with `Box<T>` and how to avoid them.

#### Misunderstanding Heap vs. Stack Allocation

A common mistake is using `Box<T>` when stack allocation would suffice. For example, small data structures and values that don't need to outlive the current scope should remain on the stack for better performance.
Use `Box<T>` primarily when dealing with large data structures or when explicit heap allocation is necessary.

#### Inefficient Memory Management

Overusing `Box<T>` can lead to fragmented memory and inefficient use of heap space. Ensure you only use `Box<T>` when heap allocation is justified. For most cases involving small or non-recursive data structures, Rust's default stack allocation is more efficient.

#### Incorrect Handling of Trait Objects

When working with trait objects, ensure you use `Box<dyn Trait>` correctly. Misusing trait objects can lead to performance penalties due to dynamic dispatch. Always evaluate if trait objects are necessary or if generics can achieve the same goal with better performance.

```rust
let boxed_trait: Box<dyn MyTrait> = Box::new(MyStruct {});
```

#### Overlooking Smart Pointer Alternatives

While `Box<T>` is useful, sometimes other smart pointers like `Rc<T>` or `Arc<T>` are more appropriate for managing shared ownership or ensuring thread safety. Evaluate the specific requirements of your application to choose the right smart pointer.

#### Failing to Implement Drop Correctly

When you manually implement the `Drop` trait for types that own `Box<T>`, ensure you correctly handle the cleanup to avoid memory leaks. Rust’s ownership model simplifies this, but custom implementations require careful attention.

#### Ignoring Performance Implications

Heap allocation with `Box<T>` is slower than stack allocation. Measure and profile your code to understand the performance impact of using `Box<T>`. Optimize your data structures and algorithms to minimize unnecessary heap allocations.

`Box<T>` is a valuable tool in Rust’s memory management toolkit, but it should be used judiciously. By avoiding these common pitfalls and understanding the appropriate use cases for `Box<T>`, you can write more efficient and robust Rust code. Always consider the trade-offs between heap and stack allocation and choose the best tool for your specific needs.
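As a concrete, runnable illustration of the recursive-type point above, here is the classic cons list. It only compiles because `Box<List>` gives the compiler a known, pointer-sized field:

```rust
// Without Box, `List` would contain itself directly and have infinite size;
// boxing the tail makes each node a fixed-size value plus a heap pointer.
enum List {
    Cons(i32, Box<List>),
    Nil,
}

// Walk the list recursively, adding up the values.
fn sum(list: &List) -> i32 {
    match list {
        List::Cons(value, rest) => *value + sum(rest),
        List::Nil => 0,
    }
}

fn main() {
    let list = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
    println!("sum = {}", sum(&list)); // prints "sum = 3"
}
```

When each boxed node goes out of scope, its heap allocation is freed automatically, so the whole list is cleaned up without any manual memory management.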
#### Conclusion

The `Box<T>` type is a fundamental tool in Rust’s arsenal for memory management, providing safe and efficient heap allocation. By leveraging `Box<T>`, Rust programmers can handle large data structures, enable polymorphism through trait objects, and implement recursive data structures with ease. Understanding and utilizing `Box<T>` effectively allows developers to write more flexible, efficient, and safe Rust programs, harnessing the full power of Rust's memory management capabilities. As you continue your journey with Rust, mastering `Box<T>` will undoubtedly be a key step in building robust and performant applications.

### References

1. [Rust Documentation on Box<T>](https://doc.rust-lang.org/std/boxed/struct.Box.html)
2. [Rust Ownership and Borrowing](https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html)
3. [The Rust Programming Language: Smart Pointers](https://doc.rust-lang.org/book/ch15-00-smart-pointers.html)
4. [Rust Reference Counting with Rc<T>](https://doc.rust-lang.org/std/rc/struct.Rc.html)
5. [Atomic Reference Counting with Arc<T>](https://doc.rust-lang.org/std/sync/struct.Arc.html)
6. [Rust Trait Objects](https://doc.rust-lang.org/book/ch17-02-trait-objects.html)
7. [Implementing the Drop Trait](https://doc.rust-lang.org/book/ch15-03-drop.html)

**Follow me on [X/Twitter](https://twitter.com/ashsajal1)**
ashsajal
1,892,598
Combining Node.js with Async Rust for remarkable performance
Last month we announced that Encore.ts — an Open Source backend framework for TypeScript — is...
0
2024-06-18T15:14:50
https://encore.dev/blog/event-loops
typescript, javascript, node, programming
Last month we [announced](https://encore.dev/blog/encore-for-typescript) that Encore.ts — an Open Source backend framework for TypeScript — is generally available and ready to use in production. So we figured now is a great time to dig into some of the design decisions we made along the way, and how they lead to the remarkable performance numbers we've seen.

## The Numbers

We benchmarked Encore.ts, Bun, Fastify, and Express, both with and without schema validation. For schema validation we used Zod where possible. In the case of Fastify we used Ajv as the officially supported schema validation library.

For each benchmark we took the best result of five runs. Each run was performed by making as many requests as possible with 150 concurrent workers, over 10s. The load generation was performed with [oha](https://github.com/hatoo/oha), a Rust and Tokio-based HTTP load testing tool.

Enough talk, let's see the numbers!

![Requests per second](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jo3kwnlnbitr27qmlera.png)

![Response latency](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxe0j06ndo2axwayne7a.png)

(*Check out the benchmark code on [GitHub](https://github.com/encoredev/ts-benchmarks).*)

Aside from performance, Encore.ts achieves this while maintaining **100% compatibility with Node.js**. How is this possible? From our testing we've identified three major sources of performance, all related to how Encore.ts works under the hood.

## Boost #1: Putting an event loop in your event loop

Node.js runs JavaScript code using a single-threaded event loop. Despite its single-threaded nature this is quite scalable in practice, since it uses non-blocking I/O operations and the underlying V8 JavaScript engine (that also powers Chrome) is extremely optimized.

But you know what's faster than a single-threaded event loop? A multi-threaded one.

Encore.ts consists of two parts:

1. A TypeScript SDK that you use when writing backends using Encore.ts.
2. A high-performance runtime, with a multi-threaded, asynchronous event loop written in Rust (using [Tokio](https://tokio.rs/) and [Hyper](https://hyper.rs/)).

The Encore Runtime handles all I/O like accepting and processing incoming HTTP requests. This runs as a completely independent event loop that utilizes as many threads as the underlying hardware supports. Once the request has been fully processed and decoded, it gets handed over to the Node.js event-loop, which then takes the response from the API handler and writes it back to the client.

*(Before you say it: Yes, we put an event loop in your event loop, so you can event-loop while you event-loop.)*

![Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tzh8mlxzsg9xb50fy3yv.png)

## Boost #2: Precomputing request schemas

Encore.ts, as the name suggests, is designed from the ground up for TypeScript. But you can't actually run TypeScript: it first has to be compiled to JavaScript, by stripping all the type information. This means run-time type safety is much harder to achieve, which makes it difficult to do things like validating incoming requests, leading to solutions like [Zod](https://zod.dev/) becoming popular for defining API schemas at runtime instead.

Encore.ts works differently. With Encore, you define type-safe APIs using native TypeScript types:

```ts
import { api } from "encore.dev/api";

interface BlogPost {
  id: number;
  title: string;
  body: string;
  likes: number;
}

export const getBlogPost = api(
  { method: "GET", path: "/blog/:id", expose: true },
  async ({ id }: { id: number }): Promise<BlogPost> => {
    // ...
  },
);
```

Encore.ts then parses the source code to understand the request and response schema that each API endpoint expects, including things like HTTP headers, query parameters, and so on. The schemas are then processed, optimized, and stored as a Protobuf file.
When the Encore Runtime starts up, it reads this Protobuf file and pre-computes a request decoder and response encoder, optimized for each API endpoint, using the exact type definition each API endpoint expects. In fact, Encore.ts even handles request validation directly in Rust, ensuring invalid requests never have to even touch the JS layer, mitigating many denial of service attacks.

Encore’s understanding of the request schema also proves beneficial from a performance perspective. JavaScript runtimes like Deno and Bun use a similar architecture to that of Encore's Rust-based runtime (in fact, Deno also uses Rust+Tokio+Hyper), but lack Encore’s understanding of the request schema. As a result, they need to hand over the un-processed HTTP requests to the single-threaded JavaScript engine for execution.

Encore.ts, on the other hand, handles much more of the request processing inside Rust, and only hands over the decoded request objects. By handling much more of the request life-cycle in multi-threaded Rust, the JavaScript event-loop is freed up to focus on executing application business logic instead of parsing HTTP requests, yielding an even greater performance boost.

## Boost #3: Infrastructure Integrations

Careful readers might have noticed a trend: the key to performance is to off-load as much work from the single-threaded JavaScript event-loop as possible.

We've already looked at how Encore.ts off-loads most of the request/response lifecycle to Rust. So what more is there to do?

Well, backend applications are like sandwiches. You have the crusty top-layer, where you handle incoming requests. In the center you have your delicious toppings (that is, your business logic, of course). At the bottom you have your crusty data access layer, where you query databases, call other API endpoints, and so on.

We can't do much about the business logic — we want to write that in TypeScript, after all! — but there's not much point in having all the data access operations hogging our JS event-loop. If we moved those to Rust we'd further free up the event loop to be able to focus on executing our application code.

So that's what we did. With Encore.ts, you can declare infrastructure resources directly in your source code. For example, to define a Pub/Sub topic:

```ts
import { Topic } from "encore.dev/pubsub";

interface UserSignupEvent {
  userID: string;
  email: string;
}

export const UserSignups = new Topic<UserSignupEvent>("user-signups", {
  deliveryGuarantee: "at-least-once",
});

// To publish:
await UserSignups.publish({ userID: "123", email: "hello@example.com" });
```

"So which Pub/Sub technology does it use?" — All of them!

The Encore Rust runtime includes implementations for most common Pub/Sub technologies, including AWS SQS+SNS, GCP Pub/Sub, and NSQ, with more planned (Kafka, NATS, Azure Service Bus, etc.). You can specify the implementation on a per-resource basis in the runtime configuration when the application boots up, or let Encore's Cloud DevOps automation handle it for you.

Beyond Pub/Sub, Encore.ts includes infrastructure integrations for PostgreSQL databases, Secrets, Cron Jobs, and more.

All of these infrastructure integrations are implemented in the Encore.ts Rust Runtime. This means that as soon as you call `.publish()`, the payload is handed over to Rust which takes care to publish the message, retrying if necessary, and so on. Same thing goes with database queries, subscribing to Pub/Sub messages, and more.

The end result is that with Encore.ts, virtually all non-business-logic is off-loaded from the JS event loop.

![Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s5ejmtm3gzyzjje78xcq.png)

In essence, with Encore.ts you get a truly multi-threaded backend "for free", while still being able to write all your business logic in TypeScript.

## Conclusion

Whether or not this performance is important depends on your use case.
If you're building a tiny hobby project, it's largely academic. But if you're shipping a production backend to the cloud, it can have a pretty large impact. Lower latency has a direct impact on user experience. To state the obvious: A faster backend means a snappier frontend, which means happier users. Higher throughput means you can serve the same number of users with fewer servers, which directly corresponds to lower cloud bills. Or, conversely, you can serve more users with the same number of servers, ensuring you can scale further without encountering performance bottlenecks. While we're biased, we think Encore offers a pretty excellent, best-of-all-worlds solution for building high-performance backends in TypeScript. It's fast, it's type-safe, and it's compatible with the entire Node.js ecosystem. And it's all Open Source, so you can check out the code and contribute on [GitHub](https://github.com/encoredev/encore). Or just [give it a try](https://encore.dev/docs/tutorials) and let us know what you think!
marcuskohlberg
1,892,596
SteamVR Overlay with Unity: Appendix
This part contains where to look for more detailed information. OpenVR...
27,740
2024-06-18T15:11:43
https://dev.to/kurohuku/appendix-3e8f
unity3d, steamvr, openvr, vr
This part covers where to look for more detailed information.

## OpenVR repository

OpenVR’s official repository is the most important resource:

https://github.com/ValveSoftware/openvr

I recommend reading the wiki first:

https://github.com/ValveSoftware/openvr/wiki/API-Documentation

Detailed information that is not on the wiki tends to be written in the comments of the C++ header files. If you are looking for a particular feature, try searching the header file:

https://github.com/ValveSoftware/openvr/blob/master/headers/openvr.h

For anything else, try searching the issues:

https://github.com/ValveSoftware/openvr/issues

## SteamVR Unity Plugin repository

The SteamVR Unity Plugin repository has utilities for Unity:

https://github.com/ValveSoftware/steamvr_unity_plugin

Here is the documentation:

https://valvesoftware.github.io/steamvr_unity_plugin/api/index.html

## Other repositories

Open source projects like OVR Advanced Settings are helpful:

https://github.com/OpenVR-Advanced-Settings/OpenVR-AdvancedSettings

## Show 3d objects with overlay

An overlay can show 3d objects using stereo images. For example, the VR paint app Vermillion has an overlay mode that displays 3d objects.

{% embed https://www.youtube.com/watch?v=udc1i97KPLY %}

I don’t have complete information, but I'll list the methods I know. (The information below may contain mistakes.)

### Method 1: SideBySide

Pass `VROverlayFlags.SideBySide_Parallel` to `SetOverlayFlags()`. This enables stereo images for the overlay. A side-by-side image shows its left half to the left eye and its right half to the right eye, which creates the 3d effect. Create two cameras in Unity, one for each eye, and draw the camera outputs next to each other onto a texture to make the side-by-side texture.

### Method 2: Stereo Panorama

Pass `VROverlayFlags.StereoPanorama` to `SetOverlayFlags()`. This enables a stereo panorama for the overlay. Like side-by-side, a stereo panorama uses two areas of the image, one for each eye.
### Method 3: Overlay projection

`SetOverlayTransformProjection()` shows an overlay to one specific eye only. You can get the eye position by transforming Origin -> HMD -> Eye with [GetEyeToHeadTransform()](https://github.com/ValveSoftware/openvr/wiki/IVRSystem::GetEyeToHeadTransform).

{% embed https://twitter.com/kurohuku7/status/1566307113697423360 %}

### Method 4: Combine planes

If the desired 3d object is a simple form like a cube, you can build it out of 2d planes. For example, create a cube with 6 overlays, or a cylinder with a curved overlay using `SetOverlayCurvature()`. I use this method in OVR Locomotion Effect: it makes a large grid cube that encloses the player with 6 overlays. Also, the wind effect is made with a curved overlay shaped as a tube that encloses the player’s HMD to create a sense of depth.

{% embed https://www.youtube.com/watch?v=vv-e_6-vjiE %}

Search `openvr.h` for the keywords `stereo` or `sidebyside` to find more about 3d objects.

## Get the HMD output

`GetMirrorTextureD3D11()` or `GetMirrorTextureGL()` of `OpenVR.Compositor` reads the HMD output.
kurohuku
1,892,595
Pros and Cons of Choosing Python as Your Programming Language
Hello everyone! In today's post, we will explore the advantages and disadvantages of choosing Python...
0
2024-06-18T15:10:56
https://dev.to/techinsight/pros-and-cons-of-choosing-python-as-your-programming-language-1bcl
learntocode, techeducation, programminglanguages, coding
Hello everyone! In today's post, we will explore the advantages and disadvantages of choosing Python as your programming language. Python is one of the most popular languages for beginners and professionals alike, but like any language, it has its strengths and weaknesses.

## Introduction: Why Consider Python?

Python has been around for over three decades and has become a staple in the programming world. It's known for its simplicity and versatility, making it a favorite among both newcomers and seasoned developers. But what makes Python stand out, and what are the potential drawbacks you should be aware of before diving in?

## Pros of Choosing Python

1. **Ease of Learning and Readability:**
   - Python's simple and clean syntax makes it easy to learn, especially for beginners.
   - Code readability is high, making it easier to understand and maintain.
2. **Versatility:**
   - Python is a versatile language that can be used for web development, data analysis, machine learning, automation, and more.
   - It has a vast standard library that supports many common programming tasks.
3. **Large Community and Support:**
   - Python has a large and active community, which means plenty of resources, tutorials, and libraries are available.
   - Extensive documentation and a wealth of third-party packages are at your disposal.
4. **Cross-Platform Compatibility:**
   - Python runs on various platforms, including Windows, macOS, and Linux.
   - This cross-platform nature allows you to develop applications that can work on multiple operating systems.
5. **Strong Support for Integration:**
   - Python integrates well with other languages and technologies, making it a good choice for projects requiring different programming languages.
   - It is often used as a scripting language for applications written in other languages.

## Cons of Choosing Python

1. **Performance Limitations:**
   - Python is generally slower than compiled languages like C++ or Java because it is an interpreted language.
   - It may not be the best choice for performance-critical applications.
2. **Mobile Development:**
   - While Python is great for web and desktop applications, it is not as strong in mobile development.
   - There are fewer frameworks and tools available for building mobile apps compared to languages like Swift or Kotlin.
3. **Runtime Errors:**
   - Python is dynamically typed, which can lead to runtime errors that are not caught until the code is executed.
   - This can make debugging more challenging compared to statically typed languages.
4. **Memory Consumption:**
   - Python can be more memory-intensive, which might be a concern for applications that need to run on devices with limited resources.

## Conclusion: Is Python Right for You?

Python is an excellent choice for many programming tasks due to its readability, versatility, and strong community support. However, it is essential to consider its limitations, such as performance and mobile development capabilities, before deciding if it's the right language for your project.

What has been your experience with Python? Do you have any questions or topics you'd like me to cover in future posts? Let me know in the comments!

Thanks for reading, and stay tuned for more insights on programming and technology.
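As a small footnote to the runtime-errors point above, here is a minimal sketch: the type mistake below is accepted silently and only surfaces when the line actually executes, whereas a statically typed language would reject it at compile time.

```python
def add_totals(a, b):
    # Nothing stops a caller from passing a string here; the
    # mistake only surfaces when this line executes at runtime.
    return a + b

print(add_totals(2, 3))  # 5

try:
    add_totals(2, "3")   # TypeError, but only at runtime
except TypeError as e:
    print(f"Caught at runtime: {e}")
```

Type hints plus a checker like mypy can catch many of these mistakes before running the code, which is a common way to mitigate this drawback.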
techinsight
1,892,594
Claude is so nice that
when I failed to talk to Claude because its capacity maxed out. This time I instantly grabbed my...
0
2024-06-18T15:08:01
https://dev.to/swimmingpolar/claude-is-so-nice-that-1i64
claude, chatgpt, ai, bullshit
when I failed to talk to Claude because its capacity had maxed out, I instantly grabbed my credit card and subscribed. On the other hand, when ChatGPT starts bullshitting, my patience with it wears thin. I don't know why, but its bullshit level keeps accumulating, and when it hits 100%, I unsubscribe with a "you're never getting a second chance" kind of anger. And of course, I sub again like a week later. So, technically, I've never really unsubscribed from ChatGPT. lol Anyway, I mean... I'm just touched by how Claude talks.
swimmingpolar
1,892,592
Casting and its types in networking
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-18T15:06:56
https://dev.to/komsenapati/casting-and-its-types-in-networking-4mcn
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Casting means sending data packets over a network. Unicast is sending data to a single node in a network (one-to-one). Broadcast is sending data to all nodes in a network (one-to-all). Multicast is sending data to a group of nodes in a network (one-to-many).

## Additional Context

I was learning about this from a YouTube video and thought: why not write up this networking concept for the challenge?
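As a small, runnable illustration (my addition, using Python's standard `ipaddress` module — the addresses are examples, not from the original explainer): the IPv4 address space itself encodes some of these delivery modes.

```python
import ipaddress

# 224.0.0.0/4 is the IPv4 multicast range (one-to-many delivery).
print(ipaddress.ip_address("224.0.0.1").is_multicast)   # True

# An ordinary host address, as used for unicast (one-to-one).
print(ipaddress.ip_address("192.168.1.10").is_multicast)  # False

# Every IPv4 subnet has a directed broadcast address (one-to-all
# within that network): the highest address in the subnet.
net = ipaddress.ip_network("192.168.1.0/24")
print(net.broadcast_address)                             # 192.168.1.255
```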
komsenapati
1,892,590
RELIABLE EXPERT TO RECOVER SCAMMED CRYPTO - RESILIENT SHIELD RECOVERY
As a plumber in Brentwood, my life took an unexpected turn when I fell victim to a cryptocurrency...
0
2024-06-18T15:05:12
https://dev.to/frank_mormon_0e0c2d205994/reliable-expert-to-recover-scammed-crypto-resilient-shield-recovery-3nnp
As a plumber in Brentwood, my life took an unexpected turn when I fell victim to a cryptocurrency scam that cost me a staggering £155,500. It all began innocently enough, as I sought to invest and grow my savings through what seemed like a reputable exchange site. The promise of lucrative returns was enticing, and I eagerly transferred my hard-earned money into their platform, believing I was making a sound financial decision. Initially, everything appeared to be going according to plan. My investments showed promising growth, and I allowed myself to dream of a more secure financial future. However, the dream quickly turned into a nightmare when I attempted to withdraw my funds. Suddenly, the exchange site became evasive, citing various reasons why my withdrawal couldn't be processed at that moment. Days turned into weeks, and then months, with no sign of my money. In a state of panic and desperation, I turned to various avenues for help. I tried hiring hackers on three separate occasions, each claiming they could recover my funds for a fee. Hopeful yet cautious, I parted with $25,000 in total to these supposed experts, only to be left even more devastated and financially drained. Not only did they fail to recover my money, but they also vanished without a trace, leaving me in a precarious financial situation. At that point, I felt utterly hopeless. I couldn't fathom how I would recover from such a significant loss, especially considering the trust I had placed in those who promised to help. It was then that a ray of hope appeared in the form of RESILIENT SHIELD RECOVERY.. I was referred to them by a friend who had faced a similar ordeal and found success with their assistance. Skeptical yet clinging to the last vestiges of hope, I decided to reach out to them. From the very first contact with RESILIENT SHIELD RECOVERY., I felt a sense of relief and assurance. 
They listened attentively to my story, empathizing with my situation and understanding the urgency of my need. Unlike my previous encounters, they didn't make lofty promises or demand upfront payments. Instead, they offered a clear and transparent plan of action, outlining the steps they would take to recover my lost funds. RESILIENT SHIELD RECOVERY.'s professionalism and expertise were evident throughout the process. They meticulously investigated the scam, tracing digital footprints and navigating through complex layers of deception. Their approach was methodical and thorough, leaving no stone unturned in their quest for justice on my behalf. What struck me most was their commitment to integrity and client satisfaction. They maintained open lines of communication, providing regular updates and patiently answering my questions. This level of transparency reassured me that I was in capable hands, restoring my faith in the possibility of reclaiming what was rightfully mine. When RESILIENT SHIELD RECOVERY. finally succeeded in recovering a substantial portion of my lost funds, I was overwhelmed with gratitude and relief. It wasn't just about the money; it was about reclaiming my sense of security and trust. With their help, I was able to put the recovered funds back into my cryptocurrency wallet, albeit with newfound caution and wisdom. I am grateful to RESILIENT SHIELD RECOVERY. for turning what seemed like a hopeless situation into a story of redemption. They didn't just recover my money; they restored my hope and provided a lifeline when I needed it most. To anyone who finds themselves in a similar predicament, I wholeheartedly recommend RESILIENT SHIELD RECOVERY.. They are not just experts in recovering stolen funds; they are trustworthy allies dedicated to helping individuals reclaim their financial well-being. Name ; RESILIENT SHIELD RECOVERY WhatsApp No; +1(936)244‑3264 Email: resilientshieldrecovery@contractor.net
frank_mormon_0e0c2d205994
1,892,589
Taming the Compliance Beast: AWS Config to the Rescue
Taming the Compliance Beast: AWS Config to the Rescue In today's intricate regulatory...
0
2024-06-18T15:05:08
https://dev.to/virajlakshitha/taming-the-compliance-beast-aws-config-to-the-rescue-bnh
![usecase_content](https://cdn-images-1.medium.com/proxy/1*zqfBK-ivKOyE5TLv4mHkkA.png)

# Taming the Compliance Beast: AWS Config to the Rescue

In today's intricate regulatory landscape, maintaining compliance isn't just a checkbox—it's a business imperative. Fortunately, AWS offers a powerful tool to help navigate this complexity: AWS Config. This service acts as a watchful guardian over your AWS environment, continuously monitoring and recording configurations and alerting you to any deviations from your defined standards. Let's delve deeper into how AWS Config empowers organizations to establish robust compliance controls.

### Understanding AWS Config: Your Cloud Configuration Historian

AWS Config is essentially a configuration management service that provides a detailed, historical view of your AWS resources. This means you can see:

* **Resource Inventory:** A complete list of all resources within your AWS account, including their types, configurations, and relationships.
* **Configuration History:** A chronological record of changes made to your resources, allowing you to track modifications, identify potential issues, and meet audit requirements.
* **Configuration Rules:** Customizable rules that you define to enforce specific configuration standards. For instance, you can set up rules to ensure all EC2 instances are launched within a specific VPC, or that all S3 buckets have encryption enabled.

### Use Cases: Where AWS Config Shines

The beauty of AWS Config lies in its versatility. It can be tailored to address a diverse range of compliance needs across various industries and regulatory frameworks. Here are a few examples:

**1. Enforcing Security Best Practices:**

- **Problem:** Unencrypted S3 buckets can expose sensitive data and violate data privacy regulations.
- **Solution:** Create a configuration rule in AWS Config that continuously monitors your S3 buckets for encryption status. If an unencrypted bucket is detected, the rule can trigger an automated remediation action to encrypt it or send a notification to your security team.

**2. Meeting Compliance Standards (e.g., PCI DSS, HIPAA):**

- **Problem:** PCI DSS (Payment Card Industry Data Security Standard) mandates strict access controls to systems handling cardholder data.
- **Solution:** Implement configuration rules in AWS Config to ensure that security groups associated with databases storing cardholder data are configured according to PCI DSS requirements. For example, rules can verify that only authorized IP addresses have access to specific ports.

**3. Streamlining Audits:**

- **Problem:** Gathering evidence and demonstrating compliance during audits can be a time-consuming and document-heavy process.
- **Solution:** AWS Config provides comprehensive audit trails and configuration snapshots, making it significantly easier to provide auditors with the necessary information to validate compliance. This centralized view of your resource configurations streamlines the audit process and reduces the burden on your team.

**4. Proactive Drift Detection:**

- **Problem:** Configuration drift—where resources deviate from their intended state over time—can lead to compliance violations and security vulnerabilities.
- **Solution:** AWS Config continuously monitors your environment and can alert you to any configuration changes that deviate from your defined baseline. This allows you to proactively remediate drift and maintain a compliant posture.

**5. Resource Optimization and Cost Control:**

- **Problem:** Overprovisioned or unused resources can increase costs and potentially introduce security risks.
- **Solution:** AWS Config can identify unused or idle resources, enabling you to optimize your AWS usage and reduce unnecessary spending. Additionally, by maintaining a clean and well-managed environment, you can minimize the attack surface and enhance overall security.
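Use case 1 can be sketched declaratively. `AWS::Config::ConfigRule` and the managed rule identifier `S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED` are real AWS identifiers, but the rule name below is illustrative and the fragment is a sketch, not a complete template — check the current AWS documentation before use:

```yaml
# Illustrative CloudFormation fragment: an AWS-managed Config rule
# that flags S3 buckets without server-side encryption enabled.
Resources:
  S3EncryptionRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-sse-enabled   # name is arbitrary
      Scope:
        ComplianceResourceTypes:
          - AWS::S3::Bucket
      Source:
        Owner: AWS                            # AWS-managed rule
        SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED
```

Using a managed rule means AWS maintains the evaluation logic; custom rules backed by Lambda are the escape hatch when no managed rule fits.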
### The Competitive Landscape: Alternatives to AWS Config

While AWS Config provides a robust solution for cloud compliance and governance, other cloud providers offer comparable services:

* **Azure Policy:** Microsoft Azure's policy engine allows you to define policies that enforce organizational standards and assess compliance. It provides similar capabilities to AWS Config, such as monitoring resource configurations, detecting violations, and taking automated actions.
* **Google Cloud Resource Manager:** Google Cloud's offering focuses on managing resources hierarchically, with the ability to set permissions and constraints. While it overlaps with AWS Config in terms of resource inventory and access control, it does not have the same depth of configuration history and rule-based remediation.

### Conclusion: Navigate the Compliance Maze with Confidence

AWS Config is an indispensable tool for organizations seeking to effectively manage compliance and governance in the cloud. Its ability to provide a centralized view of your AWS environment, coupled with its powerful automation and reporting capabilities, empowers you to establish a strong security and compliance posture. By leveraging AWS Config, you can navigate the complexities of regulatory requirements with greater ease and confidence, ensuring your business remains secure and compliant.

## An Advanced Use Case: Building a Proactive Security Auditing System

Let's imagine you're the principal architect for a financial services company subject to stringent regulations like SOX and GDPR. You need to not only maintain compliance but also proactively identify and remediate security risks. Here's how you can leverage AWS Config along with other AWS services to build a robust solution:

**1. Define Your Compliance Baseline:**

- Use AWS Config Rules to translate your compliance requirements (e.g., SOX, GDPR) into specific configuration checks. For example, a rule can ensure that all EBS volumes attached to EC2 instances containing sensitive data are encrypted.

**2. Integrate with AWS Security Hub:**

- Stream your AWS Config findings to AWS Security Hub, a central security monitoring service. This provides a consolidated view of your security posture across various AWS services.

**3. Leverage AWS CloudTrail for Comprehensive Logging:**

- Enable AWS CloudTrail to log all API calls made within your AWS account, including changes made to resources. This provides an audit trail for all actions and helps in identifying the root cause of any compliance violations.

**4. Automate Remediation with AWS Systems Manager:**

- When AWS Config detects a non-compliant configuration, trigger an automated remediation workflow using AWS Systems Manager. For example, if a security group is modified to allow unrestricted access, Systems Manager can automatically revert the changes to the last known good configuration.

**5. Real-time Monitoring and Alerting:**

- Configure Amazon CloudWatch to monitor AWS Config events and generate alerts in real time. This allows your security team to respond proactively to potential issues as they arise. For critical events, integrate CloudWatch with notification services like Amazon SNS to send alerts via email or SMS.

**6. Continuous Improvement:**

- Regularly review your AWS Config rules and remediation actions, refining them based on security best practices, industry standards, and evolving compliance requirements. Use AWS Config's reporting capabilities to generate dashboards and reports that provide insights into your compliance posture and identify areas for improvement.

By implementing this advanced solution, you create a proactive security auditing system that not only ensures compliance but also strengthens your overall security posture. You gain continuous visibility into your environment, automate remediation actions, and equip your security team with the tools needed to identify and address risks swiftly and effectively.
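Step 4 above (automated remediation through Systems Manager) can also be expressed declaratively. Hedged: `AWS::Config::RemediationConfiguration` and the AWS-managed Automation document `AWS-EnableS3BucketEncryption` exist, but the rule name and settings here are illustrative placeholders, and this is a sketch rather than a tested template:

```yaml
# Illustrative fragment: attach auto-remediation to an existing
# Config rule via an SSM Automation document.
S3EncryptionRemediation:
  Type: AWS::Config::RemediationConfiguration
  Properties:
    ConfigRuleName: my-config-rule          # hypothetical existing rule name
    TargetType: SSM_DOCUMENT
    TargetId: AWS-EnableS3BucketEncryption  # AWS-managed Automation document
    Automatic: true                         # remediate without manual approval
    MaximumAutomaticAttempts: 3
    RetryAttemptSeconds: 60
```

In practice you would start with `Automatic: false` (manual approval) until you trust the remediation action, since an over-eager automation can fight legitimate changes.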
virajlakshitha
1,892,588
3 Insanely Powerful Software Tools to Boost Your Productivity and Avoid Overtime
Here are three software tools that can significantly enhance your productivity and help you avoid...
0
2024-06-18T15:04:32
https://dev.to/fullfull567/3-insanely-powerful-software-tools-to-boost-your-productivity-and-avoid-overtime-lel
webdev, beginners, programming, productivity
Here are three software tools that can significantly enhance your productivity and help you avoid overtime. These tools are so efficient that tasks that usually take hours can be completed in just 5 minutes!

### 1. [ServBay](https://www.servbay.com/) — My Favorite Development Environment

ServBay offers developers a one-click solution to create a [development environment](https://www.servbay.com/). Similar to MAMP and XAMPP, ServBay is a productivity tool aimed at developers. However, it stands out by providing a simpler and quicker way to set up and manage development environments. Most of the time, developers don't need to configure anything or write any code. With one-click installation of software packages, you can deploy environments, switch versions, and upgrade effortlessly.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2iz8equ7vf2m927vsgn7.png)

Since discovering ServBay, I've been using it consistently. It significantly reduces my development costs and boosts my efficiency. Our company also uses JNPF to lower the development costs of business applications. ServBay has a low entry barrier for new web developers, allowing them to focus on development without spending time on environment setup tutorials.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lzelg04x769yb63eyiua.png)

Although ServBay is relatively new and has already partnered with many companies, it still needs time to build its brand. The available tutorials and documentation are simple, but improving these resources would help beginners get started more quickly. Interested users can check out their [official website](https://www.servbay.com).

### 2. [DrawDB](https://github.com/drawdb-io/drawdb) — Database Design Tool

Databases are frequently used in development, so having a good database design tool can double your work efficiency. DrawDB is a multifunctional and user-friendly online tool that allows users to easily design database entity relationships. With its simple and intuitive interface, DrawDB enables users to create diagrams, export SQL scripts, and customize the editing environment without needing to create an account.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ejmxpw0ey2bmrmw7kf9.png)

By providing a visual representation of entity relationships within database schemas, DrawDB simplifies the database design process. Users can easily create tables, define columns, establish relationships, and generate SQL scripts with just a few clicks. The intuitive interface is suitable for both beginners and experienced database designers, offering a seamless experience for designing complex database structures.

### 3. [Fliqlo](https://fliqlo.com/) — A Clock Screen Saver

Screen saver software is a staple on many computers, but most tools I found had a fatal flaw: ads! Then I discovered Fliqlo, a lightweight and simple screen saver. Its functionality is straightforward: when your computer is idle and the screen would otherwise go black, it displays the time.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g6g7n1dpubpa1ixbjt0w.png)

Since installing Fliqlo, I can glance at the time even when my computer is idle. It's incredibly convenient!

### Summary

These three software tools are all exceptionally useful and have become my personal favorites. Each one can significantly boost your productivity and open up new possibilities. They are free, powerful, and definitely worth a try!
fullfull567
1,892,587
Comprehensive Front-End Performance Enhancement: Basic Optimization, Latest Technologies, and AI-Driven Strategies
1. Introduction In modern web development, front-end performance optimization has become a...
0
2024-06-18T15:03:45
https://dev.to/happyer/comprehensive-front-end-performance-enhancement-basic-optimization-latest-technologies-and-ai-driven-strategies-52j6
webdev, website, frontend, development
## 1. Introduction

In modern web development, front-end performance optimization has become a key factor in enhancing user experience and website efficiency. As users' expectations for fast loading and smooth interactions continue to rise, developers need to employ various technologies and tools to optimize front-end performance. This article will detail basic optimization methods, the latest front-end performance optimization technologies and tools, and AI-driven optimization strategies to help developers comprehensively enhance front-end performance.

## 2. Basic Optimization

### 2.1. Resource Loading Optimization

1. Image Optimization:
   - Choose appropriate image formats such as WebP, JPEG, or PNG for optimal compression.
   - Use image compression tools like TinyPNG or ImageOptim to reduce image file sizes.
   - Implement lazy loading to load images only when the user scrolls to them, reducing initial load time.
2. Resource Merging and Compression:
   - Merge CSS and JavaScript files to reduce the number of HTTP requests.
   - Use Gzip or Brotli to compress CSS, JavaScript, and HTML files, reducing file sizes.
3. Using a CDN:
   - Deploy static resources on a Content Delivery Network (CDN) to speed up access for global users.

### 2.2. Rendering Performance Optimization

1. CSS Optimization:
   - Avoid overusing expensive CSS properties like `box-shadow` and `border-radius`, especially on mobile devices.
   - Use hardware acceleration (e.g., `transform: translateZ(0)`) to improve animation performance.
2. JavaScript Optimization:
   - Use `requestAnimationFrame` for animations and visual updates instead of `setTimeout` or `setInterval`.
   - Avoid forced synchronous layouts to prevent unnecessary reflows and repaints.

### 2.3. DOM Operation Optimization

1. Reduce DOM Operations:
   - Use DocumentFragment for batch operations to avoid frequent DOM insertions and deletions.
2. Use a Virtual DOM:
   - Utilize the virtual DOM mechanism of frameworks like React to reduce actual DOM operations and improve performance.
### 2.4. Network Request Optimization

1. Browser Caching:
   - Use browser caching strategies (e.g., ETag and Cache-Control) to cache static resources and reduce redundant requests.
2. API Optimization:
   - Reduce the number and data volume of API requests through techniques like pagination and data aggregation.

### 2.5. JavaScript Execution Optimization

1. Avoid Global Variables:
   - Minimize the use of global variables to reduce the overhead of scope chain lookups.
2. Web Workers:
   - Use Web Workers for compute-intensive tasks so they execute in background threads without blocking the main thread.

### 2.6. Memory Usage Optimization

1. Clean Up Unused Resources:
   - Promptly clean up unused objects, event listeners, and timers to avoid memory leaks.
2. Performance Monitoring:
   - Use the browser's Performance API to analyze memory usage and identify potential issues.

### 2.7. Server-Side Rendering (SSR) Optimization

1. Accelerate First-Screen Load:
   - Use server-side rendering to speed up the first-screen load and improve initial rendering performance.
2. Code Consistency:
   - Ensure consistency between server-side and client-side code to avoid mismatches.

## 3. Basic Optimization Tools

### 3.1. Chrome DevTools

- Chrome's built-in developer tools offer rich performance analysis features.
- The Performance panel can record and analyze performance data during page load and runtime.
- The Network panel monitors network requests, including load times and resource sizes.

### 3.2. Firefox Developer Tools

- Firefox's developer tools also include performance analysis and network monitoring features.
- The Performance panel records various performance metrics during page load and runtime.
- The Network Monitor provides detailed information on network requests.

### 3.3. Lighthouse

- An open-source tool by Google for improving web page quality.
- Evaluates performance, accessibility, progressive web apps, and more, providing specific optimization suggestions.
- Can be integrated into Chrome DevTools or used as a command-line tool.

### 3.4. WebPageTest

- A free online service that tests website performance from multiple locations worldwide.
- Provides detailed performance reports, including load times and page speed scores.
- Supports simulating different network environments and device types.

### 3.5. JMeter

- Apache JMeter is an open-source load testing tool primarily used for backend performance testing.
- It can also simulate a large number of front-end requests to evaluate front-end performance.

### 3.6. LoadRunner

- Micro Focus LoadRunner is a powerful performance testing tool that can simulate virtual user behavior under realistic load conditions.
- Supports a wide range of protocols, including web and mobile, for comprehensive front-end performance testing.

### 3.7. Gatling

- Gatling is a high-performance load testing tool based on Scala, Akka, and Netty.
- It focuses on ease of use and high performance, and is suitable for large-scale front-end performance testing.

### 3.8. SiteSpeed.io

- SiteSpeed.io is an open-source web performance tool that analyzes web page performance and provides optimization suggestions.
- Supports multiple browsers and offers a simple web interface to view test results.

### 3.9. SpeedCurve

- SpeedCurve is a web-based performance monitoring tool for tracking and visualizing web application performance.
- It can integrate with Lighthouse, providing real-time performance monitoring and alerting.

## 4. Latest Front-End Performance Optimization Technologies

- **Preloading and Preconnecting**: Use preloading and preconnecting to reduce page load times. Preloading lets the browser fetch required resources before they are otherwise discovered, while preconnecting establishes connections to target servers ahead of the first request.
- **Lazy Loading and Deferred Loading**: Load only the resources that are actually needed, reducing page load times and improving user experience.
- **WebAssembly**: A binary instruction format for running efficient code on the web. WebAssembly code is compact, loads quickly, can execute in parallel, and is platform-independent, running on various browsers and operating systems.
- **AI in Front-End Performance Optimization**: Use AI to analyze front-end code, automatically identify performance bottlenecks, and provide optimization suggestions, helping developers enhance front-end performance quickly and effectively.

## 5. Latest Front-End Performance Testing Tools

- **Sunshine Track**: Suitable for frameworks like Vue, React, and Angular, providing user behavior and request data reporting to help developers monitor and analyze front-end performance.
- **LocalForage**: For browser local cache operations, executed asynchronously to reduce the risk of blocking code execution, offering a range of APIs such as add, modify, delete, and search.
- **Rsbuild**: A web build tool based on Rspack, providing a smooth migration path from Webpack to Rspack, significantly reducing configuration requirements and improving build speed.
- **Bun**: An all-in-one toolset integrating package management, testing, building, and transpiling, with outstanding performance.
- **Vite**: An open-source front-end build tool by the Vue team, based on native ES modules, aiming to provide a fast and smooth development experience.
- **Webpack**: A veteran module bundler and the most popular front-end build tool, supporting various module bundling and optimization strategies.

## 6. Best Practices for Front-End Performance Optimization

- **Reduce File Sizes**: Compress and minify files to reduce their sizes. For larger files, consider splitting them into smaller modules and loading them only when needed.
- **Use Caching**: Set HTTP headers so the browser caches resources after the first visit, avoiding additional server requests.
- **Reduce HTTP Requests**: Use CSS sprites to combine multiple images and inline critical CSS to avoid additional CSS requests, reducing the number of HTTP requests.
- **Use a CDN**: CDN servers distributed worldwide can significantly shorten page load times.
- **Use Asynchronous Loading**: Asynchronous loading techniques can fetch non-critical resources without blocking page rendering.
- **Reduce DOM Operations**: Build markup as strings and assign via `innerHTML` instead of many individual `createElement()` calls, and use `document.createDocumentFragment` to batch many DOM elements and add them to the DOM tree in one operation, avoiding extra DOM operations.
- **Compress Images**: Compress images to reduce their sizes without noticeably affecting quality, reducing transfer sizes.

## 7. AI-Driven Optimization

### 7.1. AI-Assisted Performance Analysis

- **Intelligent Performance Monitoring**: AI can monitor front-end performance in real time, automatically detecting anomalies and bottlenecks. For example, by analyzing user interactions and page load times, AI can identify specific causes of performance degradation.
- **Automated Testing**: AI-driven automated testing tools can simulate various user scenarios to comprehensively evaluate front-end performance. These tests cover not only load times and response speeds but also user experience and interaction smoothness.

### 7.2. AI Optimization Suggestions

- **Code Optimization Suggestions**: AI can analyze front-end code to identify redundant or unnecessary parts and provide optimization suggestions. For example, AI can help refactor code to reduce load times or suggest more efficient algorithms.
- **Resource Optimization**: AI can intelligently suggest which resources should be optimized, compressed, or merged, and how to accelerate resource loading through strategies like CDNs.

### 7.3. AI-Driven Adaptive Experience

- **Personalized Loading Strategies**: Based on user behavior patterns and device characteristics, AI can dynamically adjust resource loading strategies. For example, for frequent visitors, AI might prioritize loading the content they are most interested in.
- **Intelligent Lazy Loading**: AI can predict user scroll paths to implement more precise lazy loading, fetching only the necessary resources within the current viewport.

### 7.4. AI in Front-End Performance Tools

- **Intelligent Build Tools**: Tools like Vite and Webpack can leverage AI algorithms to optimize the build process, automatically selecting the best module-splitting and loading strategies.
- **Performance Management Platforms**: Platforms such as New Relic and Dynatrace integrate AI technology to automatically detect performance issues and suggest solutions.

## 8. Conclusion

Front-end performance optimization is a complex, multi-layered process involving resource loading, rendering performance, DOM operations, network requests, JavaScript execution, and memory usage. By employing basic optimization methods like choosing appropriate image formats, merging and compressing resources, using CDNs, optimizing CSS and JavaScript, reducing DOM operations, leveraging browser caching, and optimizing API requests, developers can significantly enhance page load speed and user experience.

Moreover, the latest front-end performance optimization technologies, such as preloading and preconnecting, lazy loading and deferred loading, WebAssembly, and the application of AI, further drive performance improvements. AI-driven performance monitoring, automated testing, code optimization suggestions, resource optimization, personalized loading strategies, and intelligent lazy loading enable more efficient and smarter front-end optimization.

By utilizing performance testing tools like Chrome DevTools, Firefox Developer Tools, Lighthouse, WebPageTest, JMeter, LoadRunner, Gatling, SiteSpeed.io, and SpeedCurve, developers can comprehensively monitor and analyze front-end performance, promptly identifying and resolving performance bottlenecks.

In summary, front-end performance optimization requires not only mastering basic optimization methods but also keeping up with technological advancements, utilizing the latest optimization technologies and tools, and especially integrating AI-driven intelligent optimization strategies. This approach will enable developers to stand out in a competitive environment and deliver exceptional user experiences.

## 9. Codia AI's products

Codia AI has rich experience in multimodal, image processing, development, and AI.

1. [**Codia AI Figma to code: HTML, CSS, React, Vue, iOS, Android, Flutter, Tailwind, Web, Native,...**](https://codia.ai/s/YBF9)
   ![Codia AI Figma to code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xml2pgydfe3bre1qea32.png)
2. [**Codia AI DesignGen: Prompt to UI for Website, Landing Page, Blog**](https://codia.ai/t/pNFx)
   ![Codia AI DesignGen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/55kyd4xj93iwmv487w14.jpeg)
3. [**Codia AI Design: Screenshot to Editable Figma Design**](https://codia.ai/d/5ZFb)
   ![Codia AI Design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrl2lyk3m4zfma43asa0.png)
4. [**Codia AI VectorMagic: Image to Full-Color Vector/PNG to SVG**](https://codia.ai/v/bqFJ)
   ![Codia AI VectorMagic](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hylrdcdj9n62ces1s5jd.jpeg)
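The preloading and preconnecting techniques described in section 4, along with native image lazy loading, are declarative browser hints. A small illustrative HTML fragment (my addition; all URLs are placeholders):

```html
<!-- Fetch a critical font and stylesheet early, before the parser
     would otherwise discover them (placeholder URLs). -->
<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/css/critical.css" as="style">

<!-- Open the connection (DNS + TCP + TLS) to a third-party origin
     ahead of the first request to it. -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>

<!-- Native lazy loading for below-the-fold images. -->
<img src="/img/gallery-photo.jpg" loading="lazy" alt="Gallery photo">
```

These hints are safe to adopt incrementally: browsers that do not support a given hint simply ignore it.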
happyer
1,892,586
SteamVR Overlay with Unity: Controller Input
In this part, we will make the watch hidden by default and show it in a few seconds by pressing a...
27,740
2024-06-18T15:01:34
https://dev.to/kurohuku/part-10-controller-input-2ij2
unity3d, steamvr, openvr, vr
In this part, we will make the watch hidden by default and show it for a few seconds when a controller button is pressed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yj6umot4u9eizsgfkdjw.gif)

There are various ways to get controller input with Unity. This time, we will use the [OpenVR Input API (SteamVR Input)](https://github.com/ValveSoftware/openvr/wiki/SteamVR-Input).

## Prerequisite setting

Open the SteamVR Settings.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t32iq53hn1p5nsfw9xd1.png)

Toggle **Advanced Settings**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/549oreg9jk784t4773m2.png)

Set **Developer > Enable debugging options in the input binding user interface** to **On**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rigizcffoij3wiw2xntj.png)

## Create action manifest

SteamVR Input uses abstracted actions instead of direct access to the physical buttons. The developer predefines actions and may create default mappings for each controller. Users can reassign physical buttons to the actions in the SteamVR Input settings window.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffx83rfwp4ccllvbjtm7.png)

First, create the action manifest that defines the actions in JSON format. We use the Unity SteamVR Plugin GUI tool to generate the action manifest easily.

### Generate action manifest

Select the Unity menu **Window > SteamVR Input**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2jqigt1n9ewfr3h6ojt.png)

It asks whether you want to use a sample action manifest file. This time we will build the manifest from scratch, so select **No**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pifrl2hh0aou8fgewio9.png)

Enter **"Watch"** as the action set name, and set the dropdown below it to **per hand**. An application can have multiple action sets.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/stzacsek8vkgr7kpaddb.png)

**Per hand** means actions can be mapped to the left-hand and right-hand controller buttons separately. **Mirrored** means actions are mapped to one controller, and the other controller's buttons mirror that mapping.

Click the **NewAction** name in the **In** box of the **Actions** section. The **"Action Details"** panel is shown on the right. Change the **Name** to **"WakeUp"**.

Click the **"Save and generate"** button at the bottom left to generate an action manifest file.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s51rgvet3nafw0461u62.png)

It will generate a new action manifest file as **StreamingAssets/SteamVR/actions.json**.

```json
{
  "actions": [
    {
      "name": "/actions/Watch/in/WakeUp",
      "type": "boolean"
    }
  ],
  "action_sets": [
    {
      "name": "/actions/Watch",
      "usage": "leftright"
    }
  ],
  "default_bindings": [],
  "localization": []
}
```

## Create default binding

Click the **Open binding UI** button at the bottom right.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tn2re0jybmn5y4meaww9.png)

If no VR controller is shown, check that your HMD is recognized by SteamVR.

Click **"Create New Binding"**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t03057rs5el9ociegwnt.png)

Set the Y button to show the watch for a few seconds. If your controller doesn't have a Y button, use another button instead.

Click the **+** button to the right of the **Y Button**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eh80pb6voosgjzxl61lt.png)

Select **BUTTON**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l31lthf3uhv8rmlx71j.png)

Select **None** to the right of **Click**, then select the **wakeup** action.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jz5krz7saktu7qsi2s3n.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3m058df3manpz6fztkdw.png)

Click the check icon at the bottom left to save the changes.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ridp4mr8ad3584mf78rr.png)

Click **Replace Default Binding** at the bottom right.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4xv1v4sobexu5hxp2ss.png)

Click **Save**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/swt8f3p74vd2v7k3y8tt.png)

At this point, a new default binding file is added to the action manifest.

`actions.json`

```diff
{
   "action_sets" : [
      {
         "name" : "/actions/Watch",
         "usage" : "leftright"
      }
   ],
   "actions" : [
      {
         "name" : "/actions/Watch/in/WakeUp",
         "type" : "boolean"
      }
   ],
   "default_bindings" : [
      {
+        "binding_url" : "application_generated_unity_retryoverlay_exe_binding_oculus_touch.json",
+        "controller_type" : "oculus_touch"
      }
   ],
   "localization" : []
}
```

The default binding file is generated into the same directory as actions.json. These files will be included in the build for users.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffrm5hluq2zw6gcz99qn.png)

Click the **<- BACK** button at the upper left, and you will see that the new setting is active.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/julogqv193yaue21jetr.png)

Now we've made the default binding. Close the SteamVR binding window. Also close the Unity SteamVR Input window; it asks you to save the changes, so click **Close**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwnfj46azm4q8cskgigt.png)

In this tutorial environment, clicking the Save button would overwrite the action manifest with an empty **default_bindings** array. (There is no problem with clicking Save when we open the binding setting window next time.)
---

### How about other controller bindings?

You can create other controllers' default bindings by clicking a controller name on the right side.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txjz55r3v1x0suwza0yh.png)

Other controllers' default bindings are added to the `default_bindings` param of the action manifest.

```diff
{
   "action_sets" : [
      {
         "name" : "/actions/NewSet",
         "usage" : "leftright"
      }
   ],
   "actions" : [
      {
         "name" : "/actions/NewSet/in/NewAction",
         "type" : "boolean"
      }
   ],
   "default_bindings" : [
      {
         "binding_url" : "application_generated_unity_steamvr_inputbindingtest_exe_binding_oculus_touch.json",
         "controller_type" : "oculus_touch"
      },
+     {
+        "binding_url" : "application_generated_unity_steamvr_inputbindingtest_exe_binding_vive_controller.json",
+        "controller_type" : "vive_controller"
+     }
   ],
   "localization" : []
}
```

You don't necessarily have to create default bindings for other controllers, because SteamVR automatically remaps actions for them as long as at least one default binding exists.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9e7al8ucfkf8vctwfcow.png)

---

### Create script

Create a new script `InputController.cs` in the `Scripts` folder. We will add the controller-input-related code to this file.

In the hierarchy, **right click > Create Empty** to create an empty object, and change its name to **InputController**. Drag `InputController.cs` from the project window onto the **InputController** object.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9o5rzzptco92fysdyp4l.png)

### Initialize and cleanup OpenVR

Copy the code below into **InputController.cs**.
```cs
using System;
using UnityEngine;
using Valve.VR;

public class InputController : MonoBehaviour
{
    private void Start()
    {
        OpenVRUtil.System.InitOpenVR();
    }

    private void OnDestroy()
    {
        OpenVRUtil.System.ShutdownOpenVR();
    }
}
```

(`OnDestroy()` is the Unity lifecycle callback that runs when the object is destroyed.)

## Set action manifest path

Set the action manifest path with [SetActionManifestPath()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVRInput.html#Valve_VR_CVRInput_SetActionManifestPath_System_String_) at launch (see the [wiki](https://github.com/ValveSoftware/openvr/wiki/SteamVR-Input#api-documentation) for details). The action manifest was generated as **StreamingAssets/SteamVR/actions.json**, so we set this path.

```diff
using System;
using UnityEngine;
using Valve.VR;

public class InputController : MonoBehaviour
{
    private void Start()
    {
        OpenVRUtil.System.InitOpenVR();

+       var error = OpenVR.Input.SetActionManifestPath(Application.streamingAssetsPath + "/SteamVR/actions.json");
+       if (error != EVRInputError.None)
+       {
+           throw new Exception("Failed to set action manifest path: " + error);
+       }
    }

    private void OnDestroy()
    {
        OpenVRUtil.System.ShutdownOpenVR();
    }
}
```

## Get action set handle

An application can have several action sets, each containing multiple actions. We use an action set handle to determine which action set is used. Get the action set handle with [GetActionSetHandle()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVRInput.html#Valve_VR_CVRInput_GetActionSetHandle_System_String_System_UInt64__). The action set is specified with a string of the form **/actions/[action_set_name]**; this time, it's **/actions/Watch**.
```diff
public class InputController : MonoBehaviour
{
+   ulong actionSetHandle = 0;

    private void Start()
    {
        OpenVRUtil.System.InitOpenVR();

        var error = OpenVR.Input.SetActionManifestPath(Application.streamingAssetsPath + "/SteamVR/actions.json");
        if (error != EVRInputError.None)
        {
            throw new Exception("Failed to set action manifest path: " + error);
        }

+       error = OpenVR.Input.GetActionSetHandle("/actions/Watch", ref actionSetHandle);
+       if (error != EVRInputError.None)
+       {
+           throw new Exception("Failed to get action set /actions/Watch: " + error);
+       }
    }

    private void OnDestroy()
    {
        OpenVRUtil.System.ShutdownOpenVR();
    }
}
```

## Get action handle

Next, get the action handle with [GetActionHandle()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVRInput.html#Valve_VR_CVRInput_GetActionHandle_System_String_System_UInt64__). An action is specified with **/actions/[action_set_name]/in/[action_name]**; this time, the action is **/actions/Watch/in/WakeUp**.

```diff
public class InputController : MonoBehaviour
{
    ulong actionSetHandle = 0;
+   ulong actionHandle = 0;

    private void Start()
    {
        OpenVRUtil.System.InitOpenVR();

        var error = OpenVR.Input.SetActionManifestPath(Application.streamingAssetsPath + "/SteamVR/actions.json");
        if (error != EVRInputError.None)
        {
            throw new Exception("Failed to set action manifest path: " + error);
        }

        error = OpenVR.Input.GetActionSetHandle("/actions/Watch", ref actionSetHandle);
        if (error != EVRInputError.None)
        {
            throw new Exception("Failed to get action set /actions/Watch: " + error);
        }

+       error = OpenVR.Input.GetActionHandle("/actions/Watch/in/WakeUp", ref actionHandle);
+       if (error != EVRInputError.None)
+       {
+           throw new Exception("Failed to get action /actions/Watch/in/WakeUp: " + error);
+       }
    }

    private void OnDestroy()
    {
        OpenVRUtil.System.ShutdownOpenVR();
    }
}
```

## Update action state

Update each action's state at the start of each frame.

### Prepare action set to update

We use an action set to specify which actions will be updated.
Pass the target action sets to `OpenVR.Input.UpdateActionState()` as an array of [VRActiveActionSet_t](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.VRActiveActionSet_t.html). This time, the only action set is **Watch**, so we make a single-element `VRActiveActionSet_t[]` array.

```diff
private void Start()
{
    OpenVRUtil.System.InitOpenVR();

    var error = OpenVR.Input.SetActionManifestPath(Application.streamingAssetsPath + "/SteamVR/actions.json");
    if (error != EVRInputError.None)
    {
        throw new Exception("Failed to set action manifest path: " + error);
    }

    error = OpenVR.Input.GetActionSetHandle("/actions/Watch", ref actionSetHandle);
    if (error != EVRInputError.None)
    {
        throw new Exception("Failed to get action set /actions/Watch: " + error);
    }

    error = OpenVR.Input.GetActionHandle("/actions/Watch/in/WakeUp", ref actionHandle);
    if (error != EVRInputError.None)
    {
        throw new Exception("Failed to get action /actions/Watch/in/WakeUp: " + error);
    }
}

+ private void Update()
+ {
+     // Action set list to update
+     var actionSetList = new VRActiveActionSet_t[]
+     {
+         // This time, the Watch action set only.
+         new VRActiveActionSet_t()
+         {
+             // Pass the Watch action set handle
+             ulActionSet = actionSetHandle,
+             ulRestrictedToDevice = OpenVR.k_ulInvalidInputValueHandle,
+         }
+     };
+
+     var activeActionSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VRActiveActionSet_t));
+     var error = OpenVR.Input.UpdateActionState(actionSetList, activeActionSize);
+     if (error != EVRInputError.None)
+     {
+         throw new Exception("Failed to update action state: " + error);
+     }
+ }
...
```

The second argument `activeActionSize` is the byte size of the `VRActiveActionSet_t` struct.

## Get action value

We have updated the action state. Next, we will get the action value. The function used to retrieve the value depends on the action type.

### GetDigitalActionData()

This gets an on/off value, like whether a button is pushed or not.

### GetAnalogActionData()

This gets analog values, like the thumbstick direction or how far the trigger is pulled.

### GetPoseActionData()

This gets a pose, like a controller's position and rotation.

This time, we want the button's on/off value, so we use `GetDigitalActionData()`. Get the action value with the WakeUp action handle we prepared.
```diff
private void Update()
{
    var actionSetList = new VRActiveActionSet_t[]
    {
        new VRActiveActionSet_t()
        {
            ulActionSet = actionSetHandle,
            ulRestrictedToDevice = OpenVR.k_ulInvalidInputValueHandle,
        }
    };

    var activeActionSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VRActiveActionSet_t));
    var error = OpenVR.Input.UpdateActionState(actionSetList, activeActionSize);
    if (error != EVRInputError.None)
    {
        throw new Exception("Failed to update action state: " + error);
    }

+   var result = new InputDigitalActionData_t();
+   var digitalActionSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(InputDigitalActionData_t));
+   error = OpenVR.Input.GetDigitalActionData(actionHandle, ref result, digitalActionSize, OpenVR.k_ulInvalidInputValueHandle);
+   if (error != EVRInputError.None)
+   {
+       throw new Exception("Failed to get WakeUp action data: " + error);
+   }
}
```

The 3rd argument `digitalActionSize` is the byte size of the `InputDigitalActionData_t` structure. The 4th argument `ulRestrictToDevice` is generally not used, so pass `OpenVR.k_ulInvalidInputValueHandle`.

The returned value type is [InputDigitalActionData_t](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.InputDigitalActionData_t.html).

```cs
struct InputDigitalActionData_t
{
    bool bActive;
    VRInputValueHandle_t activeOrigin;
    bool bState;
    bool bChanged;
    float fUpdateTime;
};
```

The action's on/off state is stored in `bState`. `bChanged` is `true` in the frame where the action state changed. Let's detect the action.
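The `bState && bChanged` check we are about to add is a rising-edge detector: it fires only in the frame where the button goes from released to pressed, even if the button is then held. A framework-free sketch of the same logic (hypothetical names, not the OpenVR API):

```typescript
// One sample per frame, mirroring InputDigitalActionData_t's
// bState (current on/off) and bChanged (flipped this frame).
type DigitalSample = { state: boolean; changed: boolean };

// Derive per-frame samples from a raw sequence of button states,
// the way the runtime derives bChanged from the previous frame.
function toSamples(states: boolean[]): DigitalSample[] {
  let prev = false;
  return states.map((state) => {
    const sample = { state, changed: state !== prev };
    prev = state;
    return sample;
  });
}

// Count how many times the action fires: the equivalent of running
// `if (result.bState && result.bChanged)` once per frame.
function countWakeUps(samples: DigitalSample[]): number {
  return samples.filter((s) => s.state && s.changed).length;
}
```

Holding the button down for many frames fires the action once; checking only `bState` would fire it every frame.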
```diff
private void Update()
{
    var actionSetList = new VRActiveActionSet_t[]
    {
        new VRActiveActionSet_t()
        {
            ulActionSet = actionSetHandle,
            ulRestrictedToDevice = OpenVR.k_ulInvalidInputValueHandle,
        }
    };

    var activeActionSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VRActiveActionSet_t));
    var error = OpenVR.Input.UpdateActionState(actionSetList, activeActionSize);
    if (error != EVRInputError.None)
    {
        throw new Exception("Failed to update action state: " + error);
    }

    var result = new InputDigitalActionData_t();
    var digitalActionSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(InputDigitalActionData_t));
    error = OpenVR.Input.GetDigitalActionData(actionHandle, ref result, digitalActionSize, OpenVR.k_ulInvalidInputValueHandle);
    if (error != EVRInputError.None)
    {
        throw new Exception("Failed to get WakeUp action data: " + error);
    }

+   if (result.bState && result.bChanged)
+   {
+       Debug.Log("WakeUp is executed");
+   }
}
```

Run the program and push the Y button; the message is logged to the Unity console.

## Create event handler

### Notify via UnityEvent

Add a `UnityEvent` to **InputController.cs** to notify of the WakeUp action, then call it with `Invoke()`.

```diff
using System;
using UnityEngine;
using Valve.VR;
+ using UnityEngine.Events;

public class InputController : MonoBehaviour
{
+   public UnityEvent OnWakeUp;

    private ulong actionSetHandle = 0;
    private ulong actionHandle = 0;

    // code omitted
    private void Update()
    {
        var actionSetList = new VRActiveActionSet_t[]
        {
            new VRActiveActionSet_t()
            {
                ulActionSet = actionSetHandle,
                ulRestrictedToDevice = OpenVR.k_ulInvalidInputValueHandle,
            }
        };

        var activeActionSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VRActiveActionSet_t));
        var error = OpenVR.Input.UpdateActionState(actionSetList, activeActionSize);
        if (error != EVRInputError.None)
        {
            throw new Exception("Failed to update action state: " + error);
        }

        var result = new InputDigitalActionData_t();
        var digitalActionSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(InputDigitalActionData_t));
        error = OpenVR.Input.GetDigitalActionData(actionHandle, ref result, digitalActionSize, OpenVR.k_ulInvalidInputValueHandle);
        if (error != EVRInputError.None)
        {
            throw new Exception("Failed to get WakeUp action data: " + error);
        }

        if (result.bState && result.bChanged)
        {
-           Debug.Log("WakeUp is executed");
+           OnWakeUp.Invoke();
        }
    }
```

### Attach event handler

Add a method to `WatchOverlay.cs` that will be called when the `WakeUp` action is executed.

```diff
public class WatchOverlay : MonoBehaviour
{
    ...
+   public void OnWakeUp()
+   {
+       // Show watch.
+   }
}
```

In the hierarchy, open the `InputController` inspector, then assign `WatchOverlay.OnWakeUp()` to the `OnWakeUp` field.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/92wrvf65j3rp9p6raomq.png)

## Add code to show/hide the watch

Update **WatchOverlay.cs** to show the watch only when the `WakeUp` action is executed.

### Hide watch by default

```diff
private void Start()
{
    OpenVRUtil.System.InitOpenVR();

    overlayHandle = Overlay.CreateOverlay("WatchOverlayKey", "WatchOverlay");
    Overlay.FlipOverlayVertical(overlayHandle);
    Overlay.SetOverlaySize(overlayHandle, size);
-   Overlay.ShowOverlay(overlayHandle);
}
```

### Show watch by WakeUp action

Show the overlay when the WakeUp action is executed.
```diff
public void OnWakeUp()
{
+   Overlay.ShowOverlay(overlayHandle);
}
```

### Add hide method

Add `HideOverlay()` to **OpenVRUtil.cs**.

```diff
public static void ShowOverlay(ulong handle)
{
    var error = OpenVR.Overlay.ShowOverlay(handle);
    if (error != EVROverlayError.None)
    {
        throw new Exception("Failed to show overlay: " + error);
    }
}

+ public static void HideOverlay(ulong handle)
+ {
+     var error = OpenVR.Overlay.HideOverlay(handle);
+     if (error != EVROverlayError.None)
+     {
+         throw new Exception("Failed to hide overlay: " + error);
+     }
+ }

public static void SetOverlayFromFile(ulong handle, string path)
{
    var error = OpenVR.Overlay.SetOverlayFromFile(handle, path);
    if (error != EVROverlayError.None)
    {
        throw new Exception("Failed to draw image file: " + error);
    }
}
```

### Hide watch after three seconds

This time, we use a Unity coroutine to wait three seconds.

`WatchOverlay.cs`

```diff
using System;
+ using System.Collections;
using UnityEngine;
using Valve.VR;
using OpenVRUtil;

public class WatchOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
    public ETrackedControllerRole targetHand = ETrackedControllerRole.RightHand;
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;
+   private Coroutine sleepCoroutine;

    ...

    public void OnWakeUp()
    {
        Overlay.ShowOverlay(overlayHandle);
+       if (sleepCoroutine != null)
+       {
+           StopCoroutine(sleepCoroutine);
+       }
+       sleepCoroutine = StartCoroutine(Sleep());
    }

+   private IEnumerator Sleep()
+   {
+       yield return new WaitForSeconds(3);
+       Overlay.HideOverlay(overlayHandle);
+   }
}
```

Run the program and check that the watch is shown for three seconds when the Y button is pushed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kr69g2zdqhm0pupvl5w3.gif)

## Complete 🎉

The watch application is complete. This tutorial ends here, so the final code organization is up to you. Let's look back at what we covered in this tutorial:
- Initialize OpenVR
- Create overlay
- Draw image file
- Change overlay size and position
- Follow devices
- Draw camera output
- Create dashboard overlay
- Process events
- Controller input

We learned the basics of overlay application development. OpenVR has many more APIs, so why not try building your own tools? The next page contains additional information for more details.
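As a closing aside, the show-then-auto-hide behaviour from the last step (wake up, restart the timer on every press, hide after three seconds) can be modelled outside Unity as a resettable deadline. This is only an illustrative sketch with hypothetical names; the tutorial's coroutine version is the real implementation:

```typescript
// Overlay visibility as a deadline: wakeUp() pushes the hide time
// forward (like StopCoroutine + StartCoroutine(Sleep())), and the
// overlay is visible while the current time is before the deadline.
class AutoHideOverlay {
  private hideAt: number | null = null; // null = never shown

  constructor(private readonly timeoutSec: number) {}

  wakeUp(now: number): void {
    this.hideAt = now + this.timeoutSec;
  }

  isVisible(now: number): boolean {
    return this.hideAt !== null && now < this.hideAt;
  }
}
```

Time is passed in explicitly so the behaviour is easy to test; in a real app you would use a clock or, as in the tutorial, the engine's frame timing.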
kurohuku
1,892,584
Synchronizing multiple graphics
Step 1: Define our objective To keep things simple let’s set our objective first: We want...
0
2024-06-18T15:00:37
https://dev.to/lenormor/synchronizing-multiple-graphics-31mc
webdev, javascript, tutorial, node
## Step 1: Define our objective

**To keep things simple let's set our objective first:** We want to combine two instances of our `<demo-schedule-booking>` demonstration component. This component is capable of displaying a ScheduleJS Gantt chart. Note that the same approach can be used with two completely different graphics.

## Step 2: Create a higher-order 'dual-gantt' component

A good idea is to use a containerized version of our graphics so we can easily handle the display of multiple graphics using components. Let's start by creating a dual-gantt component and try out our `<demo-schedule-booking>` component.

```
<!-- Let's start with one graphic at a time -->
<demo-schedule-booking></demo-schedule-booking>
```

For now, if our `<demo-schedule-booking>` component is functional, our dual-gantt component will display the following screen, which is in fact identical to the original component.

![JS Gantt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8g6pvbbig5a6ioh8x6tn.png)

Now we can create a `<div>` and use our `<demo-schedule-booking>` component twice in our brand new dual-gantt component:

```
<!-- Let's assemble two ScheduleJS demo bookings in our component -->
<div class="gantts-container">
  <demo-schedule-booking class="first-gantt"></demo-schedule-booking>
  <demo-schedule-booking class="second-gantt"></demo-schedule-booking>
</div>
```

We added CSS classes to properly display one graphic above the other and separate them with a border. Here, the `gantts-container` class handles the display while the `first-gantt` and `second-gantt` classes handle specifics, like the separation border.
We now have two unsynchronized graphics inside our new dual-gantt component:

![JS Gantt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ikam8egadqntljnhpyu.png)

To further improve the display, we adapted our `<demo-schedule-booking>` component to accept two new input properties:

- `displayTimeline`: true by default. Since we are going to use a single timeline, it is not necessary to repeat the timeline in both graphics.
- `displayButtonBar`: true by default. This lets us hide the button bar and keep only one button bar for both graphics.

As a ScheduleJS Gantt component, `<demo-schedule-booking>` also accepts additional inputs by default. Here we will use the `dragToResizeInfoColumnPrevented` input property to prevent any individual info-column resize in both graphics.

The result should remove the button bar and timeline from the second graphic:

```
<!-- Let's add a few properties to better handle the dual-gantt display -->
<div class="gantts-container">
  <demo-schedule-booking
    class="first-gantt"
    [dragToResizeInfoColumnPrevented]="true">
  </demo-schedule-booking>
  <demo-schedule-booking
    class="second-gantt"
    [displayTimeline]="false"
    [displayButtonBar]="false"
    [dragToResizeInfoColumnPrevented]="true">
  </demo-schedule-booking>
</div>
```

![JS Gantt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ciq0dv7ailues8el5jx.png)

## Step 3: Share a single timeline object

We want to create a new timeline object and pass it down to our graphics once our dual-gantt component mounts.
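Before wiring a real `Timeline` through Angular inputs, the share-one-object idea can be sketched framework-free: both charts register against the same timeline object, so a range change made through either one is seen by both. All names below are hypothetical, not the ScheduleJS API:

```typescript
// A minimal shared timeline: holds listeners and notifies every
// registered chart when the visible range changes.
class SharedTimeline {
  private listeners: Array<(start: number, end: number) => void> = [];

  onChange(listener: (start: number, end: number) => void): void {
    this.listeners.push(listener);
  }

  setRange(start: number, end: number): void {
    this.listeners.forEach((listener) => listener(start, end));
  }
}

// Each chart mirrors the timeline's range once registered.
class GanttChart {
  visibleStart = 0;
  visibleEnd = 100;

  register(timeline: SharedTimeline): void {
    timeline.onChange((start, end) => {
      this.visibleStart = start;
      this.visibleEnd = end;
    });
  }
}
```

Because both chart instances receive the same object, no extra synchronization code is needed: the shared state is the synchronization.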
Here is the source code of our dual-gantt component:

```
// Our dual-gantt component
import {Component, Injector} from "@angular/core";
import {Timeline} from "schedule";

@Component({
  selector: "dual-gantt",
  templateUrl: "./dual-gantt.component.html",
  styleUrls: ["./dual-gantt.component.scss"]
})
export class DualGanttComponent {

  // Here we create a single timeline, which requires the Angular Injector
  readonly timeline: Timeline = new Timeline(this._injector);

  // Use dependency injection to provide an instance of the Angular Injector
  constructor(private readonly _injector: Injector) {
  }
}
```

Now we have to pass our new timeline object down as an input to our `<demo-schedule-booking>` ScheduleJS Gantt component in the dual-gantt component template:

```
<!-- Pass our freshly instantiated timeline object for registration in both components -->
<div class="gantts-container">
  <demo-schedule-booking
    class="first-gantt"
    [timeline]="timeline"
    [dragToResizeInfoColumnPrevented]="true">
  </demo-schedule-booking>
  <demo-schedule-booking
    class="second-gantt"
    [timeline]="timeline"
    [displayTimeline]="false"
    [displayButtonBar]="false"
    [dragToResizeInfoColumnPrevented]="true">
  </demo-schedule-booking>
</div>
```

## Step 4: Register the timeline using the ScheduleJS API

The last step is to register the timeline in the `<demo-schedule-booking>` component. To do so, we create a setter method, which runs the registration code when the input is passed down to the component:

```
export class DemoBookingComponent extends DefaultScheduleTreeGanttComponentBase<ResourceRow, DefaultScheduleGanttGraphicTreeComponent<ResourceRow>> {

  // [...]

  // Register the given timeline or do nothing
  @Input()
  set timeline(timeline: Timeline | undefined) {
    if (timeline) {
      this.gantt.setTimeline(timeline);
      this.gantt.getGraphics().setTimeline(timeline);
    }
  }

  // [...]
}
```

## Conclusion

By creating a dual-gantt component, we have successfully combined two instances of the `<demo-schedule-booking>` component into a cohesive and synchronized display. This approach not only enhances the visual representation of Gantt charts but also improves the user experience by sharing a single timeline and reducing redundancy. Leveraging the powerful features of ScheduleJS, we can efficiently manage complex project schedules with greater interactivity and customization. This method demonstrates how flexible and dynamic JavaScript Gantt charts can be, making them indispensable tools for modern project management.

If you'd like to see the **final result**, don't hesitate to take a look at: [Synchronizing multiple graphics](https://schedulejs.com/en/synchronizing-multiple-graphics/)

**For more information on JS Gantt see: [ScheduleJS](https://schedulejs.com)**
lenormor
1,892,336
TW Elements - TailwindCSS Navbar Icons & Logo. Free UI/UX design course
Navbar Icons &amp; Logo Now that we know how to work with icons in Tailwind, it's time to...
25,935
2024-06-18T15:00:00
https://dev.to/keepcoding/tw-elements-tailwindcss-navbar-icons-logo-free-uiux-design-course-4bi5
tailwindcss, tutorial, css, webdev
## Navbar Icons & Logo

Now that we know how to work with icons in Tailwind, it's time to update the Navbar in our project. This is how it looks right now:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymnzkdqim4oxwfx4u46m.png)

**First, let's change the icons on the right.**

## Step 1 - change icons

In the Navbar's code, find the comment `<!-- Right elements -->`. It marks the wrapper in which the icons on the right side of the Navbar are located.

**HTML**

```
<!-- Navbar -->
<nav
  class="flex-no-wrap relative flex w-full items-center justify-between bg-white py-2 shadow-md shadow-black/5 dark:bg-neutral-600 dark:shadow-black/10 lg:flex-wrap lg:justify-start lg:py-4"
  data-twe-navbar-ref>
  <!-- Here add a container -->
  <div class="mx-auto flex w-full flex-wrap items-center justify-between px-3 lg:container">
    [...]
    <!-- Right elements -->
    <div class="relative flex items-center">
      <!-- Cart Icon -->
      <a
        class="me-4 text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400"
        href="#">
        <span class="[&>svg]:w-5">
          <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-5 w-5">
            <path d="M2.25 2.25a.75.75 0 000 1.5h1.386c.17 0 .318.114.362.278l2.558 9.592a3.752 3.752 0 00-2.806 3.63c0 .414.336.75.75.75h15.75a.75.75 0 000-1.5H5.378A2.25 2.25 0 017.5 15h11.218a.75.75 0 00.674-.421 60.358 60.358 0 002.96-7.228.75.75 0 00-.525-.965A60.864 60.864 0 005.68 4.509l-.232-.867A1.875 1.875 0 003.636 2.25H2.25zM3.75 20.25a1.5 1.5 0 113 0 1.5 1.5 0 01-3 0zM16.5 20.25a1.5 1.5 0 113 0 1.5 1.5 0 01-3 0z" />
          </svg>
        </span>
      </a>
      <!-- Container with two dropdown menus -->
      <div class="relative" data-twe-dropdown-ref>
        <!-- First dropdown trigger -->
        <a
          class="hidden-arrow me-4 flex items-center text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30
dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400"
          href="#"
          id="dropdownMenuButton1"
          role="button"
          data-twe-dropdown-toggle-ref
          aria-expanded="false">
          <!-- Dropdown trigger icon -->
          <span class="[&>svg]:w-5">
            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-5 w-5">
              <path
                fill-rule="evenodd"
                d="M5.25 9a6.75 6.75 0 0113.5 0v.75c0 2.123.8 4.057 2.118 5.52a.75.75 0 01-.297 1.206c-1.544.57-3.16.99-4.831 1.243a3.75 3.75 0 11-7.48 0 24.585 24.585 0 01-4.831-1.244.75.75 0 01-.298-1.205A8.217 8.217 0 005.25 9.75V9zm4.502 8.9a2.25 2.25 0 104.496 0 25.057 25.057 0 01-4.496 0z"
                clip-rule="evenodd" />
            </svg>
          </span>
          <!-- Notification counter -->
          <span
            class="absolute -mt-2.5 ms-2 rounded-[0.37rem] bg-danger px-[0.45em] py-[0.2em] text-[0.6rem] leading-none text-white"
            >1</span
          >
        </a>
        <!-- First dropdown menu -->
        <ul
          class="absolute left-auto right-0 z-[1000] float-left m-0 mt-1 hidden min-w-max list-none overflow-hidden rounded-lg border-none bg-white bg-clip-padding text-left text-base shadow-lg data-[twe-dropdown-show]:block dark:bg-neutral-700"
          aria-labelledby="dropdownMenuButton1"
          data-twe-dropdown-menu-ref>
          <!-- First dropdown menu items -->
          <li>
            <a
              class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30"
              href="#"
              data-twe-dropdown-item-ref
              >Action</a
            >
          </li>
          <li>
            <a
              class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30"
              href="#"
              data-twe-dropdown-item-ref
              >Another action</a
            >
          </li>
          <li>
            <a
class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30"
              href="#"
              data-twe-dropdown-item-ref
              >Something else here</a
            >
          </li>
        </ul>
      </div>
      <!-- Second dropdown container -->
      <div class="relative" data-twe-dropdown-ref>
        <!-- Second dropdown trigger -->
        <a
          class="hidden-arrow flex items-center whitespace-nowrap transition duration-150 ease-in-out motion-reduce:transition-none"
          href="#"
          id="dropdownMenuButton2"
          role="button"
          data-twe-dropdown-toggle-ref
          aria-expanded="false">
          <!-- User avatar -->
          <img
            src="https://tecdn.b-cdn.net/img/new/avatars/2.jpg"
            class="rounded-full"
            style="height: 25px; width: 25px"
            alt=""
            loading="lazy" />
        </a>
        <!-- Second dropdown menu -->
        <ul
          class="absolute left-auto right-0 z-[1000] float-left m-0 mt-1 hidden min-w-max list-none overflow-hidden rounded-lg border-none bg-white bg-clip-padding text-left text-base shadow-lg data-[twe-dropdown-show]:block dark:bg-neutral-700"
          aria-labelledby="dropdownMenuButton2"
          data-twe-dropdown-menu-ref>
          <!-- Second dropdown menu items -->
          <li>
            <a
              class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30"
              href="#"
              data-twe-dropdown-item-ref
              >Action</a
            >
          </li>
          <li>
            <a
              class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30"
              href="#"
              data-twe-dropdown-item-ref
              >Another action</a
            >
          </li>
          <li>
            <a
              class="block w-full
whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30" href="#" data-twe-dropdown-item-ref >Something else here</a > </li> </ul> </div> </div> </div> </nav> <!-- Navbar --> ``` The first icon is the shopping cart icon (marked with "!-- Cart Icon --"). Inside the <a> element you will find a <span> element and within it an <svg> element. Go to the **[Hero Icons](https://heroicons.com/)** website and copy the SVG code of the **cog-6-tooth** icon. Then paste it in place of the shopping cart SVG icon. **HTML** ``` <!-- Navbar --> <nav class="flex-no-wrap relative flex w-full items-center justify-between bg-white py-2 shadow-md shadow-black/5 dark:bg-neutral-600 dark:shadow-black/10 lg:flex-wrap lg:justify-start lg:py-4" data-twe-navbar-ref> <!-- Here add a container --> <div class="mx-auto flex w-full flex-wrap items-center justify-between px-3 lg:container"> [...] 
<!-- Right elements --> <div class="relative flex items-center"> <!-- Cog Icon --> <a class="me-4 text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400" href="#"> <span class="[&>svg]:w-5"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-6 w-6"> <path fill-rule="evenodd" d="M11.078 2.25c-.917 0-1.699.663-1.85 1.567L9.05 4.889c-.02.12-.115.26-.297.348a7.493 7.493 0 00-.986.57c-.166.115-.334.126-.45.083L6.3 5.508a1.875 1.875 0 00-2.282.819l-.922 1.597a1.875 1.875 0 00.432 2.385l.84.692c.095.078.17.229.154.43a7.598 7.598 0 000 1.139c.015.2-.059.352-.153.43l-.841.692a1.875 1.875 0 00-.432 2.385l.922 1.597a1.875 1.875 0 002.282.818l1.019-.382c.115-.043.283-.031.45.082.312.214.641.405.985.57.182.088.277.228.297.35l.178 1.071c.151.904.933 1.567 1.85 1.567h1.844c.916 0 1.699-.663 1.85-1.567l.178-1.072c.02-.12.114-.26.297-.349.344-.165.673-.356.985-.57.167-.114.335-.125.45-.082l1.02.382a1.875 1.875 0 002.28-.819l.923-1.597a1.875 1.875 0 00-.432-2.385l-.84-.692c-.095-.078-.17-.229-.154-.43a7.614 7.614 0 000-1.139c-.016-.2.059-.352.153-.43l.84-.692c.708-.582.891-1.59.433-2.385l-.922-1.597a1.875 1.875 0 00-2.282-.818l-1.02.382c-.114.043-.282.031-.449-.083a7.49 7.49 0 00-.985-.57c-.183-.087-.277-.227-.297-.348l-.179-1.072a1.875 1.875 0 00-1.85-1.567h-1.843zM12 15.75a3.75 3.75 0 100-7.5 3.75 3.75 0 000 7.5z" clip-rule="evenodd" /> </svg> </span> </a> [...] </div> </div> </nav> <!-- Navbar -->
```

After saving the file, a cog icon will appear instead of the shopping cart.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cbvtyvt38vo13hoecozs.png)

Have you noticed the `<span class="[&>svg]:w-5">` element wrapping our icon? It sets the width of any SVG placed inside it, so we can remove the `.h-6` and `.w-6` classes from the cog icon itself.
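As a minimal illustration of that pattern (hypothetical markup, separate from the tutorial's navbar), the parent `<span>` sizes any nested SVG on its own:

```html
<!-- The [&>svg]:w-5 arbitrary variant applies w-5 (1.25rem) to any direct
     <svg> child, so the icon itself no longer needs explicit size classes -->
<span class="[&>svg]:w-5">
  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor">
    <circle cx="12" cy="12" r="10" />
  </svg>
</span>
```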
Below our new icon you will find 2 dropdowns. We'll talk about dropdowns in future lessons, but we don't need them right now. So remove this code. **HTML** ``` <!-- Container with two dropdown menus --> <div class="relative" data-twe-dropdown-ref> <!-- First dropdown trigger --> <a class="hidden-arrow me-4 flex items-center text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400" href="#" id="dropdownMenuButton1" role="button" data-twe-dropdown-toggle-ref aria-expanded="false"> <!-- Dropdown trigger icon --> <span class="[&>svg]:w-5"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-5 w-5"> <path fill-rule="evenodd" d="M5.25 9a6.75 6.75 0 0113.5 0v.75c0 2.123.8 4.057 2.118 5.52a.75.75 0 01-.297 1.206c-1.544.57-3.16.99-4.831 1.243a3.75 3.75 0 11-7.48 0 24.585 24.585 0 01-4.831-1.244.75.75 0 01-.298-1.205A8.217 8.217 0 005.25 9.75V9zm4.502 8.9a2.25 2.25 0 104.496 0 25.057 25.057 0 01-4.496 0z" clip-rule="evenodd" /> </svg> </span> <!-- Notification counter --> <span class="absolute -mt-2.5 ms-2 rounded-[0.37rem] bg-danger px-[0.45em] py-[0.2em] text-[0.6rem] leading-none text-white" >1</span > </a> <!-- First dropdown menu --> <ul class="absolute left-auto right-0 z-[1000] float-left m-0 mt-1 hidden min-w-max list-none overflow-hidden rounded-lg border-none bg-white bg-clip-padding text-left text-base shadow-lg data-[twe-dropdown-show]:block dark:bg-neutral-700" aria-labelledby="dropdownMenuButton1" data-twe-dropdown-menu-ref> <!-- First dropdown menu items --> <li> <a class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30" href="#" 
data-twe-dropdown-item-ref >Action</a > </li> <li> <a class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30" href="#" data-twe-dropdown-item-ref >Another action</a > </li> <li> <a class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30" href="#" data-twe-dropdown-item-ref >Something else here</a > </li> </ul> </div> <!-- Second dropdown container --> <div class="relative" data-twe-dropdown-ref> <!-- Second dropdown trigger --> <a class="hidden-arrow flex items-center whitespace-nowrap transition duration-150 ease-in-out motion-reduce:transition-none" href="#" id="dropdownMenuButton2" role="button" data-twe-dropdown-toggle-ref aria-expanded="false"> <!-- User avatar --> <img src="https://tecdn.b-cdn.net/img/new/avatars/2.jpg" class="rounded-full" style="height: 25px; width: 25px" alt="" loading="lazy" /> </a> <!-- Second dropdown menu --> <ul class="absolute left-auto right-0 z-[1000] float-left m-0 mt-1 hidden min-w-max list-none overflow-hidden rounded-lg border-none bg-white bg-clip-padding text-left text-base shadow-lg data-[twe-dropdown-show]:block dark:bg-neutral-700" aria-labelledby="dropdownMenuButton2" data-twe-dropdown-menu-ref> <!-- Second dropdown menu items --> <li> <a class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30" href="#" data-twe-dropdown-item-ref 
>Action</a > </li> <li> <a class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30" href="#" data-twe-dropdown-item-ref >Another action</a > </li> <li> <a class="block w-full whitespace-nowrap bg-transparent px-4 py-2 text-sm font-normal text-neutral-700 hover:bg-neutral-100 active:text-neutral-800 active:no-underline disabled:pointer-events-none disabled:bg-transparent disabled:text-neutral-400 dark:text-neutral-200 dark:hover:bg-white/30" href="#" data-twe-dropdown-item-ref >Something else here</a > </li> </ul> </div>
```

Instead, copy the cog icon code an additional 2 times and change the comment in the 2 new icons to "envelope" and "user".

**HTML**

```
<!-- Right elements --> <div class="relative flex items-center"> <!-- Cog Icon --> <a class="me-4 text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400" href="#"> <span class="[&>svg]:w-5"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor"> <path fill-rule="evenodd" d="M11.078 2.25c-.917 0-1.699.663-1.85 1.567L9.05 4.889c-.02.12-.115.26-.297.348a7.493 7.493 0 00-.986.57c-.166.115-.334.126-.45.083L6.3 5.508a1.875 1.875 0 00-2.282.819l-.922 1.597a1.875 1.875 0 00.432 2.385l.84.692c.095.078.17.229.154.43a7.598 7.598 0 000 1.139c.015.2-.059.352-.153.43l-.841.692a1.875 1.875 0 00-.432 2.385l.922 1.597a1.875 1.875 0 002.282.818l1.019-.382c.115-.043.283-.031.45.082.312.214.641.405.985.57.182.088.277.228.297.35l.178 1.071c.151.904.933 1.567 1.85 1.567h1.844c.916 0 1.699-.663 1.85-1.567l.178-1.072c.02-.12.114-.26.297-.349.344-.165.673-.356.985-.57.167-.114.335-.125.45-.082l1.02.382a1.875 1.875 0
002.28-.819l.923-1.597a1.875 1.875 0 00-.432-2.385l-.84-.692c-.095-.078-.17-.229-.154-.43a7.614 7.614 0 000-1.139c-.016-.2.059-.352.153-.43l.84-.692c.708-.582.891-1.59.433-2.385l-.922-1.597a1.875 1.875 0 00-2.282-.818l-1.02.382c-.114.043-.282.031-.449-.083a7.49 7.49 0 00-.985-.57c-.183-.087-.277-.227-.297-.348l-.179-1.072a1.875 1.875 0 00-1.85-1.567h-1.843zM12 15.75a3.75 3.75 0 100-7.5 3.75 3.75 0 000 7.5z" clip-rule="evenodd" /> </svg> </span> </a> <!-- Envelope Icon --> <a class="me-4 text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400" href="#"> <span class="[&>svg]:w-5"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-6 w-6"> <path fill-rule="evenodd" d="M11.078 2.25c-.917 0-1.699.663-1.85 1.567L9.05 4.889c-.02.12-.115.26-.297.348a7.493 7.493 0 00-.986.57c-.166.115-.334.126-.45.083L6.3 5.508a1.875 1.875 0 00-2.282.819l-.922 1.597a1.875 1.875 0 00.432 2.385l.84.692c.095.078.17.229.154.43a7.598 7.598 0 000 1.139c.015.2-.059.352-.153.43l-.841.692a1.875 1.875 0 00-.432 2.385l.922 1.597a1.875 1.875 0 002.282.818l1.019-.382c.115-.043.283-.031.45.082.312.214.641.405.985.57.182.088.277.228.297.35l.178 1.071c.151.904.933 1.567 1.85 1.567h1.844c.916 0 1.699-.663 1.85-1.567l.178-1.072c.02-.12.114-.26.297-.349.344-.165.673-.356.985-.57.167-.114.335-.125.45-.082l1.02.382a1.875 1.875 0 002.28-.819l.923-1.597a1.875 1.875 0 00-.432-2.385l-.84-.692c-.095-.078-.17-.229-.154-.43a7.614 7.614 0 000-1.139c-.016-.2.059-.352.153-.43l.84-.692c.708-.582.891-1.59.433-2.385l-.922-1.597a1.875 1.875 0 00-2.282-.818l-1.02.382c-.114.043-.282.031-.449-.083a7.49 7.49 0 00-.985-.57c-.183-.087-.277-.227-.297-.348l-.179-1.072a1.875 1.875 0 00-1.85-1.567h-1.843zM12 15.75a3.75 3.75 0 100-7.5 3.75 3.75 0 000 7.5z" clip-rule="evenodd" /> </svg> </span> </a> <!-- User Icon --> <a class="me-4 
text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400" href="#"> <span class="[&>svg]:w-5"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-6 w-6"> <path fill-rule="evenodd" d="M11.078 2.25c-.917 0-1.699.663-1.85 1.567L9.05 4.889c-.02.12-.115.26-.297.348a7.493 7.493 0 00-.986.57c-.166.115-.334.126-.45.083L6.3 5.508a1.875 1.875 0 00-2.282.819l-.922 1.597a1.875 1.875 0 00.432 2.385l.84.692c.095.078.17.229.154.43a7.598 7.598 0 000 1.139c.015.2-.059.352-.153.43l-.841.692a1.875 1.875 0 00-.432 2.385l.922 1.597a1.875 1.875 0 002.282.818l1.019-.382c.115-.043.283-.031.45.082.312.214.641.405.985.57.182.088.277.228.297.35l.178 1.071c.151.904.933 1.567 1.85 1.567h1.844c.916 0 1.699-.663 1.85-1.567l.178-1.072c.02-.12.114-.26.297-.349.344-.165.673-.356.985-.57.167-.114.335-.125.45-.082l1.02.382a1.875 1.875 0 002.28-.819l.923-1.597a1.875 1.875 0 00-.432-2.385l-.84-.692c-.095-.078-.17-.229-.154-.43a7.614 7.614 0 000-1.139c-.016-.2.059-.352.153-.43l.84-.692c.708-.582.891-1.59.433-2.385l-.922-1.597a1.875 1.875 0 00-2.282-.818l-1.02.382c-.114.043-.282.031-.449-.083a7.49 7.49 0 00-.985-.57c-.183-.087-.277-.227-.297-.348l-.179-1.072a1.875 1.875 0 00-1.85-1.567h-1.843zM12 15.75a3.75 3.75 0 100-7.5 3.75 3.75 0 000 7.5z" clip-rule="evenodd" /> </svg> </span> </a> </div> ``` Then go back to the **[Hero Icons](https://heroicons.com/)** page and find the **envelope** and **user** icons. Copy their SVG code and, as in the case of the first icon, replace their code in the appropriate places. 
**HTML** ``` <!-- Right elements --> <div class="relative flex items-center"> <!-- Cog Icon --> <a class="me-4 text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400" href="#"> <span class="[&>svg]:w-5"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor"> <path fill-rule="evenodd" d="M11.078 2.25c-.917 0-1.699.663-1.85 1.567L9.05 4.889c-.02.12-.115.26-.297.348a7.493 7.493 0 00-.986.57c-.166.115-.334.126-.45.083L6.3 5.508a1.875 1.875 0 00-2.282.819l-.922 1.597a1.875 1.875 0 00.432 2.385l.84.692c.095.078.17.229.154.43a7.598 7.598 0 000 1.139c.015.2-.059.352-.153.43l-.841.692a1.875 1.875 0 00-.432 2.385l.922 1.597a1.875 1.875 0 002.282.818l1.019-.382c.115-.043.283-.031.45.082.312.214.641.405.985.57.182.088.277.228.297.35l.178 1.071c.151.904.933 1.567 1.85 1.567h1.844c.916 0 1.699-.663 1.85-1.567l.178-1.072c.02-.12.114-.26.297-.349.344-.165.673-.356.985-.57.167-.114.335-.125.45-.082l1.02.382a1.875 1.875 0 002.28-.819l.923-1.597a1.875 1.875 0 00-.432-2.385l-.84-.692c-.095-.078-.17-.229-.154-.43a7.614 7.614 0 000-1.139c-.016-.2.059-.352.153-.43l.84-.692c.708-.582.891-1.59.433-2.385l-.922-1.597a1.875 1.875 0 00-2.282-.818l-1.02.382c-.114.043-.282.031-.449-.083a7.49 7.49 0 00-.985-.57c-.183-.087-.277-.227-.297-.348l-.179-1.072a1.875 1.875 0 00-1.85-1.567h-1.843zM12 15.75a3.75 3.75 0 100-7.5 3.75 3.75 0 000 7.5z" clip-rule="evenodd" /> </svg> </span> </a> <!-- Envelope Icon --> <a class="me-4 text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400" href="#"> <span class="[&>svg]:w-5"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor"> <path d="M1.5 8.67v8.58a3 3 0 003 3h15a3 3 0 
003-3V8.67l-8.928 5.493a3 3 0 01-3.144 0L1.5 8.67z" /> <path d="M22.5 6.908V6.75a3 3 0 00-3-3h-15a3 3 0 00-3 3v.158l9.714 5.978a1.5 1.5 0 001.572 0L22.5 6.908z" /> </svg> </span> </a> <!-- User Icon --> <a class="me-4 text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 [&.active]:text-black/90 dark:[&.active]:text-neutral-400" href="#"> <span class="[&>svg]:w-5"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor"> <path fill-rule="evenodd" d="M7.5 6a4.5 4.5 0 119 0 4.5 4.5 0 01-9 0zM3.751 20.105a8.25 8.25 0 0116.498 0 .75.75 0 01-.437.695A18.683 18.683 0 0112 22.5c-2.786 0-5.433-.608-7.812-1.7a.75.75 0 01-.437-.695z" clip-rule="evenodd" /> </svg> </span> </a> </div> ``` After saving the file, our navbar should look like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4o993ttt9oqwshzhnire.png) ## Step 2 - change logo Find the comment "!-- Logo --" in the navbar. As the name suggests, it means a logo element. **HTML** ``` <!-- Logo --> <a class="mb-4 me-2 mt-3 flex items-center text-neutral-900 hover:text-neutral-900 focus:text-neutral-900 dark:text-neutral-200 dark:hover:text-neutral-400 dark:focus:text-neutral-400 lg:mb-0 lg:mt-0" href="#"> <img src="https://tecdn.b-cdn.net/img/logo/te-transparent-noshadows.webp" style="height: 15px" alt="" loading="lazy" /> </a> ``` On the **[Hero Icons](https://heroicons.com/)** page, choose an icon you like to use as our logo. I chose the "fire" icon. 
Then replace the img element with an svg element with our new logo: **HTML** ``` <!-- Logo --> <a class="mb-4 me-2 mt-3 flex items-center text-neutral-900 hover:text-neutral-900 focus:text-neutral-900 dark:text-neutral-200 dark:hover:text-neutral-400 dark:focus:text-neutral-400 lg:mb-0 lg:mt-0" href="#"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="h-6 w-6"> <path fill-rule="evenodd" d="M12.963 2.286a.75.75 0 00-1.071-.136 9.742 9.742 0 00-3.539 6.177A7.547 7.547 0 016.648 6.61a.75.75 0 00-1.152-.082A9 9 0 1015.68 4.534a7.46 7.46 0 01-2.717-2.248zM15.75 14.25a3.75 3.75 0 11-7.313-1.172c.628.465 1.35.81 2.133 1a5.99 5.99 0 011.925-3.545 3.75 3.75 0 013.255 3.717z" clip-rule="evenodd" /> </svg> </a> ``` After saving the file, our navbar should look like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8z41hil3mfyiluby2qtj.png) **[DEMO AND SOURCE CODE FOR THIS LESSON](DEMO AND SOURCE CODE FOR THIS LESSON)**
keepcoding
1,892,563
Ansible Middleware : Automation of JBoss Web Server (JWS)
In a previous article, we discussed the importance of Ansible from a Middleware perspective and why...
0
2024-06-18T15:00:00
https://www.opensourcerers.org/2022/03/14/ansible-middleware-automation-of-jboss-web-server-jws/
automation, tomcat, jws, ansible
In a previous article, we discussed the [importance of Ansible from a Middleware perspective](https://open011prod.wpengine.com/2021/11/22/why-do-you-need-ansible-to-manage-your-middleware-runtimes/) and why making Middleware a first-class citizen in the Ansible ecosystem is crucial. In this post, we’ll explore the technical details of utilizing Ansible for automating a Middleware solution. We’ll focus on one of the most widely used middleware servers: Apache Tomcat. Or rather, the Red Hat-supported version of it, known as [JBoss Web Server (JWS)](https://developers.redhat.com/products/webserver/overview).

Let’s start by defining what exactly JWS is, in case you are not familiar with this specific product. JWS combines a web server (Apache HTTPD), a Java servlet engine (Apache Tomcat), and load balancing components (mod_jk and mod_cluster) to help them scale. All of those elements are supported by Red Hat. In this article, we’ll focus only on the Java servlet engine (Apache Tomcat), as many of the challenges brought up by its automation are typical of other middleware software (such as JBoss EAP, JBoss Data Grid or RH SSO).

## What are we trying to achieve?

Our goal here is to automate the deployment of this Java servlet engine, using Ansible. This may seem like a simple endeavor. However, it does entail a few tasks and operations to be performed to achieve a proper result.

First, we need to **configure the target’s operating system**. This includes creating a user and an associated group to run JWS, as well as the integration into [systemd](https://en.wikipedia.org/wiki/Systemd), so that the newly spawned server can be managed by the host’s operating system. JWS also requires that the dependencies of the servlet engine (mostly a Java Virtual Machine in the proper version) are installed.

In the next step, we'll **configure JWS itself**, which includes specifying which interface it needs to be bound to, defining which ports it will listen on, and so on.
We may also customize the Java server during this step, such as enabling some features (e.g. SSL) or disabling some other ones (for instance, removing the demo webapps shipped with the archive).

At this point, one could think we are done and that the server is ready. However, because Apache Tomcat is an application server, we also want to **automate the deployment of its workloads**, which means we need to ensure the webapps it is hosting are appropriately deployed.

With all this preparation work finished, we should be able to start the systemd service we configured earlier and double-check that the server, and its webapps, are functioning properly and as expected. This includes verifying that all webapps are indeed deployed, accessible and operational. Here again, Ansible will help us in this **validation step**.

If you are familiar with Ansible primitives and built-in modules, you know it’s quite a lot of work to automate all of these requirements. Fortunately, most of this work has been implemented and is ready for use inside the Ansible JWS Collection.

An important note here: the four steps we’ve laid out above for JWS actually apply to most, if not all, middleware software (at least, the ones provided and supported by Red Hat). Some steps would be easier or more challenging to automate, depending on which one, but in essence, all will have the same kinds of requirements.

_One last thing before we jump into the technical bits: if you want to reproduce the automation we describe in this article, keep in mind that the target’s host is running RHEL 9 and uses Ansible 2.15.9 or above (with Python 3.9.18)._

## Installing the JWS Collection

Installing the collection dedicated to Red Hat JWS uses the `ansible-galaxy` command, like any other collection. As it is provided by Red Hat, however, it requires using (instead of, or on top of, Ansible Galaxy) the Red Hat Automation Hub. This can be achieved by adding the repository to the `ansible.cfg` file:

```
[defaults]
...
[galaxy]
server_list = automation_hub, galaxy

[galaxy_server.galaxy]
url=https://galaxy.ansible.com/

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=<insert-your-token-here>
```

More information on [how to use the Red Hat Automation Hub is available here](https://access.redhat.com/documentation/en-us/red_hat_jboss_web_server/6.0/html/installing_jboss_web_server_by_using_the_red_hat_ansible_certified_content_collection/install_collection). To [obtain the Red Hat Automation Hub token](https://console.redhat.com/ansible/automation-hub/token), follow this documentation.

Once this is set up, `ansible-galaxy` can install the collection:

```
# ansible-galaxy collection install redhat.jws
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/redhat-jws-2.0.0.tar.gz to /root/.ansible/tmp/ansible-local-91vvby1md6/tmpjzlxwbz6/redhat-jws-2.0.0-mj2vji8g
Installing 'redhat.jws:2.0.0' to '/root/.ansible/collections/ansible_collections/redhat/jws'
Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/redhat-runtimes_common-1.2.1.tar.gz to /root/.ansible/tmp/ansible-local-91vvby1md6/tmpjzlxwbz6/redhat-runtimes_common-1.2.1-rwk2o0dw
redhat.jws:2.0.0 was installed successfully
Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/ansible-posix-1.5.4.tar.gz to /root/.ansible/tmp/ansible-local-91vvby1md6/tmpjzlxwbz6/ansible-posix-1.5.4-adyme9vq
Installing 'redhat.runtimes_common:1.2.1' to '/root/.ansible/collections/ansible_collections/redhat/runtimes_common'
redhat.runtimes_common:1.2.1 was installed successfully
Installing 'ansible.posix:1.5.4' to '/root/.ansible/collections/ansible_collections/ansible/posix'
Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/redhat-rhel_system_roles-1.23.0.tar.gz to /root/.ansible/tmp/ansible-local-91vvby1md6/tmpjzlxwbz6/redhat-rhel_system_roles-1.23.0-3gm1eccc
ansible.posix:1.5.4 was installed successfully
Installing 'redhat.rhel_system_roles:1.23.0' to '/root/.ansible/collections/ansible_collections/redhat/rhel_system_roles'
redhat.rhel_system_roles:1.23.0 was installed successfully
```

**Note:** Ansible Galaxy comes with dependency management, which means that it fetches any required dependencies for this collection.

## Writing the playbook

We can now start working on our playbook itself. We’ll begin by setting it up to use the collection we just installed and we’ll add content incrementally from there.

### Testing the collection

Before incorporating any tasks into our playbook, we’ll add the dependency on the Ansible collection for JWS to confirm that the installation was successful:

```
---
- name: "JBoss Web Server installation and configuration"
  hosts: "all"
  become: true
  collections:
    - redhat.jws
```

This playbook will not perform any tasks on the target system.
It is only designed to verify that the collection is indeed recognized by Ansible:

```
# ansible-playbook -i inventory jws_dev_to.yml

PLAY [JBoss Web Server installation and configuration] ***************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [localhost]

PLAY RECAP ***********************************************************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

### Automatically retrieve archive from Red Hat Customer Portal

Now that the Ansible collection for JWS is properly installed and accessible from our playbook, let’s provide the service account information, so that the engine can automatically download the JWS binary files provided by Red Hat.

For the collection to be able to download the JWS archive from the Red Hat Customer Portal, we need to supply the credentials associated with a Red Hat service account. One way to provide those parameters is to create a `service_account.yml` file which can be passed to Ansible as an extra source of variables:

```
---
rhn_username: <service_account_id>
rhn_password: <service_account_password>
```

**Note:** As those variables contain secrets, it is highly recommended to use Ansible Vault to protect their content. Using this feature of the automation tool, however, is out of the scope of this article.

Now, we have everything in place to run our playbook and install JWS on the target. But before doing so, we will see how to configure the software to match the requirements of our use case.
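For instance, the HTTP listen port can be adjusted through a role variable. The snippet below is only a sketch: the variable name `jws_listen_http_port` is an assumption and should be verified against the collection's documentation:

```yaml
---
- name: "JBoss Web Server installation and configuration"
  hosts: "all"
  become: yes
  vars:
    # Assumed variable name -- verify against the redhat.jws role defaults
    jws_listen_http_port: 8080
  collections:
    - redhat.jws
  roles:
    - jws
```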
### Ensure Java environment is properly configured

JWS is based on Apache Tomcat, which means it’s Java software that requires a Java runtime to be executed. So, our automation needs to ensure that this environment is readily available on the target system. Here again, we just need to add variables to our playbook and the Ansible collection for JWS will take care of everything. It will check that the appropriate JVM is available, or install it if it’s missing:

```
---
- name: "JBoss Web Server installation and configuration"
  hosts: "all"
  become: yes
  vars:
    jws_java_version: 17
  collections:
    - redhat.jws
  roles:
    - jws
```

**Note:** this feature is only available for systems belonging to the RedHat family.

```
$ ansible -m setup localhost | grep family
        "ansible_os_family": "RedHat",
```

If the target system is not part of the RedHat family, the installation of the JVM must be added to the `pre_tasks` section of the playbook.

### Preparing the target system

As we mentioned at the beginning of this article, before decompressing the archive and starting the server, there are a few configurations that need to be done on the target system. One of them is to ensure that the necessary user and group have been created. The Ansible collection for JWS comes with default values for both but, often, those will need to be replaced:

```
---
- name: "JBoss Web Server installation and configuration"
  hosts: "all"
  become: yes
  vars:
    [...]
    jws_user: java_servlet_engine
    jws_group: java_web_server
    [...]
  collections:
    - redhat.jws
  roles:
    - jws
  pre_tasks:
    [...]
  tasks:
```

Once we execute this playbook, on top of fetching the archive from the website, it ensures that the appropriate group and user exist before decompressing the servlet engine’s files into the defined `TOMCAT_HOME`. On top of that, Ansible will guarantee the requested JVM is available on the target host.

It’s already pretty nice to have all of this plumbing work done for us, but we can go further.
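Before moving on: for non-RedHat targets, the JVM installation mentioned above could be handled in `pre_tasks`. A sketch, assuming a Debian-family host (the package name `openjdk-17-jre-headless` is that distribution family's convention, not something the collection provides):

```yaml
  pre_tasks:
    - name: "Ensure a JVM is installed on non-RedHat targets"
      ansible.builtin.apt:
        name: openjdk-17-jre-headless
        state: present
      when: ansible_facts['os_family'] == 'Debian'
```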
With just a little more configuration, the Ansible Collection for JWS can set up a systemd service to run JWS.

### Integration with systemd service

The Ansible collection for JWS comes with a playbook and default templates to help set up JWS as a `systemd` service. Therefore, all that is needed to automate this part of our deployment is, again, to add a variable to our playbook:

```
---
- name: "JBoss Web Server installation and configuration"
  hosts: "all"
  become: yes
  vars:
    [...]
    jws_systemd_enabled: True
    jws_service_name: tomcat
    [...]
  collections:
    - redhat.jws
  roles:
    [...]
```

**Note:** this feature is only available for target systems belonging to the `RedHat` family.

After a successful execution of the playbook, you can easily confirm that the Java server is running as a `systemd` service:

```
# systemctl status tomcat
● tomcat.service - Jboss Web Server
     Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; preset: disabled)
     Active: active (running) since Tue 2024-06-18 14:50:11 UTC; 52s ago
   Main PID: 3365 (java)
      Tasks: 44 (limit: 1638)
     Memory: 116.3M
        CPU: 4.170s
     CGroup: /system.slice/tomcat.service
             └─3365 /etc/alternatives/jre_17/bin/java -Djava.util.logging.config.file=/opt/jws-6.0/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogM>

Jun 18 14:50:11 a09c3523b337 systemd[1]: Started Jboss Web Server.
Jun 18 14:50:11 a09c3523b337 systemd-service.sh[3357]: Tomcat started.
Jun 18 14:50:11 a09c3523b337 systemd-service.sh[3356]: Tomcat runs with PID: 3365
```

## Deploy a web application

If we come back to the list of requirements we established at the beginning of this article, we can see that the Ansible collection for JWS has already taken care of most of them. There is only one part of the deployment left to automate: configuring the server's workloads.

With a different middleware solution than JWS, this part can be complex. Fortunately for us, one of the qualities of this Java server is its simplicity.
Deploying webapps only necessitates placing their package file (`.war`) in the appropriate directory prior to the server start. That’s all! This can be easily achieved using Ansible primitives, but the collection still helps our automation by supplying a handler to restart the server if a webapp is provisioned for the first time (or updated):

```
    - name: "Deploy demo webapp"
      ansible.builtin.get_url:
        url: 'https://people.redhat.com/~rpelisse/info-1.0.war'
        dest: "{{ tomcat_home }}/webapps/info.war"
      notify:
        - "Restart Tomcat service"
```

## Validation

Our Java server is now deployed and running, and so are its workloads. All that remains to be implemented is the validation part. Firing up a systemd service is good, but checking that it is working is better! To achieve this, we’ll add a couple of tasks to the `post_tasks:` section of our playbook:

- Check that the `systemd` service is indeed running;
- A `wait_for:` task to ensure the Java server HTTP port is accessible (no need to go further if this fails);
- A `uri:` task to fetch the root path (/) of the server, followed by another to check the webapp availability (/info), since the availability of the first one does not confirm that of the second.

```
[...]
  tasks:
    [...]
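  handlers:
    # Sketch (assumption): a restart handler matching the "Restart Tomcat
    # service" notification used by the deploy task; the collection ships
    # its own version of this, so defining it manually is optional.
    - name: "Restart Tomcat service"
      ansible.builtin.systemd:
        name: tomcat
        state: restarted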
  post_tasks:
    - name: "Populate service facts"
      ansible.builtin.service_facts:

    - name: "Check if service is running"
      ansible.builtin.assert:
        that:
          - ansible_facts is defined
          - ansible_facts.services is defined
          - ansible_facts.services['tomcat.service'] is defined
          - ansible_facts.services['tomcat.service']['state'] is defined
          - ansible_facts.services['tomcat.service']['state'] == 'running'

    - name: "Sleep for {{ tomcat_sleep }} seconds to let Tomcat start"
      ansible.builtin.wait_for:
        timeout: "{{ tomcat_sleep }}"

    - name: "Check that the server and its webapp are running"
      ansible.builtin.uri:
        url: "http://localhost:8080{{ item.path }}"
        status_code: "{{ item.status }}"
        return_content: no
      loop:
        - { path: '/', status: 404 }
        - { path: '/info', status: 200 }
```

## Conclusion

Keeping the content of the previous article in mind, this demonstration hopefully shows how much the Ansible collection for JWS eases the automation around the Java server. Thanks to it, using Ansible, we fully automated its installation. It has been almost as simple and natural to write this playbook as it would have been for an instance of Nginx.

With all this content available, we truly have made JWS a first-class citizen in the Ansible ecosystem.
rpelisse
1,892,440
Generative AI, from your local machine to Azure with LangChain.js
In this article, we'll take you through the development journey, starting from an idea and progressing towards production. We'll explore how LangChain.js framework together with Azure AI building blocks allows you to quickly build complex AI applications at the various stages of development.
0
2024-06-18T14:58:00
https://dev.to/azure/generative-ai-from-your-local-machine-to-azure-with-langchainjs-5288
webdev, javascript, beginners, ai
---
title: 'Generative AI, from your local machine to Azure with LangChain.js'
published: true
description: 'In this article, we''ll take you through the development journey, starting from an idea and progressing towards production. We''ll explore how LangChain.js framework together with Azure AI building blocks allows you to quickly build complex AI applications at the various stages of development.'
tags: 'webdev, javascript, beginners, ai'
cover_image: 'https://raw.githubusercontent.com/sinedied/articles/main/articles/azure/build-demo/assets/banner.jpg'
id: 1892440
date: '2024-06-18T14:58:00Z'
---

The generative AI landscape moves at a fast pace, and it can be challenging to keep up with the latest developments, even for seasoned developers. There are in particular two questions that often come up when starting a new AI project:

- *How can I quickly validate an AI app idea, without investing too much time and resources?*
- *If I have a working prototype, how fast can I scale it to production?*

You don't want to be outpaced by competitors or newer technologies, and you want to be able to quickly iterate on your ideas or pivot to new ones. This is where LangChain.js comes in. It's a framework that allows you to build AI applications with very little coupling to the underlying AI technologies and tools. It abstracts away some of the complexity of AI development, allowing you to focus on the business logic of your application.

In this article, we'll take you through the development journey, starting from an idea and progressing towards production. We'll explore how the LangChain.js framework, together with Azure AI building blocks, allows you to quickly build complex AI applications at the various stages of development.

> **Note:** If you prefer to watch a video version of this article, you can find it [on YouTube here](https://www.youtube.com/watch?v=L4T4_Z1kyao).
## TL;DR key takeaways

- AI is not reserved for Python developers: JavaScript developers also have everything they need to build AI applications.
- LangChain.js provides useful abstraction over AI models and APIs, allowing you to switch between them easily. This is particularly useful when you're experimenting with different models or when you want to scale your application, moving from a local SLM model to a cloud-based LLM.
- Ollama allows you to experiment with AI models and embeddings locally, at no cost (if you have a powerful enough machine).
- Azure provides many AI building blocks and services that you can use to scale your application to production.

Here's the [source code on GitHub](https://github.com/Azure-Samples/serverless-chat-langchainjs) of the project we use as an example in this article. If you like the project, don't forget to give it a star ⭐️!

## Working locally with Ollama

[Ollama](https://ollama.com/) is a command-line tool that allows you to run AI models locally on your machine, making it great for prototyping. Running 7B/8B models on your machine requires at least 8GB of RAM, but works best with 16GB or more. You can install Ollama on Windows, macOS, and Linux from the official website: https://ollama.com/download.

Once you have Ollama installed, let's first download some models. You can find a list of available models on the [Ollama website](https://ollama.com/models). For this example, we'll use the [Phi-3 Mini](https://ollama.com/library/phi3:mini) model. Open a terminal and run the following command:

```bash
ollama pull phi3
```

> **Note:** This will download a few gigabytes of data, so make sure you have enough space on your machine and a good internet connection.

Once the model is downloaded, you can start interacting with the Ollama server. For example, you can use the `ollama run` command to generate text based on a prompt:

```bash
ollama run phi3 "What is artificial intelligence? Explain it to a 5 years old child."
```

You can also have a minimal ChatGPT-like experience right from your terminal by just running:

```bash
ollama run phi3
```

![Ollama run chat example](https://raw.githubusercontent.com/sinedied/articles/main/articles/azure/build-demo/assets/ollama-run.png)

You can then chat with the model interactively. Once you're done, you can stop the server by pressing `Ctrl+D`.

Ollama also provides a REST API that you can use to interact with the model. The API provides [many options](https://github.com/ollama/ollama/blob/main/docs/api.md), like streaming, JSON mode, and more. Here's an example of how you can use the API:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "phi3",
  "prompt": "What is artificial intelligence? Explain it to a 5 years old child.",
  "stream": false
}'
```

After running this command, you should see a JSON response from the model.

Ollama even provides an OpenAI-compatible API, so you can use it as a drop-in replacement for OpenAI models in your applications. And as Ollama runs entirely on your machine, you don't even need a network connection to use it.

While Ollama is great for experimentation and prototyping, keep in mind that smaller models are not as powerful as the larger models available in the cloud. While it might be enough to validate your idea, you'll probably want to switch to cloud-based models in production to get better results.

## Prototyping with LangChain.js

Now that we know how to run AI models locally, let's see how we can use LangChain.js to quickly prototype an AI application. [LangChain.js](https://js.langchain.com/) is a JavaScript framework that provides a high-level API to interact with AI models and APIs, with many built-in tools to make complex AI applications easier to build.

Let's start with a simple example project from scratch.
Open a terminal and run the following commands:

```bash
# Creates a new folder and initializes a new Node.js project
mkdir langchain-demo
cd langchain-demo
npm init es6 -y
npm i langchain @langchain/core @langchain/community pdf-parse faiss-node
touch index.js
```

Now open the `index.js` file in your favorite code editor and add the following code:

```javascript
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import { ChatOllama } from "@langchain/community/chat_models/ollama";

const model = new ChatOllama({ model: "phi3" });
const response = await model.invoke([
  new SystemMessage("You're a helpful assistant"),
  new HumanMessage("Say hello"),
]);
console.log(response.content);
```

Run the code with `node index.js`. You should see the response from the model in the console. Congrats, you've just built the hello world of AI chatbots!

### A more complex example

What if I want to use [RAG (Retrieval-Augmented Generation)](https://aka.ms/ws?src=gh%3Aazure-samples%2Fazure-openai-rag-workshop%2Fbase%2Fdocs%2F&step=1#what-is-retrievial-augmented-generation) to ground the answers using documents? Let's update our `index.js` file with the following code:

```javascript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";

// 1. Initialize the models
const model = new ChatOllama({ model: "phi3" });
const embeddings = new OllamaEmbeddings({ model: "all-minilm:l6-v2" });

// 2. Load PDF document and split it into smaller chunks
const loader = new PDFLoader("terms-of-service.pdf", { splitPages: false });
const pdfDocument = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 100 });
const documents = await splitter.splitDocuments(pdfDocument);

// 3. Put the documents into a vector store and convert them to vectors
const store = await FaissStore.fromDocuments(documents, embeddings, {});

// 4. Create the RAG chain that retrieves and combines the prompt with the documents
const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "You're a helpful assistant"],
    ["human", "Answer the question: {input}\nusing the following documents:\n\n{context}"],
  ]),
});
const chain = await createRetrievalChain({
  retriever: store.asRetriever(),
  combineDocsChain,
});

// 5. Generate the result
const response = await chain.invoke({ input: "What's our mission?" });
console.log(response);
```

This code will load a PDF document, split it into smaller chunks, convert them to vectors, and then use them in a multi-step workflow (chain) to perform a vector search and generate a response using the best results. Pheeew! This one is a more complex example, but it shows how LangChain.js can help you build more advanced AI scenarios in a few lines of code.

Before running this code, you first need to download [this PDF document](https://raw.githubusercontent.com/Azure-Samples/serverless-chat-langchainjs/main/data/terms-of-service.pdf) and put it in the `langchain-demo` folder.

We also need to download the embeddings model. You can do this by running the following command:

```bash
ollama pull all-minilm:l6-v2
```

This one is very small (~50MB), and helps convert text to vectors.
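To build an intuition for what the vector store does with those vectors, here's a toy sketch of cosine similarity, the kind of metric commonly used to rank chunks against a query. This is our own illustration, not part of the sample project: real embeddings have hundreds of dimensions, and FaissStore uses a far more sophisticated index than this brute-force comparison.

```javascript
// Toy illustration of similarity-based retrieval. The 3-dimensional
// "embeddings" below are made up for readability; real embedding models
// produce vectors with hundreds of dimensions.
function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Pretend these are the embeddings of two document chunks and a query
const chunkA = [0.9, 0.1, 0.0];
const chunkB = [0.1, 0.8, 0.3];
const query = [0.85, 0.15, 0.05];

// The chunk with the highest similarity score is retrieved first
const scores = [chunkA, chunkB].map((c) => cosineSimilarity(query, c));
console.log(scores[0] > scores[1]); // chunkA is closer to the query
```

The retriever in the chain above does essentially this ranking over all chunks, then passes the best matches to the prompt as `{context}`.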
Now you can run your code with:

```bash
node index.js
```

![LangChain.js RAG example results](https://raw.githubusercontent.com/sinedied/articles/main/articles/azure/build-demo/assets/rag-example.png)

The resulting answer comes directly from the PDF document: you can open it and look at the original mission statement at the beginning of the document to see how the model used it in its response.

Using the same principles as this example, we've prototyped a chatbot for the *Contoso Real Estate company*: we've built an experience that allows customers to ask support questions about the usage of its products. You can find the full source code of the project on [GitHub](https://github.com/Azure-Samples/serverless-chat-langchainjs). The final results, with an added chat UI, look like this:

![Contoso Real Estate chatbot](https://raw.githubusercontent.com/sinedied/articles/main/articles/azure/build-demo/assets/demo.gif)

Now that we have a working prototype, let's see how we can deploy it to production using Azure.

## Migrating to Azure

Azure provides many AI services that you can use for your applications. In our case, we'll use [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service) for the models and [Azure AI Search](https://azure.microsoft.com/products/ai-services/ai-search) as our vector database. Thanks to LangChain.js abstractions, migrating your prototype to Azure for production is relatively straightforward, as you can swap the models and vector database without changing anything else in your code.
If you look at [the chat API code](https://github.com/Azure-Samples/serverless-chat-langchainjs/blob/main/packages/api/src/functions/chat-post.ts#L54-L75), this is what we use to run the code locally with Ollama:

```typescript
embeddings = new OllamaEmbeddings({ model: ollamaEmbeddingsModel });
model = new ChatOllama({
  temperature: 0.7,
  model: ollamaChatModel,
});
store = await FaissStore.load(faissStoreFolder, embeddings);
```

Switching to Azure OpenAI and Azure AI Search is as simple as changing the model and store initialization:

```typescript
const credentials = getCredentials();
embeddings = new AzureOpenAIEmbeddings({ credentials });
model = new AzureChatOpenAI({
  temperature: 0.7,
  credentials,
});
store = new AzureAISearchVectorStore(embeddings, { credentials });
```

We use passwordless authentication for increased security, so we don't need to store any secrets in our code. The implementation of `getCredentials` was omitted for simplicity, but you can [find it here](https://github.com/Azure-Samples/serverless-chat-langchainjs/blob/main/packages/api/src/security.ts).

And that's it for the migration, at least for the code part: to make it work, you still have to create the necessary resources in Azure. We'll cover this in the next section.

### Azure Developer CLI

As developers, we know infrastructure is no fun, but it's a necessary part of deploying applications to the cloud. Azure provides a tool called [Azure Developer CLI](https://aka.ms/azure-dev/install) that makes it easier to create and manage resources in Azure. It allows you to use [Infrastructure as Code](https://learn.microsoft.com/azure/cloud-adoption-framework/ready/considerations/infrastructure-as-code) to define your resources in a declarative way, and then deploy them with a single command.
We won't cover in detail how to build the infrastructure templates; if you're curious, you can have a look at the [`infra` folder](https://github.com/Azure-Samples/serverless-chat-langchainjs/tree/main/infra) and the [`azure.yaml` file](https://github.com/Azure-Samples/serverless-chat-langchainjs/blob/main/azure.yaml) used to configure and deploy the resources. The good news is that we have many samples that you can use as a starting point for your own project infrastructure.

Once the configuration is done, you can create the resources and deploy the application with a few commands:

```bash
# Authenticate to Azure
azd auth login

# Provision and deploy the resources
azd up
```

You can try it with our [Serverless AI Chat project](https://github.com/Azure-Samples/serverless-chat-langchainjs) we used as an example in this article.

## Azure building blocks

We skipped a bit over the implementation details of our example project, but to build it quickly we used some of the existing Azure AI building blocks. Here's a list of some components we used, which you can reuse in your own projects:

- **OpenAI Node.js SDK**: we announced at Build a new integration of Azure OpenAI with the official [OpenAI Node.js SDK](https://github.com/openai/openai-node?tab=readme-ov-file#microsoft-azure-openai), meaning it's now easier than ever to switch between OpenAI and Azure OpenAI models. The [LangChain.js Azure OpenAI integration](https://js.langchain.com/docs/integrations/chat/azure/) has also been updated to use this new SDK.
- **Azure integrations in LangChain.js**: we've contributed support for many Azure services in LangChain.js, to make it easier to build your AI applications on top of Azure. This includes Azure OpenAI, Azure AI Search, Azure CosmosDB, and more. You can find more information in the [LangChain.js documentation](https://js.langchain.com/docs/integrations/platforms/microsoft).
- **AI Chat protocol**: we've defined an [API schema](https://github.com/microsoft/ai-chat-protocol/tree/main/spec#readme) for AI chat applications, to make the frontend and backend components communicate. This schema is implemented in many of our AI samples, making them interoperable and easy to extend. We also provide an [NPM package](https://www.npmjs.com/package/@microsoft/ai-chat-protocol) that includes the TypeScript types for the data objects and a client library to interact with the API.
- **AI Chat UI components**: if you want to focus on the backend part of your AI chat application, we provide [a set of web components](https://github.com/Azure-Samples/azure-openai-chat-frontend) that implement the AI Chat protocol. You can use them to quickly build a chat UI for your application. And since most of our AI samples also implement the protocol, you can also reuse any of their frontend components if you prefer, like the [one we used in our example project](https://github.com/Azure-Samples/serverless-chat-langchainjs/tree/main/packages/webapp).

## Conclusion

We've covered a lot of ground in this article, starting from running AI models locally with Ollama, to prototyping a chatbot with LangChain.js, and finally deploying it to production on Azure.

In a fast-paced environment like AI development, using JavaScript with existing high-level frameworks like LangChain.js and building on top of off-the-shelf building blocks can help you iterate quickly on your ideas and eventually bring them to production with the right tools and services.
### Reference links

Here are some useful links to get you started with the tools and services we've mentioned in this article:

- [LangChain.js](https://js.langchain.com)
- [Ollama](https://ollama.com/)
- [Azure Developer CLI](https://aka.ms/azure-dev/install)
- [Azure OpenAI Node.js SDK](https://github.com/openai/openai-node?tab=readme-ov-file#microsoft-azure-openai)
- [AI Chat protocol](https://github.com/microsoft/ai-chat-protocol/tree/main/spec#readme)
- [AI Chat UI components](https://github.com/Azure-Samples/azure-openai-chat-frontend)
- [Serverless AI Chat sample](https://github.com/Azure-Samples/serverless-chat-langchainjs/)
sinedied
1,892,561
Understanding Logistic Regression
Previously, we discussed linear regression, a method used to predict continuous outcomes. Today let's...
0
2024-06-18T14:23:30
https://dev.to/harsimranjit_singh_0133dc/understanding-logistic-regression-125k
Previously, we discussed linear regression, a method used to predict continuous outcomes. Today, let's explore logistic regression, which is essential for binary classification problems in data science.

## What is Logistic Regression?

Logistic regression is used to predict a binary outcome (such as Yes/No or True/False) based on one or more input variables. Unlike linear regression, which deals with continuous data, logistic regression estimates the probability that a given input belongs to a certain class.

## Applications of Logistic Regression

- **Spam Detection**: Email services apply logistic regression to classify emails as spam or not, based on input variables derived from each message.
- **Medical Predictions**: Logistic regression can be used to determine the probability of medical conditions, such as predicting heart attacks based on variables like weight and exercise habits.
- **Educational Outcomes**: Application aggregators use logistic regression to predict the probability of a student being accepted to a particular university or degree course by analyzing scores.

## Logistic Regression Equation and Assumptions

### Logistic Regression Equation

Logistic regression uses the logistic function (sigmoid function) to map predictions to probabilities. The sigmoid function is defined as:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vwba5hhhhvsyxypmimf.png)

## Graph of Sigmoid function

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/um4m4lj4fnu2o2xb1efr.png)

- If the output of the sigmoid function is greater than 0.5, the model predicts the instance as the positive class (1).
- If the output is less than 0.5, the model predicts the instance as the negative class (0).

## Interpretation of the sigmoid function

The sigmoid function's output can be interpreted as the probability of the instance belonging to the positive class.
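To make the sigmoid mapping and the 0.5 decision threshold concrete, here's a minimal sketch. The article itself contains no code, so both the language (JavaScript, for illustration) and the function names are our own choices.

```javascript
// Sigmoid squashes any real number z into the (0, 1) range,
// so its output can be read as a probability.
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z));
}

// Classify as positive (1) when the probability reaches the
// threshold (0.5 by default), negative (0) otherwise.
function classify(z, threshold = 0.5) {
  return sigmoid(z) >= threshold ? 1 : 0;
}

console.log(sigmoid(0));   // 0.5: exactly on the decision boundary
console.log(classify(2));  // 1: sigmoid(2) ≈ 0.88, above the threshold
console.log(classify(-2)); // 0: sigmoid(-2) ≈ 0.12, below the threshold
```

Here `z` stands for the linear combination of the input variables and their learned coefficients.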
For example:

- If the output is 0.7, there is a 70% chance that the instance belongs to the positive class.
- If the output is 0.2, there is a 20% chance that the instance belongs to the positive class.

Assumptions:

1. **Binary logistic regression requires the dependent variable to be binary.** That means the outcome variable must have two possible outcomes, such as 'yes' or 'no'.
2. **Independence of observations.** The observations should be independent of each other; in other words, the outcome of one instance should not affect the outcome of another.
3. **Linearity of independent variables and log odds.** Although logistic regression does not require the dependent and independent variables to be linearly related, it does require that the independent variables are linearly related to the log odds.
4. **Absence of multicollinearity.** The independent variables should not be too highly correlated with each other.
5. **Large sample size.** Logistic regression requires a large sample size; as a rule of thumb, you need at least 10 cases of the least frequent outcome for each independent variable in the model.

## Types of Logistic Regression

1. **Binary Logistic Regression:** used when the dependent variable has two outcomes, such as predicting whether a loan will be approved (yes/no).
2. **Multinomial Logistic Regression:** used when the dependent variable has more than two discrete outcomes, such as predicting the type of transport a person will choose (car, bike, bus).
3. **Ordinal Logistic Regression:** used when the dependent variable is ordinal, such as survey responses (agree, disagree, unsure).

## Conclusion

Logistic regression is a powerful and flexible tool for binary outcome modeling. Its simplicity, interpretability, and effectiveness with linearly separable datasets make it a preferred choice for many binary classification tasks in machine learning and predictive analytics. Understanding its assumptions and best practices ensures the development of robust and reliable models.
harsimranjit_singh_0133dc
1,892,581
List of FAANG front-end coding questions.
Star Rating Design Pop Over Design Accordion (Amazon onsite) Design Carousel Design grid using...
0
2024-06-18T14:54:22
https://dev.to/anjandutta/list-of-faang-front-end-coding-questions-448d
interview, frontend, coding, beginners
- Star Rating
- Design Pop Over
- Design Accordion `(Amazon onsite)`
- Design Carousel
- Design grid using HTML/CSS and Javascript with search and sort, event bubbling (Amazon onsite)
- Design NavBar
- Infinite Scroll
- Typeahead / autocomplete using trie
- Implement Debounce function
- Implement tic tac toe
- Make snake ladder board
- Make calendar of any Month like Date Picker
- Implement throttle function
- Implement custom Higher Order Functions like Map, Reduce, Filter, Sort `(Amazon Phone Interview)`
- Create an analog clock
- Make a todo list
- Create functionality to change all text on page to different translations
- Build a calculator (add/subtract/multiply/divide/log/pow)
- Search and display Giphy images (through Giphy api) in responsive format
- Build Connect Four
- Implement Nested Checkboxes (parent is checked, children are checked and vice versa. Must use actual checkbox input)
- Implement a poll widget
- Implement Event Emitter
- Implement promise.all function
- Flatten nested Javascript Array without using Array.prototype.flat()
- Implement Sort() function
- LRU Cache `(Netflix onsite)`
- Implement a hashtable
- Implement Material UI Chips with auto-suggest `(Amazon SDE I)`
  - When sending an e-mail, auto-suggest people and convert them into a chip with their avatar on the right
- Implement a Carousel
  - Show default if the image doesn't exist. Prefetch the images.
- Inversion of Object
  - Given an object make the keys as values and values as keys.
  - Recursively destructure if the type of value is an object
- Write Throttle function without the use of `setTimeout()` and `clearTimeout()`
- Create a "Select" component using HTML + CSS + JS
- Implement reusable table component
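As a taste of what the "implement custom higher-order functions" question above expects, here's one possible minimal sketch of plain-function Map/Filter/Reduce equivalents. The `myMap`/`myFilter`/`myReduce` names are our own; interviewers often ask for these as `Array.prototype` methods instead.

```javascript
// Minimal re-implementations of map/filter/reduce as plain functions.
function myMap(arr, fn) {
  const result = [];
  for (let i = 0; i < arr.length; i++) result.push(fn(arr[i], i, arr));
  return result;
}

function myFilter(arr, fn) {
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    if (fn(arr[i], i, arr)) result.push(arr[i]);
  }
  return result;
}

function myReduce(arr, fn, initial) {
  let acc = initial;
  let start = 0;
  if (acc === undefined) {
    // No initial value: seed with the first element,
    // mirroring Array.prototype.reduce's behavior
    acc = arr[0];
    start = 1;
  }
  for (let i = start; i < arr.length; i++) acc = fn(acc, arr[i], i, arr);
  return acc;
}

console.log(myMap([1, 2, 3], (x) => x * 2));          // [2, 4, 6]
console.log(myFilter([1, 2, 3, 4], (x) => x % 2));    // [1, 3]
console.log(myReduce([1, 2, 3, 4], (a, b) => a + b)); // 10
```

A follow-up interviewers like to ask is edge-case handling: empty arrays, sparse arrays, and `reduce` without an initial value.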
## I will publish code/solutions to all of the above questions gradually. Follow me here for my future article updates.
anjandutta
1,892,407
Decoding Brand Mark and Logo Recognition: Modern Applications and Solutions
Introduction Brand Mark and Logo Recognition solutions are cutting-edge technological...
0
2024-06-18T14:47:15
https://dev.to/api4ai/decoding-brand-mark-and-logo-recognition-modern-applications-and-solutions-1gkp
brand, logo, recognitio, detection
## Introduction

Brand Mark and Logo Recognition solutions are cutting-edge technological tools designed to identify and analyze the presence of brands and logos across various media formats. Leveraging sophisticated algorithms powered by artificial intelligence (AI) and machine learning, these solutions detect brand logos, products, and other brand-related elements in images, videos, audio, and textual content. By providing businesses with detailed insights into where and how their brand appears, these tools help monitor brand visibility, protect brand integrity, and optimize marketing strategies.

This article aims to provide a comprehensive comparison of leading brand recognition solutions available in the market. By evaluating these solutions, businesses can make informed decisions about which technology best suits their needs. We will examine a range of popular brand recognition tools, assessing them based on key criteria such as accuracy, features, ease of use, cost, scalability, and customer support. This analysis will highlight the strengths and weaknesses of each solution, offering valuable insights to help businesses enhance their brand recognition strategies effectively.

## Why Do Businesses Need Brand Recognition Solutions?

In today's highly competitive market, the concept of "brand" is integral to the success of any business. The ability to identify and differentiate a brand from its competitors not only helps build customer loyalty but also drives sales and enhances market presence. With the advent of advanced technologies, businesses now have access to sophisticated brand recognition solutions that can accurately detect and analyze brand presence across various media, including images, videos, and social media platforms. These solutions leverage artificial intelligence and machine learning to provide precise and actionable insights, making it easier for brands to monitor their visibility and protect their reputation.
Brand recognition serves as a powerful indicator of a company's identity, reputation, and connection with consumers. Here's why it holds such importance:

- **Building Trust and Loyalty:** Recognizable brands inspire trust and loyalty among consumers, leading to repeat purchases and positive word-of-mouth recommendations.
- **Market Differentiation:** In crowded markets, strong brand recognition sets businesses apart from competitors, helping them attract customers and command premium prices for their products or services.
- **Consumer Engagement:** Brands with high recognition enjoy greater visibility and engagement, as consumers are more likely to interact with familiar brands and seek out their offerings.
- **Brand Equity:** Establishing strong brand recognition enhances brand equity, enabling businesses to leverage their reputation and goodwill to expand into new markets or product lines.

As a result, brand mark and logo recognition tools have become indispensable for businesses seeking to maintain a competitive edge. These advanced technologies offer numerous practical applications that can significantly enhance a company's marketing and operational strategies. Here are some key practical applications of brand recognition tools:

- **Marketing and Advertising Optimization:** Brand recognition tools enable businesses to track the effectiveness of their marketing campaigns across various media platforms. By analyzing where and how brand logos and related elements appear, companies can measure the reach and impact of their advertisements. This data-driven approach allows marketers to fine-tune their strategies, ensuring optimal allocation of resources and maximizing return on investment (ROI).
- **Brand Protection and Integrity:** Counterfeiting and unauthorized use of brand logos can severely damage a company's reputation.
Brand recognition tools help identify and mitigate these risks by continuously scanning the internet, social media, and e-commerce platforms for unauthorized use of brand assets. This proactive approach allows businesses to take swift action against infringement, protecting their brand's integrity and maintaining consumer trust.
- **Competitive Analysis:** Understanding how competitors are positioning their brands is crucial for staying ahead in the market. Brand recognition tools provide insights into competitors' marketing strategies by identifying their brand presence across various channels. This information helps businesses analyze competitive positioning, identify market trends, and develop strategies to differentiate themselves effectively.
- **Customer Experience Enhancement:** Brand recognition tools can be used to personalize customer experiences by identifying brand interactions in real-time. For instance, in retail environments, these tools can recognize customer preferences and provide tailored recommendations based on past interactions with the brand. This level of personalization enhances customer satisfaction and fosters brand loyalty.
- **Social Media Monitoring:** Social media platforms are vital for brand engagement and reputation management. Brand recognition tools can monitor brand mentions, logos, and related content across social media channels. This real-time monitoring helps businesses gauge public sentiment, engage with customers effectively, and respond promptly to any potential issues, thereby enhancing their online presence and reputation.
- **Market Research and Insights:** Accurate market research is essential for informed decision-making. Brand recognition tools can aggregate data from various sources to provide comprehensive insights into consumer behavior, market trends, and brand perception. These insights enable businesses to make strategic decisions, identify new opportunities, and innovate their product offerings.
- **Event and Sponsorship Analysis:** For companies investing in events and sponsorships, brand recognition tools offer a way to measure the visibility and impact of their investments. By analyzing the presence of brand logos in event coverage, companies can assess the effectiveness of their sponsorships and make data-driven decisions about future investments.

In conclusion, brand recognition tools offer a wide range of practical applications that can significantly enhance a business's marketing, operational, and strategic initiatives. By leveraging these advanced technologies, companies can protect their brand, optimize their marketing efforts, and gain valuable insights into market dynamics, ultimately driving growth and success in a competitive landscape.

## Upgrading Existing Solutions

Even for businesses that have established brand recognition solutions in place, the task of keeping them up-to-date with new brands is a perpetual challenge. Here's why upgrading existing solutions is so complex:

- **Continuous Data Integration:** Upgrading existing solutions requires ongoing data acquisition and integration efforts to incorporate new brands and brand variations into the system. This process demands significant resources and infrastructure to ensure the accuracy and reliability of the solution.
- **Scalability and Performance:** As the number of brands in the market grows, existing solutions must scale to handle larger volumes of data while maintaining performance and accuracy. This often necessitates upgrades to hardware, software, and algorithms to meet the increasing demands on the system.
- **Adaptability to Market Changes:** Brands undergo frequent changes, such as logo redesigns, product launches, and shifts in market positioning. Existing solutions must be agile enough to adapt to these changes quickly and effectively to avoid inaccuracies and errors in brand recognition.
The complexity of managing brand recognition in today’s marketplace underscores the importance of robust and adaptable solutions. Whether seeking new solutions or upgrading existing ones, businesses must prioritize investments in technology, data infrastructure, and expertise to effectively navigate the ever-evolving brandscape. By addressing these challenges head-on, businesses can ensure that their brand recognition efforts remain accurate, relevant, and competitive in the dynamic world of commerce.

## Leading Brand Mark and Logo Recognition Solutions

In this section, we will explore some of the top brand recognition solutions known for their exceptional capabilities and performance. These solutions have been selected based on their popularity, technological innovations, and proven success in the market.

![Google Cloud Vision](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nu6gaii1de26mihagznb.png)

[**Google Cloud Vision API**](https://cloud.google.com/vision/docs/detecting-logos)

**History:** Introduced by Google in 2016, the Google Cloud Vision API initially offered general image analysis capabilities, including object recognition and OCR (Optical Character Recognition). Over time, Google enhanced the API with advanced features like logo detection, enabling businesses to identify and analyze brand logos in images. Continuous updates and enhancements have improved its accuracy and expanded its functionality.

**Userbase:** The Google Cloud Vision API is widely utilized across various industries. E-commerce platforms use it to enhance product search and monitor brand presence. Advertising and marketing agencies rely on it to analyze visual content for brand mentions, while media and entertainment companies use it for cataloging visual content. Retailers automate product categorization with the API, and financial services employ it for document analysis and fraud detection.
Its versatility and robust capabilities make it a popular choice in diverse sectors.

![Azure AI Vision](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yynpy1io74ozfbz56glp.png)

[**Microsoft Azure AI Vision**](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-brand-detection)

**History:** Microsoft launched Azure AI Vision, formerly known as the Computer Vision API, as part of its Azure Cognitive Services suite. Initially focused on general image analysis tasks, it soon incorporated advanced features like logo detection and brand recognition. Continuous development and integration with other Azure services have significantly enhanced its capabilities.

**Userbase:** Azure AI Vision is widely used across various industries. Retail companies utilize it for product search and brand monitoring, while marketing firms leverage it for campaign analysis. Financial institutions rely on it for document verification and fraud detection, and media companies use it for content management. Additionally, government agencies employ Azure AI Vision for security and surveillance applications.

![SmartClick](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjs547b0qal9i3qizymk.png)

**[SmartClick](https://smartclick.ai/api/logo-detection/)**

**History:** SmartClick was developed as a specialized solution for visual recognition, with a strong emphasis on logo detection and brand recognition. Its creation was driven by the need for accurate and real-time brand monitoring across various media platforms.

**Userbase:** SmartClick is widely used by advertising and marketing agencies to track brand presence in campaigns. E-commerce platforms utilize it to enhance product categorization and search functions. Media companies leverage SmartClick for monitoring brand appearances in content, while retail businesses use it for inventory management and customer engagement.
![Brand Recognition](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jpno4x79rtbfxmllp8dz.png)

**[API4AI Brand Recognition API](https://api4.ai/apis/brand-recognition)**

**History:** The API4AI Brand Recognition API was created to deliver robust brand and logo detection capabilities. Over time, it has evolved to improve accuracy and support newly established brands, while also incorporating real-time analysis features to meet the increasing demands of businesses for effective brand monitoring.

**Userbase:** This solution is extensively used by digital marketing agencies for brand monitoring and analysis. E-commerce companies utilize it to enhance product search capabilities, media organizations leverage it for content tagging and management, and financial services firms employ the API4AI Brand Recognition API to verify brand authenticity in documents.

![Visua](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xoie85p99dmf97gb541.png)

**[Visua](https://visua.com/technology/logo-detection-api)**

**History:** Formerly known as LogoGrab, Visua has been a significant player in logo and brand detection technology. It was developed to equip businesses with tools to monitor and protect their brand presence across various media channels.

**Userbase:** Visua is extensively utilized by brand protection agencies to monitor unauthorized use of logos. Advertising firms leverage it to measure brand visibility in campaigns, retailers employ it for product authentication and customer engagement, and e-commerce platforms use it to enhance search functionality.

![Hive](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/18hxuxqgo5zlnk9ao189.png)

[**Hive**](https://thehive.ai/apis/logo-detection)

**History:** Hive is renowned for its AI-powered visual recognition capabilities, particularly in logo detection and brand recognition. The platform has continuously improved its scalability and real-time processing features.
**Userbase:** Media and entertainment companies use Hive to monitor brand appearances in content. Marketing agencies leverage it for campaign analysis and brand tracking, while retail businesses utilize it for inventory management. Financial institutions employ Hive for document verification and fraud detection.

![Amazon Rekognition](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1mssmg184azi20zhi7do.png)

[**Amazon Rekognition**](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/what-is.html)

**History:** Launched as part of Amazon Web Services (AWS), Amazon Rekognition initially focused on general image and video analysis. It soon integrated logo detection and brand recognition capabilities, leveraging AWS's robust infrastructure for enhanced scalability and performance.

**Userbase:** E-commerce platforms use Amazon Rekognition for product search and brand monitoring. Marketing agencies leverage it to analyze visual content in campaigns. Media companies utilize it for cataloging and managing visual content, while retail businesses employ it to enhance customer experiences. Financial services rely on it for fraud detection and document analysis.

## Pros and Cons

In this section, we will assess the advantages and disadvantages of the leading brand mark and logo recognition solutions. By examining the pros and cons of each option, businesses can better understand the strengths and potential limitations of these technologies. This analysis will help organizations make informed decisions about which brand recognition solution best meets their specific needs and operational requirements.

**[Google Cloud Vision API](https://cloud.google.com/vision/docs/detecting-logos)**

**Pros:**

- **High Accuracy:** Provides precise detection of "known" brand logos and other visual elements.
- **Comprehensive Features:** Offers a wide range of capabilities, including OCR, facial detection, object detection, and explicit content detection.
- **Seamless Integration:** Easily integrates with other Google Cloud services, enhancing its utility for businesses within Google's ecosystem.

**Cons:**

- **Cost:** Can become expensive for large-scale or high-frequency usage.
- **Database Limitations:** Limited number of supported logos, with no option to add new ones upon request.

**[Microsoft Azure AI Vision](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-brand-detection)**

**Pros:**

- **Robust Integration:** Integrates seamlessly with the broader suite of Azure Cognitive Services and other Microsoft products.
- **Scalability:** Capable of handling large volumes of data and scaling according to business needs.
- **Comprehensive Support:** Backed by extensive documentation and customer support from Microsoft.

**Cons:**

- **Database Limitations:** The number of supported logos is restricted, with no option to add new ones upon request.
- **Learning Curve:** May have a steeper learning curve for users not familiar with the Azure ecosystem.

**[SmartClick](https://smartclick.ai/api/logo-detection/)**

**Pros:**

- **Specialized Focus:** Tailored specifically for logo and brand recognition, offering high accuracy.
- **Real-Time Analysis:** Provides real-time brand monitoring capabilities.
- **User-Friendly:** Known for its easy-to-use interface and straightforward integration process.

**Cons:**

- **Limited Features:** May lack some of the additional functionalities offered by more comprehensive solutions.
- **Scalability Issues:** Might not handle extremely large-scale data as effectively as some competitors.

**[API4AI Brand Recognition API](https://api4.ai/apis/brand-recognition)**

**Pros:**

- **Database-Free Operation:** Operates efficiently without needing a pre-populated logo database, providing an immediate, out-of-the-box solution.
- **Support for New Logos:** Typically does not require additional actions to support newly launched logos, a unique feature in the market.
- **Real-Time Capabilities:** Offers real-time analysis for prompt brand monitoring.

**Cons:**

- **Professional Instrument:** Integration requires only a few lines of code but does assume at least basic programming skills.
- **Internet Dependency:** Requires an internet connection for cloud-based services, limiting usability in offline or remote settings.

**[Visua](https://visua.com/technology/logo-detection-api)**

**Pros:**

- **High Precision:** Renowned for its accurate logo and brand detection.
- **Scalable:** Capable of handling large volumes of data, making it suitable for enterprises.
- **Dedicated Brand Protection:** Specifically designed for brand monitoring and protection.

**Cons:**

- **Cost:** Higher costs may be a barrier for smaller businesses.
- **Limited General Features:** Primarily focuses on brand detection, lacking broader image recognition capabilities.

**[Hive](https://thehive.ai/apis/logo-detection)**

**Pros:**

- **AI-Powered:** Utilizes advanced AI for high accuracy and performance.
- **Versatile:** Offers a range of visual recognition capabilities beyond just logo detection.
- **Scalability:** Efficiently handles large-scale data.

**Cons:**

- **Cost:** Can be expensive for extensive use.
- **Complex Setup:** Initial setup and integration can be complex and time-consuming.

**[Amazon Rekognition](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/what-is.html)**

**Pros:**

- **Comprehensive Features:** Offers a wide range of capabilities including logo detection, facial analysis, object recognition, and more.
- **Scalability:** Utilizes AWS infrastructure to efficiently scale with demand.
- **Integration:** Easily integrates with other AWS services, providing a cohesive ecosystem.

**Cons:**

- **Privacy and Security:** Concerns about data privacy when using a cloud service.
- **Database Limitations:** Limited number of supported logos, with no option to add new ones upon request.
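Whichever provider you choose, the integration work on your side is similar: send an image, receive a JSON list of detected logos with confidence scores, and filter that list before acting on it. Here is a small TypeScript sketch of that post-processing step. The response shape is modeled loosely on Google Cloud Vision's `logoAnnotations`; the brand names and the 0.7 confidence threshold are made up for the example:

```typescript
// Rough shape of a logo-detection response item
// (field names vary between providers).
interface LogoAnnotation {
  description: string; // detected brand name
  score: number;       // confidence in [0, 1]
}

// Keep only confident detections and de-duplicate brand names,
// retaining the highest score seen per brand.
function summarizeLogos(
  annotations: LogoAnnotation[],
  minScore = 0.7, // threshold is an assumption; tune per use case
): Map<string, number> {
  const best = new Map<string, number>();
  for (const { description, score } of annotations) {
    if (score < minScore) continue;
    const prev = best.get(description) ?? 0;
    if (score > prev) best.set(description, score);
  }
  return best;
}

// Example response fragment with a duplicate and a low-confidence hit
const detections: LogoAnnotation[] = [
  { description: "Acme", score: 0.92 },
  { description: "Acme", score: 0.81 },
  { description: "Globex", score: 0.4 },
];
console.log(summarizeLogos(detections)); // only "Acme" remains, with score 0.92
```

A filtering step like this matters in practice: raw detection output typically includes near-duplicate and low-confidence hits that would otherwise inflate brand-mention counts.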
## Use Cases and Recommendations

In this section, we will examine various use cases for leading brand mark and logo recognition solutions and offer tailored recommendations based on specific business needs. By understanding the unique strengths and capabilities of each solution, businesses can select the best fit for their operational requirements. Whether you require high accuracy, professional-grade tools, solutions for occasional use, or budget-friendly options, this guide will help you navigate the diverse landscape of brand recognition technologies effectively.

**[Google Cloud Vision API](https://cloud.google.com/vision/docs/detecting-logos)**

**Use Cases:**

- **E-commerce:** Enhances product search and recommendation systems by recognizing products and brand logos in images.
- **Marketing and Advertising:** Analyzes social media and online content for brand mentions and visibility.
- **Media and Entertainment:** Catalogs and manages visual content by automatically tagging brand logos and other objects.

**Recommendation:** The Google Cloud Vision API is highly recommended for businesses that require high accuracy and a comprehensive set of features. It is particularly suitable for organizations already using Google Cloud services due to its seamless integration. However, businesses should consider the potential costs, especially for large-scale usage.

**[Microsoft Azure AI Vision](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-brand-detection)**

**Use Cases:**

- **Retail:** Improves inventory management and customer experience through visual search and product recognition.
- **Financial Services:** Enhances document verification processes by detecting logos and textual information.
- **Security and Surveillance:** Monitors public spaces for specific brand logos or other visual markers.
**Recommendation:** Azure AI Vision is ideal for businesses that are part of the Microsoft ecosystem and need robust integration with other Azure services. Its scalability makes it suitable for large enterprises. However, the complex pricing structure and learning curve should be taken into consideration.

**[SmartClick](https://smartclick.ai/api/logo-detection/)**

**Use Cases:**

- **Advertising:** Real-time monitoring of brand presence in online and offline advertisements.
- **E-commerce:** Enhances product categorization and search capabilities by detecting brand logos.
- **Content Moderation:** Identifies and removes unauthorized use of brand logos in user-generated content.

**Recommendation:** SmartClick is recommended for businesses seeking a specialized and user-friendly solution for logo and brand recognition. It is particularly useful for marketing agencies and e-commerce platforms. However, companies with large-scale data needs should evaluate its scalability capabilities.

[**API4AI Brand Recognition API**](https://api4.ai/apis/brand-recognition)

**Use Cases:**

- **Marketing:** Tracks brand mentions across social media and online platforms.
- **Retail:** Automates product categorization and enhances search functionalities.
- **Digital Asset Management:** Organizes and tags visual content for easier retrieval and management.

**Recommendation:** API4AI Brand Recognition is ideal for businesses seeking a customizable and accurate brand recognition solution. It is particularly effective for digital marketing and e-commerce applications and is essential for recognizing rare or new brands.

**[Visua](https://visua.com/technology/logo-detection-api)**

**Use Cases:**

- **Brand Protection:** Monitors and identifies unauthorized use of brand logos.
- **Advertising Analysis:** Evaluates the visibility and impact of brand logos in marketing campaigns.
- **Retail:** Authenticates products by detecting and verifying brand logos.
**Recommendation:** Visua is highly recommended for businesses focused on brand protection and monitoring. Its high precision and scalability make it suitable for enterprises. However, the higher cost might be a barrier for smaller businesses.

**[Hive](https://thehive.ai/apis/logo-detection)**

**Use Cases:**

- **Media Analysis:** Monitors TV shows, movies, and online videos for brand appearances.
- **Advertising:** Tracks and analyzes the effectiveness of brand placements in advertisements.
- **Retail:** Enhances visual search capabilities for a better customer experience.

**Recommendation:** Hive is ideal for media and entertainment companies that require versatile and scalable visual recognition capabilities. Its AI-powered technology provides high accuracy. However, businesses should consider the cost and potential complexity of setup.

[**Amazon Rekognition**](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/what-is.html)

**Use Cases:**

- **E-commerce:** Enhances product search and recommendation systems by detecting brand logos and other visual features.
- **Marketing:** Analyzes social media and online content for brand mentions and visibility.
- **Security:** Monitors public spaces for specific brand logos or other visual markers.

**Recommendation:** Amazon Rekognition is recommended for businesses that require comprehensive features and robust scalability. It integrates well with other AWS services, making it ideal for organizations within the AWS ecosystem. However, privacy concerns and potential costs should be considered.

## General Recommendations

- **High Accuracy:** [Google Cloud Vision API](https://cloud.google.com/vision/docs/detecting-logos) and [API4AI Brand Recognition API](https://api4.ai/apis/brand-recognition) are ideal for businesses that require precise and reliable brand recognition.
- **Professional Use:** [Microsoft Azure AI Vision](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-brand-detection) and [Google Cloud Vision API](https://cloud.google.com/vision/docs/detecting-logos) offer comprehensive features and integration capabilities suitable for professional, enterprise-level applications.
- **Occasional Use:** [SmartClick](https://smartclick.ai/api/logo-detection/) and [API4AI Brand Recognition API](https://api4.ai/apis/brand-recognition) provide user-friendly interfaces and affordable pricing, making them ideal for intermittent use.
- **High-Volume Use:** [Amazon Rekognition](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/what-is.html), [Google Cloud Vision API](https://cloud.google.com/vision/docs/detecting-logos), [API4AI Brand Recognition API](https://api4.ai/apis/brand-recognition), and [Microsoft Azure AI Vision](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-brand-detection) can efficiently handle large data volumes, making them perfect for enterprises with substantial brand recognition needs.
- **Budget Constraints:** [API4AI Brand Recognition API](https://api4.ai/apis/brand-recognition) and [SmartClick](https://smartclick.ai/api/logo-detection/) offer cost-effective solutions without compromising essential features.
- **Database-Free Setup:** [API4AI Brand Recognition API](https://api4.ai/apis/brand-recognition) is easy to set up without requiring extensive database integration. Its support for newly launched brands without specific training is a unique feature in the market.
- **Real-Time Monitoring:** [Hive](https://thehive.ai/apis/logo-detection) and [SmartClick](https://smartclick.ai/api/logo-detection/) provide real-time processing capabilities, suitable for immediate brand recognition and monitoring.
## Conclusion

Brand recognition solutions have become increasingly popular due to their ability to provide businesses with crucial insights into their brand's visibility and perception across various media. These technologies enable accurate and efficient detection of brand logos, products, and mentions, which is essential for optimizing marketing strategies, ensuring brand integrity, and enhancing customer engagement. With rapid advancements in AI and machine learning, these solutions have become more sophisticated, offering real-time analysis, high accuracy, and seamless integration with other business tools. This demand has driven businesses to adopt brand recognition solutions as a cornerstone of their brand management strategies.

By leveraging brand recognition solutions, businesses can gain a competitive edge through effective monitoring and analysis of their brand presence. Utilizing these tools helps companies protect their brand from unauthorized use, track the effectiveness of marketing campaigns, and gain valuable insights into customer behavior and preferences. This data-driven approach enables businesses to make informed decisions, optimize their marketing efforts, and ultimately drive growth. Additionally, the ability to provide real-time feedback and swiftly adapt to market changes ensures that businesses remain agile and responsive to emerging trends.

To stay ahead in the competitive market, it is essential for businesses to explore and adopt the right brand recognition solutions.
We encourage you to investigate the solutions discussed in this article—such as [Google Cloud Vision API](https://cloud.google.com/vision/docs/detecting-logos), [Microsoft Azure AI Vision](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-brand-detection), [SmartClick](https://smartclick.ai/api/logo-detection/), [API4AI Brand Recognition API](https://api4.ai/apis/brand-recognition), [Visua](https://visua.com/technology/logo-detection-api), [Hive](https://thehive.ai/apis/logo-detection), and [Amazon Rekognition](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/what-is.html). Assess these options based on your specific needs, whether it's high accuracy, scalability, cost-effectiveness, or ease of integration.

Take the initiative to explore these powerful tools and see how they can enhance your brand management efforts. Conduct trials, gather feedback, and analyze the results to choose the solution that best fits your business requirements. By leveraging these advanced technologies, you can strengthen your brand presence, engage more effectively with your customers, and drive your business towards sustained growth. Start your journey today and ensure your brand stands out in the digital landscape.

[More Stories about Cloud, Web, AI and Image Processing](https://api4.ai/blog)
_Author: taranamurtuzova_

---

# The Ultimate 2024 Tech Stack for Solo SaaS Developers: Build Smarter, Not Harder

_Originally published at [creativedesignsguru.com](https://creativedesignsguru.com/tech-stack-solo-saas-dev/) on 2024-06-18. Tags: nextjs, react, javascript, webdev_
Building a SaaS as a solo developer is a challenging task. You have to wear multiple hats and be proficient in various technologies, requiring strategic decisions about your tech stack. This means you need to be a full-stack developer familiar with both frontend and backend. Choosing the right tech stack is crucial for the best developer experience.

In this post, I'll share my **Next.js stack for building a SaaS** and break down the different parts of the stack. I'll also share my favorite tools I rely on. If you want to see the final result, you can check out a <a href="https://pro-demo.nextjs-boilerplate.com" target="_blank" rel="noopener">live demo of the stack</a>.

![Next.js Boilerplate SaaS Dashboard](https://creativedesignsguru.com/assets/images/themes/nextjs-boilerplate-saas-dashboard.png)

Hope this post will inspire you and help you on your own SaaS journey.

## Next.js, the Backbone of the Stack

As a solo developer, you need a framework that allows you to build full-stack applications with ease. Next.js is an excellent choice for building a SaaS, as it's a **React framework** that enables you to build modern applications efficiently.

![Next.js React Meta framework](https://creativedesignsguru.com/assets/images/themes/nextjs-logo.png)

I use Next.js to create both the user dashboard and the marketing site. The frontend is written in React, while the backend uses Next.js Route Handlers. The Route Handlers create a RESTful API, which can be used by React components and other clients, such as mobile applications.

Using the same framework for both the **marketing site and the dashboard** allows me to reuse components and styles across all parts of the SaaS. This makes the design more consistent and development more efficient. Similarly, having both the frontend and backend rely on Next.js makes it extremely easy to share code between the two.
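To make the Route Handler idea concrete: a handler is just a function that receives a standard Web `Request` and returns a `Response`. The sketch below is illustrative (the `/api/hello` path and payload are made up); in a real app the function would be exported from `app/api/hello/route.ts`, but because it only uses Web-standard objects it also runs in plain Node 18+:

```typescript
// Sketch of a Next.js Route Handler. In a real project, `GET` would be
// exported from app/api/hello/route.ts. Route Handlers use the standard
// Web Request/Response objects, so this also runs outside Next.js.
async function GET(request: Request): Promise<Response> {
  const url = new URL(request.url);
  const name = url.searchParams.get("name") ?? "world";
  return new Response(JSON.stringify({ greeting: `Hello, ${name}!` }), {
    headers: { "Content-Type": "application/json" },
  });
}

// Quick local check (no Next.js required)
async function main() {
  const res = await GET(new Request("http://localhost/api/hello?name=SaaS"));
  console.log(await res.json()); // → { greeting: 'Hello, SaaS!' }
}
main();
```

Because handlers are plain functions over standard types, they are straightforward to call directly from tests, which pays off later in the testing setup.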
## Only One Language, TypeScript

![TypeScript programming language logo](https://creativedesignsguru.com/assets/images/themes/typescript-logo.png)

To maximize productivity, I use only a single programming language: TypeScript. Combined with Next.js, TypeScript allows me to write both frontend and backend code within one framework and one language, simplifying the development process and reducing context switching.

## Shadcn UI with Tailwind CSS

<div class="flex flex-wrap justify-center items-center gap-x-5 mt-8">
  <img class="w-40" src="https://creativedesignsguru.com/assets/images/themes/shadcn-ui-logo.png" title="Shadcn UI logo">
  <img class="w-56" src="https://creativedesignsguru.com/assets/images/themes/tailwind-css-logo.png" title="Tailwind CSS logo">
</div>

For the UI, I choose Shadcn UI, a collection of components built on top of Radix UI, which provides unstyled React components. **Shadcn UI styles Radix UI** components using Tailwind CSS to deliver a beautiful UI for the SaaS. The good news is that I can share these components between the marketing site and the user dashboard seamlessly.

## Authentication

Authentication is a crucial part of any SaaS. I use <a href="https://clerk.com?utm_source=github&utm_medium=sponsorship&utm_campaign=nextjs-boilerplate" target="_blank" rel="noopener">Clerk</a> for authentication, which offers comprehensive features like email/password and social login. These basic features are available in many open-source libraries.

[![Clerk auth logo](https://creativedesignsguru.com/assets/images/themes/clerk-auth-logo.png)](https://clerk.com?utm_source=github&utm_medium=sponsorship&utm_campaign=nextjs-boilerplate)

However, if your application requires more **advanced features**, Clerk is an excellent choice. It can handle multi-factor authentication, user impersonation, multi-session support (one user can connect to multiple accounts), blocking disposable emails, brute-force protection, bot protection, and more.
<a href="https://clerk.com?utm_source=github&utm_medium=sponsorship&utm_campaign=nextjs-boilerplate" target="_blank" rel="noopener">Clerk</a> also offers a complete UI in React for authentication, which can be customized to match your brand. It saves you the time and effort of developing authentication from scratch. Some of the **built-in UIs** provided by Clerk include Sign Up, Sign In, Forgot Password, Reset Password, and User Profile.

![Clerk Sign up page](https://creativedesignsguru.com/assets/images/themes/clerk-sign-up-page.png)

## Multi-tenancy and Team Management

A robust SaaS should support collaboration within teams or organizations. <a href="https://clerk.com?utm_source=github&utm_medium=sponsorship&utm_campaign=nextjs-boilerplate" target="_blank" rel="noopener">Clerk</a> provides a comprehensive multi-tenancy and team management system, including a full **UI for managing teams and inviting users**. This means I don't need to implement backend logic or UI for team management, as Clerk handles everything, including sending invitation emails and allowing users to switch between teams seamlessly.

![Clerk Multi-tenancy](https://creativedesignsguru.com/assets/images/themes/clerk-multi-tenancy-user-management.png)

## Role and Permission

With multi-tenancy, it's important to manage roles and permissions. <a href="https://clerk.com?utm_source=github&utm_medium=sponsorship&utm_campaign=nextjs-boilerplate" target="_blank" rel="noopener">Clerk</a> allows creating custom roles and permissions, enabling users to assign roles. For example, an Admin has all the privileges within the team, while a Read-Only role can only view the resources. This ensures appropriate access and security.

![Clerk Roles and Permissions](https://creativedesignsguru.com/assets/images/themes/clerk-custom-roles-permissions.png)

## Database

I use Drizzle ORM for database management because it's type-safe and integrates seamlessly with TypeScript.
With Drizzle, I can define models and relationships directly in TypeScript, eliminating the need for an external schema file. This means you don't have to learn another syntax.

![Drizzle ORM](https://creativedesignsguru.com/assets/images/themes/drizzle-orm-logo.png)

Drizzle also provides **Drizzle Kit**, a CLI tool that simplifies the migration process. With Drizzle Kit, you can generate a migration folder to seamlessly update your database schema. Additionally, you have **Drizzle Studio** for a visual interface to manage your database. Drizzle Studio allows you to view your database schema, run queries, and browse your data.

![Drizzle Studio](https://creativedesignsguru.com/assets/images/themes/drizzle-studio.png)

## Stripe

![Stripe logo](https://creativedesignsguru.com/assets/images/themes/stripe-logo.png)

Stripe handles payments and subscriptions seamlessly. With the Stripe SDK, I can easily integrate payment processing into my Next.js application.

**Stripe offers a checkout page**, to which users can be redirected. This page reminds users not only of the plan they are about to subscribe to, but also of its monthly or yearly price. Finally, the users can enter their credit card details and subscribe to the selected plan.

![Stripe Checkout SaaS](https://creativedesignsguru.com/assets/images/themes/stripe-checkout-saas.png)

Once a subscription is made, Stripe will send a webhook event to my REST API endpoint, indicating that the user has subscribed. This allows me to update the user's subscription status in my database.

Stripe offers a **self-service portal** for your users to manage their subscription. In this portal, users can change their plan, update their payment method, cancel their subscription, and view their invoices.
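The webhook side of this flow boils down to dispatching on the event type. A simplified sketch follows: a real handler must first verify the request signature with `stripe.webhooks.constructEvent()` before trusting the payload; that step, the event payload, and the `updateSubscription` database helper are faked here for illustration:

```typescript
// Minimal shape of the Stripe event fields this sketch looks at.
interface StripeEventLike {
  type: string;
  data: { object: { customer: string; status?: string } };
}

// `updateSubscription` stands in for a database write (hypothetical helper).
// The event type names are real Stripe subscription lifecycle events.
function handleStripeEvent(
  event: StripeEventLike,
  updateSubscription: (customerId: string, active: boolean) => void,
): void {
  switch (event.type) {
    case "customer.subscription.created":
    case "customer.subscription.updated":
      updateSubscription(
        event.data.object.customer,
        event.data.object.status === "active",
      );
      break;
    case "customer.subscription.deleted":
      updateSubscription(event.data.object.customer, false);
      break;
    default:
      // Ignore event types this endpoint doesn't care about.
      break;
  }
}

// Example: a fake "subscription created" event
const seen: Array<[string, boolean]> = [];
handleStripeEvent(
  {
    type: "customer.subscription.created",
    data: { object: { customer: "cus_123", status: "active" } },
  },
  (id, active) => seen.push([id, active]),
);
console.log(seen); // → [ [ 'cus_123', true ] ]
```

Keeping the dispatch logic as a pure function like this, separate from the HTTP handler and the signature check, also makes it trivial to unit test.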
![Stripe Customer Portal](https://creativedesignsguru.com/assets/images/themes/stripe-customer-portal-saas.png)

## Internationalization (i18n)

![Next-Intl logo](https://creativedesignsguru.com/assets/images/themes/next-intl-logo.png)

To reach a global audience, I use the Next-Intl library to support multiple languages in Next.js. Next-Intl ensures type-safe translations, verifying that the correct translation key is in use. This prevents runtime errors caused by missing or incorrect translations.

[![Crowdin Logo](https://creativedesignsguru.com/assets/images/themes/crowdin-logo.png)](https://l.crowdin.com/next-js)

To make the translation experience more efficient, I use <a href="https://l.crowdin.com/next-js" target="_blank" rel="noopener">Crowdin</a>, a **localization platform** that integrates seamlessly with GitHub. Crowdin allows me to manage translations collaboratively, ensuring that the application is available in the desired languages.

![Crowdin editor](https://creativedesignsguru.com/assets/images/themes/crowdin-editor.png)

## Form Management

<div class="flex flex-wrap justify-center items-center gap-x-5">
  <img class="w-32" src="https://creativedesignsguru.com/assets/images/themes/react-hook-form-logo.png" title="React Hook Form logo">
  <img class="w-20" src="https://creativedesignsguru.com/assets/images/themes/zod-library.png" title="Zod logo">
</div>

I use **React-Hook-Form combined with Zod** for form management and validation. React-Hook-Form simplifies form handling in React, while Zod ensures data validation. The Zod schema can be easily shared between the frontend and the backend to ensure data validity on both sides.

## Testing

As a SaaS builder, it's essential to ensure that the application works as expected. Without a team to test my application, I must rely on automated tests. This way, I'm confident that my application won't experience any regression when new features are added.
<div class="flex flex-wrap justify-center items-center gap-x-5">
  <img class="w-32" src="https://creativedesignsguru.com/assets/images/themes/vitest-logo.png" title="Vitest logo">
  <img class="w-32" src="https://creativedesignsguru.com/assets/images/themes/react-testing-library-logo.png" title="React Testing Library logo">
</div>

I use **Vitest and React Testing Library for unit testing**. Vitest is a test runner that supports TypeScript and ESM out of the box, offering a modern alternative to Jest. Another advantage of Vitest is its official VSCode extension and Vitest UI, which make the testing experience even better. React Testing Library provides utilities for interacting with React components.

![Playwright logo](https://creativedesignsguru.com/assets/images/themes/playwright-logo.png)

For end-to-end (E2E) and integration testing, I rely on Playwright. Playwright is a powerful tool that allows you to automate browser interactions, making it ideal for testing the full functionality of your application. With Playwright, I can simulate user interactions across different browsers, ensuring that my app performs consistently. Additionally, Playwright is excellent for testing Next.js Route Handlers, as it can easily send HTTP requests and validate responses.

## GitHub Actions

GitHub Actions is a powerful tool that I use for Continuous Integration (CI). It allows me to automate the process of running tests and checks on my code before merging changes to the main branch.

![GitHub Actions logo](https://creativedesignsguru.com/assets/images/themes/github-actions-logo.png)

Whenever I push a new commit or create a pull request, **GitHub Actions automatically triggers workflows** that I have defined in my repository. These workflows run unit tests with Vitest, execute end-to-end tests with Playwright, and perform linting and code formatting checks. If there is any issue, GitHub Actions will notify me, preventing me from merging faulty code.
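A workflow covering those checks can be sketched in a single file. This is an illustrative sketch, not my exact configuration: the file name, Node version, and npm script names (`lint`, `test`, `test:e2e`) are assumptions you would adapt to your own `package.json`:

```yaml
# .github/workflows/ci.yml (illustrative sketch)
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint        # ESLint + Prettier checks
      - run: npm run test        # Vitest unit tests
      - run: npx playwright install --with-deps
      - run: npm run test:e2e    # Playwright E2E tests
```

Running on both `push` to main and `pull_request` means faulty code is caught before and at merge time.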
Because my code is continuously tested and validated, I have a safety net that allows me to focus on building new features. This matters especially for a solo developer, who wears multiple hats and has limited time to manually test every aspect of an application.

## Logging

<div class="flex flex-wrap justify-center items-center gap-x-5 mt-8">
  <img class="w-48" src="https://creativedesignsguru.com/assets/images/themes/pino-logging-logo.png" title="Pino.js logging logo">
  <img class="w-48" src="https://creativedesignsguru.com/assets/images/themes/better-stack-logo.png" title="Better Stack logo">
</div>

I use Pino, a **fast and lightweight logging** library for Node.js. Pino provides a simple API to log messages and supports structured logging, making it easy to search and analyze logs.

In production, I take logging a step further by sending the logs to <a href="https://betterstack.com/?utm_source=github&utm_medium=sponsorship&utm_campaign=next-js-boilerplate" target="_blank" rel="noopener">Better Stack</a>. Better Stack offers a robust logging platform that allows for real-time log monitoring, alerting, and visualization. By integrating Pino with Better Stack, I ensure that all log data is efficiently captured, stored, and accessible, enabling quick identification and resolution of issues in a live environment.

## Error Monitoring

For error monitoring, I use <a href="https://sentry.io/for/nextjs/?utm_source=github&utm_medium=paid-community&utm_campaign=general-fy25q1-nextjs&utm_content=github-banner-nextjsboilerplate-logo" target="_blank" rel="noopener">Sentry</a>, which captures errors and exceptions. It provides detailed reports that include stack traces, user context, and other relevant information, making it easier to identify issues.
[![Sentry error monitoring logo](https://creativedesignsguru.com/assets/images/themes/sentry-logo.png)](https://sentry.io/for/nextjs/?utm_source=github&utm_medium=paid-community&utm_campaign=general-fy25q1-nextjs&utm_content=github-banner-nextjsboilerplate-logo)

In local development, I use **Spotlight to capture Sentry events**, taking advantage of Sentry's telemetry without overwhelming the production instance.

![Sentry spotlight local](https://creativedesignsguru.com/assets/images/themes/sentry-spotlight.png)

## Environment Variables

![T3 Env Zod logo](https://creativedesignsguru.com/assets/images/themes/t3-env-zod-logo.png)

**T3 Env** is a library that uses Zod to validate and transform environment variables. This ensures that all environment variables are correctly defined and validated.

## Linter and Code Formatter

Maintaining a clean codebase is essential. I use **ESLint and Prettier for linting and code formatting**. ESLint ensures code quality by enforcing best practices and catching potential errors, while Prettier enforces a consistent coding style. This makes the codebase more readable and maintainable.

<div class="flex flex-wrap justify-center items-center gap-x-5">
  <img class="w-40" src="https://creativedesignsguru.com/assets/images/themes/eslint-logo.png" title="ESLint linter logo">
  <img class="w-48" src="https://creativedesignsguru.com/assets/images/themes/prettier-logo.png" title="Prettier Code formatter logo">
</div>

I recommend using the Airbnb style guide as the base configuration for ESLint, as it's one of the most popular JavaScript style guides. Additionally, I use `eslint-plugin-playwright` to ensure my Playwright tests follow best practices and `eslint-plugin-tailwind` to enforce best practices for Tailwind CSS.

## VSCode

![VSCode logo](https://creativedesignsguru.com/assets/images/themes/vscode-logo.png)

Visual Studio Code (VSCode) is my code editor of choice and has a rich ecosystem of extensions.
Here are some of the extensions that I recommend that work well with my tech stack:

- `vscode-eslint`, integrates ESLint into VS Code
- `vscode-tailwindcss`, provides IntelliSense and syntax highlighting for Tailwind CSS
- `vscode-github-actions`, manages GitHub Actions workflows directly in VSCode
- `i18n-ally`, supports internationalization, offering translation key management and making it easier to work with multiple languages

## Conclusion

In conclusion, building a SaaS as a solo developer can be quite challenging. However, choosing the right tech stack can make the process much easier, allowing you to focus on delivering value to your users.

The combination of Next.js, TypeScript, Shadcn UI with Tailwind CSS, Clerk, Drizzle ORM, Stripe, and the other tools shared in this article provides a scalable environment for **building a SaaS product**. These tools not only simplify the development process but also ensure that your application is secure, performant, and user-friendly. They handle everything from authentication, multi-tenancy, and payment processing to database management, testing, and continuous integration, **helping you focus on your business logic and user experience**.

If you want to check out the final result, you can find a <a href="https://pro-demo.nextjs-boilerplate.com" target="_blank" rel="noopener">live demo</a>. I've created a [Next.js SaaS boilerplate](https://nextjs-boilerplate.com/pro-saas-starter-kit), which is a comprehensive starting point for building your own SaaS product using the same tech stack shared in this article.

[![Next.js SaaS Starter kit](https://creativedesignsguru.com/assets/images/themes/nextjs-saas-boilerplate.png)](https://nextjs-boilerplate.com/pro-saas-starter-kit)

The key to success as a solo developer is to leverage the right tools and technologies. This tech stack is my personal choice based on my experience and requirements. Depending on your project's needs, you might choose different tools.
However, the principles remain the same: **choose tools that make you productive**. I hope this post has given you some insights and inspiration for your own SaaS journey. Happy coding!
ixartz
1,892,577
Discovering AWS DeepRacer: Experiences and tips for AWS DeepRacer from the SpainSkills finalists
Experiences with AWS DeepRacer and reinforcement learning The AWS DeepRacer...
25,290
2024-06-18T14:45:32
https://dev.to/aws-espanol/descubriendo-aws-deepracer-experiencias-y-consejos-para-aws-deepracer-de-los-finalistas-de-las-spainskills-20de
alianzatechskills2jobs, aws, deepracer, awsespanol
---
series: Descubriendo AWS DeepRacer
---

### Experiences with AWS DeepRacer and reinforcement learning

The AWS DeepRacer competition, held as part of the cloud demonstration at SpainSkills, was a unique opportunity for the finalists from the participating autonomous communities to experiment with **reinforcement learning in a hands-on way**. Some of these finalists have shared their experiences and advice for future participants in the competitions organized by the AWS Tech Alliance (Skills to Jobs Tech Alliance).

In alphabetical order by the finalists' autonomous community:

## Mario (Aragón, CPIFP Pirámide): Improvisation and experimentation

Mario explains how he spent a whole day training models and competing online, finding the **experience fun despite the initial learning curve**. Interacting with other participants and learning about artificial intelligence were some of the takeaways he highlights in this video. For him, the most exciting part was watching the model learn and improve according to the parameters he kept adjusting. Finally, Mario emphasizes the importance of experimenting with different configurations to understand the learning process.

{% embed https://www.youtube.com/watch?v=6qmE2vnIbnw %}

## Robin (Canarias, IES Lomo de la Herradura): A valuable teaching tool

Robin highlights AWS DeepRacer as an **effective teaching method** for understanding artificial intelligence and reinforcement learning. It combines virtual training with a competition on a physical circuit, which let him explore these concepts in depth in a gamified way. The most exciting part was using the physical cars on the track, focusing on **keeping the car stable and optimizing its speed.** He also praises the AWS console for its ease of use.
{% embed https://www.youtube.com/watch?v=fGKOIzH3Rz4 %}

## Sergio (Cantabria, IES Augusto González Linares): Surprised by the simplicity

Despite being new to Machine Learning, Sergio was surprised by how **easy it was to get started with AWS DeepRacer**. The most exciting moment was the second round of the competition, where he focused on **controlling the car's speed** and enjoying the experience. Sergio encourages anyone interested to take part, stressing that it is a fun activity and that they should not worry about making mistakes. He also explains how his biggest challenge was getting to grips with the new technology, but once he started practicing with simple functions, his results kept improving.

{% embed https://www.youtube.com/watch?v=uJ67vy02wO4 %}

## Andoni (País Vasco, IES Xabier Zubiri Manteo BHI): Intense physical races

Andoni expressed the satisfaction of training different models and watching their progress in person, which has inspired him to continue his studies with a master's degree in artificial intelligence. The most intense moments were the physical races, where the cars were very evenly matched and there was real suspense about the final results. His main challenge was optimizing the car's speed through the reward function.

{% embed https://youtu.be/SfwXZx_zLB4 %}

As we can see, **AWS DeepRacer** is an excellent initiative that lets students learn about artificial intelligence and reinforcement learning in a practical and fun way. We hope there will be more competitions like this in the future!

Join the AWS developer community to learn more about cloud and technology: https://dev.to/aws-espanol/impulsa-tu-carrera-unete-a-la-comunidad-de-desarrolladores-de-aws-en-iberia-user-groups-h0m
iaasgeek
1,890,533
SteamVR Overlay with Unity: Overlay Events
Show the current time on the right hand Let’s display the current time on the right...
27,740
2024-06-18T13:57:23
https://dev.to/kurohuku/part-9-overlay-events-4342
unity3d, steamvr, openvr, vr
## Show the current time on the right hand

Let's display the current time on the right hand.

### Create variable

Add a variable in `WatchOverlay.cs` to determine on which hand to show the current time. OpenVR provides the [ETrackedControllerRole](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.ETrackedControllerRole.html) type to represent the left or right hand, so we use it.

```diff
public class WatchOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
+   public ETrackedControllerRole targetHand = ETrackedControllerRole.LeftHand;
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;
    ...
```

The left hand is `ETrackedControllerRole.LeftHand`, and the right hand is `ETrackedControllerRole.RightHand`.

### Get selected controller device index

Currently, the code only gets the left controller's device index. Let's change it to use the currently selected controller.

```diff
private void Update()
{
    var position = new Vector3(x, y, z);
    var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);

-   var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
-   if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
-   {
-       Overlay.SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
-   }
+   var controllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(targetHand);
+   if (controllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
+   {
+       Overlay.SetOverlayTransformRelative(overlayHandle, controllerIndex, position, rotation);
+   }

    Overlay.SetOverlayRenderTexture(overlayHandle, renderTexture);
}
```

Run the program and switch `Target Hand` to `Right Hand` in the `WatchOverlay` inspector. It should display the current time on the right hand.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rys9a8znj4kp6samd1dj.png)

But the current time is displayed in the wrong position on the right hand, because we tuned the position only for the left hand.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rcejzhabmjfrmlw3pp7o.png)

Let's fix the position for the right hand.

### Note down current params

First, open the `WatchOverlay` inspector and note down the `X`, `Y`, `Z`, `Rotation X`, `Rotation Y`, `Rotation Z` values.

```
Left hand params
x = -0.044
y = 0.015
z = -0.131
rotationX = 154
rotationY = 262
rotationZ = 0
```

### Create variables for each hand

Remove the member variables below.

- `x`, `y`, `z`
- `rotationX`, `rotationY`, `rotationZ`

Keep `size` because it is shared by both hands.

Add the left-hand and right-hand variables below.

- `leftX`, `leftY`, `leftZ`
- `leftRotationX`, `leftRotationY`, `leftRotationZ`
- `rightX`, `rightY`, `rightZ`
- `rightRotationX`, `rightRotationY`, `rightRotationZ`

```diff
public class WatchOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
    public ETrackedControllerRole targetHand = ETrackedControllerRole.RightHand;
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;

    // Keep size
    [Range(0, 0.5f)] public float size;

-   // Remove current variables
-   [Range(-0.2f, 0.2f)] public float x;
-   [Range(-0.2f, 0.2f)] public float y;
-   [Range(-0.2f, 0.2f)] public float z;
-   [Range(0, 360)] public int rotationX;
-   [Range(0, 360)] public int rotationY;
-   [Range(0, 360)] public int rotationZ;

+   // Add for left hand
+   [Range(-0.2f, 0.2f)] public float leftX;
+   [Range(-0.2f, 0.2f)] public float leftY;
+   [Range(-0.2f, 0.2f)] public float leftZ;
+   [Range(0, 360)] public int leftRotationX;
+   [Range(0, 360)] public int leftRotationY;
+   [Range(0, 360)] public int leftRotationZ;

+   // Add for right hand
+   [Range(-0.2f, 0.2f)] public float rightX;
+   [Range(-0.2f, 0.2f)] public float rightY;
+   [Range(-0.2f, 0.2f)] public float rightZ;
+   [Range(0, 360)] public int rightRotationX;
+   [Range(0, 360)] public int rightRotationY;
+   [Range(0, 360)] public int rightRotationZ;
```

### Switch position and rotation to match the current hand

Edit `Update()` to set the position and rotation corresponding to `targetHand`.

```diff
private void Update()
{
-   var position = new Vector3(x, y, z);
-   var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
+   Vector3 position;
+   Quaternion rotation;
+
+   if (targetHand == ETrackedControllerRole.LeftHand)
+   {
+       position = new Vector3(leftX, leftY, leftZ);
+       rotation = Quaternion.Euler(leftRotationX, leftRotationY, leftRotationZ);
+   }
+   else
+   {
+       position = new Vector3(rightX, rightY, rightZ);
+       rotation = Quaternion.Euler(rightRotationX, rightRotationY, rightRotationZ);
+   }

    var controllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(targetHand);
    if (controllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
    {
        Overlay.SetOverlayTransformRelative(overlayHandle, controllerIndex, position, rotation);
    }

    Overlay.SetOverlayRenderTexture(overlayHandle, renderTexture);
}
```

### Adjust position for right hand

Run the program and open the `WatchOverlay` inspector. Check that `Target Hand` is set to `Right Hand`. Move the right-hand sliders until the position and rotation fit best.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0isiuwho2r4e919qin9j.png)

Here are sample values for the right hand.

```
Right hand params
x = 0.04
y = 0.003
z = -0.107
rotationX = 24
rotationY = 258
rotationZ = 179
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2v66bokd39ne48rj527k.png)

After you have set the best position on the sliders, **right click the WatchOverlay component name in the inspector > Copy Component** to copy the values.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/497379ujk0hx7o0meets.png)

Stop the program, then **right click > Paste Component Values**. Also enter the left-hand values from the note you took, and reset the `Target Hand` variable to `Left Hand`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/361bo0e61sbjz1im5r5u.png)

Now we have the position and rotation for both hands.

## Create button events

Let's make the buttons switch controllers when clicked.

Create a new script `WatchSettingController.cs` inside the `Scripts` folder. We will add the button event code here.

**Right click the hierarchy > Create Empty** to make an empty object, and rename it to `SettingController`. Add `WatchSettingController.cs` to the `SettingController` object.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/csrfkqtmn65k4ljl8uu5.png)

Copy the code below into `WatchSettingController.cs`.

```cs
using UnityEngine;

public class WatchSettingController : MonoBehaviour
{
    [SerializeField] private WatchOverlay watchOverlay;
}
```

Add click event handlers to `WatchSettingController.cs`.

```diff
using UnityEngine;
+ using Valve.VR;

public class WatchSettingController : MonoBehaviour
{
    [SerializeField] private WatchOverlay watchOverlay;

+   public void OnLeftHandButtonClick()
+   {
+       // When the "Left Hand" button is clicked, switch to the left hand.
+       watchOverlay.targetHand = ETrackedControllerRole.LeftHand;
+   }
+
+   public void OnRightHandButtonClick()
+   {
+       // When the "Right Hand" button is clicked, switch to the right hand.
+       watchOverlay.targetHand = ETrackedControllerRole.RightHand;
+   }
}
```

## Set button events

Select the `Dashboard > Canvas > LeftHandButton` object in the hierarchy. In the inspector, click the + button in the `OnClick()` field of the `Button` component. Drag the `SettingController` object onto `None (Object)` in the `OnClick()` field, and select `WatchSettingController.OnLeftHandButtonClick()`.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ohs4d1inure0lkriflpi.png)

Now `WatchSettingController.OnLeftHandButtonClick()` is called when the button is clicked.

Similarly, set `WatchSettingController.OnRightHandButtonClick()` in the `OnClick()` field of the `RightHandButton` inspector.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3afvirre1xt7mhqpdo1.png)

## Get dashboard event

Currently, nothing happens when the button is clicked. The settings above wire up Unity-side events, but the click arrives as an OpenVR event. OpenVR and Unity events are separate systems, so we must forward the OpenVR dashboard event to Unity.

First, we get the OpenVR click event. Overlay events can be detected by polling with [PollNextOverlayEvent()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_PollNextOverlayEvent_System_UInt64_Valve_VR_VREvent_t__System_UInt32_). (Read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::PollNextOverlayEvent) for details.)

Let's watch dashboard overlay events in `Update()` with `PollNextOverlayEvent()`. If events are pending on the specified overlay, `PollNextOverlayEvent()` returns `true` and pops one event from the event queue. Once all events have been taken, it returns `false`.

Add the event detection code into `Update()` of `DashboardOverlay.cs`.

```diff
void Update()
{
    Overlay.SetOverlayRenderTexture(dashboardHandle, renderTexture);

+   var vrEvent = new VREvent_t();
+   var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t));
+
+   // Loops while overlay events remain in the queue.
+   while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent))
+   {
+       // vrEvent is the popped event.
+   }
+
+   // The loop ends when all events have been popped.
}
```

An OpenVR event is represented by the [VREvent_t](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.VREvent_t.html) type.
(Read the [wiki](https://github.com/ValveSoftware/openvr/wiki/VREvent_t) for details.)

`uncbVREvent` is the size in bytes of the `VREvent_t` type.

Get the `EVREventType.VREvent_MouseButtonDown` and `EVREventType.VREvent_MouseButtonUp` events.

```diff
private void Update()
{
    var vrEvent = new VREvent_t();
    var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t));

    while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent))
    {
+       switch (vrEvent.eventType)
+       {
+           case (uint)EVREventType.VREvent_MouseButtonDown:
+               Debug.Log("MouseDown");
+               break;
+
+           case (uint)EVREventType.VREvent_MouseButtonUp:
+               Debug.Log("MouseUp");
+               break;
+       }
    }
}
```

`vrEvent.eventType` is a `uint` event code, so we cast the `EVREventType` values to `uint` for comparison.

Run the program and click the dashboard button in VR; **"MouseDown"** and **"MouseUp"** should be logged.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d80ioxa5ze7hd58b0aha.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbzl4r01bhe2p12kxa04.png)

*Aim with the laser pointer, then click!*

---

### Is there EVREventType.VREvent_MouseClick?

No, there isn't. OpenVR overlay mouse events are simple: there are only three events, `MouseButtonDown`, `MouseButtonUp`, and `MouseMove`. If you want other events such as OnClick, OnMouseEnter, or OnMouseLeave, you have to combine these three basic events. That makes UIs like sliders harder to build than with Unity UI.

#### Is there an easy way to make a UI?

I released a UI asset that provides basic UIs for the SteamVR dashboard. If you are interested, try it.

https://assetstore.unity.com/packages/tools/gui/ovrle-ui-dashboard-ui-kit-for-steamvr-270636

{% embed https://youtu.be/JHFiPjIXEsE?si=0fjfxjVVN4EKj-gK %}

---

Get the click position from the overlay mouse event.
```diff
private void Update()
{
    var vrEvent = new VREvent_t();
    var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t));

    while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent))
    {
        switch (vrEvent.eventType)
        {
            case (uint)EVREventType.VREvent_MouseButtonDown:
-               Debug.Log("MouseDown");
+               Debug.Log($"MouseDown: ({vrEvent.data.mouse.x}, {vrEvent.data.mouse.y})");
                break;

            case (uint)EVREventType.VREvent_MouseButtonUp:
-               Debug.Log("MouseUp");
+               Debug.Log($"MouseUp: ({vrEvent.data.mouse.x}, {vrEvent.data.mouse.y})");
                break;
        }
    }
}
```

`vrEvent` carries the mouse position as a [VREvent_Mouse_t](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.VREvent_Mouse_t.html?q=VREvent_Mouse_t) value. The mouse position is in UV coordinates, so x and y are 0–1.

Run the program, open the SteamVR dashboard, and click the overlay. The clicked mouse position should be logged.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3kbf3kk6coytfxk0qz38.png)

## Apply mouse scaling factor

The mouse position is a 0–1 UV value, but we can convert it to the actual UI position (px) by applying a mouse scaling factor with [SetOverlayMouseScale()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_SetOverlayMouseScale_System_UInt64_Valve_VR_HmdVector2_t__). (Read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::SetOverlayMouseScale) for details.)

Set the mouse scaling factor in `Start()` of `DashboardOverlay.cs`.
```diff
private void Start()
{
    OpenVRUtil.System.InitOpenVR();

    (dashboardHandle, thumbnailHandle) = Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting");

    var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
    Overlay.SetOverlayFromFile(thumbnailHandle, filePath);

    Overlay.FlipOverlayVertical(dashboardHandle);
    Overlay.SetOverlaySize(dashboardHandle, 2.5f);

+   var mouseScalingFactor = new HmdVector2_t()
+   {
+       v0 = renderTexture.width,
+       v1 = renderTexture.height
+   };
+   var error = OpenVR.Overlay.SetOverlayMouseScale(dashboardHandle, ref mouseScalingFactor);
+   if (error != EVROverlayError.None)
+   {
+       throw new Exception("Failed to set mouse scaling factor: " + error);
+   }
}
```

In `mouseScalingFactor`, `v0` is the width and `v1` is the height. When an OpenVR mouse event is dispatched, the UV position is multiplied by the mouse scaling factor to scale it to the actual UI size.

Run the program and click the dashboard. The click position should now be scaled to (0, 0) ~ (1024, 768).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/blddz4l1w3bo2sc1ofkh.png)

## Test mouse event with Overlay Viewer

FYI, the Overlay Viewer can also dispatch mouse events. Run the program, launch Overlay Viewer, and click `WatchDashboardKey` in the overlay list. Check `Mouse Capture` at the bottom right and click the preview area; a mouse event will be dispatched and logged to the Unity console.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndd9aldle67wmuei8m58.png)

It is useful for checking events without the HMD.

## Detect which button is clicked

Detect which Unity button is clicked from the event's mouse position. Unity's [Graphic Raycaster](https://docs.unity3d.com/Packages/com.unity.ugui@1.0/api/UnityEngine.UI.GraphicRaycaster.html) can find UI components at a given position.

### Get Graphic Raycaster

Add a `graphicRaycaster` variable to `DashboardOverlay.cs`.
```diff
using UnityEngine;
using Valve.VR;
using System;
using OpenVRUtil;
+ using UnityEngine.UI;

public class DashboardOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
+   public GraphicRaycaster graphicRaycaster;
    private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid;
    private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid;
    ...
```

Open the **Dashboard > DashboardOverlay** inspector. Drag the **Dashboard > Canvas object** onto the **Graphic Raycaster** variable.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ynceie4t51m9amouy84e.png)

### Add detection method

Add a method to `DashboardOverlay.cs` that detects a UI component at a specified position.

```diff
private void OnDestroy()
{
    OpenVRUtil.System.ShutdownOpenVR();
}

+ private Button GetButtonByPosition(Vector2 position)
+ {
+     // Return the button at (position.x, position.y).
+     // If there is none, return null.
+     return null;
+ }
...
```

In this tutorial we only use `Button` components, so we only look for `Button`.

## Add EventSystem

`GraphicRaycaster` requires a [PointerEventData](https://docs.unity3d.com/Packages/com.unity.ugui@1.0/api/UnityEngine.EventSystems.PointerEventData.html) as an argument. `PointerEventData` is created from the [EventSystem](https://docs.unity3d.com/Packages/com.unity.ugui@1.0/manual/EventSystem.html), which controls Unity's UI events.

First, add a variable to hold the `EventSystem` in `DashboardOverlay.cs`.

```diff
using UnityEngine;
using Valve.VR;
using System;
using OpenVRUtil;
using UnityEngine.UI;
+ using UnityEngine.EventSystems;

public class DashboardOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
    public GraphicRaycaster graphicRaycaster;
+   public EventSystem eventSystem;
    private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid;
    private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid;
    ...
```

Open the **Dashboard > DashboardOverlay** inspector from the hierarchy.
Drag the **EventSystem** object onto the **Event System** variable.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvjcx27jc6kfktvpp8yw.png)

### Create PointerEventData

In `GetButtonByPosition()`, create a `PointerEventData` with the `EventSystem`.

```diff
private Button GetButtonByPosition(Vector2 position)
{
+   var pointerEventData = new PointerEventData(eventSystem);
+   pointerEventData.position = position;
    return null;
}
```

### Get Button with GraphicRaycaster

Pass the `pointerEventData` to [GraphicRaycaster.Raycast()](https://docs.unity3d.com/Packages/com.unity.ugui@1.0/api/UnityEngine.UI.GraphicRaycaster.html#UnityEngine_UI_GraphicRaycaster_Raycast_UnityEngine_EventSystems_PointerEventData_System_Collections_Generic_List_UnityEngine_EventSystems_RaycastResult__) to find a Button component at the click position, and return it if it exists.

```diff
using UnityEngine;
using Valve.VR;
using System;
using OpenVRUtil;
using UnityEngine.UI;
using UnityEngine.EventSystems;
+ using System.Collections.Generic;

public class DashboardOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
    public GraphicRaycaster graphicRaycaster;
    public EventSystem eventSystem;
    ...

    private Button GetButtonByPosition(Vector2 position)
    {
        var pointerEventData = new PointerEventData(eventSystem);
        pointerEventData.position = position;

+       // List to store the found elements.
+       var raycastResultList = new List<RaycastResult>();
+
+       // Find elements at the pointerEventData position and store them in raycastResultList.
+       graphicRaycaster.Raycast(pointerEventData, raycastResultList);
+
+       // Find the first element that has a Button component.
+       var raycastResult = raycastResultList.Find(element => element.gameObject.GetComponent<Button>() != null);
+
+       // Return null if no button was found.
+       if (raycastResult.gameObject == null)
+       {
+           return null;
+       }
+
+       // Otherwise return the found button.
+       return raycastResult.gameObject.GetComponent<Button>();
    }
```

That completes finding a button from a click position.
### Call detect function

When an OpenVR mouse event is dispatched, retrieve the mouse position and pass it to the `GetButtonByPosition()` we just created.

There is no mouse click event among the OpenVR overlay events, so we will use `MouseButtonUp` for click detection.

```diff
void Update()
{
    Overlay.SetOverlayRenderTexture(dashboardHandle, renderTexture);

    var vrEvent = new VREvent_t();
    var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t));

    while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent))
    {
        switch (vrEvent.eventType)
        {
-           // Remove the MouseDown event; we don't use it this time.
-           case (uint)EVREventType.VREvent_MouseButtonDown:
-               Debug.Log($"MouseDown: ({vrEvent.data.mouse.x}, {vrEvent.data.mouse.y})");
-               break;

            case (uint)EVREventType.VREvent_MouseButtonUp:
-               Debug.Log($"MouseUp: ({vrEvent.data.mouse.x}, {vrEvent.data.mouse.y})");
+               var button = GetButtonByPosition(new Vector2(vrEvent.data.mouse.x, vrEvent.data.mouse.y));
+               Debug.Log(button);
                break;
        }
    }
}
```

Run the program and click the buttons on the dashboard. The button name should be logged to the Unity console. For now it displays the wrong button name, but that's OK. If you click an empty area of the overlay, it should log `Null`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qgfz113yk4hj18lhdnqw.png)

## Flip the click position vertically

The clicked button name comes out swapped because the Y-axis of the mouse position (the V value multiplied by the mouse scaling factor) is reversed between the OpenVR overlay and the Unity canvas.

OpenVR reports the overlay mouse position with (0, 0) at the bottom left, which is originally the same as Unity. However, the reported mouse position is flipped vertically this time, because we flipped the V-axis with `SetOverlayTextureBounds()` when drawing the texture, to compensate for the UV coordinate difference between DirectX and Unity.
This change to the texture bounds affects the reported mouse position. In this tutorial we assume the graphics API is always DirectX and we always flip the texture with `SetOverlayTextureBounds()`, so we flip the click position's V (Y) axis when handling the event.

```diff
void Update()
{
    Overlay.SetOverlayRenderTexture(dashboardHandle, renderTexture);

    var vrEvent = new VREvent_t();
    var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t));

    while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent))
    {
        switch (vrEvent.eventType)
        {
            case (uint)EVREventType.VREvent_MouseButtonUp:
+               vrEvent.data.mouse.y = renderTexture.height - vrEvent.data.mouse.y;
                var button = GetButtonByPosition(new Vector2(vrEvent.data.mouse.x, vrEvent.data.mouse.y));
                Debug.Log(button);
                break;
        }
    }
}
```

Run the program, click the Left Hand and Right Hand buttons, and check that the correct button is detected.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qvf50mhzxb9ree1bbun.png)

## Switch controllers by clicking buttons

Now we know which button was clicked. Next, switch the hand that displays the current time when a button is clicked. We have already created the hand-switching code and attached it to each Button's `onClick` event, so we just need to invoke it.
`DashboardOverlay.cs`

```diff
void Update()
{
    Overlay.SetOverlayRenderTexture(dashboardHandle, renderTexture);

    var vrEvent = new VREvent_t();
    var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t));
    while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent))
    {
        switch (vrEvent.eventType)
        {
            case (uint)EVREventType.VREvent_MouseButtonUp:
                vrEvent.data.mouse.y = renderTexture.height - vrEvent.data.mouse.y;
                var button = GetButtonByPosition(new Vector2(vrEvent.data.mouse.x, vrEvent.data.mouse.y));
-               Debug.Log(button);
+               if (button != null)
+               {
+                   button.onClick.Invoke();
+               }
                break;
        }
    }
}
```

Run the program, open the dashboard, and click the buttons. Clicking should switch which hand shows the current time.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmfn0kwd4pcmbtwi72ht.gif)

With that, creating the dashboard overlay and switching controllers are done.

## Organize code

### Mouse Scaling Factor

Move the mouse scaling factor setup into a new `SetOverlayMouseScale()` function.

`OpenVRUtil.cs`

```diff
...
public static void SetOverlayRenderTexture(ulong handle, RenderTexture renderTexture)
{
    if (!renderTexture.IsCreated())
    {
        return;
    }

    var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
    var texture = new Texture_t
    {
        eColorSpace = EColorSpace.Auto,
        eType = ETextureType.DirectX,
        handle = nativeTexturePtr
    };
    var error = OpenVR.Overlay.SetOverlayTexture(handle, ref texture);
    if (error != EVROverlayError.None)
    {
        throw new Exception("Failed to draw texture: " + error);
    }
}

+ public static void SetOverlayMouseScale(ulong handle, int x, int y)
+ {
+     var pvecMouseScale = new HmdVector2_t()
+     {
+         v0 = x,
+         v1 = y
+     };
+     var error = OpenVR.Overlay.SetOverlayMouseScale(handle, ref pvecMouseScale);
+     if (error != EVROverlayError.None)
+     {
+         throw new Exception("Failed to set mouse scale: " + error);
+     }
+ }
... 
```

`DashboardOverlay.cs`

```diff
private void Start()
{
    OpenVRUtil.System.InitOpenVR();
    (dashboardHandle, thumbnailHandle) = Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting");

    var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
    Overlay.SetOverlayFromFile(thumbnailHandle, filePath);

    Overlay.SetOverlaySize(dashboardHandle, 2.5f);
    Overlay.FlipOverlayVertical(dashboardHandle);

-   var pvecMouseScale = new HmdVector2_t()
-   {
-       v0 = renderTexture.width,
-       v1 = renderTexture.height
-   };
-   var error = OpenVR.Overlay.SetOverlayMouseScale(dashboardHandle, ref pvecMouseScale);
-   if (error != EVROverlayError.None)
-   {
-       throw new Exception("Failed to set mouse scale: " + error);
-   }
+   Overlay.SetOverlayMouseScale(dashboardHandle, renderTexture.width, renderTexture.height);
}
```

### Event handling

Move the event handling into a new `ProcessOverlayEvents()` method.

`DashboardOverlay.cs`

```diff
void Update()
{
    Overlay.SetOverlayRenderTexture(dashboardHandle, renderTexture);
-
-   var vrEvent = new VREvent_t();
-   var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t));
-   while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent))
-   {
-       switch (vrEvent.eventType)
-       {
-           case (uint)EVREventType.VREvent_MouseButtonUp:
-               vrEvent.data.mouse.y = renderTexture.height - vrEvent.data.mouse.y;
-               var button = GetButtonByPosition(new Vector2(vrEvent.data.mouse.x, vrEvent.data.mouse.y));
-               if (button != null)
-               {
-                   button.onClick.Invoke();
-               }
-               break;
-       }
-   }
+   ProcessOverlayEvents();
}
... 
+ private void ProcessOverlayEvents() + { + var vrEvent = new VREvent_t(); + var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t)); + while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent)) + { + switch (vrEvent.eventType) + { + case (uint)EVREventType.VREvent_MouseButtonUp: + vrEvent.data.mouse.y = renderTexture.height - vrEvent.data.mouse.y; + var button = GetButtonByPosition(new Vector2(vrEvent.data.mouse.x, vrEvent.data.mouse.y)); + if (button != null) + { + button.onClick.Invoke(); + } + break; + } + } + } private Button GetButtonByPosition(float x, float y) { ... ``` ## Final code `WatchOverlay.cs` ```cs using UnityEngine; using Valve.VR; using System; using OpenVRUtil; public class WatchOverlay : MonoBehaviour { public Camera camera; public RenderTexture renderTexture; public ETrackedControllerRole targetHand = ETrackedControllerRole.LeftHand; private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid; [Range(0, 0.5f)] public float size; [Range(-0.2f, 0.2f)] public float leftX; [Range(-0.2f, 0.2f)] public float leftY; [Range(-0.2f, 0.2f)] public float leftZ; [Range(0, 360)] public int leftRotationX; [Range(0, 360)] public int leftRotationY; [Range(0, 360)] public int leftRotationZ; [Range(-0.2f, 0.2f)] public float rightX; [Range(-0.2f, 0.2f)] public float rightY; [Range(-0.2f, 0.2f)] public float rightZ; [Range(0, 360)] public int rightRotationX; [Range(0, 360)] public int rightRotationY; [Range(0, 360)] public int rightRotationZ; private void Start() { OpenVRUtil.System.InitOpenVR(); overlayHandle = Overlay.CreateOverlay("WatchOverlayKey", "WatchOverlay"); Overlay.FlipOverlayVertical(overlayHandle); Overlay.SetOverlaySize(overlayHandle, size); Overlay.ShowOverlay(overlayHandle); } private void Update() { Vector3 position; Quaternion rotation; if (targetHand == ETrackedControllerRole.LeftHand) { position = new Vector3(leftX, leftY, leftZ); rotation = Quaternion.Euler(leftRotationX, 
leftRotationY, leftRotationZ); } else { position = new Vector3(rightX, rightY, rightZ); rotation = Quaternion.Euler(rightRotationX, rightRotationY, rightRotationZ); } var controllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(targetHand); if (controllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid) { Overlay.SetOverlayTransformRelative(overlayHandle, controllerIndex, position, rotation); } Overlay.SetOverlayRenderTexture(overlayHandle, renderTexture); } private void OnApplicationQuit() { Overlay.DestroyOverlay(overlayHandle); } private void OnDestroy() { OpenVRUtil.System.ShutdownOpenVR(); } } ``` `DashboardOverlay.cs` ```cs using UnityEngine; using Valve.VR; using System; using OpenVRUtil; using UnityEngine.UI; using UnityEngine.EventSystems; using System.Collections.Generic; public class DashboardOverlay : MonoBehaviour { public Camera camera; public RenderTexture renderTexture; public GraphicRaycaster graphicRaycaster; public EventSystem eventSystem; private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid; private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid; private void Start() { OpenVRUtil.System.InitOpenVR(); (dashboardHandle, thumbnailHandle) = Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; Overlay.SetOverlayFromFile(thumbnailHandle, filePath); Overlay.FlipOverlayVertical(dashboardHandle); Overlay.SetOverlaySize(dashboardHandle, 2.5f); Overlay.SetOverlayMouseScale(dashboardHandle, renderTexture.width, renderTexture.height); } private void Update() { Overlay.SetOverlayRenderTexture(dashboardHandle, renderTexture); ProcessOverlayEvents(); } private void OnApplicationQuit() { Overlay.DestroyOverlay(dashboardHandle); } private void OnDestroy() { OpenVRUtil.System.ShutdownOpenVR(); } private void ProcessOverlayEvents() { var vrEvent = new VREvent_t(); var uncbVREvent = (uint)System.Runtime.InteropServices.Marshal.SizeOf(typeof(VREvent_t)); 
while (OpenVR.Overlay.PollNextOverlayEvent(dashboardHandle, ref vrEvent, uncbVREvent)) { switch (vrEvent.eventType) { case (uint)EVREventType.VREvent_MouseButtonUp: vrEvent.data.mouse.y = renderTexture.height - vrEvent.data.mouse.y; var button = GetButtonByPosition(new Vector2(vrEvent.data.mouse.x, vrEvent.data.mouse.y)); if (button != null) { button.onClick.Invoke(); } break; } } } private Button GetButtonByPosition(Vector2 position) { var pointerEventData = new PointerEventData(eventSystem); pointerEventData.position = new Vector2(position.x, position.y); var raycastResultList = new List<RaycastResult>(); graphicRaycaster.Raycast(pointerEventData, raycastResultList); var raycastResult = raycastResultList.Find(element => element.gameObject.GetComponent<Button>()); if (raycastResult.gameObject == null) { return null; } return raycastResult.gameObject.GetComponent<Button>(); } } ``` `OpenVRUtil.cs` ```cs using UnityEngine; using Valve.VR; using System; namespace OpenVRUtil { public static class System { public static void InitOpenVR() { if (OpenVR.System != null) return; var error = EVRInitError.None; OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay); if (error != EVRInitError.None) { throw new Exception("Failed to initialize OpenVR: " + error); } } public static void ShutdownOpenVR() { if (OpenVR.System != null) { OpenVR.Shutdown(); } } } public static class Overlay { public static ulong CreateOverlay(string key, string name) { var handle = OpenVR.k_ulOverlayHandleInvalid; var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle); if (error != EVROverlayError.None) { throw new Exception("Failed to create overlay: " + error); } return handle; } public static (ulong, ulong) CreateDashboardOverlay(string key, string name) { ulong dashboardHandle = 0; ulong thumbnailHandle = 0; var error = OpenVR.Overlay.CreateDashboardOverlay(key, name, ref dashboardHandle, ref thumbnailHandle); if (error != EVROverlayError.None) { throw new Exception("Failed to 
create dashboard overlay: " + error); } return (dashboardHandle, thumbnailHandle); } public static void DestroyOverlay(ulong handle) { if (handle != OpenVR.k_ulOverlayHandleInvalid) { var error = OpenVR.Overlay.DestroyOverlay(handle); if (error != EVROverlayError.None) { throw new Exception("Failed to dispose overlay: " + error); } } } public static void SetOverlayFromFile(ulong handle, string path) { var error = OpenVR.Overlay.SetOverlayFromFile(handle, path); if (error != EVROverlayError.None) { throw new Exception("Failed to draw image file: " + error); } } public static void ShowOverlay(ulong handle) { var error = OpenVR.Overlay.ShowOverlay(handle); if (error != EVROverlayError.None) { throw new Exception("Failed to show overlay: " + error); } } public static void SetOverlaySize(ulong handle, float size) { var error = OpenVR.Overlay.SetOverlayWidthInMeters(handle, size); if (error != EVROverlayError.None) { throw new Exception("Failed to set overlay size: " + error); } } public static void SetOverlayTransformAbsolute(ulong handle, Vector3 position, Quaternion rotation) { var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); var matrix = rigidTransform.ToHmdMatrix34(); var error = OpenVR.Overlay.SetOverlayTransformAbsolute(handle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix); if (error != EVROverlayError.None) { throw new Exception("Failed to set overlay position: " + error); } } public static void SetOverlayTransformRelative(ulong handle, uint deviceIndex, Vector3 position, Quaternion rotation) { var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); var matrix = rigidTransform.ToHmdMatrix34(); var error = OpenVR.Overlay.SetOverlayTransformTrackedDeviceRelative(handle, deviceIndex, ref matrix); if (error != EVROverlayError.None) { throw new Exception("Failed to set overlay position: " + error); } } public static void FlipOverlayVertical(ulong handle) { var bounds = new VRTextureBounds_t { uMin = 0, uMax 
= 1,
                vMin = 1,
                vMax = 0
            };
            var error = OpenVR.Overlay.SetOverlayTextureBounds(handle, ref bounds);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to flip texture: " + error);
            }
        }

        public static void SetOverlayRenderTexture(ulong handle, RenderTexture renderTexture)
        {
            if (!renderTexture.IsCreated()) return;

            var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
            var texture = new Texture_t
            {
                eColorSpace = EColorSpace.Auto,
                eType = ETextureType.DirectX,
                handle = nativeTexturePtr
            };
            var error = OpenVR.Overlay.SetOverlayTexture(handle, ref texture);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to draw texture: " + error);
            }
        }

        public static void SetOverlayMouseScale(ulong handle, int x, int y)
        {
            var pvecMouseScale = new HmdVector2_t()
            {
                v0 = x,
                v1 = y
            };
            var error = OpenVR.Overlay.SetOverlayMouseScale(handle, ref pvecMouseScale);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to set mouse scale: " + error);
            }
        }
    }
}
```

`WatchSettingController.cs`

```cs
using UnityEngine;
using Valve.VR;

public class WatchSettingController : MonoBehaviour
{
    [SerializeField] private WatchOverlay watchOverlay;

    public void OnLeftHandButtonClick()
    {
        watchOverlay.targetHand = ETrackedControllerRole.LeftHand;
    }

    public void OnRightHandButtonClick()
    {
        watchOverlay.targetHand = ETrackedControllerRole.RightHand;
    }
}
```

With this, handling OpenVR overlay events is done. The final task is to display the current time for a few seconds when a controller button is pressed.
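Since the coordinate handling above is the part most likely to go wrong, here is the same flip-and-hit-test logic modeled outside Unity as a small Python sketch. The button rectangles and the texture height of 768 are made-up stand-ins for the Unity canvas and `GraphicRaycaster`, purely for illustration:

```python
# Model of the dashboard click handling: OpenVR reports mouse positions with
# (0, 0) at the bottom-left; because we flipped the texture's V axis, the
# canvas effectively expects (0, 0) at the top-left, so we mirror Y against
# the render texture height before hit-testing the buttons.
from dataclasses import dataclass


@dataclass
class ButtonRect:
    name: str
    x: float       # left edge in canvas coordinates
    y: float       # top edge in canvas coordinates
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)


def flip_y(mouse_y: float, texture_height: float) -> float:
    """Mirror the overlay-reported Y coordinate into canvas coordinates."""
    return texture_height - mouse_y


def button_by_position(buttons, x, y):
    """Stand-in for GetButtonByPosition(): first button containing the point."""
    for button in buttons:
        if button.contains(x, y):
            return button
    return None
```

With a 768-pixel-high texture, a click reported at y = 668 (100 pixels above the overlay's bottom edge) lands at canvas y = 100, near the top. That is exactly the swap observed earlier before the flip was added.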
*Author: kurohuku*
---

## The Silent Killer: Not Preparing for Scale Can Doom Your SaaS

*Published 2024-06-18 · Tags: softwareengineering, saas, startup*
*Canonical: https://dev.to/brandonbaz/the-silent-killer-how-not-preparing-for-scale-can-doom-your-saas-after-launching-on-product-hunt-or-appsumo-jbm*
**Imagine this**: Your innovative SaaS product just launched on Product Hunt or AppSumo. Almost overnight, you’re bombarded with thousands of new users. It’s the dream scenario for any startup founder. But what if that dream quickly morphs into a nightmare? Your servers crash, users can’t access their accounts, and your support team is drowning in complaints. The very platform that was supposed to catapult your startup to success could also precipitate its downfall.

In this article, we’ll explore why preparing for scale is crucial for SaaS startups, especially after a major launch, and how a lack of preparation can spell disaster. I’ll also provide a high-level overview of commonly used strategies. In future articles, we'll zoom in on a single strategy at a time, providing practical examples and highlighting common pitfalls and challenges I've encountered.

## Understanding the Gravity of Scaling

### The Dream Turned Nightmare

Launching on platforms like Product Hunt or AppSumo can bring a dramatic influx of users. Failing to plan for this influx can lead to severe consequences, including system outages, performance degradation, and an overwhelmed support team. I’ve seen this happen firsthand: one minute you’re celebrating, and the next you’re in full-on crisis mode and everything is on fire.

### Real-World Example: Figma’s Strategic Decision

Figma, a design tool, knew they needed a reliable database solution to handle their growing user base. They turned to Amazon RDS for its high availability and performance. By offloading the heavy lifting of database maintenance, Figma could focus on what they do best: building a fantastic product. This strategic move allowed them to scale seamlessly without getting bogged down by backend issues.

### The Fallout of Unpreparedness

Not preparing for scalability can result in:

- **System Outages**: Systems may crash under increased load, leading to significant downtime, loss of revenue, and damage to your brand’s reputation. 
Trust me, nothing kills the mood faster than a 404 error.
- **Performance Degradation**: Even if the system doesn’t crash, slow load times and a poor user experience can drive users to more reliable alternatives.

![user-frustration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0eipybq9a60xujklc25f.jpg)

### Real-World Example: Etsy’s Scaling Challenges

Etsy faced performance slowdowns as its user base grew, resulting in delayed page loads and a subpar user experience. They had to invest heavily in scaling their infrastructure to retain users and improve performance. After re-engineering their infrastructure and optimizing databases, Etsy saw a 50% reduction in page load times, significantly enhancing user satisfaction.

In my experience, sometimes drastic changes are necessary to achieve significant improvements. I have been known to suggest such changes, and while they might not always be the right solution, they spark meaningful conversations. These discussions can lead to innovative solutions and help teams think outside the box.

## Key Strategies for Effective Scaling

### Scalability: Deploying Reinforcements to Handle Increased Load

SaaS applications must efficiently add resources to manage increased traffic and ensure consistent performance.

1. **Use Auto-Scaling**: Implement auto-scaling to automatically adjust the number of active servers based on current demand. Most cloud providers offer this capability.
2. **Container Orchestration**: Kubernetes is a well-known tool that simplifies the management and scaling of your containers. Docker Swarm is another option, though it is less feature-packed than Kubernetes.
3. **Service Mesh Technologies**: Implement service mesh technologies like Istio to manage microservice communication, security, and monitoring. This helps ensure reliable and secure service-to-service interactions, even as your application scales.

### Optimizing Database Performance

Databases often become bottlenecks as your user base grows. 
Optimizing database performance is critical to maintaining application speed and reliability.

1. **Database Sharding**: Split your database into smaller, more manageable pieces (shards) to distribute the load and improve performance.
2. **Caching**: Use caching strategies to store frequently accessed data in memory, reducing the load on your database and speeding up response times.
3. **Regular Maintenance and Monitoring**: Conduct regular maintenance and monitoring to identify and address performance issues before they impact users.

### Enhancing User Experience

A seamless user experience is **essential** for retaining customers and fostering growth. Focus on optimizing your application's performance and usability. I strongly recommend investing in this **first** and sacrificing it **never**.

1. **Load Testing**: This is critical to understanding the pain points in your system under load. I could write a book about the implications of not load testing prior to launch.
2. **User Feedback**: Actively seek and incorporate user feedback to continuously improve your platform. You would think this was a given, correct? (It’s related to 3 out of the 12 top causes of failure.)
3. **Performance Monitoring Tools**: Use performance monitoring tools to track and optimize your application's performance in real time. **Tip**: Set up push notifications so you can be proactive at launch instead of reactive to user complaints.

## Conclusion

Preparing for scale is not just a technical challenge; it's a critical component of your SaaS startup's success strategy. By implementing robust scaling strategies, optimizing database performance, and focusing on user experience, you can ensure that your platform remains reliable and performant, even during rapid growth.

In the next series of articles, we'll take a closer look at each strategy, one by one, to give you a deeper understanding. 
I'll share real-life examples and highlight common pitfalls and challenges I've encountered along the way. Stay tuned for more detailed content that will give you the knowledge and tools you need to successfully scale your SaaS startup. #### **Sneak Peek: Deep Dive into Load Testing** Load testing is a critical component of ensuring your SaaS platform can handle high traffic volumes without compromising performance. In our next article, we will cover: - **Types of Load Testing**: Learn about different load testing methodologies such as stress testing, spike testing, and endurance testing. - **Tools and Frameworks**: Explore popular load testing tools like k6, Apache JMeter, Gatling, and Locust so you understand their unique features and use cases. - **Creating Effective Test Scenarios**: Discover how to design realistic test scenarios that mimic actual user behavior and traffic patterns. - **Analyzing Results**: Gain insights into interpreting load test results to identify bottlenecks and performance issues. - **Best Practices**: Get practical tips and best practices for conducting load tests, including setting up your test environment, running tests, and post-test analysis. Have you experienced unexpected downtime after a major launch? How did you handle it? Share your story in the comments below or reach out to me directly on LinkedIn. Let's learn from each other's experiences and build resilient, scalable SaaS platforms together.
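To make the caching strategy listed earlier a little more concrete before we part: below is a deliberately minimal in-memory TTL cache sketch in Python. It is illustrative only; a real deployment would use a shared, evicting cache such as Redis or Memcached rather than a hand-rolled dictionary:

```python
import time


class TTLCache:
    """Minimal in-memory cache whose entries expire after ttl_seconds.

    Illustrates the 'store frequently accessed data in memory' strategy;
    production systems use Redis or Memcached for shared caches.
    """

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        """Return the cached value, or call loader() once and cache the result."""
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]              # hit: skip the expensive load
        value = loader()                 # miss: e.g. query the database
        self._store[key] = (value, now + self.ttl)
        return value
```

Each key is loaded at most once per TTL window, which is precisely how caching takes read pressure off the database during a launch spike.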
*Author: brandonbaz*
---

## Setting Up Dynamic Environment Variables with Vite and Docker

*Published 2024-06-18 · Tags: vite, docker, webdev, devops*
*Canonical: https://dev.to/dutchskull/setting-up-dynamic-environment-variables-with-vite-and-docker-5cmj*
In this tutorial, we'll walk through the process of setting up dynamic environment variables in a Vite project, ensuring that these variables can be replaced dynamically within a Docker container. This setup allows for flexibility and ease of deployment across different environments. ## Setting Up the Environment Variables First, ensure that your environment variables for Vite start with the `VITE_` prefix. This makes it easy to manage and replace them dynamically. ### Step 1: Define Environment Variables Create environment variable files for different environments at the root of your project. Vite uses different `.env` files for different environments: - **Production:** Create a `.env.production` file. - **Local Development:** Optionally, create a `.env.local` file for local development. #### Example: `.env.production` ```properties VITE_PREFIX_VALUE=PREFIX_VALUE ``` ### Step 2: Create an Environment Class In your project, create a class to map these Vite environment variables to constants that you can use throughout your project. ```typescript class Environment { PREFIX_VALUE: string; constructor() { this.PREFIX_VALUE = import.meta.env.VITE_PREFIX_VALUE; } } const environment = new Environment(); export default environment; ``` ## Setting Up the Tooling We'll use a shell script to dynamically replace the baked-in environment variables within the Docker container. ### Step 3: Create a Shell Script Create a file named `env.sh` with the following content. > **Important**: Ensure the file has `LF` line endings, not `CRLF`. ```shell #! /bin/sh if [ -z "$APP_ENV_PREFIX" ]; then echo "APP_ENV_PREFIX is not set. Exiting." exit 1 fi for i in $(env | grep "^$APP_ENV_PREFIX"); do key=$(echo "$i" | cut -d '=' -f 1) value=$(echo "$i" | cut -d '=' -f 2-) echo "$key=$value" find "/tmpl/dist/web-app/" -type f -exec sed -i "s|${key}|${value}|g" '{}' + done ``` Ensure that the `$APP_ENV_PREFIX` matches your chosen prefix (`PREFIX_` in this example). 
The path `/tmpl/dist/web-app/` should match the location of your built project within the Docker container. ## Setting Up the Docker Container ### Step 4: Create a Dockerfile Create a `Dockerfile` with the following content: ```dockerfile FROM node:hydrogen-alpine as build-env COPY package.json package-lock.json ./ RUN npm install RUN mkdir -p /usr/src/app && cp -R ./node_modules ./usr/src/app WORKDIR /usr/src/app COPY . ./ RUN npm run build FROM nginx:mainline-alpine3.18-perl COPY ./.nginx/${NGINX_CONFIG_FILE:-nginx.conf} /etc/nginx/nginx.conf RUN rm -rf /usr/share/nginx/html/* COPY --from=build-env /usr/src/app/dist /usr/share/nginx/html/web-app COPY --from=build-env /usr/src/app/dist/* /tmpl/dist/web-app/ COPY env.sh /docker-entrypoint.d/env.sh RUN chmod +x /docker-entrypoint.d/env.sh CMD ["nginx", "-g", "daemon off;"] ``` ### Key Points: - **File Paths:** Ensure the path `/tmpl/dist/web-app/` matches the location specified in the shell script. - **Shell Script:** Place the `env.sh` file in `/docker-entrypoint.d/` to ensure it runs on container startup. ## Setting Up Docker Compose ### Step 5: Create a Docker Compose File Create a `docker-compose.yml` file to define and run your container. ```yaml services: app: image: webapp:latest environment: APP_ENV_PREFIX: PREFIX_ PREFIX_VALUE: "This is the value you want to dynamically use in your container." ``` ### Key Points: - **Environment Variable Prefix:** Set the `APP_ENV_PREFIX` to match your prefix (`PREFIX_`). - **Dynamic Values:** Define your environment variables with the specified prefix. ## Conclusion By following this tutorial, you've set up a Vite project within a Docker container, enabling dynamic environment variables. This approach ensures that your application can adapt to different environments seamlessly, making deployments more flexible and efficient. Feel free to adjust the paths and prefixes to fit your specific project requirements. Happy coding!
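As a closing aside: if you want to sanity-check the placeholder substitution without building the image, the behavior of `env.sh` can be modeled in a few lines of Python. The directory layout and the `PREFIX_` variable names below are the example values from this tutorial, not fixed requirements:

```python
import os
from pathlib import Path


def substitute_env(dist_dir: str, prefix: str, environ=None) -> None:
    """Mirror env.sh: replace each occurrence of a prefix-named variable's
    name in the built files with its runtime value. This works because Vite
    baked the literal placeholder names into the bundle via .env.production."""
    environ = os.environ if environ is None else environ
    replacements = {k: v for k, v in environ.items() if k.startswith(prefix)}
    for path in Path(dist_dir).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text()
        for key, value in replacements.items():
            text = text.replace(key, value)
        path.write_text(text)
```

Pointing this at a copy of your `dist` folder with `PREFIX_VALUE` set in the environment should produce exactly the files the container serves, which makes the mechanism easy to verify in isolation.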
*Author: dutchskull*
---

## Large Language Models and Their Optimization Under Limited Computational Resources

*Published 2024-06-18 · Tags: ia, ai, machinelearning*
*Canonical: https://dev.to/gcjordi/modelos-de-lenguaje-grandes-y-su-optimizacion-en-recursos-computacionales-limitados-8p*
Large language models such as GPT-4 have revolutionized natural language processing, enabling advanced applications in text generation, machine translation, and question answering. However, training and deploying these models requires significant computational resources, which limits their accessibility and applicability in resource-constrained environments. Optimizing these models to run efficiently on hardware with reduced capabilities is a crucial challenge for democratizing access to advanced artificial intelligence technologies.

One of the most effective techniques for achieving this optimization is model compression. Model compression means reducing the size of a model without significantly sacrificing its performance. Common methods include quantization, which lowers the precision of the model's weights, and pruning, which removes redundant parameters. Quantization can considerably shrink the model and speed up inference by using lower-precision numbers, while pruning eliminates unnecessary connections and nodes, reducing computational complexity.

Another key technique is knowledge distillation. In this approach, a large, complex model (the teacher) trains a smaller, more efficient model (the student). The student learns to imitate the teacher's behavior, achieving comparable performance with a fraction of the resources. Knowledge distillation transfers the large model's generalization ability to the smaller one, making the latter suitable for applications on devices with limited resources.

Using models that are efficient by design is also an important strategy. These models are designed from the ground up to be more efficient in terms of computation and memory. 
For example, architectures such as lightweight Transformers and optimized BERT variants are designed to operate with fewer resources without losing much accuracy.

In addition, deployment on specialized hardware, such as tensor processing units (TPUs) and graphics processing units (GPUs) optimized for AI workloads, can significantly improve efficiency. These devices are built specifically to accelerate deep learning operations and can handle large amounts of data with lower energy consumption.

Algorithm optimization also plays a crucial role. Techniques such as data and model parallelism, as well as distributing training across multiple nodes, can improve computational efficiency. Likewise, faster and more efficient training algorithms, such as adaptive optimizers, can speed up training and reduce resource requirements.

In summary, large language models can be optimized for limited computational resources through model compression, knowledge distillation, efficient model design, deployment on specialized hardware, and algorithm optimization. These strategies not only make the technology more accessible but also promote its application across a much wider range of environments and devices.

[Jordi G. Castillón](https://jordigarcia.eu/)
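As a toy illustration of the quantization technique described above, the sketch below maps floating-point weights to 8-bit integer levels and back. It is purely illustrative; production frameworks choose scales per tensor or per channel and handle many more edge cases:

```python
def quantize_int8(weights):
    """Uniform symmetric quantization: map floats to integer levels in [-127, 127].

    Assumes at least one weight is nonzero (toy example, not a framework).
    """
    scale = max(abs(w) for w in weights) / 127.0
    levels = [round(w / scale) for w in weights]
    return levels, scale


def dequantize(levels, scale):
    """Recover approximate float weights from the integer levels."""
    return [q * scale for q in levels]
```

Storing the integer levels as int8 uses a quarter of the memory of float32 weights, at the cost of a rounding error bounded by half the scale, which is the size/accuracy trade-off the article describes.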
*Author: gcjordi*
---

## What’s New in Flutter: 2024 Volume 2

*Published 2024-06-19 · Tags: flutter, mobile, ui, web*
*Canonical: https://www.syncfusion.com/blogs/post/whats-new-flutter-2024-volume-2*
![Cover image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ji64q4j1p6e6sl3a12az.jpeg)

**TL;DR:** Explore the new features in Syncfusion Flutter widgets for 2024 Volume 2, including new technical indicators in Charts, plus right-to-left rendering and a customizable text selection menu in the PDF Viewer. Discover all the updates and enhancements for seamless, high-quality application development across multiple platforms.

Syncfusion [Flutter widgets](https://www.syncfusion.com/flutter-widgets "Flutter widgets") are written natively in Dart to help you create rich, high-quality applications for iOS, Android, Web, Windows, macOS, and Linux from a single code base. This blog explains the new features added to the Flutter suite for the [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release!

## New technical indicators in Flutter Charts

The [Flutter Charts](https://www.syncfusion.com/flutter-widgets/flutter-charts "Flutter Charts") widget delivers the following new technical indicators:

### Rate of Change (ROC) indicator

The Rate of Change (ROC) indicator is a momentum oscillator used in technical analysis that assesses the speed at which a price changes over a specified period. By comparing the current price to a price from a previous time, the ROC reflects the percentage change in price, indicating whether the price movement is accelerating or decelerating. Traders often use the ROC to identify overbought or oversold conditions, spot potential trend reversals, or confirm trends. The ROC fluctuates around a zero line; values above zero suggest upward momentum (bullish), while values below zero indicate downward momentum (bearish). 
It is commonly used as a signal for entry or exit points in trading strategies. Refer to the following image. <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Rate-of-Change-ROC-indicator-in-Flutter-Charts.gif" alt="Rate of Change (ROC) indicator in Flutter Charts" style="width:100%"> <figcaption>Rate of Change (ROC) indicator in Flutter Charts</figcaption> </figure> ### Weighted Moving Average (WMA) The Weighted Moving Average (WMA) is a technical indicator that smooths price data to help traders identify trends in financial markets. Unlike the simple moving average (SMA), which assigns equal weight to all prices in the calculation period, the WMA emphasizes more recent prices by assigning them greater weights. This makes the WMA more responsive to recent price changes, offering traders quicker signals for entering or exiting trades. The weights typically decrease linearly, with the most recent price having the highest weight and the oldest price in the period having the lowest weight. Traders use the WMA to help confirm trends and to generate potential buy or sell signals. Refer to the following image. <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Weighted-Moving-Average-WMA-indicator-in-Flutter-Charts.gif" alt="Weighted Moving Average (WMA) indicator in Flutter Charts" style="width:100%"> <figcaption>Weighted Moving Average (WMA) indicator in Flutter Charts</figcaption> </figure> ## PDF Viewer The [Flutter PDF Viewer](https://www.syncfusion.com/flutter-widgets/flutter-pdf-viewer "Flutter PDF Viewer") offers the following new user-friendly updates: ### Page rendering enhancements The rendered pages will have good quality even at the initial zoom level. The rendering performance of pages in the viewer for large documents has also been improved for all platforms. 
Especially on the web and Android platforms, we have achieved approximately an 80% reduction in the rendering time for a document of 50 MB size. ### Right-to-left (RTL) rendering Users can now scroll right-to-left (RTL) horizontally using the RTL layout. This will help you accommodate languages that are read from right to left for better continuity in reading. ### Customize the text selection menu This feature allows users to customize the built-in text selection menu, letting them design their own. <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Customizing-the-Text-selection-menu-in-Flutter-PDF-Viewer.gif" alt="Customizing the text selection menu in Flutter PDF Viewer" style="width:100%"> <figcaption>Customizing the text selection menu in Flutter PDF Viewer</figcaption> </figure> ## Conclusion Thanks for reading! You now have a comprehensive understanding of the key features of our [Flutter widgets](https://www.syncfusion.com/flutter-widgets "Flutter widgets") for the [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release. To explore all the updates in this release, please refer to our [release notes](https://help.syncfusion.com/common/essential-studio/release-notes/v26.1.35 "Essential Studio Release Notes") and [What’s New](https://www.syncfusion.com/products/whatsnew "Essential Studio What’s New page") pages. We encourage you to try our [Flutter packages](https://pub.dev/publishers/syncfusion.com/packages "Flutter packages") and share your feedback in the comments below. You can also access our complete [user guide](https://help.syncfusion.com/flutter/introduction/overview "Introduction to Syncfusion Flutter widgets documentation") and explore our [Flutter project examples](https://github.com/syncfusion/flutter-examples "Flutter GitHub demo") for more information. 
Additionally, you can discover our demo apps on various platforms, such as [Google Play](https://play.google.com/store/apps/details?id=com.syncfusion.flutter.examples&hl=en "Syncfusion Flutter UI Widgets in Google Play"), the [App Store](https://apps.apple.com/in/app/syncfusion-flutter-ui-widgets/id1475231341 "Syncfusion Flutter UI Widgets in App Store"), the [Microsoft Store](https://www.microsoft.com/store/productId/9NHNBWCSF85D "Syncfusion Flutter Gallery in Microsoft Store"), the [Snap Store](https://snapcraft.io/syncfusion-flutter-gallery "Syncfusion Flutter Gallery in Snap Store"), the [App Center](https://install.appcenter.ms/orgs/syncfusion-demos/apps/syncfusion-flutter-gallery/distribution_groups/release "Syncfusion Flutter Gallery in App Center"), and our [website](https://flutter.syncfusion.com/#/ "Flutter UI Widgets in Syncfusion Site"). If you require a new widget in our Flutter framework or additional features in our existing widgets, please contact us via our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/flutter "Syncfusion Feedback Portal"). We are always delighted to assist you! 
## Related blogs - [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!") - [Open and Save PDF Files Locally in Flutter](https://www.syncfusion.com/blogs/post/open-save-pdf-locally-flutter "Blog: Open and Save PDF Files Locally in Flutter") - [Flutter Made Easy: 5 Tools to Build Better Apps Faster](https://www.syncfusion.com/blogs/post/build-apps-faster-with-flutter-tools "Blog: Flutter Made Easy: 5 Tools to Build Better Apps Faster") - [Effortlessly Fill and Share PDF Forms using Flutter PDF Viewer](https://www.syncfusion.com/blogs/post/fill-share-pdf-forms-flutter "Blog: Effortlessly Fill and Share PDF Forms using Flutter PDF Viewer")
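As a footnote for the curious, the ROC and WMA indicators described above reduce to simple formulas. Here is a minimal sketch of the underlying math in plain Python; note that this is just the textbook calculation for illustration, not the Syncfusion Flutter API:

```python
def roc(prices, period):
    """Rate of Change: percentage change vs. the price `period` points ago."""
    return [
        (prices[i] - prices[i - period]) / prices[i - period] * 100
        for i in range(period, len(prices))
    ]

def wma(prices, period):
    """Weighted Moving Average: linearly decreasing weights,
    with the most recent price weighted heaviest."""
    weights = list(range(1, period + 1))  # 1 .. period; last weight = newest price
    total = sum(weights)
    return [
        sum(p * w for p, w in zip(prices[i - period + 1 : i + 1], weights)) / total
        for i in range(period - 1, len(prices))
    ]

prices = [10, 11, 12, 11, 13, 14]
print(roc(prices, 2))  # first value: (12 - 10) / 10 * 100 = 20.0
print(wma(prices, 3))  # first value: (10*1 + 11*2 + 12*3) / 6
```

In the charts widget these values are plotted against the series dates, with the ROC oscillating around a zero line as described above.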
gayathrigithub7
1,892,571
Employability in the cloud field, by Experis
As part of Experis's participation in the AWS Skills to Jobs Tech Alliance,...
0
2024-06-18T14:38:47
https://dev.to/aws-espanol/empleabilidad-en-el-ambito-cloud-by-experis-3df
empleabilidad, alianzatechskills2jobs
As part of **Experis**'s participation in the AWS Skills to Jobs Tech Alliance, Experis, Amazon Web Services, and Fundación Human Age Institute organized an event in October 2023 to boost the employability of more than 1,200 young people in the Cloud field. **• Event program:** https://humanageinstitute.org/evento/aws-poweryou-xperience-oportunidades-profesionales-en-cloud-25-de-octubre/?instance_id=351 **• Press release for the event:** https://humanageinstitute.org/experis-amazon-web-services-y-human-age-institute-impulsan-la-empleabilidad-de-mas-de-1-200-jovenes-en-el-ambito-cloud/ Here are the sessions that were delivered: **1. Do you know what Experis and Experis Academy are?** Fernando Aguilar, Head of Experis Academy, introduces Experis, a global company with more than 25 years of experience in Information Technology, and Experis Academy. {% embed https://youtu.be/0Y2hbZ8cDyI %} Fernando Aguilar explains Experis's leadership in technology consulting, with a strong presence in Spain and operations in more than **45 countries**. They offer business transformation, cloud infrastructure, and enterprise application services, with a focus on upskilling and reskilling through their Experis Academy unit. Partnerships with industry giants such as Amazon Web Services ensure they deliver highly in-demand professional training and close the gap between technology professionals and market demand. **2. Trends and developments in Cloud** Miguel Ángel Castaño, Cloud & Infrastructure Manager at Experis, reviews current advances, trends, and developments in Cloud. {% embed https://youtu.be/Kssen6JpIC8 %} Miguel Ángel Castaño explains how cloud computing is a crucial technology for IT professionals, emphasizing the shift toward administering systems in the cloud rather than on physical devices. He discusses **the importance of continuous learning** and of adapting to new technologies, such as quantum computing, containers, and automation, in the IT field. Miguel Ángel also highlights the **efficiency and security benefits** of cloud computing and stresses the need for professionals to develop their skills to meet the demands of the changing IT landscape. **3. Roles and careers in Cloud** Marcos Fernandez, Cloud & Infrastructure Manager at Experis, reviews the most in-demand roles in the Cloud environment and analyzes their profiles and responsibilities. _**Link coming soon**_ **4. Job demand and salaries in Cloud** Fernando Aguilar, Head of Experis Academy, analyzes job demand in the Cloud environment and the salary range of its professionals. {% embed https://youtu.be/g2KuSzdnneo %} Fernando Aguilar covers the high demand for jobs in the technology sector, focusing on **38 specific technology profiles** that account for 90% of demand. He highlights how hard it is for companies to find professionals with specific skills and addresses the shortage of technology professionals: overall, **70% of job openings remain unfilled**. He also emphasizes **the need to continuously update skills and knowledge** to stay competitive in the job market, and discusses traditional and current corporate talent-acquisition strategies, including reskilling and developing internal talent. Finally, he underlines the importance of adapting to market demands, constantly acquiring new skills, and training continuously to keep up with technological change in the job market. **5. Preparing for a career in Cloud** Fernando Aguilar, Head of Experis Academy; Bárbara Rodrigo, Cloud & Infrastructure at Experis; and Andrea Ortiz, Cloud recruitment consultant at Experis PERM IT, share how to prepare for a career in Cloud. {% embed https://youtu.be/F1GKQl6Pc6w %} The speakers offer insights into career paths in the technology field, emphasizing the importance of adaptability and continuous learning. They analyze the **challenges** the industry faces, including project management, client relationships, and skills development. The session stresses **the importance of both technical and non-technical skills**, such as communication and problem solving. The speakers also share advice on navigating corporate environments, **recommendations for professional development**, and tips on communicating with people who have no background in technology. Finally, they highlight the value of AWS certifications, such as Cloud Practitioner, which lead to roles like network engineer, as well as negotiation strategies and the factors that shape salary ranges in the technology sector. **6. Tips for a successful recruitment process in the Cloud sector** A workshop by Paula Rojas, Employability lead at Fundación Human Age Institute, and Bárbara Rodrigo, Cloud & Infrastructure at Experis, with advice for a successful recruitment process in the Cloud sector. {% embed https://youtu.be/05YWCLieylI %} Paula Rojas provides a **guide to writing résumés**, emphasizing the importance of using keywords and competencies that align with job postings. She also dives into **using LinkedIn effectively**, stressing networking, profile optimization, and engaging with contacts for professional growth. The workshop also covers **tips on maintaining balanced, open body language** during interviews to build rapport, and the importance of short, action-oriented sentences to improve vocal delivery. Finally, it highlights the value of including diverse skills and experiences in résumés, even when they are not directly related to a job, to strengthen each candidate's profile.
iaasgeek
1,892,573
Papo e Ideias: Connecting Technology and Business!
Hello, Dev.to community! 🚀 I want to share some amazing news with you! I'm launching, together...
0
2024-06-18T14:38:32
https://dev.to/pedrobarreto/papo-e-ideias-conectando-tecnologia-e-negocios-203d
braziliandevs, noticias, tecnologia, podcast
Hello, Dev.to community! 🚀 I want to share some amazing news with you! Together with my friend Bruno, I'm launching our podcast, Papo e Ideias. Join us as we discuss the week's top **technology and business news in a light, laid-back way**. Episode 1: What's Happening in the Tech World? In our first episode, we talk about: **Apple Intelligence**: How Apple stands out with its new technology integrated into its systems. **Multi-agent tools for ChatGPT**: How you can use multi-agents to improve your interactions with ChatGPT. **Nvidia surpasses Apple in market value**: What does this mean for the tech market? **EU tariffs on Chinese electric cars**: The implications of this new measure. **TikTok as a news source for young Americans**: Why is TikTok becoming the main source of information for young people? {% youtube AU_AH81FtsA %} Why listen? If you want to **stay up to date** with the latest in technology and business in a light and friendly way, Papo e Ideias is for you. Where to find us? You can listen to us on the main podcast platforms. Follow us and leave your feedback! [Youtube](https://www.youtube.com/@papoeideias) [Spotify](https://podcasters.spotify.com/pod/show/papoeideias) We can't wait to hear what you think! Leave your comments and suggestions, and don't miss the next episodes of Papo e Ideias!
pedrobarreto
1,891,724
amber: writing bash scripts in amber instead. pt. 1: commands and error handling
writing shell scripts is zero fun. the bash syntax is a mess, error handling is difficult, and any...
27,793
2024-06-18T14:38:26
https://dev.to/gbhorwood/amber-writing-bash-scripts-in-amber-instead-pt-1-commands-and-error-handling-1aao
bash, linux
writing shell scripts is zero fun. the `bash` syntax is a mess, error handling is difficult, and any script longer than a hundred lines is basically unreadable. but we keep writing bash scripts because they're the right tool for the job and the job must be done. amber aims to fix this pain by being a language that gives us a sane, readable syntax that <a href="https://devopedia.org/transpiler">transpiles</a> into messy bash so we don't have to write messy bash ourselves. this post is a four-parter that will go over the basic features of amber from the perspective of those of us who actually want to _use_ it. we'll start with calling shell commands and handling errors, then look at loops and if statements, the standard library of commands, and finally investigate functions. ![fred reveals](https://gbh.fruitbat.io/wp-content/uploads/2024/06/meme_amber1.jpg "the fred reveals meme")<figcaption> the elegant syntax of amber is pulled away to reveal the messy bash underneath </figcaption> ### is amber a mistake? languages that transpile into other languages don't have a great track record of success. coffeescript, elm, even flutter, were all supposed to make struggling with javascript a thing of the past. none of them got any appreciable traction. facebook released <a href="https://en.wikipedia.org/wiki/HipHop_for_PHP">hiphop</a>, their php-to-c++ transpiler, with a tremendous amount of hype. nobody used it. not even facebook. so, is there any reason to expect amber to succeed? maybe. first off, all those javascript transpilers were hindered by the fact that javascript isn't actually _that_ bad. bash is a nightmare by comparison. secondly, a lot of those languages have strong paradigm preferences that, themselves, don't have a lot of popularity. the number of people who actively want to, say, write a monad instead of some vanilla js is not large. by comparison, amber sits firmly in the `c`-like idiom; a comfortable place for people who know php or python or javascript. 
finally, a lot of those other transpiler languages didn't address many of the realities of the developer experience. developers use frameworks and rely on the wealth of documentation and examples for those frameworks. getting off the ground with a fresh vue or laravel project is far easier than riding the elm learning curve or forcing your entire framework through hiphop. given this, it's certainly possible that amber will gain at least some user base. its two biggest barriers currently are a) that you have to actually install it (via a curl-to-bash pipeline. `apt` and `yum` packages aren't available) and b) that the community documentation for it is, generously speaking, pretty thin. with that in mind, let's walk through getting amber installed and look at doing shell commands and error handling, and maybe learn to love this language. ## installation the recommended installation process for amber is one of those 'copy and paste this shell script uncritically' things. the script will ask us to escalate to root, so we'll need `sudo` access for this. ```bash curl -s "https://raw.githubusercontent.com/Ph0enixKM/AmberNative/master/setup/install.sh" | bash ``` the result is a success message with two emojis thrown in for effect. ``` Installing Amber... 🚀 [sudo] password for ghorwood: Amber has been installed successfully. 🎉 > Now you can use amber by typing `amber` in your terminal. ``` amber is now installed and we have learned that the transpiler command for amber is `amber`. a good start. ## transpile and run before we write the first line of code, we're going to look at how to transpile and run our amber scripts. there are two basic options. 
first, to take our amber script and transpile it into a bash file that we can run later, we provide `amber` with two arguments: our input amber file and the path to our output bash file: ```bash amber /path/to/input/amberscript.ab /path/to/output/shellscript.sh ``` if we would like to transpile and run our script all in one command, we can pass `amber` just our amber script file as an argument. ```bash amber /path/to/script.ab ``` the file extension for amber scripts is `ab`, as it should be. ## running shell commands and handling errors the first, and probably most important, thing people want to do with a shell scripting language is... script their shell. so we'll start with that. there are two components to calling a shell command in amber: 1. the command itself, enclosed between `$` signs 2. the `failed` block that executes if the command fails for any reason the template for this is: ```dart $<some shell command>$ failed { <some code to run if the command fails> } ``` if we want to call `whoami` from our script, it would look something like: ```dart $whoami$ failed { echo "could not run whoami" } ``` the compelling feature here is the `failed` block. checking for and handling errors in bash is a hassle. no one wants to do it and, quite frankly, a lot of developers just don't. with amber, though, we can just put any error handling we want to inside the braces after `failed`. ```dart // fails for non-root $touch /etc/passwd$ failed { echo "could not touch /etc/passwd" // any error-handling code here } ``` ### suppressing bash's output by default, amber dumps both `STDOUT` and `STDERR` of command calls to the screen. this makes sense, but it can be annoying. for instance, this code ```dart $touch /etc/passwd$ failed { echo "my custom error message" } ``` outputs _both_ our custom error message and the error message sent by `touch`, ie. ``` touch: cannot touch '/etc/passwd': Permission denied my custom error message ``` not what we want. 
we can suppress the output of `touch` (or any shell command) by prepending it with the keyword `silent`. ```dart silent $touch /etc/passwd$ failed { echo "my custom error message" } ``` using `silent` turns off _all_ output. for instance, the command `whoami` will not show any output here: ```dart silent $whoami$ failed { echo "whoami failed..." } ``` we can also apply `silent` to a block of code containing multiple commands. in this example, neither `whoami` nor `touch` will print their output to screen ```dart silent { $whoami$ failed { echo "could not whoami" } $touch /etc/passwd$ failed { echo "could not touch /etc/passwd" } } ``` ### ignoring errors catching errors in the `failed` block is the default behaviour, but we can turn that off with the keyword `unsafe`. ```dart unsafe $touch /etc/passwd$ ``` if an `unsafe` command fails, it displays its error message and our script continues. if we're really brave, we can combine `unsafe` and `silent`. this results in errors being completely ignored: no output, no handling. ```dart silent unsafe $touch /etc/passwd$ ``` like `silent`, `unsafe` can also be applied to a block of commands. ```dart unsafe { let me = $whoami$ let here = $pwd$ } ``` ## getting the exit status of a command when a shell command completes, it returns an integer as a status code. if everything works, that integer is zero. if there's an error, it's some other number. in amber, the most recent status code is stored in the global variable `status` ```dart silent $touch /etc/passwd$ failed { echo status } echo status ``` in the above example, `touch` fails and sets `status` to `1`. we can access that variable both from inside the `failed` block and anywhere after. 
most people who write bash scripts don't track these codes (probably because most people who write bash scripts don't do any in-script error handling), so this may seem like a technical detail, but in future posts, we will go over how to leverage `status` to build more robust amber scripts. ## reading input from bash frequently, we want to take the output from a bash command and put it into a variable. we can do this with a straight assignment using the familiar `let` command. ```dart let me = $whoami$ failed { echo "cannot get your name" } echo me ``` if our command fails for some reason, our variable will be null. note that this assignment only works for `STDOUT` output. `STDERR` gets dumped to the screen, but is not assigned to the variable. for example, `nginx -V` outputs version data to `STDERR` for some terrible reason, so if we do: ```dart let nginx_version = $nginx -V$ failed { echo "cannot get nginx version" } echo nginx_version ``` our `nginx_version` variable will be null. since variable assignment is done by reading `STDOUT`, if we want to accept user input from bash's `read` command, we have to echo the input. ```dart let user_input = $read input && echo \$input$ failed { echo "error reading" } echo user_input ``` note here that we need to escape the `$` in `$input`. ### a note on variable scope like just about every other programming language, variables in amber live in a scope. local scopes are enclosed in braces, including those that define the blocks for things like `failed` and `unsafe`. for instance, if we assign two variables inside an `unsafe` block, those variables only exist inside that local scope. doing: ```dart unsafe { let me = $whoami$ let here = $pwd$ } echo me ``` will result in that `echo` command erroring with: ```dart ERROR Variable 'me' does not exist ``` we can circumvent this by declaring variables in the global scope and then assigning them in the local scope. 
```dart // declare in global scope let me = "" let here = "" unsafe { // re-assign global variables in local scope me = $whoami$ here = $pwd$ } echo "me is {me}" echo "here is {here}" ``` this example works because the variables are declared globally using the `let` keyword and then given values in `unsafe`'s scope. of course we all know that global variables are bad and should be avoided, and we will look at how to better handle situations like this when we get to functions. ## using variables in commands we can easily use amber variables in shell commands through the miracle of <a href="https://www.codingdrills.com/tutorial/strings-data-structure/string-interpolation">string interpolation</a>. ```dart let filename = "somefile.txt" echo "creating /tmp/{filename}" $touch /tmp/{filename}$ failed { echo "could not create {filename}" } ``` in this example, we took our variable `filename` and put it into the string that is being run as the command by wrapping it in braces. we also used the same technique to include the variable in the string we `echo`ed. ## next up the elegant handling of shell commands and the built-in error handling alone make amber a compelling choice. but, of course, amber does a lot more. the next posts will focus on loops and `if` statements and then functions. > 🔎 this post was originally written in the [grant horwood technical blog](https://gbh.fruitbat.io/2024/06/18/amber-writing-bash-scripts-in-amber-instead-pt-1-commands-and-error-handling/)
gbhorwood
1,892,570
Mixture-of-Agents Enhances Large Language Model Capabilities✨
does not require any fine-tuning and only utilizes the interface for prompting and generation of...
0
2024-06-18T14:35:34
https://dev.to/pratikwayase/mixture-of-agents-enhances-large-language-model-capabilities-1938
llm, datascience, machinelearning, tutorial
Mixture-of-Agents (MoA) does not require any fine-tuning and only uses the prompting and generation interface of LLMs. Since we do not need to concatenate the prompt and all model responses, only one LLM needs to be used in the last layer. ![llm](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nrkxl7p9wygo5u18fdq7.png) ⚡ Design of Mixture-of-Agents: two parts, proposers and aggregators. Proposers: generate useful reference responses for use by other models. While a proposer may not necessarily produce high-scoring responses by itself, it should offer more context and diverse perspectives, ultimately contributing to better final responses when used by an aggregator. Aggregators: synthesize responses from other models into a single, high-quality output. An effective aggregator should maintain or improve output quality even when some of its inputs are of lesser quality than its own. ![model perform](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83fccl0zybvn822hi3by.png) By incorporating more aggregators into the process, we can iteratively synthesize and refine the responses, leveraging the strengths of multiple models to produce superior outcomes. Initially, the LLMs in the first layer, denoted as agents A1,1, ..., A1,n, independently generate responses to a given prompt. These responses are then presented to agents in the next layer (A2,1, ..., A2,n) for further refinement. This iterative refinement process continues for several cycles until a more robust and comprehensive response is obtained. ![evaluation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3vip5t8ixlehilkfd4ag.png) Selecting the LLMs for each MoA layer relies on two primary criteria: Performance metrics: the average win rate of models in layer i plays a significant role in determining their suitability for inclusion in layer i + 1. Diversity considerations: the diversity of model outputs is also crucial; responses generated by heterogeneous models contribute significantly more than those produced by the same model. By leveraging these two criteria, performance and diversity, MoA aims to mitigate individual model deficiencies and enhance overall response quality through collaborative synthesis. ⚡ Limitations: the model cannot decide the first token until the last MoA layer is reached. This potentially results in a high Time to First Token (TTFT), which can negatively impact user experience. To mitigate this issue, we can limit the number of MoA layers, since the first round of response aggregation gives the largest boost in generation quality. #llm #datascience #machinelearning
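The layered flow described above (n proposers per layer, responses fed forward as references, one aggregator at the end) can be sketched in a few lines of Python. Note that `call_llm` here is a hypothetical stub standing in for a real model API client; only the control structure follows the MoA design:

```python
def call_llm(model, prompt, references=None):
    """Hypothetical LLM call; swap in a real API client.
    `references` are the responses from the previous MoA layer."""
    if references:
        context = "\n".join(f"[{i + 1}] {r}" for i, r in enumerate(references))
        prompt = f"Synthesize these candidate answers:\n{context}\n\nQuestion: {prompt}"
    return f"{model}: answer to {prompt!r}"  # stub output for illustration

def mixture_of_agents(prompt, layers, final_aggregator):
    """Each element of `layers` is a list of proposer model names.
    Layer i's responses become references for layer i + 1;
    a single aggregator model then produces the final answer."""
    references = None
    for layer in layers:
        references = [call_llm(m, prompt, references) for m in layer]
    return call_llm(final_aggregator, prompt, references)

answer = mixture_of_agents(
    "Why is the sky blue?",
    layers=[["model-a", "model-b", "model-c"]] * 2,  # two proposer layers
    final_aggregator="model-d",
)
print(answer)
```

Because the final answer is only generated after every proposer layer has finished, this sketch also makes the TTFT limitation above concrete: fewer layers means the aggregator (and its first token) starts sooner.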
pratikwayase
1,892,569
Case Study: How JavaScript Gantt Charts Facilitate Planning
Effective project management hinges on precise planning and efficient execution. Gantt charts have...
0
2024-06-18T14:34:48
https://dev.to/lenormor/case-study-how-javascript-gantt-charts-facilitate-planning-4f7k
webdev, javascript, devops, node
Effective project management hinges on precise planning and efficient execution. Gantt charts have long been a cornerstone for visualizing project schedules, and when combined with the capabilities of JavaScript, they offer unparalleled flexibility and interactivity. In this case study, we will explore how JavaScript Gantt charts, specifically using ScheduleJS, facilitate project planning and enhance overall management processes. ## Introduction to Gantt Charts ![Gantt charts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fxc06pfs2a0cuin163j.png) A Gantt chart is a type of bar chart that illustrates a project schedule. It lays out the start and finish dates of the various elements of a project, providing a visual timeline that helps project managers understand the sequence of tasks and their interdependencies. By using JavaScript for Gantt charts, these visual tools become dynamic, interactive, and easy to integrate into web applications. ## Why Use JavaScript for Gantt Charts? - **Real-Time Updates and Interactivity** JavaScript-powered Gantt charts can update in real-time, which is crucial for adapting to changes in project conditions. This interactivity allows project managers to make adjustments on the fly, ensuring that the schedule remains accurate and relevant. ScheduleJS excels in providing this level of interactivity, making it an ideal choice for modern project management. - **Seamless Integration** JavaScript, the backbone of web development, ensures that Gantt charts can be easily embedded into web applications. Tools like ScheduleJS allow for seamless integration, ensuring that project data is always accessible and up-to-date within the project's management system. - **Customization and Flexibility** With JavaScript, Gantt charts can be highly customized to meet specific project needs. 
From color schemes and layout modifications to adding custom functionalities, JavaScript libraries like ScheduleJS provide the flexibility needed to create detailed and specific charts. - **Enhanced User Experience** JavaScript-based Gantt charts provide a responsive and smooth user experience. Users can interact directly with the chart, dragging and dropping tasks, zooming in and out of timelines, and more. This interactivity makes the tool intuitive and engaging, improving user adoption and efficiency. ## ScheduleJS: A Versatile Tool for Gantt Charts ScheduleJS is a powerful JavaScript library designed to create interactive and customizable Gantt charts. It offers a range of features that make it a preferred choice for project managers and developers alike. - **User-Friendly Interface** ScheduleJS boasts a user-friendly interface that simplifies the creation and management of Gantt charts. Its intuitive design allows users to quickly add, modify, and remove tasks, making it accessible even to those with limited technical expertise. - **Advanced Scheduling Capabilities** ScheduleJS supports advanced scheduling features such as dependencies, milestones, and constraints. Users can set task priorities, allocate resources, and visualize critical paths, ensuring comprehensive project planning. - **Real-Time Collaboration** ScheduleJS facilitates real-time collaboration among team members. Changes made by one user are instantly reflected across all instances of the Gantt chart, ensuring that everyone is on the same page. This feature is particularly useful for distributed teams working in different locations. - **Integration with Other Tools** ScheduleJS can be easily integrated with other project management tools and systems. Whether using a CRM, ERP, or other software, ScheduleJS can connect seamlessly, pulling and pushing data as needed. This enhances overall project management workflows. - **Scalability** ScheduleJS is designed to scale with your project's needs. 
It can handle a large number of tasks and dependencies without compromising performance, making it suitable for projects of all sizes. - **Customizable Themes** ScheduleJS allows extensive customization of themes, enabling users to align the Gantt charts with their brand identity. From colors and fonts to the overall layout, every aspect of the chart can be tailored to match the organization’s visual standards. ## Case Study: Implementing JavaScript Gantt Charts with ScheduleJS **Project Overview** Let's consider a case study of a software development company, TechSolutions Inc., which needed a robust project management tool to manage its various software development projects. The company decided to implement JavaScript Gantt charts using ScheduleJS to streamline its project planning and execution processes. **Website:** [ScheduleJS](https://schedulejs.com/) ![ScheduleJS JS Gantt Charts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5on3st14yp7sem7cr3xq.png) ## Initial Challenges TechSolutions Inc. faced several challenges: - Complex Project Dependencies: Managing interdependent tasks across multiple projects was cumbersome. - Resource Allocation: Efficiently allocating resources to avoid overloading team members was difficult. - Real-Time Updates: Keeping project schedules up-to-date with real-time changes was a major concern. - Integration Needs: The new tool needed to integrate seamlessly with existing project management software. ## Implementing ScheduleJS The company chose ScheduleJS for its robust features and ease of integration. Here's how they implemented and benefited from ScheduleJS: **Integration with Existing Systems** ScheduleJS was integrated into TechSolutions' existing project management software. This allowed project managers to import existing project data and start using the Gantt charts without any disruptions. The seamless integration ensured that all project data was synchronized and accessible in real-time. 
**Creating and Managing Gantt Charts** Project managers used ScheduleJS to create detailed Gantt charts for each project. They could easily add tasks, set start and end dates, define dependencies, and allocate resources. The intuitive interface made it simple for the team to adopt the new tool quickly. **Real-Time Updates and Collaboration** With ScheduleJS, any changes made to the project schedule were reflected in real-time. This feature was particularly beneficial for the development teams spread across different locations. Team members could collaborate effectively, knowing that they were always looking at the most up-to-date information. **Customization for Specific Needs** TechSolutions customized the Gantt charts to align with their branding and specific project requirements. They adjusted the color schemes, added company logos, and customized the layout to make the charts not only functional but also visually appealing. **Outcomes and Benefits** The implementation of ScheduleJS brought about significant improvements in project management at TechSolutions Inc.: **Improved Planning and Scheduling** The Gantt charts provided a clear visual representation of project timelines and dependencies. This helped project managers plan more effectively and anticipate potential bottlenecks. The ability to visualize the entire project at a glance made it easier to allocate resources and prioritize tasks. **Enhanced Collaboration** The real-time collaboration feature of ScheduleJS ensured that all team members were on the same page. This led to better communication and coordination among team members, reducing misunderstandings and increasing overall productivity. **Efficient Resource Management** With a clear view of all tasks and their dependencies, project managers could allocate resources more efficiently. They could identify when specific team members were overloaded and reassign tasks to balance the workload. 
This resulted in a more even distribution of work and reduced burnout among employees. **Real-Time Adaptability** The ability to make real-time updates to the Gantt charts allowed TechSolutions to adapt quickly to changes in project conditions. Whether it was a change in client requirements or an unexpected delay, the team could update the schedule and adjust their plans accordingly. This flexibility was crucial for keeping projects on track and meeting deadlines. **Increased Transparency and Accountability** The detailed Gantt charts made it easy to track the progress of each task and identify any delays. This increased transparency helped in holding team members accountable for their tasks and ensured that everyone was aware of their responsibilities and deadlines. ## Best Practices for Implementing JavaScript Gantt Charts To maximize the benefits of JavaScript Gantt charts, it is essential to follow some best practices: ![JavaScript](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/an4ts1xbrn15smhzadvi.png) **Define Clear Objectives** Before implementing a Gantt chart, define clear project objectives. Understand what you aim to achieve with the chart and how it will be used to facilitate project management. **Choose the Right Tool** Selecting the right JavaScript library is crucial for the success of your Gantt chart implementation. ScheduleJS offers a range of features and customization options that can be tailored to meet your specific needs. **Keep it Simple** While customization is important, it is equally important to keep the Gantt chart simple and easy to understand. Avoid cluttering the chart with too much information, and focus on the key tasks and milestones. **Regularly Update the Chart** A Gantt chart is only useful if it is up-to-date. Ensure that the chart is regularly updated to reflect the current status of the project. This may involve adding new tasks, adjusting timelines, and marking completed tasks. 
**Involve the Team** Involving the project team in the creation and maintenance of the Gantt chart can improve accuracy and buy-in. Encourage team members to update their task statuses and provide feedback on the chart. ## Conclusion Gantt charts are a powerful tool for project management, offering a clear visual representation of project timelines and dependencies. Leveraging JavaScript to create these charts enhances their interactivity, customization, and integration capabilities. Tools like ScheduleJS make it easier than ever to implement and manage Gantt charts, providing a robust solution for modern project management needs. **Website:** [ScheduleJS](https://schedulejs.com/)
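The core idea behind the features described above (tasks with durations and finish-to-start dependencies) can be sketched in a few lines of library-agnostic JavaScript. This is an illustration only, not the ScheduleJS API; the task names and durations are invented:

```javascript
// Library-agnostic sketch of a Gantt task model: compute each task's
// earliest start day from its duration and finish-to-start dependencies.
const tasks = {
  design:   { duration: 3, dependsOn: [] },
  backend:  { duration: 5, dependsOn: ['design'] },
  frontend: { duration: 4, dependsOn: ['design'] },
  qa:       { duration: 2, dependsOn: ['backend', 'frontend'] },
};

function earliestStart(name, memo = {}) {
  if (name in memo) return memo[name];
  const deps = tasks[name].dependsOn;
  // A task starts when its latest dependency finishes (day 0 if it has none).
  const start = deps.length === 0
    ? 0
    : Math.max(...deps.map(d => earliestStart(d, memo) + tasks[d].duration));
  memo[name] = start;
  return start;
}

console.log(earliestStart('qa')); // 8: design (0-3) -> backend (3-8) -> qa
```

Gantt libraries layer rendering, resource allocation, and constraint handling on top of this basic scheduling model.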
lenormor
1,849,510
Which E2E Tool to Use with React?
When it comes to testing the end-to-end (E2E) integration of web applications, choosing the right...
0
2024-06-18T14:34:04
https://dev.to/razxssd/which-e2e-tool-to-use-with-react-5173
react, playwright, testing, javascript
When it comes to testing the end-to-end (E2E) integration of web applications, choosing the right tools is crucial to ensure comprehensive test coverage and optimal software quality. I found this sentence online and it seems the most fitting for our context. > "If your test is difficult to write, your design is bad" - Random Software Engineer That's why I decided to take a closer look at some of the main E2E tools available on the market and their characteristics. The entire analysis will be based on their compatibility with React. --- After some research, the tools I found most commonly used were: - Playwright - Cypress - SauceLabs - Testim - Puppeteer + Jest ![e2e testing tools](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8k3813qm8pna3mj6jd0a.png) ## Playwright Playwright is an open-source automation library that allows testing web applications across different browsers and devices. It works well with React and offers a wide range of features for automating tests. ### Strengths - **Multi-browser and multi-device**: supports testing on a wide range of browsers and devices, including Chrome, Firefox, Safari, and mobile devices like iPhone and Android. - **Execution speed**: it is known for fast execution thanks to its efficient automation engine. - **Powerful API**: It offers a powerful and flexible API that allows writing complex, high-quality tests. - **Snapshot testing**: Supports snapshot testing for visual comparison of user interfaces. - **Automated regression testing**: Playwright's automated tests can be integrated into the CI/CD pipeline to check that new code has not broken existing functionality. - **Parallel testing for faster execution**: It can run multiple tests simultaneously to reduce execution time. ### Weaknesses - **Steep learning curve**: The learning curve might be steep for new users due to its flexibility and power. 
- **Non-web applications**: Playwright is specifically designed for web applications; for native desktop or native mobile application testing, other tools would be more suitable. - **Limited resources / simple projects**: For simple web projects, the overhead of setting up and maintaining Playwright might not be justified. ## Cypress Cypress is a modern end-to-end testing framework designed specifically for web applications. It offers a comprehensive set of features for writing, running, and debugging tests in real-time, including the ability to perform mocking. ### Strengths - **Ease of use**: Cypress is known for its ease of use and simplicity in configuring and writing tests. - **Real-time testing**: It offers the ability to see the test in real-time during execution. - **Simple debugging**: It has built-in features for debugging tests, making it easier for developers to identify and fix issues. - **Mocking and stubbing**: Cypress offers built-in features for mocking and stubbing network requests. Using the **cy.request()** command, Cypress isolates API interactions, facilitating a clearer separation between UI and API interactions. ### Weaknesses - **Limited browser support**: Currently, browser support is limited to Chrome and Firefox. - **Limited network control**: Cypress's **cy.request()** focuses on response mimicking and does not provide detailed network interaction control. ## SauceLabs Sauce Labs is a cloud-based platform for automated testing of web and mobile applications. It offers a wide range of features for running tests in parallel, testing on different browsers and devices, and integrating with continuous integration (CI) tools. ### Strengths - **Cloud testing**: Sauce Labs offers the ability to run tests on a wide range of devices and browsers in the cloud. - **Compatibility**: It offers excellent compatibility with a wide range of browsers and devices, including mobile devices. 
- **Parallel and distributed**: It allows parallel and distributed execution of tests, reducing overall execution times. - **Integration with CI/CD**: It is well integrated with continuous integration (CI) and continuous deployment (CD) tools. ### Weaknesses - **Cost**: It may be costly compared to other solutions, especially if running tests on a large number of browser/device configurations. - **Complex setup**: Initial setup may be complex due to its cloud-based nature. ## Testim Testim is an AI-driven testing platform that automates the creation, execution, and maintenance of tests. It uses AI to analyze application behavior and generate test scripts automatically. ### Strengths - **AI-driven testing**: Testim uses artificial intelligence to help create, execute, and maintain tests automatically. - **Ease of use**: It is designed to be easy to use, with an intuitive interface that allows writing tests quickly. - **Automatic test creation**: It can automatically record user actions and generate tests based on these actions. - **Test versioning**: It offers built-in test versioning, allowing tracking of test changes and iterations. ### Weaknesses - **Limited manual control**: Since it is AI-driven, it may not offer the desired level of manual control for some use cases, making it difficult to customize. - **Cost**: Testim may also be costly, especially for smaller companies or projects with limited budgets. ## Puppeteer & Jest Puppeteer is a Node.js library developed by Google that provides a high-level API to control headless Chrome or Chromium instances. It is suitable for testing React applications, especially in end-to-end testing scenarios that require interaction with the browser's DOM. ### Strengths - **Full control over the browser**: It provides a high-level API that allows full control over headless Chrome or Chromium instances. This enables you to perform a wide range of tasks, including navigation, interaction with the DOM, form submissions, and more. 
- **Support for modern web features**: supports the latest web standards and features, making it suitable for testing modern web applications built with technologies like React, Angular, Vue.js, etc. - **Integration with Chrome DevTools** - **Headless mode**: can run Chrome or Chromium in headless mode, which means it operates without a graphical user interface. This makes it ideal for running tests in headless environments such as continuous integration (CI) pipelines. ### Weaknesses - **Limited cross-browser support**: it is primarily designed for testing with Chrome or Chromium browsers. While it can be used with other browsers via the WebDriver protocol, support for non-Chromium browsers may be limited. - **Resource-intensive**: it requires the installation of a full Chrome or Chromium browser, which can consume significant system resources, especially when running multiple instances in parallel. - **Steep learning curve**: Puppeteer's API can be complex, especially for beginners. Users may need to invest time in learning how to use Puppeteer effectively for their testing needs. --- In the next article, we will compare two of these technologies to see which one might be the better choice for our context. I hope the article was helpful. Let me know if you know of other similar tools or if you use one different from those mentioned :) --- ## End I post about React once a week and share it via LinkedIn. Follow me for more information and tips. LinkedIn: https://www.linkedin.com/in/eduardcapanu/ X (Twitter): https://twitter.com/capanu2
razxssd
1,892,568
Introduction to CSS: What CSS Is, Why We Use CSS, and How CSS Describes HTML Elements
What is CSS? CSS stands for Cascading Style Sheets CSS describes how HTML elements are to...
0
2024-06-18T14:33:54
https://dev.to/wasifali/introduction-of-css-what-is-css-why-we-use-css-and-how-css-describe-the-html-elements-1nb6
webdev, css, learning, html
## **What is CSS?** - CSS stands for Cascading Style Sheets - CSS describes how HTML elements are to be displayed on screen, paper, or in other media - CSS saves a lot of work: it can control the layout of multiple web pages all at once - External style sheets are stored in CSS files ## **Why We Use CSS?** CSS is used to define styles for your web pages, including the design, layout, and variations in display for different devices and screen sizes. ## **Example** ```CSS body { background-color: lightblue; } h1 { color: white; text-align: center; } p { font-family: verdana; font-size: 20px; } ``` ## **CSS Solved a Big Problem** HTML was never intended to contain tags for formatting a web page! HTML was created to describe the content of a web page, like: `<h1>This is a heading</h1>` `<p>This is a paragraph.</p>` ## **CSS Saves a Lot of Work!** The style definitions are normally saved in external .css files. ## **CSS Syntax** A CSS rule consists of a selector and a declaration block: `h1 { color: blue; font-size: 12px; }` The selector points to the HTML element you want to style. The declaration block contains one or more declarations separated by semicolons. Each declaration includes a CSS property name and a value, separated by a colon. ## **Example** ```CSS p { color: red; text-align: center; } ``` ## **Example Explained** `p` is a selector in CSS (it points to the HTML element you want to style: `<p>`). `color` is a property, and `red` is its value. `text-align` is a property, and `center` is its value. ## **CSS Selectors** CSS selectors are used to "find" (or select) the HTML elements you want to style. 
We can divide CSS selectors into five categories: - Simple selectors (select elements based on name, id, class) - Combinator selectors (select elements based on a specific relationship between them) - Pseudo-class selectors (select elements based on a certain state) - Pseudo-element selectors (select and style a part of an element) - Attribute selectors (select elements based on an attribute or attribute value) ## **The CSS Element Selector** The element selector selects HTML elements based on the element name. ## **Example** ```CSS p { text-align: center; color: red; } ```
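The five selector categories listed above can each be illustrated with a short rule. The ids, class names, and property values below are invented for illustration:

```css
/* Simple selectors: id and class */
#banner { background: navy; }
.note { font-style: italic; }

/* Combinator selector: descendant (li inside ul.menu) */
ul.menu li { list-style: none; }

/* Pseudo-class selector: element in a certain state */
a:hover { text-decoration: underline; }

/* Pseudo-element selector: style a part of an element */
p::first-line { font-weight: bold; }

/* Attribute selector: match on an attribute value */
input[type="text"] { border: 1px solid gray; }
```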
wasifali
1,892,567
Performance: Lighthouse and PageSpeed Insights values are different
There are two types of tools for web performance testing: 1. Lab tools...
0
2024-06-18T14:33:49
https://dev.to/marianocodes/por-que-los-valores-de-lighthouse-son-diferentes-a-los-de-pagespeed-insights-7f6
webdev, lighthouse, performance, corevitals
There are two types of tools for running web performance tests: **1. Lab tools** They run under specific conditions, for example: • They don't take the cache, bfCache, or AMP into account • Controlled environments (geography, device, speed) • Useful during the development stage, before production • They are stable; the score is generally the same even if you run the tool several times **2. Field tools** They use real-world data to compare results and determine the score. The data can be collected manually or by using the Chrome UX Report 🧪 Lighthouse in Chrome DevTools works as a lab tool. You can tell because the conditions under which the tests were run appear at the end of the report, for example: • Initial page load • Slow 4G throttling • Single page session • Using Chromium 125.0.0.0 with devtools • Emulated Moto G Power with Lighthouse 11.7.1 ⚡️ PageSpeed Insights, on the other hand, provides similar information, but you can see that its tests run under different conditions: • Full visit durations • All Chrome versions • Various mobile devices • Various network connections • Latest 28-day collection period • Many samples (Chrome UX Report) So don't worry: it's fine for the values to be different, you just have to understand why. --- **I'll help you become a better Web Developer. Click + [Mariano Alvarez](https://www.linkedin.com/in/marianocodes/) + Follow + 🔔** --- Like if you enjoyed this content ❤️
marianocodes
1,892,566
vavacasi
Want to test your luck and land big wins? Then Вавада is for you! Our casino offers...
0
2024-06-18T14:32:48
https://dev.to/vavacasi/vavacasi-11df
Want to test your luck and land big wins? Then head to [Вавада](https://vavacasi.com/)! Our casino offers a unique selection of games, including the most popular slots and table games. Вавада is known for its generous bonus policy and constant promotions that help increase your chances of winning. The site's convenient interface and round-the-clock support make playing even more enjoyable. Join Вавада and start winning today – your path to success starts here!
vavacasi
1,892,565
Hey Dev.to Community!
Excited to dive into this vibrant community of developers! I'm new here and eager to share insights,...
0
2024-06-18T14:30:47
https://dev.to/kampala/hey-devto-community-2i91
Excited to dive into this vibrant community of developers! I'm new here and eager to share insights, learn from your expertise, and contribute wherever I can. Let's connect, collaborate, and code together! #introduction #developer #community
kampala
1,892,456
Whatsapp Webhook Setup and Nodejs Code Example
Solution If you are getting a validation error the problem is not with ngrok, localtunnel...
0
2024-06-18T13:56:43
https://dev.to/greggcbs/whatsapp-webhook-full-setup-example-55lb
whatsapp, api, dashboard, node
## Solution If you are getting a validation error the problem is not with ngrok, localtunnel or whatever proxy you are using, it's with your webhook code. Your webhook needs to be both a **POST** and a **GET** because the Meta dashboard does a GET for URL validation, but when the webhook is fired it does POSTs. Here is the webhook code: ```js // this is a GET and POST endpoint async whatsapp(ctx) { const token = "secret"; // same secret you type in the secret input in the whatsapp dashboard // these params are sent when you do the // callback url validation in the dashboard const mode = ctx.request.query['hub.mode']; const challenge = ctx.request.query['hub.challenge']; const verify_token = ctx.request.query['hub.verify_token']; // confirms with whatsapp that the webhook/callback url is good (only happens once) if (mode === "subscribe" && verify_token === token) { return ctx.send(challenge, 200); } // handle webhook post data (WebhookObject type taken from the repo linked below) const body = ctx.request.body; // do some stuff here // then reply to whatsapp with status 200 otherwise it will // repeatedly send the webhook for the same messages ctx.send(200) } ``` ## Types If you want the types for what is returned from the webhook then copy and paste them from this repo; the author did a great job with the typings, although it is a little outdated: https://github.com/WhatsApp/WhatsApp-Nodejs-SDK/blob/main/src/types/webhooks.ts ## Error (you may have encountered) "The callback URL or verify token couldn't be validated. Please verify the provided information or try again later." ![whatsapp webhook error](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xedthvxcz6exmlehvu4p.png) Once your code is correct and your webhook endpoint is set to GET and POST, your webhook will verify. 
## Fun tip (VS Code built-in proxy) In VS Code there is a tab in your terminal/console panel that says 'Ports'. Click on it, then click 'Add Port' and type in your server port. It creates a tunnel/proxy for you the same as ngrok or localtunnel would: ![vscode built in proxy tunnel 'ports'](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/64jxb70lg88q7a7j5c9c.png)
greggcbs
1,891,959
Build a Portable AI Companion with Discord’s User-Installable Apps (GPT 4o)
Discord User-Installable Apps are here, which means you can bring Discord Bots with you wherever you...
0
2024-06-18T14:30:00
https://dev.to/waveplay/build-a-portable-ai-companion-with-discords-user-installable-apps-gpt-4o-5f20
discord, node, programming, javascript
Discord **[User-Installable Apps](https://discord.com/developers/docs/change-log#userinstallable-apps-preview)** are here, which means you can bring **[Discord Bots](https://docs.roboplay.dev/discord-bots/getting-started)** with you wherever you go - DMs and even other servers your bot isn't in. Best of all, they're not that different from regular bots! So, what if you could have an AI companion that's always there to help you out, tell you jokes, and even give you advice? We'll be using **[Discord.js](https://discord.js.org/)** and **[Robo.js](https://roboplay.dev/docs)** to create one. Beginners are welcome! ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k2ibjaqnlx8gtnlj3d43.png) Exciting, right? Let's get started. ## Creating Your Bot Project Make sure you have **[Node.js](https://nodejs.org/)** and an editor like **[VS Code](https://code.visualstudio.com/)** installed then run the following in your **[terminal](https://www.freecodecamp.org/news/command-line-for-beginners/)**: ```bash npx create-robo my-ai-companion -k bot ``` This will create a new **Discord.js** project ready to use as a **Discord Bot**, made easier with the **Robo.js** framework. We'll be opting out of **[TypeScript](https://docs.roboplay.dev/robojs/typescript)** to keep things simple. ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jasmcp2d8o4wmmfgnm8w.png) {% embed https://dev.to/waveplay/how-to-build-a-discord-activity-easily-with-robojs-5bng %} ## Adding AI Capabilities Now that we have a shiny new bot, let's give it some AI capabilities! Normally, you'd have to handle commands, events, and AI logic yourself, but with **Robo.js**, you can use plugins to get it instantly. Install the **[@robojs/ai](https://docs.roboplay.dev/plugins/ai)** plugin using your **terminal**: ```bash npx robo add @robojs/ai ``` **Ta-dah!** Your bot can now chat with users! ✨ ... at least, it will once you get an OpenAI key. 
**[Sign up for OpenAI](https://platform.openai.com/signup)** and get your **[API key](https://platform.openai.com/api-keys)**. Once you have it, add it to your project's `.env` file: ```env OPENAI_API_KEY="your-api-key-here" ``` You now have the `/chat` and `/imagine` slash commands from **@robojs/ai**. ## Making Your App User-Installable User installable apps are new, so let's enable this feature in our project's configuration. Open the `robo.mjs` file in the `config` folder and add the following: ```js export default { // ... rest of config experimental: { userInstall: true } } ``` This will tell **Robo.js** to register all commands as user installable. Next, enable the "User Install" authorization method for this app in the **[Discord Developer Portal](https://discord.com/developers/applications)** and give it the `application.commands` scope. Oh, and make sure the "Public Bot" setting is enabled. You can share the **Discord Provided Link** to let users install this app. ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0etr52w3691tlhd2yp5t.png) ## Trying It Out Use the **Discord Provided Link** to install the app to your account then run your project in development mode: ```bash npm run dev ``` Go on a test server or DM someone and try out the `/chat` command. Your AI companion should respond to you! ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pufuow6b47vc0vxcmmj1.png) Wanna personalize your AI companion? Open the `ai.mjs` file in the `/config/plugins/robojs` folder and change the `systemMessage` to something you like better. You can change the `model` to something smarter like `gpt-4o`, or even feed it documents to learn from. ```js export default { model: 'gpt-4o', systemMessage: 'You are Darth Vader. Respond like a Sith Lord.' } ``` Learn more in the **[@robojs/ai documentation](https://docs.roboplay.dev/plugins/ai)**. 
![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/frf3sd0y0sx8pei6aeg8.png) ## Creating a Context Command Talking to your AI is fun and all, but what if you want it to respond to other messages? **User-Installable Apps** support **[Context Commands](https://docs.roboplay.dev/discord-bots/commands)** too! They're different from **[Slash Commands](https://docs.roboplay.dev/discord-bots/context-menu)** as they take the context of the message into account. Let's create a context command called `Reply` that responds to any message with "I have no idea what you're talking about" as a placeholder for now. Create a new file in the `/src/context/message` folder called `Reply.js`. Remember to create the folder if it doesn't exist. ```js export default (interaction) => { interaction.reply("I have no idea what you're talking about") } ``` Right-click on the message you want to reply to, look for "Apps", and click on the `Reply` context command. ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijndol3597gyp32svwzn.png) Now let's replace the placeholder with a more meaningful response. **@robojs/ai** exports an `AI` object to get responses programmatically. Update `Reply.js` to use it: ```js import { AI } from '@robojs/ai' export default async (_interaction, message) => { const response = await AI.chatSync([ { content: message.content, role: 'user' } ], {}) return response.text ?? "I don't know what to say" } ``` Now run the `Reply` context command on a message again and watch your AI companion respond! ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sqi7098o4hexk1mrc87u.png) **Why does it work, you ask?** You may have noticed that our bot was able to access `message.content` despite the "Message Content" intent being disabled. This is because **Context Commands** have access to message content by default. The `interaction` object is also optional when using **Robo.js**. 
You can simply return the result instead of using `interaction.reply`. Replies are even automatically deferred when your default export is `async`. ## Enjoy Your New Companion! Now that you have your own User-Installable AI companion, you can take it with you wherever you go on Discord. You can even share it with your friends! You can find the full code for this project, written in **TypeScript**, as a **[Robo.js Template](https://docs.roboplay.dev/templates/overview)**. ➞ [🔗 **Template:** Purrth Vader AI](https://github.com/Wave-Play/robo.js/tree/main/templates/discord-bots/purrth-vader) Check out the **[Robo.js documentation](https://roboplay.dev/docs)** for more cool features and plugins. You can even use it to **[build Discord Activities in seconds](https://dev.to/waveplay/how-to-build-a-discord-activity-easily-with-robojs-5bng)**! Don't forget to **[join our Discord server](https://roboplay.dev/discord)** to chat with other developers, ask questions, and share your projects. We're here to help you build amazing apps with **Robo.js**! 🚀 {% embed https://roboplay.dev/discord %} Our very own Robo, Sage, is there to answer any questions about Robo.js, Discord.js, and more.
waveplay-staff
1,892,564
Discovering AWS DeepRacer: Advice from Employers of the AWS Tech Alliance.
This competition challenges students to train a reinforcement learning model that...
25,290
2024-06-18T14:28:54
https://dev.to/aws-espanol/descubriendo-aws-deepracer-consejos-de-empleadores-de-la-alianza-tech-de-aws-1kep
alianzatechskills2jobs, deepracer
--- series: Descubriendo AWS DeepRacer --- This competition challenges students to train a reinforcement learning model that drives an autonomous racing vehicle in the cloud. Participants use the same vehicle and track configuration, and can only modify the reinforcement learning algorithm. **1. AWS DeepRacer Student League 2024: recommendations and benefits by NTT DATA** Javier Santana, Expert Architect - Digital Architecture, {% embed https://www.youtube.com/watch?v=o1zXRH99nz0 %} As part of NTT DATA's participation in the AWS Tech Alliance (Skills to Jobs Tech Alliance), **Javier Santana** shares tips and recommendations for participating in the AWS DeepRacer competition, as well as its usefulness and application in the professional world. Javier shares recommendations for optimizing the reward function and creating an ideal model. He also discusses the benefits of AWS DeepRacer beyond the competition and examines the relationship between AWS DeepRacer, reinforcement learning, and deep learning. In particular, he emphasizes the importance of using a reward function to maximize learning and of balancing training time against effective learning. He also discusses strategies for optimizing performance in the autonomous racing competition, such as staying on the track and close to the center line, encouraging the car to complete a full lap, and adjusting speed based on the curvature of the track. The session also highlights the importance of skills related to robotics, automation, and artificial intelligence, which are in growing demand. It also covers machine learning concepts such as reinforcement learning and presents different algorithms, such as PPO and SAC, for DeepRacer models. 
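DeepRacer reward functions themselves are written in Python. A minimal sketch of the "stay on the track, close to the center line" strategy mentioned above (parameter names follow the DeepRacer `params` dictionary; the band thresholds are illustrative):

```python
def reward_function(params):
    """Reward staying on the track and near the center line."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward when off the track

    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line: the closer, the higher the reward
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3  # likely about to leave the track

# Example evaluation with made-up parameter values
print(reward_function({
    "all_wheels_on_track": True,
    "track_width": 1.0,
    "distance_from_center": 0.05,
}))  # 1.0
```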
Finally, he explores the use of generative AI in track analysis and optimal racing lines, along with its broader applications, showcasing the educational benefits and career opportunities in technology. Below are links to entry-level junior positions at **NTT DATA:** https://careers.emeal.nttdata.com/s/offer/a1J2p000009dDxyEAE/aws?language=es Join the AWS developer community to learn more about cloud and technology: https://dev.to/aws-espanol/impulsa-tu-carrera-unete-a-la-comunidad-de-desarrolladores-de-aws-en-iberia-user-groups-h0m
iaasgeek
1,892,091
Debugging Kubernetes - Troubleshooting Guide
Identifying Configuration Issues Common Causes and Solutions Detailed Investigation...
20,817
2024-06-18T14:26:44
https://debugagent.com/debugging-kubernetes-troubleshooting-guide
kubernetes, devops, development, tutorial
- [Identifying Configuration Issues](#identifying-configuration-issues) * [Common Causes and Solutions](#common-causes-and-solutions) * [Detailed Investigation Steps](#detailed-investigation-steps) - [Dealing with Image Pull Errors](#dealing-with-image-pull-errors) * [Troubleshooting Steps](#troubleshooting-steps) - [Handling Node Issues](#handling-node-issues) * [Preventive Measures](#preventive-measures) - [Managing Missing Configuration Keys or Secrets](#managing-missing-configuration-keys-or-secrets) - [Utilizing Buildg for Interactive Debugging](#utilizing-buildg-for-interactive-debugging) - [Conclusion](#conclusion) As Kubernetes continues to revolutionize the way we manage and deploy applications, understanding its intricacies becomes essential for developers and operations teams alike. If you don't have a dedicated DevOps team you probably shouldn't be working with Kubernetes. Despite that, in some cases a DevOps engineer might not be available while we're debugging an issue. For these situations, and for general familiarity, we should still familiarize ourselves with common Kubernetes issues to bridge the gap between development and operations. I think this also provides an important skill that helps us understand the work of DevOps better; with that understanding, we can improve as a cohesive team. This guide explores prevalent Kubernetes errors and provides troubleshooting tips to help developers navigate the complex landscape of container orchestration. {% embed https://youtu.be/Q3cy8i4tsyQ %} As a side note, if you like the content of this and the other posts in this series check out my [Debugging book](https://www.amazon.com/dp/1484290410/) that covers this subject. 
If you have friends that are learning to code I'd appreciate a reference to my [Java Basics book](https://www.amazon.com/Java-Basics-Practical-Introduction-Full-Stack-ebook/dp/B0CCPGZ8W1/). If you want to get back to Java after a while check out my [Java 8 to 21 book](https://www.amazon.com/Java-21-Explore-cutting-edge-features/dp/9355513925/). ## Identifying Configuration Issues When you encounter configuration issues in Kubernetes, the first place to check is the status column using the `kubectl get pods` command. Common errors manifest here, requiring further inspection with `kubectl describe pod`. ```bash $ kubectl get pods NAME READY STATUS RESTARTS AGE my-first-pod-id-xxxx 1/1 Running 0 13s my-second-pod-id-xxxx 1/1 Running 0 13s ``` ### Common Causes and Solutions **Insufficient Resources**: Notice that this means resources for the pod itself and not resources within the container; the hardware or surrounding VM is hitting a limit. **Symptom**: Pods fail to schedule due to resource constraints. **Solution**: Scale up the cluster by adding more nodes to accommodate the resource requirements. **Volume Mounting Failures**: **Symptom**: Pods cannot mount volumes correctly. **Solution**: Ensure storage is defined accurately in the pod specification and check the storage class and Persistent Volume (PV) configurations. ### Detailed Investigation Steps We can use `kubectl describe pod`: this command provides a detailed description of the pod, including events that have occurred. By examining these events, we can pinpoint the exact cause of the issue. Another important step is resource quota analysis. Sometimes, resource constraints are due to namespace-level resource quotas. Use `kubectl get resourcequotas` to check if quotas are limiting pod creation. ## Dealing with Image Pull Errors Errors like `ErrImagePull` or `ImagePullBackOff` indicate issues with fetching container images. 
These errors are typically related to image availability or access permissions. ### Troubleshooting Steps The first step is checking the image name, which we can do with the following command: ```bash docker pull <image-name> ``` We then need to verify the image name for typos or invalid characters. I pipe the command through grep to verify the name is 100% identical; some typos are just notoriously hard to spot. Credentials can also be a major pitfall, e.g. an authorization failure when pulling images from private repositories. We must ensure that Docker registry credentials are correctly configured in Kubernetes secrets. Network configuration should also be reviewed. Ensure that the Kubernetes nodes have network access to the Docker registry. Network policies or firewall rules might block access. There are quite a few additional pitfalls, such as problems with image tags. Ensure you are using the correct image tags. Latest tags might not always point to the expected image version. If you're using a private registry you might be experiencing access issues. Make sure your credentials are up to date and the registry is accessible from all nodes in all regions. ## Handling Node Issues Node-related errors often point to physical or virtual machine issues. These issues can disrupt the normal operation of the Kubernetes cluster and need prompt attention. To check node status use the command: ```bash kubectl get nodes ``` We can then identify problematic nodes in the resulting output. It's a cliché, but sometimes rebooting nodes is the best solution to some problems. We can reboot the affected machine or VM, and Kubernetes should attempt to "self-heal" and recover within a few minutes. To investigate node conditions we can use the command: ```bash kubectl describe node <node-name> ``` We should look for conditions such as `MemoryPressure`, `DiskPressure`, or `NetworkUnavailable`. These conditions provide clues about the underlying issue we should address in the node. 
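The exact-name check mentioned above (piping through grep) can be scripted. A minimal sketch, with invented image names for illustration: `grep -qxF` does a quiet, whole-line, fixed-string comparison, so even a one-character typo is flagged.

```shell
#!/bin/sh
# Returns success only when the two image references are 100% identical.
# -q: quiet, -x: match the whole line, -F: fixed string (no regex), --: end of options.
image_matches() {
  printf '%s\n' "$2" | grep -qxF -- "$1"
}

# Hypothetical values; in practice the second one would come from the pod spec,
# e.g. kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].image}'
expected="registry.example.com/myapp:1.4.2"
actual="registry.example.com/myapp:1.4.Z"   # deliberate one-character typo

if image_matches "$expected" "$actual"; then
  echo "image names match"
else
  echo "MISMATCH: expected '$expected' but found '$actual'"
fi
```

Running the sketch reports the mismatch, which is exactly the kind of typo that is hard to spot by eye.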
### Preventive Measures Node monitoring with tools such as Prometheus and Grafana should be used to keep an eye on node health and performance. These work great for low-level Kubernetes-related issues, and we can also use them for high-level application issues. There are automated healing tools, such as the Kubernetes Cluster Autoscaler, that we can leverage to automatically manage the number of nodes in your cluster based on workload demands. Personally, I'm not a huge fan, as I'm afraid of a cascading failure that would trigger additional resource consumption. ## Managing Missing Configuration Keys or Secrets Missing configuration keys or secrets are common issues that disrupt Kubernetes deployments. Proper management of these elements is crucial for smooth operation. We need to use ConfigMaps and Secrets, which let us store configuration values and sensitive information securely. To avoid missing-key errors, ensure that ConfigMaps and Secrets are correctly referenced in your pod specifications. Inspect pod descriptions using the command: ```bash kubectl describe pod <pod-name> ``` Review the output, look for missing configuration details, and rectify any misconfigurations. ConfigMap and Secret creation can be verified using the commands: ```bash kubectl get configmaps ``` and: ```bash kubectl get secrets ``` Ensure that the required ConfigMaps and Secrets exist in the namespace and contain the expected data. It's best to keep non-sensitive parts of ConfigMaps in version control while excluding Secrets for security. Furthermore, you should use different ConfigMaps and Secrets for different environments (development, staging, production) to avoid configuration leaks. ## Utilizing Buildg for Interactive Debugging Buildg is a relatively new tool that enhances the debugging process for Docker configurations by allowing interactive debugging. It provides interactive debugging for configuration issues in a way that's similar to a standard debugger. 
It lets us step through the `Dockerfile` stages and set breakpoints. Buildg is compatible with VSCode and other IDEs via the Debug Adapter Protocol (DAP). Buildg lets us inspect container state at each stage of the build process to identify issues early. To install buildg follow the instructions on the [Buildg GitHub page](https://github.com/ktock/buildg). ![](https://github.com/ktock/buildg/raw/main/docs/images/vscode-dap.png) ## Conclusion Debugging Kubernetes can be challenging, but with the right knowledge and tools, developers can effectively identify and resolve common issues. By understanding configuration problems, image pull errors, node issues, and the importance of ConfigMaps and Secrets, developers can contribute to more robust and reliable Kubernetes deployments. Tools like Buildg offer promising advancements in interactive debugging, further bridging the gap between development and operations. As Kubernetes continues to evolve, staying informed about new tools and best practices will be essential for successful application management and deployment. By proactively addressing these common issues, developers can ensure smoother, more efficient Kubernetes operations, ultimately leading to more resilient and scalable applications.
codenameone
1,892,562
Enclave Games Shop is now open!
Remember Enclave’s swag is out blog post? We got asked about that print a few times recently, and...
0
2024-06-18T14:27:35
https://enclavegames.com/blog/shop-open/
spreadshop, spreadshirt, swag, enclavegames
--- title: Enclave Games Shop is now open! published: true date: 2024-06-18 14:25:23 UTC tags: spreadshop,spreadshirt,swag,enclavegames canonical_url: https://enclavegames.com/blog/shop-open/ cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rndayoyyz8b628mj2i9r.png --- Remember [Enclave’s swag is out](https://enclavegames.com/blog/swag-out/) blog post? We got asked about that print a few times recently, and decided to share it with you all! After launching [js13kGames Shop](https://medium.com/js13kgames/js13k-shop-is-now-open-175278bd4fce) and [Gamedev.js Shop](https://gamedevjs.com/competitions/opening-gamedev-js-swag-shop/) on the same Spreadshirt platform we’ve decided to open [Enclave Games Shop](https://enclavegames.myspreadshop.com) as well. This particular design is a bit different than the one we printed ourselves earlier, though: the “old” hoodie had the big logo on the back, while the new Spreadshirt version has it on the front. ![Enclave Games Shop products](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j03e95ew1jeuz11rpel1.png) This design is available on t-shirts, as well as hoodies and tote bags. Plus we also added a second, smaller version of the logo on the front, only for t-shirts, as it just looks cool. Grab anything [from the Shop](https://enclavegames.myspreadshop.com) if you’d like to support what we’re doing and wear it proudly, every single sale helps - **thank you**!
end3r
1,892,401
JavaScript Main Concepts < In Depth > Part 1
1. Event Loop, Call Stack, Callback Queue Explained here 2. this Keyword The...
0
2024-06-18T14:24:40
https://dev.to/rajatoberoi/javascript-main-concepts-edg
javascript, backenddevelopment, node, programming
## 1. Event Loop, Call Stack, Callback Queue [Explained here ](https://dev.to/rajatoberoi/understanding-the-event-loop-callback-queue-and-call-stack-in-javascript-1k7c) ## 2. this Keyword The value of this in JavaScript depends on how a function is called, not where it is defined. - If the function is a method, i.e. part of an object, then this references the object itself. ``` const video = { title: 'Netflix', play() { console.log(this); } } video.play() //{ title: 'Netflix', play: [Function: play] } video.stop = function() { console.log(this) } video.stop() //{ title: 'Netflix', play: [Function: play], stop: [Function] } ``` - If the function is a regular function, this references the global object, i.e. the global object in Node and the window object in the browser. ``` function playVideo() { console.log(this); } // playVideo() //Global object ``` - If the function is a constructor function, this references the new object created by that constructor. ``` function Video(title) { this.title = title; console.log(this); } const newVideo = new Video('Netflix'); //Video { title: 'Netflix' } // new keyword creates empty object {} then this will reference the new empty object. ``` < Passing this to self {Use case} > ``` const hotStar = { title: 'Sports', tags: ['India Vs Pakistan', 'India Vs USA', 'Ireland VS USA'], showTags() { this.tags.forEach(function (tag) { console.log(this.title, tag); }) } } hotStar.showTags() ``` Output: undefined 'India Vs Pakistan' undefined 'India Vs USA' undefined 'Ireland VS USA' Reason: - When showTags is called, this refers to the hotStar object. - However, the forEach callback is a regular function, not a method, and as explained above, in a regular function this references the global object (the window object in browsers) or is undefined in strict mode. - Since the global object has no property named 'title', the output is undefined. 
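For comparison, an arrow function callback sidesteps the problem entirely: arrow functions do not get their own this, they inherit it from the enclosing scope. A minimal sketch reusing the same hotStar shape:

```javascript
const hotStar = {
  title: 'Sports',
  tags: ['India Vs Pakistan', 'India Vs USA', 'Ireland VS USA'],
  showTags() {
    // An arrow function has no `this` of its own, so `this` here
    // still refers to the hotStar object that called showTags.
    this.tags.forEach((tag) => {
      console.log(this.title, tag);
    });
  }
}

hotStar.showTags()
// Sports India Vs Pakistan
// Sports India Vs USA
// Sports Ireland VS USA
```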
To solve this scenario: to keep a reference to the hotStar object within the forEach callback, the showTags method saves the reference in a variable called self (self is just the most common name; any name works). ``` const hotStar = { title: 'Sports', tags: ['India Vs Pakistan', 'India Vs USA', 'Ireland VS USA'], showTags() { let self = this; this.tags.forEach(function (tag) { console.log(self.title, tag); }) } } hotStar.showTags() ``` Output: Sports India Vs Pakistan Sports India Vs USA Sports Ireland VS USA ## 3. Callbacks - General Callback: Any function that is passed as an argument to another function and is executed after some kind of event or action. - Asynchronous Callback: A specific type of callback that is executed after an asynchronous operation completes. These are often associated with events or tasks that are scheduled to run in the future, such as I/O operations, timers, or network requests. ``` function getDBdata(rechargeNumber, cb) { console.log(`Executing select query in MySQL for ${rechargeNumber}`) cb(rechargeNumber); } function callMerchantApi(recharge_number) { console.log(`Calling merchant REST API for ${recharge_number}`) } getDBdata('1241421414', callMerchantApi) //callMerchantApi is a callback ``` - In this JavaScript code, we have two functions: getDBdata and callMerchantApi. - The getDBdata function accepts a rechargeNumber (customer number) and a callback function cb. Inside getDBdata, a message is logged to the console, and then the callback function cb is called with rechargeNumber as its argument: a function passed into another function. - Another example: below, setTimeout is a function for adding a delay; after the 3-second delay, the callback (here an arrow function) is triggered and the console message is printed. ``` setTimeout(() => { console.log("This message is shown after 3 seconds"); }, 3000); ``` The major disadvantage of callbacks is nested callbacks. 
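A minimal sketch of what that nesting looks like, using hypothetical helpers (getUser, getOrders, getInvoice). Each dependent step can only run inside the previous step's callback, so the code drifts steadily to the right:

```javascript
// Hypothetical async helpers; setTimeout simulates I/O, as in the examples above.
function getUser(id, cb) { setTimeout(() => cb({ id, name: 'Rajat' }), 10); }
function getOrders(user, cb) { setTimeout(() => cb([{ orderId: 1 }]), 10); }
function getInvoice(order, cb) { setTimeout(() => cb(`invoice-${order.orderId}`), 10); }

// Each step depends on the previous result, so the callbacks nest deeper and deeper.
getUser(42, (user) => {
  getOrders(user, (orders) => {
    getInvoice(orders[0], (invoice) => {
      console.log(invoice); // invoice-1
    });
  });
});
```

Three levels deep already reads poorly; real code with error handling at every level gets much worse.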
- Callbacks can lead to deeply nested code (callback hell), which is hard to read and maintain. - To address these drawbacks, modern JavaScript offers Promises and the async/await syntax, which simplify asynchronous code and improve readability. ## 4. Promises - In JavaScript, a Promise is an object that represents the eventual completion (or failure) of an asynchronous operation and its resulting value. Promises provide a cleaner, more flexible way to work with asynchronous code compared to traditional callback functions. A Promise has three states: Pending: The initial state, neither fulfilled nor rejected. Fulfilled: The operation completed successfully. Rejected: The operation failed. To a promise, we can attach 3 methods: .then(): Gets called after a promise is resolved. .catch(): Gets called after a promise is rejected. .finally(): Always gets called, whether the promise is resolved or rejected. - The .then method receives the value passed to the resolve method. - The .catch method receives the value passed to the reject method. ``` getUserFromMySQL(custId) .then((result) => { console.log(result) }) .catch((err) => { console.log(`Something went wrong!! ${err}`) }) .finally(() => { console.log('Promise all done!!') }) ``` Creating and Using a Promise: ``` //Creating let somePromise = new Promise((resolve, reject) => { // Some asynchronous operation let success = true; // This could be the result of the operation if (success) { resolve("Operation successful"); } else { reject("Operation failed"); } }); //Using somePromise .then(result => { console.log(result); // "Operation successful" }) .catch(error => { console.error(error); // "Operation failed" }) ``` < Promise Execution Behind The Scenes > - 1. The new Promise constructor function is called. It receives an executor function. `new Promise((resolve, reject) => { //Some Async Stuff Here. })` - 2. A new Promise object is created in memory. 
This object contains some internal slots like PromiseState, PromiseResult, PromiseFulfillReactions, PromiseRejectReactions and PromiseIsHandled. {We cannot access these internal slots} - PromiseState will be pending while the promise is neither resolved nor rejected. // Initially: // PromiseState: "pending" // PromiseResult: undefined // PromiseFulfillReactions: [] // PromiseRejectReactions: [] // PromiseIsHandled: false ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g55n4nw59d7v040jm1x7.png) - We can resolve or reject by calling the resolve or reject functions made available to us by the executor function. - When we call resolve, the PromiseState is set to fulfilled. // After resolve is called: // PromiseState: "fulfilled" // PromiseResult: undefined --> As we have not passed anything while calling resolve() // PromiseFulfillReactions: [onFulfilled handler] // PromiseRejectReactions: [] // PromiseIsHandled: false ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ea5pba6l30utd30s9p5u.png) - Resolving or rejecting with some data. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9qcf86utfs2r9yknl7j.png) Using .then and .catch ``` const getCarDetails = new Promise((resolve, reject) => { const customerId = 'Elon Musk'; readBillsFromCassandra(customerId, (error, data) => { if (error) { reject('DB Client Not Ready!!'); } else { resolve(data); } }); }); function readBillsFromCassandra(custId, callback) { console.log(`Fetching data from Cassandra keyspace for ${custId}`); // Simulating async data fetching with a timeout setTimeout(() => { // Simulate successful data retrieval callback(null, {"car": "Tesla", "color": "black", "type": "EV"}); // To simulate an error, uncomment the following line // callback(new Error('Cassandra error'), null); }, 1000); } getCarDetails .then((result) => { console.log(`Result of promise is ${JSON.stringify(result)}`); }) .catch((error) => { console.log(`Something went wrong, Error: ${error}`); }); ``` - When resolve is called, it gets added to the call stack, the promise state is set to fulfilled, the result is set to the value we pass to resolve, and the promise reaction record handler receives that promise result, _{"car": "Tesla", "color": "black", "type": "EV"}_ in the above example. < Code Flow BTS > Example 1: ``` new Promise((resolve) => { setTimeout(() => { resolve('Done!') }, 1000); }) .then((result) => { console.log('Result is ', result) }) ``` - The new Promise constructor is added to the call stack, and this creates the promise object. - The executor function (resolve) => { setTimeout(...) } is executed immediately. - setTimeout gets added to the call stack and registers an event-callback pair in the Node API. The timer starts. The new Promise constructor completes, and the Promise is now pending, waiting to be resolved. - On the next line, the .then method is called on the Promise object. - Once the timeout elapses, the callback we passed to setTimeout is added to the task queue/callback queue. - From the task queue it goes to the call stack and gets executed. 
The promise state changes to fulfilled. - The .then handler (result) => { console.log('Result is', result) } now moves to the microtask queue (also known as the job queue) when the Promise is resolved. - The JavaScript runtime continues to execute other code (if any) or sits idle, checking the event loop. - The setTimeout callback is popped off after execution; the handler function then moves to the call stack and gets executed. - While our handler sits in the microtask queue, the other tasks in our code keep executing; only when the call stack is empty is this handler added to it. - This means we can handle the promise results in a non-blocking way. - .then itself also creates a promise record, which allows us to chain .then calls, like below. ``` new Promise((resolve) => { resolve(1) }) .then((result) => { return result * 2; }) .then((result) => { return result * 3; }) .then((result) => { console.log(result) //Output: 6 }) ``` Example 2 < Code Flow BTS > ``` new Promise((resolve) => { setTimeout(() => { console.log(1) resolve(2) }, 1000); }) .then((result) => { console.log(result) }); console.log(3); ``` Output Sequence 3 1 2 Explanation: - Main Execution Context: new Promise((resolve) => {...}) is encountered, and the Promise constructor is called, creating a Promise object with PromiseState: pending. - Promise Executor Function: The executor function (resolve) => { setTimeout(...) } is executed immediately. setTimeout is called with a callback and a delay of 1000 milliseconds. It registers an event-callback pair. - Adding the .then Handler: The .then method is called on the Promise object. The callback (result) => { console.log(result) } is registered to be called once the Promise is resolved. - Logging 3 to the console: console.log(3) is executed immediately. - Event Loop and Task Execution: The JavaScript runtime continues executing other code (if any) or sits idle, checking the event loop. 
After 1000 milliseconds, the callback passed to setTimeout is moved from the macrotask queue to the call stack. The callback () => { console.log(1); resolve(2); } is executed. console.log(1) is executed, logging 1 to the console. resolve(2) is called, which changes the state of the Promise from pending to fulfilled with the value 2. The .then callback (result) => { console.log(result) } is enqueued in the microtask queue. - Microtask Queue Execution: After the current macrotask (the setTimeout callback) completes, the event loop checks the microtask queue. The .then callback (result) => { console.log(result) } is dequeued from the microtask queue and moved to the call stack. The .then callback is executed with result being 2. console.log(result) is executed, logging 2 to the console. ## 5. Async/Await ES7 introduced a new way to add async behaviour in JavaScript and makes it easier to work with promise-based code! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tvj74rzs3pzgpxgjtfgj.png) First, let's analyse how the behaviour changes when we add the keyword 'async' in front of a function. ``` const doWork = () => { //If we do not return anything by default undefined is returned. } console.log(doWork()); //Output: undefined ``` Now, adding async ``` const doWork = async () => { //If we do not return anything by default undefined is returned. } console.log(doWork()); //Promise { undefined } ``` - Hence, **async functions always return a promise, and that promise is fulfilled with the value the developer chooses to return from the function.** Now, let's explicitly return a string. ``` const doWork = async () => { return 'John Wick' } console.log(doWork()); //Promise { 'John Wick' } ``` So the return value of the doWork function here is not a string; instead it is a Promise that gets fulfilled with the string 'John Wick'. - Since we get a promise in return, we can use the .then and .catch methods. 
``` const doWork = async () => { //throw new Error('Db client not connected!') return 'John Wick' } doWork() .then((name) => { console.log(`My name is ${name}`) }) .catch((err) => { console.log(`Caught error: ${err}`) }) ``` Output: My name is John Wick - If doWork throws an error, it is the same as a rejected promise, and it will be caught. Uncomment the throw to trigger .catch. Await Operator Old Way ``` const add = (a, b) => { return new Promise((resolve, reject) => { //setTimeout is used to simulate an async operation. It could be anything, like a REST API call or fetching data from a database. setTimeout(() => { resolve(a + b) }, 2000) }) } add(1, 3) .then((result) => { console.log(`Result received is ${result}`) }) .catch((err) => { console.log(`Received error: ${err}`) }) ``` With Async-Await ``` const add = (a, b) => { return new Promise((resolve, reject) => { //setTimeout is used to simulate an async operation. It could be anything, like a REST API call or fetching data from a database. setTimeout(() => { resolve(a + b) }, 2000) }) } const doWork = async () => { try { const result = await add(1, 3); console.log(`Result is ${result}`); } catch (err) { console.log(`Received error: ${err}`) } } doWork() ``` - This way JavaScript's async and await keywords allow developers to write asynchronous code in a style that resembles synchronous code. This can make the code easier to read and understand. - The doWork function is declared with the async keyword, making it an asynchronous function. This allows the use of the await keyword inside it. - Within doWork, the await keyword is used before calling add(1, 3). This pauses the execution of doWork until the Promise returned by add is resolved. - While the execution is paused, other operations can continue (i.e., the event loop remains unblocked). Using async and await doesn't make things faster; it just makes things easier to work with. 
Check the snippet below for doWork ``` const doWork = async () => { try { const sum = await add(1, 3); //Wait 2 seconds const sum1 = await add(sum, 10);//Wait 2 seconds const sum2 = await add(sum1, 100);//Wait 2 seconds console.log(`Final sum is ${sum2}`) //Final sum is 114 } catch (err) { console.log(`Received error: ${err}`) } } ``` - All 3 function calls run in order, while other asynchronous things happen behind the scenes. - If the first awaited promise rejects, none of the code below it would run. - The await operator runs the calls sequentially: sum2 will not kick off until we get a value for sum1. In some cases this is desired. Take the example below, where sum, sum2, and sum3 are not dependent on each other: ``` const doWork = async () => { try { const sum = await add(1, 3); const sum2 = await add(4, 10); const sum3 = await add(100, 200); return `${sum}, ${sum2}, ${sum3}` } catch (error) { console.log(`Something went wrong! Error: ${error}`) } } ``` If we want all the sums computed simultaneously, without blocking each other, we can rewrite the above code as: ``` async function getAllSum() { let [ sum, sum2, sum3 ] = await Promise.all([ add(1, 3), add(4, 10), add(100, 200) ]); return `${sum}, ${sum2}, ${sum3}`; } ``` One of the problems with promise chaining is that it is difficult to have all the values in the same scope. For example ``` add(1, 1) .then((sum) => { console.log(sum) return add(sum, 10); }) .then((sum2) => { console.log(sum2) }) .catch((err) => { console.log(err) }) ``` - What if we want access to both sums at the same time, to do something like save them in a database? We would have to create a variable in the parent scope and reassign it in .then. - With async/await, we have access to all the individual sums in the same scope. [click for Part 2](https://dev.to/rajatoberoi/javascript-main-concepts-in-depth-part-2-2mfn)
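To make that last point concrete, here is a small sketch (reusing an add helper like the one above) where both sums remain visible in one scope, ready to be used together:

```javascript
// Same shape as the add helper above: setTimeout simulates async work.
const add = (a, b) => new Promise((resolve) => {
  setTimeout(() => resolve(a + b), 10);
});

const doWork = async () => {
  const sum = await add(1, 1);      // 2
  const sum2 = await add(sum, 10);  // 12
  // Both sums are in scope here, so we could e.g. save them together in a database.
  return { sum, sum2 };
};

doWork().then((result) => console.log(result)); // { sum: 2, sum2: 12 }
```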
rajatoberoi
1,892,560
FastAPI Beyond CRUD Part 10 - JWT Authentication (Project Endpoints with HTTP Bearer Auth)
In this tutorial, we dive into securing our API endpoints using HTTP Bearer Authentication. Bearer...
0
2024-06-18T14:21:45
https://dev.to/jod35/fastapi-beyond-crud-part-10-jwt-authentication-project-endpoints-with-http-bearer-auth-j1c
fastapi, python, programming, api
In this tutorial, we dive into securing our API endpoints using HTTP Bearer Authentication. Bearer Authentication allows users to access protected endpoints by including a token in the "Authorization" header of their HTTP requests, formatted as "Bearer <token>". {%youtube 9mx6LojqNCQ%}
jod35
1,892,481
Mike McQuaid on 15 years of Homebrew
This week we’re talking to Mike McQuaid, project leader and longest tenured maintainer of Homebrew, a...
0
2024-06-18T14:20:31
https://onceamaintainer.substack.com/p/once-a-maintainer-mike-mcquaid
ruby, opensource
This week we’re talking to [Mike McQuaid](https://github.com/MikeMcQuaid), project leader and longest tenured maintainer of [Homebrew](https://brew.sh), a package manager for macOS and Linux used by tens of millions of developers worldwide. After ten years at GitHub, Mike is now CTO of [Workbrew](https://workbrew.com), a startup for managing a fleet of machines running Homebrew. Mike spoke with us from Edinburgh, Scotland. Once a Maintainer is written by the team at [Infield](https://www.infield.ai), a platform for managing open source dependency upgrades. **What was your first exposure to computers and eventually writing software?** As a kid my first exposure was at my school. We had BBC Micros, which is what we grew up with in the UK. This was in the early 90’s. And my dad noticed that pretty much any task, no matter how boring, if it involved computers I would happily do it. He was in finance and back before you could get online stock prices, every Saturday he would get the Financial Times and there would be the big broadsheet with the stock prices, and I had some little spreadsheet program and I would basically just go through and enter all the stock prices into his spreadsheet and spend like two hours on a Saturday afternoon doing that. And for me that was delightful. And my dad was like, OK, what's going on with this kid? And then I just evolved my own interest through PC gaming and fiddling around trying to get the games to work on my computer. Eventually I went off to university and did a computer science and business degree and got a job as a software engineer. This was 17 years ago. **What was the environment like at your first job?** My first job was for BT, British Telecom. It’s sort of similar to AT&T if you’re in the States. And this was back in the days when it wasn’t that AI was going to possibly destroy your job, it was offshoring. 
Why would you be British and interested in writing software, because in two years all the jobs will be in India because they’re just as good for half the money, etc. etc. That was my first experience at a massive company. I’d been dabbling with open source for a few years, and I learned pretty quickly that the open source way of doing things - it goes down well in the open source community, but at your real job you get pulled into your manager’s office, like what have you done? So that didn’t work super well with me. After that I left and was the first employee at a startup called Mendeley in London. I’d been hacking on KDE, the Linux desktop environment, for a few years at that point. Hired a bunch of people from the KDE community there, stayed there about a year, did some consulting around Qt for KDAB, a Swedish company. And then I kind of got this sense that maybe the cross-platform desktop app world was not future-proof, you know, starting to smell things. I did a bit of research. I wanted to work for a company using Ruby on Rails based in the Bay Area and pretty quickly came upon Github. I applied, and they said “Not now, you’re too fresh.” So I went to work for this company AllTrails. And then several applications later, I got a job at Github where I worked for ten years. I guess that’s a nice segue back to the open source stuff since I left there last year and started my own company with some former Github people called Workbrew, and we’re building product solutions for bigger companies around Homebrew. **I’m having this Zoom from a KDE desktop. So I’m curious how did you get into working on it?** So I did Google Summer of Code the summer after I graduated, which was in 2007. Back then you could have journals, and you could post journals to your blog. And I basically built that integration so you could post journals to your blog. I built blogging support for WordPress and a couple of other blogs as my summer code project. 
And then I ended up sticking around and being involved with like bits and pieces on the KDE side for, you know, maybe a kind of couple years. And then I moved to Mac and I was doing some KDE porting to Mac and then I sort of gave up on it as a lost cause. **I remember a period in the middle there where it was like you're gonna run all of your Linux apps everywhere - I think I tried to install Amarok or something on Windows, and it sort of worked. Not the way the world ended up going.** No, it never worked quite as nicely not on Linux, unfortunately. **What was your first exposure to Ruby?** During university I had started dabbling with Linux and using various Linux distros, submitting patches to things and modifying things on my system or whatever. I’d seen a tiny amount of Ruby but mostly I got into KDE and playing with lower level Linux bits and pieces. Ruby didn’t come until later on when I was at the consultancy and I heard about this project called Homebrew from a friend of a friend in London. And that was the first serious Ruby that I wrote in 2009 or so, and I’ve been using it for the majority of my career ever since. **We hear a lot about how intimidated people are to get involved with open source. So when you heard about Homebrew, what made you get over that hump and get involved?** Hopefully without sounding too arrogant, I never felt particularly intimidated to get involved with open source projects. I think particularly in the Linux era and the pre-Github era, the barriers to entry were so high that it felt like if you managed to get through those barriers people were generally like, we're going to be your friend now. And if you're in my city, I'll take you out for dinner. I’d been to a few open source conferences by this point and everyone was very friendly and welcoming and accepting. So with Homebrew, one of the guys I hired at Mendeley was friends with this other guy who worked at another startup in London. 
And he was the guy that created Homebrew, Max Howell. I think having that connection and in the first year or so, being involved with Homebrew, meeting up and getting beers with Max and talking about our thoughts, it felt easy. What sucked me into Homebrew initially was just this idea of scratching your own itch - I’m using this, I need this, and I’m going to add a new thing because it’s easier if it’s in here and then everyone else can use it. And then over time it becomes less about me building for myself and more about, what does everyone who uses this project need and how can I help them? **How do you manage the roadmap? We talk to some maintainers where the projects, especially the huge ones, are very formally run. Others are still very informal. How do you think about that?** Yeah, I would say we’re on the moderately anarchistic end in terms of feature roadmaps and stuff like that. At least until you get to open source projects run entirely by corporations, maintainers have no ability to make anyone work on anything they don’t want to. I'm the longest running maintainer, I've been the project leader. But if I say like, hey Bob, I want you to ship this thing by the end of the month, then Bob can just be like, nah. And I don't really have any ability to do anything beyond that. So in some ways it was quite similar when I was a principal engineer at GitHub, because when you're in those roles, you have no direct reports and so you don’t have twenty engineers to do what you tell them, but at the same time, you have an awful lot of cultural influence. So instead of it being like, I will tell you what to do, it becomes like, hey, I've had an idea, who wants to join me on this journey? And there's some people with within Homebrew particularly who would like to be doing more Homebrew stuff. So I guess a big change for me in the last few years is that I try to push as much of my personal Homebrew to do list into issues that are tagged ‘help wanted’. 
Sometimes those are done by me, and sometimes those are done by random people in the community. There are five open issues in the main Homebrew repo right now and all of them are things that five years ago I would have maybe just had in my Apple notes somewhere. But I learned that there are a lot of people that might go “hey, that’s a good idea” and jump in and get involved. **Do you guys have any kind of regular meetings among the core maintainer team?** Once a year we try and get as many maintainers as possible to meet together in person for our AGM. We try and collocate that with FOSDEM, which is in Brussels every year in early February, to try to make a bit of a weekend of it. We're trying to do more events now. We're doing a hackathon focused on performance and security next month. And historically we’ve had some vaguely regular Zoom calls, but it’s hard to sort out the timing for those. Most of our private communication is happening in Slack. That's where we have the conversation about what do you think we should be doing? Or, there’s this issue right now. Could someone jump on and help with that? **So this list of issues here, I see this one about Sorbet. How does that conversation, like “we should add type checking to Homebrew”, get shepherded through?** That's a pretty good example actually. You inadvertently stumbled on one of the more amusing but contentious ones. All of our governance documentation is public but if you’re not a Homebrew maintainer, it can be a bit of a struggle to get through it all. We have the project leadership committee, which is essentially managing the money and the governance structure. And historically that was also maintainers. But now two of the five people in that are not maintainers and never have been. Then we have the technical steering committee, which is managing roadmaps and deciding these types of things. And then we have a project leader, which is like an elected position as well, which is me. 
And I'm the only person who's ever been the project leader so far. But the way it works in reality is that the technical steering committee exists to help resolve conflicts in technical direction I’ve been unable to resolve myself with the other maintainers. And that's what happened with the Sorbet stuff. Interestingly, a few years ago when it was initially proposed, I was not a fan of the idea. Now, however, I'm actually a big proponent of Sorbet. I think we should double down on it and use it even more. **What convinced you?** Well sometimes the way it can go with open source projects is that someone gets an idea, that one person is very enthusiastic. And people get excited and say yeah we want to get involved, great. But what can happen in the worst case is that none of the people who say they’re going to step up actually step up, and the person who pushed it to begin with got bored and wandered off, and now you’re left with these problems. And then it becomes someone else’s problem, often mine, to clean up the mess. So I thought that was the way it was going to go with Sorbet. But it turned out that over time actually more people got interested in it, and more people got involved. And personally over at Workbrew we started using it to solve a bunch of problems, and at GitHub a bit as well, and so now having used it to solve problems I’m like ok, this is better than I thought it was. **So in the 15+ years that you’ve been working in open source, do you feel that the way that open source projects are run have changed? What have you noticed in terms of open source culture over the last 15 years?** I think the really big thing in the last 15 years is where and how open source is happening. So 15 years ago I'm not sure whether node.js existed - certainly npm if it existed was pretty minor. Whereas now most engineers out there are writing JavaScript, right? 
If you're a new engineer coming out of a bootcamp or learning a new language, it would be a strange choice for you not to learn JavaScript and do that on the front end and the back end and then I'm full stack and yadda, yadda, yadda. So I think that has been interesting just because the JavaScript ecosystem has its own culture and way of doing things. You have millions of dependencies and often those are quite small and maintained by like a single person. Not fewer, big chunky open source projects. Like KDE is essentially a big umbrella project that probably has, I don't know how many active contributors it has, but it wouldn't surprise me if it's hundreds or thousands or whatever. And it's carved into a wide variety of pieces. Whereas I feel like open source in general has become a bit more decentralized and there's a lot more small projects of one or two people here and there. Things have also moved away from the Linux world where it used to be. Like when I was at university, open source and using Linux were almost one to one, right? If you were a big C# Microsoft stack Windows type person, it would be relatively unusual for you to use any open source at all. But obviously over time, with GitHub and Microsoft acquiring them and Microsoft themselves getting a lot more into open source, it's like everything is open source now in some respect but what open source means has also changed. There used to be the assumption that open source projects are community run and maybe they're loosely affiliated with a company or whatever. In the earlier days of GitHub, if you looked at the repos with the most stars or whatever, Homebrew was often up there. Whereas now it's all like VSCode and next.js. Almost all the projects that are up there are ones that have major corporate backing basically. That makes open source a very different world and makes it harder to be an indie, volunteer-run project. 
**Are there any other open source projects that you’re impressed by or watching?** Ruby on Rails always continues to impress me. I'm using it again at the Workbrew startup that I'm building, and it’s the first time I've built a completely greenfield app from scratch. We’ve got a couple apps I guess, but one’s in Rails and one’s in Go and you know, Go is really good at what it does, but writing it doesn't make me happy in the same way Ruby does. There's been such an amount of time and effort and care and love that's gone into making it very pleasant for developers to use.
allieoopz
1,891,615
Get to know Xperience by Kentico: The Page Selector form component
Form Components are a great way of providing some pretty powerful content editing functionality to...
0
2024-06-18T14:19:59
https://dev.to/michael419/get-to-know-xperience-by-kentico-the-page-selector-form-component-59a2
kentico, xperience
Form Components are a great way of providing some pretty powerful content editing functionality to marketers and editors in Xperience by Kentico. They enable us web developers to create fields in editing dialogues within the admin UI that, in turn, provide functionality to marketers for them to enter and select data to display on their website. This is achieved by programmatically assigning form components to the properties of widgets, sections, page templates, and much more, by using `Form component attributes`. Xperience by Kentico provides a whole host of form components out of the box, and this is really powerful as it gives us web developers the flexibility to provide different content editing experiences to suit the needs of marketers, who, these days, are looking to content manage their marketing channels faster, and more effectively, than ever. ## What is the Page Selector? One of my favourite form components is the Page Selector, which enables you to provide a method for editors to select pages from the site tree in a website channel. As a developer, you can then work with the data of those selected pages to provide functionality in custom components. ## A trip down memory lane... The Page Selector has been through a number of iterations in line with Kentico's development of their DXP. Each time, they've added to its capabilities to make the feature more and more useful... ### Kentico Xperience 12 The page selector started its journey in Kentico Xperience 12, where it was introduced as a straightforward method for enabling editors to select a single page from the content tree. ![The Page Selector form component in Kentico Xperience 12](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jo2qmuanrdwsg1dabqe2.png)*Kentico Xperience 12 Page Selector* Under this guise, the component returned the `NodeGuid` of the selected page. 
Its configuration was fairly limited, but it did have a `RootPath` property that allowed you to limit the selection of pages to those that were located in a specific section of the content tree, otherwise the entire tree was available. ### Kentico Xperience 13 In Kentico Xperience 13, Kentico added the ability for marketers to select multiple pages. In addition, they added the `MaximumPages` property that enabled developers to limit the number of pages that could be selected. ![The Page Selector form component in Kentico Xperience 13](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ne71yawfut35fb2a9j2d.png)*Kentico Xperience 13 Page Selector* This was a great improvement for editing efficiency. Imagine you were in the scenario where marketers had to select three pages to populate a widget with call-to-action links: you could now have one form component to select all three pages to link to, instead of having three separate form components, thus reducing clutter in the widget dialog box, and producing a much more streamlined experience. ### Xperience by Kentico In Xperience by Kentico, the form component was rebuilt from the ground up to work with Kentico's newly redesigned admin UI, which is built in React, sitting on .NET. ![The Page Selector form component in Xperience by Kentico](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2g2hkb5rvifzf7xw9cfr.png)*Xperience by Kentico Page Selector* The thing I really like though is that they took the opportunity to add the killer feature of making selected pages sortable by the marketer. ![The Page Selector form component in Xperience by Kentico](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/883hs53nej0csjschdvx.png) With KX13, you had to select the desired pages in the order in which you wanted them to be displayed. And if you wanted to change the order, you had to deselect all pages, and then reselect them in the new order. 
Now with Xperience by Kentico, that issue has disappeared, with the addition of the `Sortable` property, which, when set to `true`, enables the drag-and-drop handles, illustrated in the above screenshot. ## A practical example Imagine you are building an Article Cards widget for the page builder, and you'd like marketers to be able to select the articles they want to display in the widget. Assuming you have modelled your articles as pages in the content tree, you can create a property for your widget, and utilise the `WebPageSelectorComponent`, which is the form component attribute you need to use when you want to render the Page Selector for a widget property: ```csharp using System.Collections.Generic; using System.Linq; using CMS.Websites; using Kentico.PageBuilder.Web.Mvc; using Kentico.Xperience.Admin.Websites.FormAnnotations; public class CardWidgetProperties : IWidgetProperties { [WebPageSelectorComponent( TreePath = "/Articles", MaximumPages = 3, Sortable = true, ItemModifierType = typeof(WebPagesWithUrlWebPagePanelItemModifier), Label = "Select pages", Order = 2)] public IEnumerable<WebPageRelatedItem> SelectedArticlePages { get; set; } = Enumerable.Empty<WebPageRelatedItem>(); } ``` As you can see, I've added the following configuration properties: - `TreePath` - so that it only offers up pages that sit within our Article section of the tree. We want to lead marketers to the correct section of the content tree to make their life easier. It wouldn't be a great experience if we just displayed the entire content tree, otherwise as the tree grows in size, it will make it more difficult for them to find the correct section in the tree. 
- `MaximumPages` - we've ensured marketers can only select a maximum of 3 pages, as (hypothetically) in this case the design and solution dictate a maximum limit of article cards to be displayed. - `Sortable` - as we'd like marketers to be able to control the sort order of the article cards separately for each instance of the widget. - `ItemModifierType` - lets us set a type that implements the `IWebPagePanelItemModifier` interface, and in this case, we are using the built-in `WebPagesWithUrlWebPagePanelItemModifier` type, which ensures that only pages that have the URL feature enabled are selectable in the content tree. Content-only pages are still displayed, but in a disabled state. And of course, there are the generic `Label` and `Order` properties for defining the text for the property's label, and the order in which it appears in the widget dialog in relation to other properties. ## In conclusion The Page Selector is now a very useful, mature tool in the Kentico web developer's arsenal that helps to streamline the editing process for marketers. - Read [Kentico's Page Selector documentation](https://docs.kentico.com/developers-and-admins/customization/extend-the-administration-interface/ui-form-components/reference-admin-ui-form-components#page-selector) - View [Kentico's extensive list of Admin UI form components](https://docs.kentico.com/developers-and-admins/customization/extend-the-administration-interface/ui-form-components/reference-admin-ui-form-components)
michael419
1,892,467
Internet Of Things(IoT)
Imagine everyday objects like refrigerators, thermostats, or fitness trackers talking to each other...
0
2024-06-18T14:18:30
https://dev.to/prakharkapoorcoder/internet-of-thingsiot-4am7
devchallenge, cschallenge, computerscience, beginners
Imagine everyday objects like refrigerators, thermostats, or fitness trackers talking to each other and the internet. That's the **Internet of Things (IoT)**! It connects devices to collect data and automate tasks, making our lives easier and smarter.
prakharkapoorcoder
1,892,457
Understanding Loops - A Tangible Example
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-18T13:57:26
https://dev.to/terabytetiger/understanding-loops-a-tangible-example-2o0p
devchallenge, cschallenge, computerscience, beginners
--- title: Understanding Loops - A Tangible Example cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4tfs5o2wz0j2jiz3l3fp.png published: true tags: devchallenge, cschallenge, computerscience, beginners --- *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer <!-- Explain a computer science concept in 256 characters or less. --> ![Image of a short JavaScript function that we'll be stepping through in the guide. First, it sets the age variable to 25 (or whatever your age is), then has a comment to grab a piece of paper and draw a birthday cake. Then there's a short while loop that while age is greater than 0, draw a candle then subtract 1 from the age variable. Outside of the loop is another comment that it's celebration time.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xihugelj71cll6qnctpv.png) ### 1 - Setup Get paper, draw a cake, and set `age` to your age. ### 2 - Do Loop `age` isn't 0, so draw a candle and subtract 1 from `age`. ### 3 - Checkup Is `age` still greater than 0? Go to step 2 Otherwise, go to 4. ### 4 - 🎉 Celebration time! --- ## Additional Context I didn't include markdown in my count - ended up at 248 according to VS Code: ![Image of text highlighted and showing 248 characters were selected](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0oix7d94bw3wqlazjx1l.png)
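For anyone who would rather run the four steps above than draw them, here is the same loop as plain, runnable JavaScript (using one `|` character per candle instead of drawing on paper):

```javascript
// 1 - Setup: set `age` to your age and start with an empty "cake"
let age = 25;
let candles = "";

// 3 - Checkup happens at the top of each pass: is `age` still > 0?
while (age > 0) {
  candles += "|"; // 2 - Do Loop: draw a candle...
  age -= 1;       //     ...and subtract 1 from `age`
}

// 4 - Celebration time! One candle was drawn per year of age:
console.log(candles.length); // → 25
```

Change the starting value of `age` and the number of candles changes with it - the loop body never needs to be edited.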
terabytetiger
1,892,465
The Advantages of Gantt Charts in JavaScript for Project Management
In today's fast-paced world, efficient project management is crucial for the success of any...
0
2024-06-18T14:15:29
https://dev.to/lenormor/the-advantages-of-gantt-charts-in-javascript-for-project-management-cdm
javascript, webdev, programming, productivity
In today's fast-paced world, efficient project management is crucial for the success of any organization. Among the numerous tools available, Gantt charts have stood the test of time as one of the most effective ways to visualize and manage project schedules. Leveraging JavaScript to create these charts brings a whole new level of interactivity and flexibility. In this article, we will delve into the advantages of using Gantt charts in JavaScript for project management, with a special focus on tools like ScheduleJS. ## What is a Gantt Chart? ![Gantt Charts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7w2zvpxq0a107v98p32.png) A Gantt chart is a type of bar chart that represents a project schedule. It displays the start and finish dates of the various elements of a project, allowing project managers to see at a glance what needs to be done and when. Gantt charts are instrumental in planning, coordinating, and tracking specific tasks in a project. ## Why Use Gantt Charts in JavaScript? - **1. Interactivity and Real-Time Updates** One of the primary advantages of using JavaScript to create Gantt charts is the level of interactivity it offers. Unlike static charts, JavaScript-powered Gantt charts can be updated in real-time. This means that as project conditions change, the Gantt chart can be dynamically adjusted to reflect these changes. Tools like ScheduleJS excel in providing such real-time interactivity, making them invaluable for modern project management. - **2. Easy Integration with Web Applications** JavaScript is the backbone of web development, making it an ideal choice for creating Gantt charts that need to be integrated into web applications. Whether you are building a project management tool from scratch or adding functionality to an existing system, JavaScript libraries like ScheduleJS make it seamless to embed Gantt charts into your web application. 
This integration ensures that project data is readily accessible and up-to-date, promoting better decision-making. - **3. Customization and Flexibility** JavaScript offers unparalleled customization options. With libraries like ScheduleJS, developers can tailor Gantt charts to meet the specific needs of their projects. From adjusting the color schemes to modifying the layout and adding custom functionalities, the flexibility provided by JavaScript ensures that the Gantt charts can be as detailed and specific as required. - **4. Enhanced User Experience** User experience is a critical factor in the adoption and effectiveness of any project management tool. JavaScript-based Gantt charts provide a smooth and responsive user experience. Users can drag and drop tasks, zoom in and out of timelines, and interact with the chart elements directly. This level of interactivity enhances user engagement and makes the management process more intuitive. - **5. Cross-Platform Compatibility** JavaScript is inherently cross-platform, meaning that Gantt charts created with JavaScript will work across different devices and operating systems. Whether team members are using desktops, tablets, or smartphones, they can access and interact with the Gantt charts without any compatibility issues. This is particularly beneficial for remote teams and organizations with a BYOD (Bring Your Own Device) policy. ## [ScheduleJS](https://schedulejs.com/): A Powerful Tool for Creating Gantt Charts ![ScheduleJS JS Gantt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/773lkbxump54z5v807rg.png) When it comes to creating Gantt charts in JavaScript, ScheduleJS stands out as a robust and versatile tool. ScheduleJS is a JavaScript library specifically designed for creating interactive and highly customizable Gantt charts. 
Here are some of the key features that make ScheduleJS a preferred choice for project managers and developers: - **User-Friendly Interface** ScheduleJS offers a user-friendly interface that simplifies the creation and management of Gantt charts. The intuitive design allows users to quickly add, modify, and remove tasks, making it accessible even to those with limited technical expertise. - **Advanced Scheduling Capabilities** With ScheduleJS, users can take advantage of advanced scheduling capabilities. The library supports dependencies, milestones, and constraints, allowing for comprehensive project planning. Users can set task priorities, allocate resources, and visualize critical paths, ensuring that all aspects of the project are accounted for. - **Real-Time Collaboration** ScheduleJS facilitates real-time collaboration among team members. Changes made by one user are instantly reflected across all instances of the Gantt chart, ensuring that everyone is on the same page. This feature is particularly useful for distributed teams working in different locations. - **Integration with Other Tools** ScheduleJS can be easily integrated with other project management tools and systems. Whether you are using a CRM, ERP, or any other software, ScheduleJS can seamlessly connect to these systems, pulling and pushing data as needed. This integration capability enhances the overall efficiency of project management workflows. - **Scalability** Whether you are managing a small project or a large, complex one, ScheduleJS is designed to scale with your needs. The library can handle a large number of tasks and dependencies without compromising performance, making it suitable for projects of all sizes. - **Customizable Themes** ScheduleJS allows for extensive customization of themes, enabling users to align the Gantt charts with their brand identity. From colors and fonts to the overall layout, every aspect of the chart can be tailored to match the organization’s visual standards. 
## Practical Applications of JavaScript Gantt Charts ![JS Gantt Charts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ohspinfmltwmwebu3lmr.png) - **Project Planning and Scheduling** One of the most common applications of Gantt charts is in project planning and scheduling. By visualizing the timeline of tasks and their dependencies, project managers can allocate resources more efficiently and identify potential bottlenecks before they become critical issues. - **Progress Tracking** Gantt charts are also essential for tracking the progress of a project. With a clear visual representation of completed, ongoing, and pending tasks, stakeholders can quickly assess whether the project is on track. Tools like ScheduleJS provide progress indicators and completion percentages, making it easier to monitor progress. - **Resource Management** Effective resource management is crucial for the success of any project. Gantt charts help project managers allocate resources effectively by showing which tasks require specific resources at different times. This ensures that resources are not over-allocated or under-utilized. - **Risk Management** By visualizing the entire project timeline and dependencies, Gantt charts enable project managers to identify and mitigate risks. Potential delays and their impact on the project can be analyzed, allowing for proactive risk management. - **Communication and Reporting** Gantt charts serve as a powerful communication tool. They provide a clear and concise visual representation of the project schedule, making it easier to communicate timelines and expectations to stakeholders. Regular updates to the Gantt chart can also be used for reporting purposes, ensuring that everyone is informed about the project's status. ## Best Practices for Implementing JavaScript Gantt Charts - **Define Clear Objectives** Before implementing a Gantt chart, it is essential to define clear project objectives. 
Understand what you aim to achieve with the chart and how it will be used to facilitate project management. - **Choose the Right Tool** Selecting the right JavaScript library is crucial for the success of your Gantt chart implementation. Tools like ScheduleJS offer a range of features and customization options that can be tailored to meet your specific needs. - **Keep it Simple** While customization is important, it is equally important to keep the Gantt chart simple and easy to understand. Avoid cluttering the chart with too much information, and focus on the key tasks and milestones. - **Regularly Update the Chart** A Gantt chart is only useful if it is up-to-date. Ensure that the chart is regularly updated to reflect the current status of the project. This may involve adding new tasks, adjusting timelines, and marking completed tasks. - **Involve the Team** Involving the project team in the creation and maintenance of the Gantt chart can improve accuracy and buy-in. Encourage team members to update their task statuses and provide feedback on the chart. ## Conclusion Gantt charts are a powerful tool for project management, offering a clear visual representation of project timelines and dependencies. Leveraging JavaScript to create these charts enhances their interactivity, customization, and integration capabilities. Tools like [ScheduleJS](https://schedulejs.com/) make it easier than ever to implement and manage Gantt charts, providing a robust solution for modern project management needs.
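To make the underlying idea concrete: stripped of any library, the heart of rendering a Gantt bar is mapping a task's start and finish dates onto a horizontal axis. Here is a minimal, library-free sketch of that mapping (the task data is hypothetical; no ScheduleJS API is implied):

```javascript
// Convert a task's dates into percentage-based bar coordinates,
// relative to the overall project window.
function ganttBar(task, projectStart, projectEnd) {
  const total = projectEnd - projectStart;                 // window length in ms
  const left = ((task.start - projectStart) / total) * 100; // offset from the left edge
  const width = ((task.end - task.start) / total) * 100;    // bar length
  return { left: `${left}%`, width: `${width}%` };
}

// Hypothetical example: a 10-day project with a task spanning days 4-6.
const projectStart = new Date("2024-06-01");
const projectEnd = new Date("2024-06-11");
const task = { start: new Date("2024-06-04"), end: new Date("2024-06-06") };

console.log(ganttBar(task, projectStart, projectEnd)); // → { left: "30%", width: "20%" }
```

The returned `left`/`width` values can be applied directly as CSS to absolutely positioned bar elements; libraries like ScheduleJS build their interactivity (dragging, zooming, dependencies) on top of exactly this kind of date-to-pixel mapping.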
lenormor
1,892,464
Driving into the Future: Next-Gen ECU Technologies Transforming Automotive Performance Market
Introduction: In the dynamic landscape of automotive engineering, the evolution of Automotive...
0
2024-06-18T14:11:10
https://dev.to/chanda_simran/driving-into-the-future-next-gen-ecu-technologies-transforming-automotive-performance-market-2h50
marketstrategy, globalinsights, marketgrowth
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ni8kibs779whv2y6akx.jpg) **Introduction:** In the dynamic landscape of automotive engineering, the evolution of [**Automotive Electronic Control Unit Market**](https://www.nextmsc.com/report/automotive-electronic-control-unit-ecu-market) stands at the forefront of innovation. These electronic brains of modern vehicles are undergoing a transformative phase, integrating cutting-edge technologies like neural networks and artificial intelligence (AI) to enhance automotive performance and efficiency. In this article, we delve into the realm of Next-Gen ECU Technologies, exploring their emergence, impact, and the exciting future they herald for the automotive industry. **Download FREE Sample:** https://www.nextmsc.com/automotive-electronic-control-unit-ecu-market/request-sample **Understanding Next-Gen ECU Technologies:** Traditional ECUs have long relied on predefined algorithms and logic to govern vehicle functions, but the advent of Next-Gen ECU Technologies marks a paradigm shift. Neural networks, inspired by the human brain's interconnected neurons, enable ECUs to learn and adapt to real-world driving scenarios. Through machine learning algorithms, these ECUs analyze vast amounts of data from sensors, cameras, and other sources to make intelligent decisions on-the-fly. AI-based ECUs take this concept further by incorporating advanced artificial intelligence algorithms. These AI-driven systems possess the capability to anticipate driver behavior, optimize vehicle performance, and even predict maintenance requirements, ushering in a new era of predictive and proactive automotive management. **Impact on Automotive Performance and Efficiency:** The integration of Next-Gen ECU Technologies has profound implications for automotive performance and efficiency. 
Here's how these technologies are reshaping key aspects of vehicle operation: **Enhanced Performance:** Neural network-based ECUs optimize engine performance by dynamically adjusting fuel injection, ignition timing, and other parameters based on driving conditions. AI-driven ECUs fine-tune vehicle dynamics in real-time, providing optimal traction control, stability management, and handling characteristics. **Improved Fuel Efficiency:** By continuously analyzing driving patterns and environmental factors, Next-Gen ECUs optimize fuel consumption, leading to significant improvements in overall efficiency. AI-based predictive algorithms anticipate upcoming road conditions and traffic patterns, enabling the vehicle to operate more efficiently and conserve fuel. **Advanced Driver Assistance:** Next-Gen ECUs play a pivotal role in enabling advanced driver assistance systems (ADAS), such as autonomous emergency braking, adaptive cruise control, and lane-keeping assistance. Neural network-driven ECUs improve the accuracy and responsiveness of ADAS features, enhancing both safety and driving comfort. **Predictive Maintenance:** AI-based ECUs monitor vehicle components in real-time, detecting subtle changes indicative of potential failures. By proactively identifying maintenance needs, these ECUs help prevent costly breakdowns and minimize downtime, ensuring optimal vehicle reliability. **The Road Ahead:** As Next-Gen ECU Technologies continue to evolve, their impact on the automotive industry will only intensify. Manufacturers are investing heavily in research and development to unlock the full potential of these cutting-edge systems. From autonomous driving capabilities to personalized driving experiences, the future holds endless possibilities fueled by innovation in ECU technology. 
**Seamless Connectivity:** Next-Gen ECUs facilitate seamless connectivity with external devices and infrastructure, enabling features such as vehicle-to-vehicle communication and integration with smart city systems. Through real-time data exchange, vehicles equipped with these ECUs can receive traffic updates, route optimizations, and other relevant information, enhancing both safety and convenience. **Personalized User Experience:** AI-driven ECUs have the capability to learn and adapt to individual driver preferences, customizing various vehicle settings such as climate control, seating position, and entertainment options. By providing personalized recommendations and adjustments, these ECUs create a more tailored and enjoyable driving experience for occupants. **Energy Management:** Next-Gen ECUs optimize energy management in hybrid and electric vehicles, dynamically balancing power distribution between internal combustion engines, electric motors, and battery systems. Through intelligent energy routing and regeneration strategies, these ECUs maximize overall efficiency and extend the driving range of electric vehicles. **Environmental Impact:** By optimizing engine performance and reducing fuel consumption, Next-Gen ECUs contribute to lower emissions and reduced environmental impact. AI-based predictive algorithms help minimize the ecological footprint of vehicles by promoting eco-friendly driving behaviors and optimizing energy usage. **Enhanced Security:** With connectivity becoming ubiquitous, Next-Gen ECUs prioritize cybersecurity to safeguard against potential threats such as hacking and unauthorized access. Advanced encryption protocols and intrusion detection systems are integrated into these ECUs to ensure the integrity and confidentiality of vehicle data and communication channels. **Regulatory Compliance:** Next-Gen ECUs are designed to meet stringent regulatory standards for automotive safety, emissions, and cybersecurity. 
Manufacturers leverage these technologies to ensure compliance with evolving regulatory requirements, fostering trust and confidence among consumers and regulatory bodies alike. **Conclusion:** The automotive landscape is undergoing a profound transformation driven by Next-Gen ECU Technologies. With neural networks and AI-based ECUs at the helm, vehicles are becoming smarter, more efficient, and safer than ever before. As these technologies continue to mature, they promise to revolutionize the way we perceive and interact with automobiles, ushering in a new era of mobility and connectivity. Embracing these advancements will not only elevate the driving experience but also pave the way for a sustainable and intelligent automotive future.
chanda_simran
1,892,463
Next.js Made Me a Bad Engineer
The Realisation I was tasked with porting our Single Page Web Application (SPA) to a...
0
2024-06-18T14:10:22
https://dev.to/kuvambhardwaj/nextjs-made-me-a-bad-engineer-2oel
webdev, javascript, discuss
## The Realisation

I was tasked with porting our Single Page Web Application (SPA) to a Server-Side Rendered (SSR) setup. The goal was simple: given a set of pages, make them SEO-friendly.

Right off the bat, 2 classic options came to my mind: SSG & SSR. The problem with statically generating all pages upfront (SSG) was build time: I estimated it at ~11 hours. The problem with SSR was speed: it took ~5 seconds before content was seen on the page.

The best of both worlds seemed to be an ISR-like setup, where as requests come in, we cache the content & serve the cached content for X amount of time, and after time's up the cache is invalidated for the route & content is re-fetched from the origin server.

The problem was *the framework*. Or at least, that's what my naive vercel-fanboy brain thought (sorry theo). You see, I was working on a Vue.js project flavoured with a framework called *Quasar*. And apparently, you can't do something like this there...

```js
// courtesy of Vercel/Next.js (same thing)
export const revalidate = 3600
```

So, in the next standup, I called out how *Quasar doesn't support ISR and therefore is not the right choice for us here & we should move to a different & more popular framework like Nuxt because it EXPLICITLY supports ISR.*

The majority of responses were positive to this suggestion, but one response stood out:

> that seems good but did you try hooking up a CDN first?

## The Problem with Me (and you)

I am not new to the field. I wrote my first JavaScript code 3 years ago and *I'm something of a Web Developer myself.* Even after a couple years of experience, I fell into the framework trap.

> Just because a framework solves a problem doesn't mean that the problem can't be solved without the framework.
>
> \-me

And what's more problematic is that my suggestion was well-received by the majority.

## The Solution

Use CloudFront for caching and `Cache-Control` headers in the response.
```plaintext
Cache-Control: public, max-age=604800, stale-while-revalidate=86400
```

I'm embarrassed by how simple it is.

> READ THE FRICKIN DOCS:
>
> [https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#stale-content](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#stale-content)
>
> [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control)
>
> [https://docs.netlify.com/platform/caching](https://docs.netlify.com/platform/caching/)

### Implementation Details

#### SEO Friendliness

Focusing on our main problem: make the site SEO-friendly. With an SPA, in any library/framework, the rendering happens in the browser (client-side). SEO crawlers looking to index your site usually have JavaScript disabled, which means that all of that rendering is gone and the crawlers will just see a blank page.

> "Disable JavaScript" while in developer mode in your browser to get an idea of what exactly the crawlers can see ;)

So, my first step was to migrate the application to SSR (Server-Side Rendering). Now, with SSR, all that rendering happens on the server, and at last the server gives out a nicely populated (rendered) HTML page.

#### Caching Mechanism

In my little bit of research, I found 3 plausible options:

1. Cloudflare Cache
2. Netlify CDN
3. AWS CloudFront

I chose AWS CloudFront. Reasons for not going with the others:

* Cloudflare CDN - had a 512MB limit on cached file sizes (really don't know why, it could also be my doc-reading skill issues)
* Netlify CDN - "$55 per 100GB" for bandwidth above the free tier? please!

Actually, I've been thinking... It's about time we cut some slack for AWS. Obviously, AWS will be a skill issue for front-end people who like one-click deploys and outrageous serverless prices.
But the services it provides far outweigh its crappy-looking website's UX (which, by the way, they updated to a better UX a while ago), and I blame the culture around "if you don't have a good-looking website, your business is crap".

And AWS's CloudFront solution makes the most sense, very neatly decoupling the cache layer from the origin server and the tech stack used. Here's a diagram I made to help you understand it better.

![explanation of isr with caching CDN](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2wknjvwhb70jtrz05jk.png)

When setting up, CloudFront requires your server's URL and then provides you with a ".cloudfront.net" URL. This ".cloudfront.net" URL is the cache layer, and the caching behaviour is controlled by the "Cache-Control" header supplied in the response by your origin server.

## Call For Discussion

I have some spicy hot takes here for you to fight me on and I'll try my best to stand by them in the comments :)

Do you think Next.js and its introduction of Server Components is any good? Or for that matter, is the general direction of Vercel good for engineers? Because I don't think there have been any meaningful features introduced since the `getServerSideProps` days that ackshually solved problems. With Vercel's "frontend cloud" philosophy, we will be broke with the unreal pricing just for the illusion of DX.

> If you complain about the developer experience you're not a developer at all.
>
> \-me again

Take me up on that in the comments, thanks for reading!
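To make the "embarrassingly simple" part concrete: the origin server's only job is to send that one header on its SSR responses, and the CDN does the ISR-style behaviour for you. A minimal sketch (the handler wiring in the comment is hypothetical, not our actual codebase):

```javascript
// Build the Cache-Control value the CDN will obey: serve cached copies for
// `maxAge` seconds, then keep serving the stale copy for `staleWhileRevalidate`
// more seconds while it refetches a fresh one from the origin in the background.
function cacheControl(maxAge, staleWhileRevalidate) {
  return `public, max-age=${maxAge}, stale-while-revalidate=${staleWhileRevalidate}`;
}

// e.g. inside any Node HTTP handler (hypothetical):
//   res.setHeader("Cache-Control", cacheControl(604800, 86400));
console.log(cacheControl(604800, 86400));
// → public, max-age=604800, stale-while-revalidate=86400
```

Note this works the same behind Cloudflare, Netlify, or CloudFront, since the header is standard HTTP, not framework magic.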
kuvambhardwaj
1,892,462
Git Commands Cheat Sheet
Git is a distributed version control system (DVCS) that allows multiple developers to collaborate on...
0
2024-06-18T14:08:17
https://dev.to/madgan95/git-commands-cheat-sheet-51ma
beginners, programming, opensource, git
Git is a distributed version control system (DVCS) that allows multiple developers to collaborate on a project, tracking changes to files and coordinating work seamlessly.

## Initializing a Repository:

```
git init
```

## Cloning a Repository:

```
git clone -b <branch-name> <url> <folder-name>
```

## Working with Branches:

**Create a new branch:**

```
git checkout -b <new-branch>
```

**Switch branches:**

```
git checkout <branch-name>
```

**Create and switch branches:**

```
git switch -c <new-branch>
```

**List all branches:**

```
git branch
```

## Staging and Committing Changes:

**Add files to the staging area:**

```
git add .
```

**Commit changes:**

```
git commit -m "message"
```

**View changes before staging:**

```
git diff
```

## Remote Repositories:

**Add a remote repository:**

```
git remote add origin <remote-repository-URL>
```

**List all remotes:**

```
git remote -v
```

**Push changes to a remote repository:**

```
git push -u origin <branch-name>
```

**Pull changes from a remote repository:**

```
git pull origin <branch-name>
```

## Stashing Changes:

Stashing in Git allows you to temporarily save changes that are not yet ready to be committed, so you can switch to another branch without losing your progress. It is used when you need to quickly switch contexts or update your working directory without committing incomplete work.

**Stash changes:**

```
git stash
```

**Apply stashed changes:**

```
git stash apply
```

## Configuration:

**Change user name:**

```
git config --global user.name "name"
```

**Change user email:**

```
git config --global user.email "email"
```

## Reset:

**Soft reset:**

```
git reset --soft HEAD^
```

Moves to the previous commit but keeps all your changes (staged).

**Hard reset:**

```
git reset --hard HEAD^
```

Moves to the previous commit and discards all changes.

( **HEAD** is a reference to the current commit & the **notation HEAD^** refers to the parent commit of the current HEAD )
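To see the stash workflow above end to end, here is a self-contained sketch that runs in a throwaway repo (all file and branch names are made up; `git stash pop` is like `git stash apply` but also drops the stash entry afterwards):

```shell
# Throwaway repo so the demo doesn't touch real work
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "demo"
echo "v1" > app.txt && git add app.txt && git commit -qm "init"

echo "wip" >> app.txt    # some half-finished change
git stash                # shelve it; the working tree is clean again
git switch -q -c hotfix  # go work on something else
git switch -q -          # come back to the original branch
git stash pop            # re-apply the shelved change and drop the stash entry
grep "wip" app.txt       # the half-finished change is back
```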
madgan95
1,892,041
LeetCode Day 11 Binary Tree Part 1
Tree Structure 2 ways to realize Binary Tree 1, Linked method. Similar to a Linked list,...
0
2024-06-18T14:08:01
https://dev.to/flame_chan_llll/leetcode-day-11-binary-tree-part-1-2ne3
leetcode, java, datastructures
# Tree Structure

There are 2 ways to realize a Binary Tree:

1. Linked method. Similar to a linked list, nodes are connected logically by references rather than being a physically contiguous data structure.
2. Array method. An array is a physically contiguous data structure, and we can realize the tree structure on it logically by index.

e.g.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9olu8wnj20nm3n58aai0.png)

So if the parent node is at index i, the left child will be at i*2+1, and the right child will be at i*2+2.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tprcsfaytxqoz4zyas2w.png)

And if the tree is not a binary tree but one where each parent has 3 children, then for a parent node at index i, the children from left to right are: the 1st child at i*3+1, the 2nd child at i*3+2, the 3rd child at i*3+3, etc.

And the basic tree node classes are easy to write:

```
public class MyTree {
    public class TreeNode<E> {
        public E val;
        public TreeNode<E> leftNode;
        public TreeNode<E> rightNode;

        public TreeNode() {
        }

        public TreeNode(E val) {
            this.val = val;
        }

        public TreeNode(E val, TreeNode<E> leftNode, TreeNode<E> rightNode) {
            this.val = val;
            this.leftNode = leftNode;
            this.rightNode = rightNode;
        }
    }
}
```

There are 4 methods to traverse a binary tree:

1. Preorder: mid -> left -> right
2. Inorder: left -> mid -> right
3. Postorder: left -> right -> mid
4. Level-order: level by level, i.e. the array order

# LeetCode No.144 Binary Tree Preorder Traversal

Given the root of a binary tree, return the preorder traversal of its nodes' values.

Example 1:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mlu5i5oacqbjmitsh8lt.png)

> Input: root = [1,null,2,3]
> Output: [1,2,3]

Example 2:

> Input: root = []
> Output: []

Example 3:

> Input: root = [1]
> Output: [1]

### 1.
```
public List<Integer> preorderTraversal(TreeNode root) {
    List<Integer> list = new ArrayList<>();
    if (root != null) {
        list.add(root.val);
        if (root.left != null) {
            list.addAll(preorderTraversal(root.left));
        }
        if (root.right != null) {
            list.addAll(preorderTraversal(root.right));
        }
    }
    return list;
}
```

# LeetCode No. 94 Binary Tree Inorder Traversal

[Original Page](https://leetcode.com/problems/binary-tree-inorder-traversal/description/)

```
public List<Integer> inorderTraversal(TreeNode root) {
    List<Integer> list = new ArrayList<>();
    if (root != null) {
        list.addAll(inorderTraversal(root.left));
        list.add(root.val);
        list.addAll(inorderTraversal(root.right));
    }
    return list;
}
```

# LeetCode No. 145 Binary Tree Postorder Traversal

[Original Page](https://leetcode.com/problems/binary-tree-postorder-traversal/description/)

```
public List<Integer> postorderTraversal(TreeNode root) {
    List<Integer> list = new ArrayList<>();
    if (root != null) {
        list.addAll(postorderTraversal(root.left));
        list.addAll(postorderTraversal(root.right));
        list.add(root.val);
    }
    return list;
}
```
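The recursive versions above can also be written iteratively with an explicit stack. A sketch for preorder, with the usual LeetCode `TreeNode` shape redefined here so the snippet is self-contained; pushing the right child before the left means the left child is popped (and visited) first:

```java
import java.util.*;

class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

public class IterativePreorder {
    public static List<Integer> preorderTraversal(TreeNode root) {
        List<Integer> list = new ArrayList<>();
        Deque<TreeNode> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            TreeNode node = stack.pop();
            list.add(node.val);                              // visit mid first
            if (node.right != null) stack.push(node.right);  // pushed first, popped last
            if (node.left != null) stack.push(node.left);    // pushed last, popped next
        }
        return list;
    }

    public static void main(String[] args) {
        // Example 1 from above: root = [1,null,2,3]
        TreeNode root = new TreeNode(1);
        root.right = new TreeNode(2);
        root.right.left = new TreeNode(3);
        System.out.println(preorderTraversal(root)); // [1, 2, 3]
    }
}
```

The same stack idea works for inorder and postorder, just with different push/visit ordering.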
flame_chan_llll
1,892,461
Day 7 of 30
So for today I reading related to Hacking and did some React.JS learning. **&lt;u&gt;What I...
0
2024-06-18T14:05:56
https://dev.to/francis_ngugi/day-7-of-30-10gj
react, hacking
So for today I did some reading related to hacking and some React.JS learning.

**<u>What I did in TryHackMe</u>**

- I read about SMB, how it works, how to enumerate a service running on SMB, and how to exploit it.

**<u>What I learnt on React.JS</u>**

- For today I did a simple project focused on the Admin role in a quiz app, where the admin could add a question, delete a question, and change the answer of a question, with all of the questions fetched from a db.json mock server:
  i) The code: https://github.com/FrancisNgigi05/react-hooks-fetch-crud-lab
- After the project I did some learning on client-side routing and did a small project:
  i) The learning part code: https://github.com/FrancisNgigi05/react-hooks-react-router-code-along
  ii) The small project I did to practice what I had learnt: https://github.com/FrancisNgigi05/react-hooks-react-router-routes-lab
francis_ngugi
1,892,627
Introducing the New .NET MAUI Digital Gauge Control
TL;DR: Experience the future of digital interaction with the cutting-edge Syncfusion .NET MAUI...
0
2024-06-19T15:02:03
https://www.syncfusion.com/blogs/post/dotnetmaui-digital-gauge-control
dotnetmaui, mobile, maui, ui
--- title: Introducing the New .NET MAUI Digital Gauge Control published: true date: 2024-06-18 14:04:49 UTC tags: dotnetmaui, mobile, maui, ui canonical_url: https://www.syncfusion.com/blogs/post/dotnetmaui-digital-gauge-control cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v03akpcll16qb0nytnh2.png --- **TL;DR:** Experience the future of digital interaction with the cutting-edge Syncfusion .NET MAUI Digital Gauge control, unveiled in our 2024 Volume 2 release! Let’s explore its features and the steps to get started with it. We are thrilled to introduce the new [Syncfusion .NET MAUI Digital Gauge](https://www.syncfusion.com/maui-controls/maui-digital-gauge ".NET MAUI Digital Gauge Control") control in our most recent launch, [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2")! This state-of-the-art control is designed to showcase alphanumeric and special characters in a sleek digital display format. It’s a perfect fit for many applications, including dashboards, real-time monitoring systems, and data visualization. <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/NET-MAUI-Digital-Gauge.gif" alt=".NET MAUI Digital Gauge" style="width:100%"> <figcaption>.NET MAUI Digital Gauge</figcaption> </figure> Let’s explore its key features. ## Key features - [Character segment types](#Character) - [Character display types](#display) - [Appearance customization](#Appearance) ## <a name="Character">Character segment types</a> The .NET MAUI Digital Gauge control offers various segment types for displaying characters, including seven, fourteen, sixteen, and 8×8 dot matrices. This flexibility allows for the precise representation of multiple data types in apps that require clear and distinct character displays. 
<figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/7-segment-character-type-in-.NET-MAUI-Digital-Gauge.png" alt="7 segment character type in .NET MAUI Digital Gauge" style="width:100%"> <figcaption>7 segment character type in .NET MAUI Digital Gauge</figcaption> </figure> <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/14-segment-character-type-in-.NET-MAUI-Digital-Gauge.png" alt="14 segment character type in .NET MAUI Digital Gauge" style="width:100%"> <figcaption>14 segment character type in .NET MAUI Digital Gauge</figcaption> </figure> <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/16-segment-character-type-in-.NET-MAUI-Digital-Gauge.png" alt="16 segment character type in .NET MAUI Digital Gauge" style="width:100%"> <figcaption>16 segment character type in .NET MAUI Digital Gauge</figcaption> </figure> <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/88-dot-matrices-character-type-in-.NET-MAUI-Digital-Gauge.png" alt="8*8 dot matrices character type in .NET MAUI Digital Gauge" style="width:100%"> <figcaption>8*8 dot matrices character type in .NET MAUI Digital Gauge</figcaption> </figure> ## <a name="display">Character display types</a> The .NET MAUI Digital Gauge control can display letters, numbers, and special characters in digital format. It is beneficial for applications like digital clocks where clear and precise character representation is crucial. 
<figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/NET-MAUI-Digital-Gauge-displaying-numbers.png" alt=".NET MAUI Digital Gauge displaying numbers" style="width:100%"> <figcaption>.NET MAUI Digital Gauge displaying numbers</figcaption> </figure> <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/NET-MAUI-Digital-Gauge-displaying-alphabetic-letters.png" alt=".NET MAUI Digital Gauge displaying alphabetic letters" style="width:100%"> <figcaption>.NET MAUI Digital Gauge displaying alphabetic letters</figcaption> </figure> <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/NET-MAUI-Digital-Gauge-displaying-special-characters.png" alt=".NET MAUI Digital Gauge displaying special characters" style="width:100%"> <figcaption>.NET MAUI Digital Gauge displaying special characters</figcaption> </figure> ## <a name="Appearance">Appearance customization</a> To enhance visual appeal, you can customize the character’s appearance with color, size, and spacing. <figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Customizing-characters-in-.NET-MAUI-Digital-Gauge.png" alt="Customizing characters in .NET MAUI Digital Gauge" style="width:100%"> <figcaption>Customizing characters in .NET MAUI Digital Gauge</figcaption> </figure> ## Getting started with .NET MAUI Digital Gauge control We’ve seen some of the key features of the Syncfusion .NET MAUI Digital Gauge control. Now, let’s see how to integrate it into your application. 1.[Create a new .NET MAUI application](https://learn.microsoft.com/en-us/dotnet/maui/get-started/first-app?view=net-maui-8.0 "Build your first .NET MAUI app") in [Visual Studio](https://visualstudio.microsoft.com/vs/ "Visual Studio"). 2.**Syncfusion.Maui.Core** NuGet is a dependent package for all Syncfusion .NET MAUI controls. In the **MauiProgram.cs** file, register the handler for Syncfusion core. 
```csharp
using Syncfusion.Maui.Core.Hosting;

builder
    .UseMauiApp<App>()
    .ConfigureSyncfusionCore();
```

3.Syncfusion .NET MAUI components are available in the [NuGet Gallery](https://www.nuget.org/ "NuGet Gallery"). To add the **SfDigitalGauge** to your project, open the **NuGet Package Manager** in Visual Studio, search for [Syncfusion.Maui.Gauges](https://www.nuget.org/packages/Syncfusion.Maui.Gauges "Syncfusion.Maui.Gauges NuGet package"), and install it.

4.Now, import the control namespace **Syncfusion.Maui.Gauges** in your XAML or C# code and initialize the **SfDigitalGauge** control.

5.Finally, set the [Text](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Gauges.SfDigitalGauge.html#Syncfusion_Maui_Gauges_SfDigitalGauge_Text "Text property of .NET MAUI Digital Gauge") property to display the value in the digital gauge and set the [CharacterType](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Gauges.SfDigitalGauge.html#Syncfusion_Maui_Gauges_SfDigitalGauge_CharacterType "CharacterType property of .NET MAUI Digital Gauge") based on your requirements.

Refer to the following code example.

**MainPage.xaml**

```xml
<!-- Add this namespace to the page's root element -->
xmlns:gauge="clr-namespace:Syncfusion.Maui.Gauges;assembly=Syncfusion.Maui.Gauges"

<gauge:SfDigitalGauge Text="06:14:56 PM" CharacterType="FourteenSegment" />
```

Refer to the following output image.

<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Integrating-Digital-Gauge-control-in-the-.NET-MAUI-app.png" alt="Integrating Digital Gauge control in the .NET MAUI app" style="width:100%">
<figcaption>Integrating Digital Gauge control in the .NET MAUI app</figcaption>
</figure>

## GitHub reference

For more details, refer to the [.NET MAUI Digital Gauge demo on GitHub](https://github.com/SyncfusionExamples/maui-digital-gauge/tree/master/GettingStarted ".NET MAUI Digital Gauge demo on GitHub").

## Conclusion

Thanks for reading!
In this blog, we’ve explored the features of the new Syncfusion [.NET MAUI Digital Gauge](https://www.syncfusion.com/maui-controls/maui-digital-gauge ".NET MAUI Digital Gauge") control rolled out in the [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release. For more information, please check the [.NET MAUI Digital Gauge documentation](https://help.syncfusion.com/maui/digitalgauge/overview "Getting started with the .NET MAUI Digital Gauge"). You can also check out all the other new updates of this release in our [Release Notes](https://help.syncfusion.com/common/essential-studio/release-notes/v26.1.35 "Essential Studio Release Notes") and [What’s New](https://www.syncfusion.com/products/whatsnew "Essential Studio What’s New") pages. For current Syncfusion customers, the newest version of Essential Studio is available from the [license and downloads page](https://www.syncfusion.com/account/downloads "Essential Studio License and Downloads page"). If you are not a customer, try our 30-day [free trial](https://www.syncfusion.com/downloads "Get free evaluation of the Essential Studio products") to check out these new features. If you have questions, contact us through our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [feedback portal](https://www.syncfusion.com/feedback "Syncfusion Feedback Portal"), or [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"). We are always happy to assist you!
## Related blogs - [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!") - [Introducing the 12th Set of New .NET MAUI Controls and Features](https://www.syncfusion.com/blogs/post/syncfusion-dotnet-maui-2024-volume-2 "Blog: Introducing the 12th Set of New .NET MAUI Controls and Features") - [What’s New in .NET MAUI Charts: 2024 Volume 2](https://www.syncfusion.com/blogs/post/dotnet-maui-charts-2024-volume-2 "Blog: What’s New in .NET MAUI Charts: 2024 Volume 2") - [How to Lazy Load JSON Data in .NET MAUI DataGrid](https://www.syncfusion.com/blogs/post/lazy-load-json-data-dotnetmaui-grid "Blog: How to Lazy Load JSON Data in .NET MAUI DataGrid")
gayathrigithub7
1,892,460
Residential Mortgage Service | Commercial Loan Servicing Companies
Find the best mortgage and loan servicing companies offering superior customer service, efficient...
0
2024-06-18T14:04:22
https://dev.to/covey_financial_2a83d934b/residential-mortgage-service-commercial-loan-servicing-companies-3n26
Find the best mortgage and loan servicing companies offering superior customer service, efficient payment handling, and reliable financial management. Get top-rated providers dedicated to streamlining your mortgage and loan servicing experience.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pmqrckzi0p8rr0aijclj.png)

#MortgageServicingCompanies #LoanServicingCompanies #ResidentialMortgageServices #TopMortgageServicingCompanies #BestMortgageServicingCompanies #CommercialLoanServicingCompanies
covey_financial_2a83d934b
1,892,459
Watch the outcome of TailwindCSS vs Bootstrap!
When you are struggling to pick a CSS framework, two popular options stand out: TailwindCSS and...
0
2024-06-18T13:59:47
https://dev.to/zoltan_fehervari_52b16d1d/watch-the-outcome-of-tailwindcss-vs-bootstrap-1o2o
tailwindcss, bootstrap
When you are struggling to pick a CSS framework, two popular options stand out: [TailwindCSS and Bootstrap](https://bluebirdinternational.com/tailwindcss-vs-bootstrap/). Each has unique features and advantages, and understanding their differences can help you make an informed decision for your web development needs. Please read on!

## Understanding TailwindCSS

### Features

Utility-First: TailwindCSS is a modern framework that focuses on utility classes, providing a comprehensive set of styling options that are highly customizable.

Utility Classes: These atomic-level styling options allow for extensive customization without writing custom CSS. TailwindCSS also includes responsive utility classes for mobile-friendly designs.

## Exploring Bootstrap

### Components

Pre-Built Components: Bootstrap offers a wide range of pre-built components like navigation menus, forms, buttons, and alerts, which can save development time.

Grid System: Bootstrap’s responsive 12-column grid system simplifies layout design, making it easy to create complex, flexible layouts.

## Performance Considerations

### File Size

TailwindCSS: Uses a modular approach, generating smaller file sizes and improving performance.

Bootstrap: Larger file size due to comprehensive components, which can affect performance on slower networks.

### Loading Speed

TailwindCSS: Optimized for fast loading times with minimal server requests.

Bootstrap: Loading speed varies with the number of components used; optimizing by including only necessary components is crucial.

## Customization Options

**TailwindCSS**

Utility Classes: Allows for high flexibility and quick development without custom CSS.

Theme Configuration: Offers built-in options for easy theme customization.

**Bootstrap**

Predefined Classes: Customization by overriding default variables or writing custom CSS.

Component Customization: Offers extensive control but can be restrictive for unique designs.
## Learning Curve and Documentation

### Learning Curve

TailwindCSS: Steeper initially, but once utility classes and naming conventions are understood, it becomes easier.

Bootstrap: Moderate learning path with a mix of utility classes and pre-built components.

### Documentation

TailwindCSS: Detailed documentation for every utility class, but it can be time-consuming to navigate.

Bootstrap: Comprehensive and well-structured documentation with extensive examples and customization tools.

## Community and Support

**TailwindCSS**

Growing community with active support forums and third-party resources. Official Discord server for discussions and Q&A sessions.

**Bootstrap**

Established community with a broad range of resources and support forums. Extensive guides and official forums for community interaction.

## Real-World Use Cases

**TailwindCSS**

E-commerce Websites: Quick creation of responsive, visually appealing sites.

Landing Pages: Customizable designs for brand representation.

SaaS Applications: Modular approach for complex UIs.

**Bootstrap**

Content-Heavy Websites: Grid-based framework for organized content.

Responsive Web Applications: Easy-to-use components for various devices.

Prototyping: Quick mockup creation for early testing and feedback.

## The Real Conclusion: TailwindCSS vs Bootstrap

**Factors to Consider**

Flexibility and Customization: TailwindCSS offers more control with utility classes.

Pre-Built Components: Bootstrap simplifies development with ready-to-use components.

Performance: Both frameworks offer optimizations, but file size and loading speed should be considered.

Learning Curve: Both have comprehensive documentation, but TailwindCSS may take longer to master initially.

Community Support: Bootstrap’s larger, established community offers more resources.
zoltan_fehervari_52b16d1d
1,892,458
Need Help While Learning the Web Devlopment
Can anyone suggest any video or link through which I can understand How is the website built from...
0
2024-06-18T13:58:52
https://dev.to/raj_kumar_3be6c725b7b7beb/need-help-while-learning-the-web-devlopment-3265
Can anyone suggest any video or link through which I can understand:

1. How is a website built, from scratch to the end?
2. What tools are required at each stage of web development?
3. How can I connect the dots of web development to understand it in depth, given that I am learning on my own?

I need guidance: books, tutorials, references, articles, or YouTube videos, anything that can clear up my confusion, so that I can visualize web development deeply and understand things properly, especially how things work during development.

A lot of technologies are involved in building a website nowadays, but it's hard to know at which stage of development each technology is required, or how to use them effectively. Help!!!
raj_kumar_3be6c725b7b7beb
1,892,455
RocketLane - Round 3 (Technical with CTO)
Q1. What do you do as a Research Engineer in your company? Q2. What is a recent bug that you have...
0
2024-06-18T13:55:00
https://dev.to/alamfatima1999/rocketlane-round-3-technical-with-cto-53nj
Q1. What do you do as a Research Engineer in your company?
Q2. What is a recent bug that you have solved?
Q3. What are you using in the backend?
Q4. Are you using any mechanism to authenticate your user?
Ans. JWT token.
Q5. Does the server produce this token?
Q6. How does the client get this token?
Q7. Will this token be sent every time the client wants to communicate?
Q8. Can we send the token in a cookie?
Q9. What does a cookie look like?
Q10. Why is it ";" separated?
Q11. What is the structure of a cookie?
Q12. Does the user send it explicitly (the cookie)?
Ans. No, the browser sends it automatically. And we can also send any additional cookie if needed.
Q13. What are HTTP methods?
Q14. What other data can you send in it?
Ans. Payload, cookies, params.
Q15. JavaScript question: There is a normal table (`<table>`, `<tr>`, `<td>`), and if I click on a particular cell then that cell should become active. How will you achieve it?
Answer Link - https://dev.to/fatimaalam1234/react-interview-html-table-question-36b3

Additional Question ->
Q16. How is the JWT updated?
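To make Q9-Q11 concrete: the `Cookie` request header the browser sends back is a single string of `name=value` pairs, and `;` is simply the pair separator (attributes like `Path` or `HttpOnly` only appear in the server's `Set-Cookie` response header). A small sketch of parsing one; the cookie names and values here are made up:

```javascript
// Parse a Cookie request header string into a name -> value map.
function parseCookies(header) {
  const jar = {};
  for (const part of header.split(";")) {
    const [name, ...rest] = part.trim().split("=");
    if (name) jar[name] = rest.join("="); // values may themselves contain "="
  }
  return jar;
}

const jar = parseCookies("sessionId=abc123; theme=dark; token=xxx.yyy.zzz");
console.log(jar.token); // "xxx.yyy.zzz" (e.g. a JWT the server set earlier)
```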
alamfatima1999
1,892,078
My Experience at JSNation and React Summit Amsterdam
This was my first time at the biggest JavaScript Festival and React Conference in the world (not an...
0
2024-06-18T13:52:33
https://dev.to/infoxicator/my-experience-at-jsnation-and-react-summit-amsterdam-4cbb
react, community, javascript, frontend
This was my first time at the biggest JavaScript Festival and React Conference in the world (not an exaggeration). It was also my first time as an attendee and I enjoyed not having to worry about giving a talk and just experiencing the conference! Apart from photobombing the stage TVs every single time I had a chance 😅, I was also lucky enough to have **1 on 1 conversations** with the smartest and most influential speakers and library authors in our industry. ### Here's what we talked about! ![My profile pic on the tv screens](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ye312n4sw8730gutq49d.jpg) ## Ryan Carniato, Creator of SolidJS Ryan and I discussed the “**missing link**” in the evolution of client-side applications. There is still a gap between Multiple Page Applications (MPAs) and Single Page Applications (SPAs) with client-side routing, where we are still forced to choose one or the other. There have been improvements with Astro and React Server Components, but they still have a performance degradation element that stops them from being complete. My question to him was: if the current stack is not enough for most use cases (sites that are not e-commerce and so sensitive to performance impact)? His answer was (paraphrasing): **"Mediocrity should not stop progress".** He is doing some work on Solid Start that might bring a breakthrough in this area, so I am really excited about what he's cooking 👀. We also discussed migrations, how difficult it is for a company to adopt a new technology, and how the landscape is very different from when React came out. Applications are getting old and we are due for a new wave of technologies that use the best parts of what we learned in the past 10 years. {% twitter 1801517170850967767 %} ## Tobias K, Creator of Webpack I had a long chat with Tobias about improving chunking in large applications to reduce bundle size. We also discussed the lack of a visualisation tool to represent the module graph.
I am using the tool he created [https://webpack.github.io/analyse/](https://webpack.github.io/analyse/) but it hasn't been maintained for a while and he acknowledged it doesn’t work well with large stats files. I also asked, of course, **what's the status of TurboPack**, his thoughts on RSPack and support for Module Federation. He was very diplomatic and asked me to keep an eye out for later this year. ## Dominik, Maintainer of React Query Spent a lot of time with Dominik walking around Amsterdam and **helping him #FreeSuspense 😂**. Apart from having fun, missing the walking tour and trying to find presents for our kids, we also had the chance to talk about Local First applications, State Management in large apps, how we at Postman are pushing React Query to the limit and our custom implementation of "Mobx Query" and "Broadcast client". He asked me to write some articles about those usages of React Query that we could showcase and share with the community. It was also great to be there during the conversation with Sathya from the React Core team at Meta when the decision was made to hold React 19 to ensure the client-side suspense data fetching story was correctly implemented. **React history was made and I was there for it!** (Special thanks to Dominik for recommending the best present for my daughter, what a legend!) {% twitter 1801669381182685508 %} ## Evan Bacon, Creator of Expo Router Had a great long chat with Evan. Went over so many topics including Generative UI, Full Self Driving, Expo router, how to make cool demos, and Apple and how they are good at marketing but not so good at execution. We talked about the difficulties in the distribution of native applications, multiple versions, backward compatibility, major version policy, rollbacks and roll-forwards. He also mentioned the cool features of Expo that help with the distribution of Native Apps.
We also discussed his demo of React Server Components generated by AI and streamed to React Native, and what the future after the demo means for **Server Driven UI, personalisation and the usefulness of generative UI beyond cool demos.** {% twitter 1801630044030111889 %} ## Brooks Lybrand, Devrel React Router (Remix) I watched Brooks talk about bringing React Router and the newer features of React to existing CRA and Webpack apps. React Router v7 and the migration path look very promising, and it aligns with the architecture I have been working on, which will make it so simple to eventually upgrade to React 19. My only complaint was he didn't have any spare Limited Edition Remix hoodies to give away. I also helped him design the new React Router Remix logo but Ryan got very mad at us 😂 {% twitter 1801624394466476066 %} ## Juri S, Developer Relations, NX Juri and I talked about how we are using the NX monorepo at Postman and how it would be a good idea to showcase the architecture and improvements we made to NX for Micro-Frontends and independent app deployments as a case study. I also gave him a demo of the breaking change detection system that my colleague Patrick and our team created to suggest semantic versioning package versions, and said it would be awesome to include it in the NX release command. ## Mo Khazali I discussed with Mo and Evan bringing Module Federation to React Native. Evan is not that keen since it is an “organisational issue”, not a user-facing or a DX issue, but it was interesting to share our perspective on where it could be useful to send runtime modules over the air for native apps when multiple teams are deploying different parts of a large React Native Application. After our chat I found out that Module Federation is already supported in React Native if you use [Re-Pack](https://re-pack.dev/docs/module-federation) ## Una Kravets It was great to finally meet Una in person.
I watched her talk at C3 DevFest and even though I had seen the ThunderCats intro before it was still really funny. I hope she doesn't remember my name because I am in trouble with the Google Developer Experts program for not reporting my engagements... oops 😅 ## Special Mentions These are just summaries of what I remember; some conversations sometimes went on for hours. I should have had a hidden microphone and released these chats as a podcast series. Thanks to [Niall Maher](https://x.com/nialljoemaher), Carolina, [Jesse Hall](https://x.com/codeSTACKr) and the Irish mafia for putting up with me for 3 days straight. [Daniel A(l)fonso](https://x.com/danieljcafonso), [Atila](https://x.com/AtilaFassina), Mi Parcero [Erick Wendel](https://x.com/erickwendel_) and [David K](https://x.com/DavidKPiano) for hanging out with me as the infiltrated non-speaker and for giving me free food truck tokens. Finally thanks [Josh Goldberg](https://x.com/JoshuaKGoldberg), for being a great friend and listening to me yapping until 2 AM. We didn't talk about tech at all but we talked about so many things! Next time he's going to convince me TypeScript is not considered self-harm at this point 😂 {% twitter 1802131541687927171 %} ## One last thing... The Conference! This is the best frontend conference, full stop. I have been going to GitNation events for a while and it is always a pleasure to be part of this community. Rob, Daria, Anna, Alex, Lera and the rest of the crew are amazing at what they do and they really care about the details and the community. Congratulations on another amazing event, and see you at React Advanced London! ![Arriving at the conf by bike](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jfed9wl6ksiivrchs28i.jpeg)
infoxicator
1,892,454
What is Multicanais?
Welcome to MulticanaisBr.Pro! If you are a football fan and want to watch football games online for...
0
2024-06-18T13:52:06
https://dev.to/william_willi/what-is-multicanais-p1c
sport, footbal
Welcome to MulticanaisBr.Pro! If you are a football fan and want to watch football games online for free, you are in the right place. Viewers can watch the Multichannel Game live in HD quality without paying anything. Each Multichannel is 100% free to watch! [Multicanais](https://multicanaisbr.pro/) fans can now enjoy VIP TV Ao Vivo BBB 24 streaming. All the main sports such as Football, Formula 1, Boxing, Combat and much more can be watched for free. Many sports channels like Premier Brasil, ESPN Brasil, ESPN Brasil 2, ESPN Brasil 3, ESPN Brasil 4, NBA, Eurosport 1 HD and UFC are possible to stream and many new channels will be available soon! Furthermore, you can watch the games of your favorite teams like Real Madrid, Flamengo, Vasco, etc. Our mission is to provide football fans with uninterrupted streaming of their dream games and teams. We keep updating match times every day to let users know about upcoming matches. After visiting our multichannel website, you will be able to watch highlights of various matches or games from Football, Basketball, NBA, Fights and much more.
william_willi
1,892,739
How to be the best programmer, according to Daniel Terhorst-North
Photo: Craft Conf For more content like this subscribe to the ShiftMag newsletter. Or rather,...
0
2024-07-04T15:51:40
https://shiftmag.dev/the-best-programmer-daniel-terhorst-north-3526/
career, craftconf, danielterhorstnorth, thebestprogrammer
--- title: How to be the best programmer, according to Daniel Terhorst-North published: true date: 2024-06-18 13:52:06 UTC tags: Career,CraftConf,DanielTerhorstNorth,thebestprogrammer canonical_url: https://shiftmag.dev/the-best-programmer-daniel-terhorst-north-3526/ --- ![](https://shiftmag.dev/wp-content/uploads/2024/06/daniel-terhorst-north.png?x43006) _Photo: Craft Conf_ _For more content like this **[subscribe to the ShiftMag newsletter](https://shiftmag.dev/newsletter/)**._ Or rather, they make themselves, carefully and deliberately, over time, said Dan as he explained in more detail what he had meant by ‘the best programmer he knows’ at the [Craft conference in Budapest](https://craft-conf.com/2024). His presentation was inspired by [a Twitter thread](https://x.com/tastapod/status/1010461873270153216?s=20) he wrote about **the qualities of the best programmers** that went viral. Dan’s blog post about [the worst programmer he knows](https://dannorth.net/the-worst-programmer/), which is actually about the absurdities of measuring developer productivity, also went viral, but that’s another story. The best programmer he talked about is a real person; he’s known that person for over 20 years. And that person is not the best programmer because they are the best at solving LeetCode or algorithmic problems (those programmers are going to be **the first ones replaced by the LLMs**, says Dan). That particular best programmer he knows **doesn’t have a computer science degree**, says Dan, but he’s not the best because of that. Some other best programmers have degrees. This particular one Dan described also happens to be male, but that’s also not why he is the best programmer. Some of the best programmers are female. But why is he the best programmer, then? As Dan says, the best programmers have _an insatiable curiosity and the belief that they can convince a computer to do anything.
They also have a healthy disregard for language and tool zealotry._ Here are the characteristics and ways of working Dan has managed to pinpoint that make a programmer the best programmer: ## Getting the job done The programmer’s job is to deliver. If you don’t get the job done, it doesn’t matter how good you are! ## 1.1. Just start The best programmer doesn’t start by doing extensive research on the problem at hand, reading tutorials, etc. He just starts, even if he doesn’t know everything about the task. He does one thing, and if it doesn’t work, he then tries something else. He resists the urge to procrastinate and knows that doing (and doing something wrong) is researching! ## 1.2. Know that you don’t know Most people want to do something only if they can do it well. We fear showing that we’re not good at something. The best programmers know that it’s just ego and that they don’t have to do a good or the right job every time. They write their code with a motto: _ **‘If it’s not good, I’ll rewrite it!’** _ ## 1.3 Iterate wildly They are confident in trying, failing, learning, and repeating. And doing so repeatedly. Write the best code you can, knowing half of it won’t be there next week (but half will, so don’t write shit code!). ## 1.4 Build a product The best programmer is aware that his job is not to show software craftsmanship or build beautiful software. Their job is to build a product! That’s why they’re **not emotionally attached to the code they wrote** but to the outcome. They’re invested in the outcome, the product they build; code is just the means to that end. The best programmer is interested in the problems of the final users when building a product. If he’s building a product for nurses, he would go and watch and talk to them and then represent those findings in code. ## 1.5. Solve for now The best programmers solve the real problem, not some fancy generalized version of it.
They learn to see what is really there and develop ‘at first sight’. ![](https://shiftmag.dev/wp-content/uploads/2024/06/daniel-terhorst-north1-1024x538.png?x43006) _Photo: Craft Conf_ ## Choosing the right tool Dan admitted that he had completely changed his opinion on how to choose the right tech stack/tool, and it took him a while. Now, he thinks that the best developers choose the right tool for the job, even if they haven’t used that tool before. ## 2.1. Teams can learn Choose the right tool for the product, not for the team. It’s easy to choose to use Java on a project if that’s what the team knows. Teams can learn; they weren’t born knowing Java. The best developer figures out if the investment in learning the tool is worth it to solve the problem the right way. Why is that so important? Because **code outlives teams and organizations.** ## 2.2. Do the simplest thing, not the easiest It’s important to know that they are not the same. The best programmer does not write code that’s easiest for him to write; he writes simple, obvious code that is easy to change later. ## 2.3 The right tool may change The best programmers write code that is easy to decompose, restructure, and rewrite. As Kent Beck put it – make the change easy and then make the easy change. You make the change easy by minimizing the blast radius, writing small, self-contained hacks, and using compostable materials that can be thrown away easily. ## 2.4 Be a polyglot It’s crucial to explore languages, tools, and paradigms—not to be a know-it-all Leetcode smartass but to get different perspectives and points of view. Try hackathons and challenges like Advent of Code! The more things you have experienced and played with, the better you will be at picking the right tool. **Be a full-stack developer!** Be curious about everything that makes a great web page, be it front-end, APIs or architecture. **Be REALLY full stack!** Strive to also learn about processes, business, or hardware.
> [Thread] What do I mean by “the best programmer I know”? Let’s start with the assumption I think I’m a decent programmer. Here are some examples: > > 1. He sees what is really there. I see what I am conditioned to see. Once he points it out, it was obvious all along. [https://t.co/dGPzMKfb29](https://t.co/dGPzMKfb29) > > — Daniel Terhorst-North is @tastapod@mas.to (@tastapod) [June 23, 2018](https://twitter.com/tastapod/status/1010461873270153216?ref_src=twsrc%5Etfw) <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> ## Care about the team No best programmer works in isolation, and caring about the team is a big part of what makes him the best. ## 3.1 Helping others The best developer finds joy in helping others. Sometimes, it’s just by saying encouraging words, and sometimes, it’s by teaching (which is also the best way to learn)! ## 3.2 Send the team home No one should be working late; a rested team is an effective team. ## 3.3 Be kind Assume everyone is doing their best. ## 3.4 Stay in the loop Join communities, try new things, practice, practice, practice! The best programmer is the best because he works really hard for it. There is no innate software engineering talent; it’s conscious behavior. ## 3.5 Care about yourself Although he works hard, he also does other things and has interests outside of programming. He makes sure to go home on time (sleep is the best debugger!). Above all, says Dan, the best programmer he knows is kind – to his colleagues and to himself. You might recognize some of these traits in yourself, Dan concludes – you might decide you want to aspire to some of them; you might choose to refer to them as an interviewer or as a candidate: > My hope is simply that you find this useful and, in some way, inspiring. The post [How to be the best programmer, according to Daniel Terhorst-North](https://shiftmag.dev/the-best-programmer-daniel-terhorst-north-3526/) appeared first on [ShiftMag](https://shiftmag.dev).
shiftmag
1,892,453
Career Tips
Originally published on ℳontegasppα And Giulia C.’S Thoughts. Excuse my lack of modesty, but I’m...
0
2024-06-18T13:50:52
https://dev.to/cacilhas/career-tips-242a
career, tips
Originally published on [ℳontegasppα And Giulia C.’S Thoughts](https://montegasppa.cacilhas.info/2024/06/career-tips.html). ----- Excuse my lack of modesty, but I’m pretty good at what I do. I’m not even close to the best though, but I’m way above average. That’s due to what I’ve learned over more than 20 years of work experience, and I’d like to share some of the simpler and more efficient tips I’ve picked up. Affection --------- Don’t fall in love with technology and, even more important, don’t get attached to your code. The code you wrote is **NOT** your son – neither your boyfriend, nor your girlfriend, nor your mother, nor anything else. Your code is a by-product of turning coffee into shit, a “stuff.” And just like any other stuff, it’s meant to be used, not loved. Look at your code as a necessary evil to be discarded as soon as it’s no longer needed. Do the same facing technologies: programming languages, frameworks, engines, and even methodologies are tools, not rules. Use them wisely, and discard each one when you can. Design ------ Avoid writing code. Design it first. Of course you can prospect by coding; however, discard that when you reach the final code, keeping only the cleanest version. My current job is to find value by just deleting code, and I have virtually implemented features by deleting code! How is that possible? Imagine a TV that isn’t working. It’s plugged into an adapter… which is plugged into another adapter… which is plugged into a third one! The last one is plugged into an extension. Then you look at that, remove the adapters, plug the TV directly into the extension, and *voilà*! The TV magically turns on! That’s pretty much what I’ve been doing, but why? Because people don’t design. They simply write random code like typing monkeys, hoping something will eventually work. It leads us to the next topic. Rely but don’t trust -------------------- Use methodologies, but don’t let methodologies use you.
Rely on [TDD (test-driven development)](https://www.wikiwand.com/en/Test-driven_development), [TDD (type-driven development)](https://www.manning.com/books/type-driven-development-with-idris), [SOLID](https://www.wikiwand.com/en/SOLID), [DDD](https://www.wikiwand.com/en/Domain-driven_design), [CDD](https://youtu.be/Kdpfhj3VM04), [AOP](https://www.wikiwand.com/en/Aspect-oriented_programming), [ADT](https://www.wikiwand.com/en/Algebraic_data_type), but always revisit the concepts and adapt them to your real-world needs. Conformism ---------- Never ever be fully satisfied with yourself. I never know enough; I’m always looking for the next subject or digging deeper into something I’ve learned, and so should you. --- Those are the tips I have for you today.
cacilhas
1,892,452
This is your Python Developer Roadmap!
Grab it while you can… Whether you’re just starting your Python journey or looking to enhance your...
0
2024-06-18T13:46:58
https://dev.to/zoltan_fehervari_52b16d1d/this-is-your-python-developer-roadmap-5h42
python, pythondeveloper, pythondeveloperroadmap, pythoncareer
Grab it while you can… Whether you’re just starting your Python journey or looking to enhance your skills, [my Python Developer Roadmap](https://bluebirdinternational.com/python-developer-roadmap/) will guide you towards mastering advanced coding skills. ## Key Takeaways - My roadmap is designed to help you excel in Python programming, regardless of your experience level. - From basics to advanced concepts, we cover essential aspects of Python coding for success. - Learn about Python’s versatility and real-world applications. - Build a strong foundation and advance to expert-level topics. - Discover tools and best practices for productivity and clean code. - Develop a professional portfolio, contribute to open-source projects, and specialize in Python to advance your career. ## Why Python is a Top Choice for Modern Developers Python stands out due to its readability, versatility, and strong community support. It’s ideal for web development, data science, AI, and more. Python’s straightforward syntax makes it accessible to beginners while simplifying complex concepts for experienced programmers. Its robust libraries and frameworks make it indispensable in modern development, paving the way for career growth and innovation. ### Python Application Areas and Key Libraries ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uvx4kyqhspbg5vgj3wzp.png) ### Advantages of Python 1. Readability: Easy to learn and understand, with an elegant code structure. 2. Extensive Libraries: Simplifies complex tasks and reduces development time. 3. Community Support: A large, active community offers resources and guidance. 4. Platform Independence: Runs seamlessly across different platforms. 5. Integration Capabilities: Easily interfaces with existing APIs and libraries. ## Fundamentals for Python Developers Starting with Python involves understanding variables, data types, and basic operations. Mastering these basics lays a solid foundation for more complex tasks.
Control Structures and Loops: Implement if-else statements, for and while loops, and control mechanisms to solve logical problems and implement algorithms efficiently. ## Essential Libraries for Beginners - NumPy: Mathematical computations - Pandas: Data manipulation - Matplotlib: Data visualization - Django: Web development - Flask: Web development - Scikit-learn: Machine learning ## Intermediate Skills ### Advanced Data Structures - Sets: Store unique elements and perform set operations. - Deque: Efficiently append or remove elements from both ends. - List Comprehensions and Lambda Functions: Create lists and anonymous functions concisely. ### Error Handling - Use try-except blocks to handle exceptions and prevent program crashes. - Understand built-in exceptions, and use else and finally clauses for additional control. ### Modules and Packages - Organize code using modules and packages. - Import modules using various techniques. - Utilize built-in and third-party modules from PyPI. ### Choosing the Right Environment - PyCharm: Suited for large-scale projects. - VS Code or Jupyter Notebook: Ideal for lightweight scripting or data science tasks. ## Advanced Concepts ### Object-Oriented Programming (OOP) - Focus on classes, objects, inheritance, encapsulation, and polymorphism for reusable and maintainable code. - Apply design patterns like Singleton, Factory, and Observer. ### Asynchronous Programming with Asyncio - Manage tasks concurrently for improved performance and responsiveness. - Optimize I/O-bound operations and enhance scalability. ## Working with Databases and APIs ### Database Integration - Use libraries like Psycopg2 (PostgreSQL) and pymongo (MongoDB) for database connections and CRUD operations. ### RESTful APIs - Develop APIs using Flask or Django REST Framework. - Consume APIs using libraries like Requests and aiohttp. 
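The error-handling and comprehension patterns above can be sketched in a few lines of Python. This is a minimal illustration; the function name and values are my own, not from the roadmap:

```python
def safe_divide(a, b):
    """Divide a by b, guarding against a crash with try-except."""
    try:
        result = a / b
    except ZeroDivisionError:
        # The except block catches the error instead of crashing the program
        return None
    else:
        # The else clause runs only when no exception was raised
        return result
    finally:
        # The finally clause always runs; real code would release resources here
        pass

# List comprehension + lambda, as covered under "Advanced Data Structures"
halve = lambda x: x / 2
halves = [halve(n) for n in (2, 4, 6)]
print(safe_divide(10, 2), safe_divide(1, 0), halves)
```

Note how `else` and `finally` give extra control beyond a bare `try-except`: success-only logic stays out of the `try` block, and cleanup is guaranteed either way.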
## Modern Developer Tools and Best Practices ### Version Control with Git - Use Git for cloning, branching, staging, committing, pushing, and merging code. - Collaborate on platforms like GitHub, GitLab, and Bitbucket. ### Writing Clean Code with PEP 8 - Follow naming conventions, indentation, code layout, error handling, and documentation standards for clean and maintainable code. ## Specializing in Python ### Continuous Learning and Community Involvement - Stay updated with blogs, newsletters, webinars, and online courses. - Seek mentorship and participate in forums and open-source projects. ### Popular Python Specializations - Data Visualization: Matplotlib, Seaborn, Plotly - Machine Learning: Scikit-learn, TensorFlow, Keras - Cybersecurity: Nmap, Scapy, Burp Suite - Web Development: Django, Flask, Pyramid
zoltan_fehervari_52b16d1d
1,892,451
Earn Up to $4K/Month Recurring Per Subscription with FastestEngineer’s Affiliate Program!
Join FastestEngineer’s lucrative affiliate program and start earning up to $4,000 monthly on a...
0
2024-06-18T13:43:56
https://dev.to/chovy/earn-up-to-4kmonth-recurring-per-subscription-with-fastestengineers-affiliate-program-376j
javascript, webdev, affiliate, programming
Join FastestEngineer’s lucrative affiliate program and start earning up to $4,000 monthly on a recurring basis for each subscription you refer! As our affiliate partner, you'll be promoting a cutting-edge SaaS platform that helps entrepreneurs rapidly launch their businesses. Not only will you benefit from a high commission rate, but your subscribers will also get a 10% discount when they sign up through your link. Dive into the thriving SaaS market and start boosting your earnings today. Sign up now and transform your affiliate efforts into a significant recurring income with FastestEngineer: https://fastest.engineer/affiliate
chovy
1,892,449
How to Configure ESLint for TypeScript Projects
TL;DR One thing that is important in projects but often neglected is code style and...
0
2024-06-18T13:41:03
https://dev.to/jupri-organization/how-to-configure-eslint-for-typescript-projects-1aip
typescript, tutorial, productivity, learning
## TL;DR One thing that is important in projects but often neglected is code style and standardization. Fortunately for those who really want to enforce certain code styles or standards in their projects, eslint has our back! ESLint is quite easy to set up, but when you add TypeScript the complexity increases a little bit. Don't be afraid, as this article will guide you through it. ## Installing Essential Packages Before we can use eslint, we must install some essential packages. ```bash $ npm install -D -E eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin ``` - **eslint**: Provides a flexible configuration system that allows you to define custom rules or leverage existing rule sets for specific coding styles - **@typescript-eslint/parser**: A parser plugin specifically designed for ESLint to handle TypeScript code. - **@typescript-eslint/eslint-plugin**: Provides a collection of ESLint rules specifically designed for TypeScript code. ## Setting up the ESLint Now that we have installed the essential packages, we can start creating the eslint config. Let's start by creating a file called `.eslintrc.json` and inside the file we write the following (note that every key must be quoted, with no trailing commas, for the file to be valid JSON). ```json { "parser": "@typescript-eslint/parser", "plugins": ["@typescript-eslint"], "extends": ["eslint:recommended", "plugin:@typescript-eslint/recommended"], "parserOptions": { "sourceType": "module", "ecmaVersion": 2023 } } ``` - **parser: "@typescript-eslint/parser"**: This line specifies the parser ESLint should use. Setting it to @typescript-eslint/parser ensures that ESLint can understand the syntax and types specific to TypeScript code. - **plugins: ["@typescript-eslint"]**: This section defines the plugins used for linting. This plugin provides a collection of rules specifically designed for TypeScript code. - **extends: ["eslint:recommended", "plugin:@typescript-eslint/recommended"]**: This section defines the base configurations that our setup inherits from.
- **"eslint:recommended"**: This extends your configuration with ESLint's recommended set of rules for JavaScript code - **"plugin:@typescript-eslint/recommended"**: This extends our configuration with the recommended set of rules from the @typescript-eslint plugin - **sourceType: "module"**: This option specifies the type of source code being parsed. Here, it's set to "module", indicating that the code is expected to be written in a module system like ES modules - **ecmaVersion: 2023**: This option specifies the ECMAScript version that the parser should expect the code to be written in. That's the configuration we needed, as well as the explanation of each property. ## Exploring the Rules Now, after setting up ESLint, you are left with exploring the various rules of eslint and matching them to your liking. You can explore the available rules in the [ESLint docs](https://eslint.org/docs/latest/rules/) and [TypeScript ESLint docs](https://typescript-eslint.io/rules/). If you don't feel like exploring the rules and want to just install a plugin, don't worry, we've got your back. We have made a plugin that is curated and tailored to improve your productivity, project's maintainability, and type safety. Check out our [eslint-plugin-typescript](https://www.npmjs.com/package/@jupri-lab/eslint-config-typescript) as well as [eslint-plugin-typescript-react](https://www.npmjs.com/package/@jupri-lab/eslint-config-typescript-react)
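As a sketch of what tailoring the rules can look like, a `rules` section can be added to the same `.eslintrc.json`. The specific rules and severity levels below are illustrative choices on my part, not recommendations from this article:

```json
{
  "rules": {
    "semi": ["error", "always"],
    "quotes": ["error", "single"],
    "@typescript-eslint/no-explicit-any": "warn",
    "@typescript-eslint/no-unused-vars": "error"
  }
}
```

ESLint merges this section with the `parser`, `plugins`, and `extends` settings shown earlier; a rule set to `"warn"` is reported without failing the lint run, while `"error"` fails it.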
mikhaelesa
1,892,448
A Brief History Of The Internet And The World Wide Web
The internet is a global network of interconnected computers that communicate through standardized...
0
2024-06-18T13:37:36
https://dev.to/baraq/a-brief-history-of-the-internet-and-the-world-wide-web-1mhm
webdev, learning, html, frontend
The internet is a global network of interconnected computers that communicate through standardized protocols. It enables the exchange of data, information, and services through various technologies, such as email, websites, social media, and online applications. The World Wide Web (WWW), on the other hand, commonly known as the web, is a system of interlinked hypertext documents and multimedia content that can be accessed via the internet. Created by Tim Berners-Lee in 1989, the web allows users to navigate through web pages using hyperlinks. **History of the Internet** Before 1957, computers primarily operated on a single task at a time. Early computers, such as the ENIAC (Electronic Numerical Integrator and Computer) and UNIVAC (UNIVersal Automatic Computer), were designed to process one job or task sequentially. This mode of operation is known as “serial processing.” These early machines did not have the capability to handle multiple tasks simultaneously due to limited processing power and lack of advanced operating systems. Users would submit a job, which the computer would process to completion before starting the next job. This single-task approach was sufficient for the simpler computational needs and hardware limitations of the time. The shift toward handling multiple tasks began with the development of batch processing and later, time-sharing systems, which emerged in the late 1950s and 1960s. ![cold war image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ub1xtmnx9zlt75hwf5z1.jpeg) The Cold War space race significantly contributed to the development of the internet through a combination of technological innovation. I remember taking a course on this particular topic in my history class at uni. It was a proxy war between the United States and the Soviet Union. On October 4th, 1957, the first satellite, “Sputnik 1”, was launched by the Soviet Union.
The fear of a missile launch from space awakened the technological consciousness of the United States, prompting it to found the Advanced Research Projects Agency (ARPA). One of ARPA’s projects was to develop robust, reliable communication systems that could withstand potential disruptions, such as a nuclear attack. This led to research into decentralized communication networks, which formed the basis of the ARPANET, ![Arpanet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z23n4u0vhxtd9uwuv9d5.jpg) the precursor to the internet. Packet switching, also key to the development of the ARPANET, allowed data to be broken into small packets, sent independently across the network, and reassembled at the destination. The Cyclades network also played a crucial role in the development of the internet. It was a pioneering computer network developed in France in the early 1970s, primarily led by computer scientist Louis Pouzin. Cyclades introduced and refined several key concepts, particularly in the realm of packet-switched networks. February 28, 1990, marked a significant milestone in the development of the internet, specifically relating to the transition from the ARPANET to the modern internet. The transition from ARPANET to the broader internet was facilitated by the adoption of the Transmission Control Protocol (TCP)/Internet Protocol (IP) suite, which became the standard for network communication. TCP/IP’s robustness, scalability, and ability to interconnect diverse networks were crucial in enabling the creation of a global network of networks. **The World Wide Web (WWW)** ![world wide web](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k9txma5jfoc15rx6sp3i.jpg) Thirty-five years ago the web was invented; behind this invention is a computer scientist, Tim Berners-Lee, born in London on June 8th, 1955. He came upon it while working at CERN, the European Organization for Nuclear Research.
Berners-Lee envisioned a system that would enable researchers to share information seamlessly across different computer systems. To realize this vision, he developed three fundamental technologies: HTML (HyperText Markup Language), which allowed for the creation of web pages; URI (Uniform Resource Identifier), which later became URL (Uniform Resource Locator), providing a way to address and access web resources; and HTTP (HyperText Transfer Protocol), enabling the retrieval of linked resources across the web. The first successful communication between a web browser and a server occurred in mid-November 1990, marking the birth of the web. By the end of 1990, the first web page had been created, hosted on Berners-Lee’s NeXT computer at CERN. ![Mosaic browser](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vh5xc6cdjzrkkfsg4fwf.jpg) In 1993, the introduction of the Mosaic web browser, developed by Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications (NCSA), significantly boosted the web’s popularity. Mosaic was user-friendly, supported images, and was available on multiple platforms, making it accessible to a broader audience. The web’s rapid expansion led to the development of dynamic and interactive content, search engines, e-commerce platforms, and social media, transforming how information was accessed and shared globally. The web’s evolution continued with the advent of Web 2.0 in the early 2000s, emphasizing user-generated content, interactivity, and collaboration. Technologies such as AJAX (Asynchronous JavaScript and XML) enabled more dynamic web applications, leading to the proliferation of platforms like Wikipedia, YouTube, and Facebook.
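The URL format Berners-Lee designed is still how every web resource is addressed today. As a small illustration using Python's standard library (the address shown is CERN's restored copy of the first web page):

```python
from urllib.parse import urlparse

# Every URL bundles the three pieces needed to fetch a resource:
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

print(parts.scheme)  # the protocol to speak, e.g. HTTP
print(parts.netloc)  # the host that serves the document
print(parts.path)    # which document to retrieve from that host
```

The scheme tells the browser which protocol to use (HTTP), the host is found via the internet's infrastructure, and the path identifies the hypertext document on that server — the three technologies of the web working together.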
Today, the World Wide Web is an integral part of daily life, supporting a vast array of applications and services that connect people, businesses, and information worldwide. Its development has been marked by continuous innovation and adaptation, driven by the collaborative efforts of researchers, developers, and users across the globe.
baraq
1,892,446
Multi-Layered Kubernetes Security: From Pod to Cluster Level
Kubernetes, a container orchestration platform, provides a robust and scalable environment for...
0
2024-06-18T13:36:25
https://dev.to/platform_engineers/multi-layered-kubernetes-security-from-pod-to-cluster-level-3gdn
Kubernetes, a container orchestration platform, provides a robust and scalable environment for deploying and managing applications. However, ensuring the security of these applications and the underlying infrastructure is crucial. This blog post delves into the multi-layered security approach for Kubernetes, covering security measures from the pod level to the cluster level. ### Pod Level Security At the pod level, security revolves around the containers running within the pod. Here are some key security measures: 1. **Container Runtime Security**: - **Runtime Class**: Kubernetes provides runtime classes to define the container runtime configuration. This allows for specifying the runtime environment, including security settings. ```yaml apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: my-runtime-class handler: runc ``` 2. **Image Security**: - **Image Scanning**: Regularly scan container images for vulnerabilities using tools like Clair or Anchore. - **Image Signing**: Use image signing tools like Docker Content Trust to ensure the authenticity of images. 3. **Network Policies**: - **Pod-to-Pod Communication**: Implement network policies to control communication between pods. ```yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: pod-communication-policy spec: podSelector: matchLabels: app: my-app ingress: - from: - podSelector: matchLabels: app: my-app ports: - port: 80 ``` ### Deployment Level Security At the deployment level, security focuses on the configuration and management of deployments. 1. **Role-Based Access Control (RBAC)**: - **Service Accounts**: Use service accounts to manage access to deployments. - **Roles and RoleBindings**: Define roles and role bindings to control access to deployments. ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: deployment-manager rules: - apiGroups: ["apps"] resources: ["deployments"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] ``` 2. 
**Secrets Management**: - **Secrets**: Store sensitive data like passwords and API keys as secrets. - **Secrets Encryption**: Use tools like Kubernetes Secrets Store CSI Driver to encrypt secrets. ### Cluster Level Security At the cluster level, security encompasses the overall Kubernetes cluster configuration and management. 1. **Cluster Configuration**: - **API Server Configuration**: Configure the API server to use secure communication protocols like HTTPS. - **Etcd Encryption**: Encrypt etcd data to protect sensitive cluster information. 2. **Network Security**: - **Network Policies**: Implement network policies to control communication between pods and external networks. - **Calico Network Policy**: Use Calico to manage network policies and provide additional security features. ### Conclusion In conclusion, a multi-layered security approach is essential for ensuring the [security of Kubernetes](https://platformengineers.io/blog/securing-kubernetes-beyond-rbac-and-pod-security-policies-psp/) deployments. By implementing security measures at the pod, deployment, and cluster levels, you can create a robust and secure environment for your applications.
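The etcd encryption mentioned above is typically enabled by pointing the API server at an `EncryptionConfiguration` file via its `--encryption-provider-config` flag. A minimal sketch (the key name is illustrative and the secret is a placeholder; a real key must be a base64-encoded 32-byte value):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]            # encrypt Secret objects at rest in etcd
    providers:
      - aescbc:
          keys:
            - name: key1              # illustrative key name
              secret: <base64-encoded-32-byte-key>  # placeholder, generate your own
      - identity: {}                  # fallback so pre-existing plaintext data stays readable
```

The provider order matters: new writes use the first provider (`aescbc`), while `identity` allows reads of data written before encryption was enabled.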
shahangita
1,892,445
One Byte Explainer Challenge solution(Recursion)
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-18T13:35:40
https://dev.to/sakutiriko/one-byte-explainer-challenge-solutionrecursion-13bo
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer Recursion is a computer programming technique where a function solves a problem by calling itself on smaller versions of that problem. It is crucial in computer science for tasks like sorting and traversing data structures, allowing concise and efficient solutions.
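A minimal sketch of the idea in Python (the `factorial` function is an illustrative example, not part of the submission):

```python
def factorial(n: int) -> int:
    """Compute n! by first solving the smaller problem (n-1)!."""
    if n <= 1:                       # base case: stops the recursion
        return 1
    return n * factorial(n - 1)      # recursive case: a smaller version of the problem

print(factorial(5))  # 120
```

Each call reduces the problem until the base case is reached, then the partial results combine on the way back up.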
sakutiriko
1,892,444
A Modern Python Toolkit: Pydantic, Ruff, MyPy, and UV
In the ever-changing world of Python development, tools such as Pydantic, Ruff, MyPy, and UV have...
0
2024-06-18T13:33:54
https://developer-service.blog/a-modern-python-toolkit-pydantic-ruff-mypy-and-uv/
python, pydantic, ruff, mypy
In the ever-changing world of Python development, tools such as Pydantic, Ruff, MyPy, and UV have become essential for improving productivity and code quality. Let's take a closer look at how these tools can be incorporated into your work. --- ## Pydantic: Data Validation and Settings Management Pydantic is a library for data validation and settings management that uses Python type annotations. It ensures data integrity by validating and parsing data, making it perfect for handling complex configurations and data structures. Pydantic works well with FastAPI and other frameworks, providing seamless validation of request and response data. ### Key Features: - Uses type annotations for data validation. - Automatically parses and converts data. - Works with FastAPI for API development. - Provides efficient and user-friendly error handling. **Example:** ```python from pydantic import BaseModel, ValidationError class User(BaseModel): id: int name: str age: int try: user = User(id=1, name='John Doe', age='five') # This will raise a ValidationError except ValidationError as e: print(e.json()) # Correct Usage user = User(id=1, name='John Doe', age=25) print(user) ``` Don't forget to install it with: `pip install pydantic` --- ## Ruff: Fast and Lightweight Linter Ruff is an incredibly fast linter and code formatter designed to handle large codebases efficiently. It's written in Rust and aims to provide real-time feedback without sacrificing speed or accuracy. Ruff is designed to replace tools like Flake8 and supports a wide range of linting rules. ### Key Features: - 10-100x faster than traditional linters. - Supports a wide range of linting rules. - Requires minimal configuration. - Provides fast feedback during development. 
**Example:** Create a .ruff.toml configuration file with, for example: ```toml line-length = 88 indent-width = 4 ``` Run Ruff: `ruff check .` Don't forget to install it with: `pip install ruff` --- ## MyPy: Static Type Checking MyPy brings static type checking to Python. By enforcing type hints, MyPy helps catch type-related errors early in the development process, improving code robustness and readability. It's especially useful for large codebases where dynamic typing can lead to runtime errors. ### Key Features: - Provides static type checking for Python code. - Helps detect and prevent type-related errors. - Improves code readability and maintainability. - Works with Pydantic for seamless data validation. **Example:** Consider this code example: ```python def greet(name: str) -> str: return f"Hello, {name}!" ``` Run MyPy: `mypy script.py` Don't forget to install it with: `pip install mypy` --- ## UV: Fast Package Installer and Resolver UV is a modern package installer and resolver written in Rust, designed to replace common tools like pip, pip-tools, and virtualenv. UV aims to provide a faster and more efficient package management experience, with features like advanced dependency resolution and a global cache for dependency deduplication. ### Key Features: - 10-100x faster than pip and pip-tools. - Can replace pip, pip-tools, and virtualenv. - Saves disk space with a global dependency cache. - Supports macOS, Linux, and Windows. **Example:** Install packages with UV: `uv pip install requests` Produces this output: ``` Resolved 5 packages in 213ms Downloaded 5 packages in 249ms Installed 5 packages in 147ms + certifi==2024.6.2 + charset-normalizer==3.3.2 + idna==3.7 + requests==2.32.3 + urllib3==2.2.2 ``` --- ## Integration in a Workflow Incorporating these tools into your Python development workflow can significantly improve efficiency and code quality. 
Here's a typical workflow using these tools: - Define Data Models with Pydantic: Use Pydantic to define and validate data models, ensuring that only valid data is processed. - Lint and Format Code with Ruff: Run Ruff to quickly lint and format your codebase, catching potential issues early. - Type Checking with MyPy: Use MyPy to enforce type hints and perform static type checking, catching type-related errors before runtime. - Manage Dependencies with UV: Use UV to install and manage dependencies efficiently, leveraging its fast resolution and installation capabilities. --- ## Conclusion Including Pydantic, Ruff, MyPy, and UV in your Python projects can lead to more robust, maintainable, and efficient code. These tools work well together, offering a comprehensive toolkit for modern Python development.
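As a small illustration of the type-checking step in the workflow above (the `average` function is a hypothetical example, not taken from any of these tools):

```python
def average(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

# Running `mypy` on this file would reject a call such as
# average(["a", "b"]) at analysis time, before it can fail at runtime.
print(average([1.0, 2.0, 3.0]))  # 2.0
```

The annotations cost almost nothing to write, and MyPy turns them into a safety net for every caller of the function.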
devasservice
1,892,441
Filtering and Mapping in JavaScript
When I started learning to code, I began with simple things like creating variables. But after a few...
0
2024-06-18T13:33:50
https://dev.to/tamikaxuross/filtering-and-mapping-in-javascript-lac
javascript, map, filter, keywords
When I started learning to code, I began with simple things like creating variables. But after a few months in my programming course, I discovered two powerful tools in JavaScript: filter() and map(). These tools make working with arrays of data much easier and more efficient. ## What's the Difference Between the Two? ## The filter() Method The filter() method lets you create a new array with only the elements that pass a test you provide. Imagine you have a list of numbers and you want to pick out only the even numbers. Here's how you can do that with filter(): ``` const numbers = [1, 2, 3, 4, 5, 6]; const evenNumbers = numbers.filter(number => number % 2 === 0); console.log(evenNumbers); // [2, 4, 6] ``` In this example, number % 2 === 0 is the test to check if a number is even. The filter() method goes through each number in the list and includes only those that pass the test (i.e., the even numbers). ## The map() Method The map() method creates a new array by transforming every element in the original array according to a function you provide. For example, if you want to double each number in an array, you can use map(): ``` const numbers = [1, 2, 3, 4, 5, 6]; const doubledNumbers = numbers.map(number => number * 2); console.log(doubledNumbers); // [2, 4, 6, 8, 10, 12] ``` Here, number => number * 2 is the function that doubles each number. The map() method applies this function to every number in the list, creating a new list with the doubled values. ## Transforming Data with map() One of the main uses of map() is to change or transform data. This is really useful when working with lists of objects. 
For example, if you have a list of user objects and you want to get just their names, you can use map(): ``` const users = [ { name: 'Alice', age: 25 }, { name: 'Bob', age: 30 }, { name: 'Charlie', age: 35 } ]; const userNames = users.map(user => user.name); console.log(userNames); // ['Alice', 'Bob', 'Charlie'] ``` ## Picking Out Data with filter() Filtering data means picking out elements that meet certain criteria. For example, if you want to select users who are older than 28, you can use filter(): ``` const users = [ { name: 'Alice', age: 25 }, { name: 'Bob', age: 30 }, { name: 'Charlie', age: 35 } ]; const usersAbove28 = users.filter(user => user.age > 28); console.log(usersAbove28); // [{ name: 'Bob', age: 30 }, { name: 'Charlie', age: 35 }] ``` ## Combining filter() and map() Sometimes, you need to use both filter() and map() together to get exactly what you want. For example, if you want the names of users who are older than 28, you can combine both methods: ``` const users = [ { name: 'Alice', age: 25 }, { name: 'Bob', age: 30 }, { name: 'Charlie', age: 35 } ]; const userNamesAbove28 = users .filter(user => user.age > 28) .map(user => user.name); console.log(userNamesAbove28); // ['Bob', 'Charlie'] ``` In this example, filter() first selects the users older than 28. Then map() extracts just their names. ## Conclusion Understanding and using filter() and map() in JavaScript is super helpful for working with arrays. These methods make it easy to process and transform data, making your code cleaner and easier to understand. Whether you're picking out certain elements with filter() or transforming data with map(), mastering these methods will make you a better programmer.
tamikaxuross
1,892,443
8 Best Tips for Building a Construction Budget
Construction Company in North Bangalore Introduction Creating a construction budget is a critical...
0
2024-06-18T13:33:45
https://dev.to/tvasteconstructions/8-best-tips-for-building-a-construction-budget-56d0
Construction Company in North Bangalore Introduction Creating a construction budget is a critical step in ensuring the success of any construction project, whether it's a small renovation or a large-scale build. A well-crafted budget not only helps in managing costs effectively but also minimizes financial risks and ensures the project stays on track. This article shares eight essential tips for building a robust construction budget. These tips will guide you through detailed project scoping, accurate cost estimation, contingency planning, and more, equipping you with the knowledge to manage your construction project's finances with confidence and precision. 1. Detailed Project Scope The foundation of any effective budget is a complete project scope. Outline every aspect of the project, including materials, labour, equipment, permits, and overhead costs. Be meticulous about detailing every task, no matter how minor it seems. This thoroughness will help in creating a more accurate budget and avoiding surprises later on. 2. Accurate Cost Estimation Accurate cost estimation is the backbone of a reliable budget. Use historical data from past projects, consult with experts, and utilize construction estimating software to get precise figures. Include costs for materials, labour, machinery, and any subcontractor fees. Always add a margin for error to accommodate price fluctuations and unexpected expenses. 3. Include a Contingency Fund No matter how well-planned your budget is, there will always be unforeseen expenses. Allocate a contingency fund, typically 5-10% of the total budget, to cover unexpected costs. This fund acts as a financial cushion, ensuring the project remains on track even when unexpected issues arise. 4. Regularly Review and Update the Budget A construction budget is not a static document; it should be reviewed and updated regularly. As the project progresses, new costs may emerge, and some expenses may change. 
Regularly comparing actual expenses against the budget helps in identifying any discrepancies early and allows for timely adjustments. 5. Detailed Schedule A detailed project schedule complements your budget. Outline all phases of the construction process, with specific timelines for each task. This helps in aligning your budget with the project’s timeline, ensuring funds are available when needed. Delays can be costly, so a well-planned schedule helps in avoiding unnecessary expenses. 6. Track and Document Expenses Implement a system for tracking and documenting all expenses related to the project. Use construction management software or a detailed spreadsheet to record every transaction. This not only helps in monitoring the budget but also provides a clear financial record for future reference and audits. 7. Vendor and Contractor Management Effective vendor and contractor management is crucial for staying within budget. Obtain multiple bids for materials and services to ensure you are getting the best prices. Establish clear contracts with vendors and contractors, detailing costs, payment schedules, and delivery timelines. Good relationships and clear agreements can prevent cost overruns and delays. 8. Plan for Permits and Regulations Permits and regulatory compliance can be a significant expense in construction projects. Research all necessary permits and include these costs in your budget. Ensure compliance with local regulations to avoid fines and legal issues, which can severely impact your budget and project timeline. Conclusion Building a construction budget requires careful planning, detailed documentation, and regular monitoring. By following these eight tips, you can create a realistic and effective budget that helps ensure the financial health of your construction project. Remember, the key to a successful construction budget is in the details and the ability to adapt to changes as the project progresses. 
With a well-crafted budget, you can manage costs effectively, mitigate risks, and achieve project success. Tvaste Constructions is the best Construction Company in North Bangalore. Get to know more information contact us. Contact Us: Phone Number: +91-7406554350 E-Mail: info@tvasteconstructions.com Website: www.tvasteconstructions.com
tvasteconstructions
1,892,418
VueJS: o que é, como funciona e como começar a usar o framework
Você trabalha com desenvolvimento Front End (ou estuda essa área) e procura uma ferramenta flexível e...
26,330
2024-06-18T13:29:43
https://dev.to/sucodelarangela/vuejs-o-que-e-como-funciona-e-como-comecar-a-usar-o-framework-joi
Do you work in **Front End development** (or study the field) and are looking for a flexible, effective tool for building amazing projects? Then this guide is for you! In this article, written in partnership with and exclusively for [**Alura**](https://www.alura.com.br/), you will get to know a tool for web development: **Vue.js**. Together we will explore what it is, what it is for, and how to dive deeper into this increasingly popular framework. Just click the link below to access the article for free! [Vue.js: what it is, how it works, and how to start using this JS framework](https://www.alura.com.br/artigos/vue-js) <img src="https://www.alura.com.br/artigos/assets/vue-js/vue-js.jpg" alt="Front-end school"> Happy reading!
sucodelarangela
1,890,839
Develop a consciousness of internal quality to maintain our productivity
I love coding because it allows me to create what I want from my imagination. But it's not fun when...
0
2024-06-18T13:27:00
https://dev.to/seachicken/develop-a-consciousness-of-internal-quality-to-maintain-our-productivity-2bpg
opensource, codequality
I love coding because it allows me to create what I want from my imagination. But it's not fun when the software design becomes complex and buggy. Developers need to maintain maintainability by constantly checking the scope of impact when making changes to the code and by continuing to test and refactor. Senior developers often catch the [code smells](https://tidyfirst.substack.com/p/code-smells) along the way, but it is always difficult to stay conscious of the impact of code changes. To support developers, the impact of code changes should be easy to see. ## The slowing pace of development As a developer, you've probably experienced the frustration of a service becoming more complex over time, leading to a slower pace of development. Even if everything works fine when you add a feature, if the scope of a change is difficult to understand and the design is hard for future colleagues to grasp, the code will gradually become more likely to break. Creating unintended relationships in a program is easy, so maintaining maintainability within a team can be a tough challenge. ## Developing while looking at the impact map The [Inga plugin's Impact Map](https://plugins.jetbrains.com/plugin/24358-inga) visualizes the modules and components affected by the developer's code changes in real-time. {% embed https://www.youtube.com/watch?v=D1PpRi0yvKY %} The impact can be checked in the following development phases. **Individual development phase:** By integrating with the IDE, the Inga plugin empowers developers to develop while always being aware of the impact on the internal design. This helps to prevent unintended negative effects, giving developers more confidence in their coding. **Team code review phase:** Since the team, not the individual, protects the design, reports can be posted to a pull request by [CI](https://github.com/seachicken/inga-action) so that the whole team can check the impact during code review. Inga will contribute to productivity. 
If you sympathize with this concept, you can try it out without hesitation, as it is free. Also, since code analysis is performed entirely on the local machine, you can use it with peace of mind. We welcome feedback on both good and bad points. Thank you. ## References - https://www.martinfowler.com/articles/is-quality-worth-cost.html
seachicken
1,892,438
How To Use Copilot to Easily Create PowerPoint Presentations In Minutes
Introduction Creating effective and visually appealing PowerPoint presentations can be a...
0
2024-06-18T13:23:11
https://dev.to/byteswiftdigital/how-to-use-copilot-to-easily-create-powerpoint-presentations-in-minutes-17ej
tutorial, ai, office, powerpoint
## Introduction Creating effective and visually appealing PowerPoint presentations can be a time-consuming and challenging task, often requiring significant effort and attention to detail. However, with advanced artificial intelligence (AI) technologies, the process of creating presentations has become more streamlined and efficient. One such useful tool is Copilot, an AI-powered assistant that can help users generate content, add slides, incorporate visuals, and organize information with ease. This comprehensive guide will explore how to leverage Copilot's capabilities to create impressive PowerPoint presentations rapidly. ## Setting Up Copilot in PowerPoint Before utilizing Copilot, it is necessary to ensure you have the required subscriptions and proper setup. To use Copilot in PowerPoint, you will need a Microsoft 365 subscription ($6.99 monthly) and a Copilot Pro subscription ($20 monthly). After acquiring these subscriptions, the "Copilot" button should be visible on the Home tab ribbon in PowerPoint. If not visible initially, you may need to update your Microsoft 365 license by navigating to File > Account > Update License. Follow the instructions to sign in with your account holding both subscriptions, then close and relaunch PowerPoint. The Copilot button should now be accessible. [Generating an Entire Presentation](https://www.byteswifts.com/2024/06/Create-PowerPoint-Presentations-Using-Copiltot.html)
byteswiftdigital
1,892,405
🚀 A hands-on guide to setting up zsh, oh my zsh, asdf, and spaceship prompt with zinit for your development environment
Introduction 🌟 Elevate your development environment with this guide on installing and...
0
2024-06-18T13:20:56
https://dev.to/girordo/a-hands-on-guide-to-setting-up-zsh-oh-my-zsh-asdf-and-spaceship-prompt-with-zinit-for-your-development-environment-91n
terminal, zsh, tutorial, beginners
## **Introduction** 🌟 Elevate your development environment with this guide on installing and configuring **Zsh**, **Oh My Zsh**, **asdf**, and the **Spaceship Prompt** theme. We will also leverage **Zinit** for additional plugin management. Let’s get started! 🚨🚨🚨 **WARNING: If you like this article, please click a reaction and save it; also follow me here and on my [Github](https://github.com/girordo)** ### 🛠️ **Step 1: Installing Zsh** **Zsh** is a robust shell that provides a powerful command-line experience. Here’s how to install it: #### 🐧 **For Linux (Ubuntu/Debian):** ```bash sudo apt update sudo apt install zsh chsh -s $(which zsh) ``` #### 🍎 **For macOS:** Zsh is pre-installed. To set it as your default shell: ```bash chsh -s /bin/zsh ``` #### 🪟 **For Windows:** Use **WSL (Windows Subsystem for Linux)** or **Git Bash**. For WSL: 1. Install a WSL distribution (e.g., Ubuntu) from the Microsoft Store. 2. Install Zsh as you would on Ubuntu. **Verify the installation**: ```bash zsh --version ``` ### ⚙️ **Step 2: Setting Up Oh My Zsh** **Oh My Zsh** simplifies Zsh configuration with themes and plugins. **Install Oh My Zsh**: ```bash sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" ``` This script will set up Oh My Zsh and switch your default shell to Zsh. **Configure Oh My Zsh**: 1. Open your `.zshrc` file: ```bash nano ~/.zshrc ``` 2. **Enable plugins**: ```bash plugins=(git asdf) ``` 3. **Reload your configuration**: ```bash source ~/.zshrc ``` ### 🔄 **Step 3: Installing and Configuring Zinit** **Zinit** is a plugin manager for Zsh, offering flexible and fast plugin management. **Install Zinit**: 1. Add the Zinit installer chunk to your `.zshrc`: ```bash cat << 'EOF' >> ~/.zshrc ### Added by Zinit's installer if [[ ! 
-f $HOME/.local/share/zinit/zinit.git/zinit.zsh ]]; then print -P "%F{33} %F{220}Installing %F{33}ZDHARMA-CONTINUUM%F{220} Initiative Plugin Manager (%F{33}zdharma-continuum/zinit%F{220})…%f" command mkdir -p "$HOME/.local/share/zinit" && command chmod g-rwX "$HOME/.local/share/zinit" command git clone https://github.com/zdharma-continuum/zinit "$HOME/.local/share/zinit/zinit.git" && \ print -P "%F{33} %F{34}Installation successful.%f%b" || \ print -P "%F{160} The clone has failed.%f%b" fi source "$HOME/.local/share/zinit/zinit.git/zinit.zsh" autoload -Uz _zinit (( ${+_comps} )) && _comps[zinit]=_zinit # Load a few important annexes, without Turbo zinit light-mode for \ zdharma-continuum/zinit-annex-as-monitor \ zdharma-continuum/zinit-annex-bin-gem-node \ zdharma-continuum/zinit-annex-patch-dl \ zdharma-continuum/zinit-annex-rust ### End of Zinit's installer chunk EOF ``` 2. **Source your `.zshrc`**: ```bash source ~/.zshrc ``` ### 🔌 **Step 4: Installing Additional Plugins with Zinit** Use Zinit to install additional plugins for a richer Zsh experience. **Install plugins using Zinit**: 1. Open your `.zshrc` file and add the following Zinit plugin commands: ```bash nano ~/.zshrc ``` 2. Add these lines to install and load additional plugins: ```bash zinit light zdharma-continuum/fast-syntax-highlighting zinit light zsh-users/zsh-autosuggestions zinit light zsh-users/zsh-completions ``` 3. **Save and reload** your `.zshrc`: ```bash source ~/.zshrc ``` ### 📦 **Step 5: Installing and Configuring asdf** **asdf** is a versatile version manager for multiple languages. **Install asdf**: 1. Clone the asdf repository: ```bash git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.11.3 ``` 2. Add asdf to your `.zshrc`: ```bash echo -e '\n. $HOME/.asdf/asdf.sh' >> ~/.zshrc echo -e '\n. $HOME/.asdf/completions/asdf.bash' >> ~/.zshrc source ~/.zshrc ``` **Install asdf plugins**: 1. 
**Add the Node.js plugin**: ```bash asdf plugin-add nodejs https://github.com/asdf-vm/asdf-nodejs.git ``` 2. **Install a specific version**: ```bash asdf install nodejs 16.13.0 asdf global nodejs 16.13.0 ``` 3. **Add the Python plugin**: ```bash asdf plugin-add python https://github.com/danhper/asdf-python.git ``` 4. **Install a specific version**: ```bash asdf install python 3.9.7 asdf global python 3.9.7 ``` **Managing project-specific versions**: Create a `.tool-versions` file in your project directory: ```bash nodejs 14.17.6 python 3.8.10 ``` Run `asdf install` in the project directory to use these versions locally. ### 🚀 **Step 6: Configuring the Spaceship Prompt Theme** The **Spaceship Prompt** theme offers a sleek and informative prompt for Zsh. **Install Spaceship Prompt**: 1. **Clone the Spaceship repository**: ```bash git clone https://github.com/spaceship-prompt/spaceship-prompt.git "$ZSH_CUSTOM/themes/spaceship-prompt" --depth=1 ``` 2. **Create a symlink**: ```bash ln -s "$ZSH_CUSTOM/themes/spaceship-prompt/spaceship.zsh-theme" "$ZSH_CUSTOM/themes/spaceship.zsh-theme" ``` 3. **Set the theme in your `.zshrc`**: ```bash ZSH_THEME="spaceship" ``` **Configure Spaceship Prompt**: 1. Create a configuration file `.spaceshiprc.zsh`: ```bash nano ~/.spaceshiprc.zsh ``` 2. Add the following configuration: ```zsh SPACESHIP_USER_SHOW=always SPACESHIP_PROMPT_ADD_NEWLINE=false SPACESHIP_CHAR_SYMBOL="λ" SPACESHIP_CHAR_SUFFIX=" " SPACESHIP_PROMPT_ORDER=( user # Username section dir # Current directory section host # Hostname section git # Git section (git_branch + git_status) package # Package version node # Node.js section bun # Bun section elixir # Elixir section erlang # Erlang section rust # Rust section docker # Docker section docker_compose # Docker Compose section terraform # Terraform section exec_time # Execution time line_sep # Line break jobs # Background jobs indicator exit_code # Exit code section char # Prompt character ) ``` 3. 
**Source your configuration** in `.zshrc`: ```bash echo "source ~/.spaceshiprc.zsh" >> ~/.zshrc source ~/.zshrc ``` **Enable Command History Sharing**: To share command history across sessions: ```bash HISTFILE=~/.zsh_history HISTSIZE=10000 SAVEHIST=10000 setopt share_history ``` **Enable Auto-Corrections**: Enable corrections for common typos: ```bash setopt correct ``` ### 🔠 **Step 7: Adding a Nerd Font for the Spaceship Prompt** **Nerd Fonts** provide additional icons and glyphs that enhance the appearance of your terminal, especially with themes like Spaceship Prompt. 1. **Install a Nerd Font** (e.g., Hack or Roboto Mono): - Go to the [Nerd Fonts GitHub repository](https://github.com/ryanoasis/nerd-fonts) and download your preferred font (e.g., Hack or Roboto Mono). - Follow the installation instructions for your operating system. 2. **Configure Your Terminal Emulator**: - Open your terminal emulator's preferences/settings. - Select the installed Nerd Font (e.g., Hack Nerd Font or Roboto Mono Nerd Font) as the font for your terminal. Now, your Spaceship Prompt should display with the Nerd Font icons you selected. ### 🎉 **Conclusion** You’ve now set up a robust and visually appealing development environment with **Zsh**, **Oh My Zsh**, **asdf**, and the **Spaceship Prompt** theme, using **Zinit** for additional plugins. This configuration will enhance your workflow and make managing multiple projects a breeze. Happy coding! 
--- **Further Reading** 📚: - [Oh My Zsh Plugins](https://github.com/ohmyzsh/ohmyzsh/wiki/Plugins) - [asdf Documentation](https://asdf-vm.com/) - [Spaceship Prompt GitHub](https://github.com/spaceship-prompt/spaceship-prompt) - [Zinit Documentation](https://zdharma-continuum.github.io/zinit/wiki/) - [Zsh User Guide](http://zsh.sourceforge.net/Doc/Release/) --- Photo by <a href="https://unsplash.com/@lukash?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Lukas</a> on <a href="https://unsplash.com/photos/a-computer-screen-with-a-lot-of-data-on-it-MU8w72PzRow?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a> <p align="center"><em>This article was crafted and tailored with ChatGPT help.</em> 🤖💡</p>
girordo
1,892,419
How To Use Copilot to Easily Create PowerPoint Presentations In Minutes
Introduction Creating effective and visually appealing PowerPoint presentations can be a...
0
2024-06-18T13:19:55
https://dev.to/byteswiftdigital/how-to-use-copilot-to-easily-create-powerpoint-presentations-in-minutes-d0a
## Introduction Creating effective and visually appealing PowerPoint presentations can be a time-consuming and challenging task, often requiring significant effort and attention to detail. However, with advanced artificial intelligence (AI) technologies, the process of creating presentations has become more streamlined and efficient. One such useful tool is Copilot, an AI-powered assistant that can help users generate content, add slides, incorporate visuals, and organize information with ease. This comprehensive guide will explore how to leverage Copilot's capabilities to create impressive PowerPoint presentations rapidly. ## Setting Up Copilot in PowerPoint Before utilizing Copilot, it is necessary to ensure you have the required subscriptions and proper setup. To use Copilot in PowerPoint, you will need a Microsoft 365 subscription ($6.99 monthly) and a Copilot Pro subscription ($20 monthly). After acquiring these subscriptions, the "Copilot" button should be visible on the Home tab ribbon in PowerPoint. If not visible initially, you may need to update your Microsoft 365 license by navigating to File > Account > Update License. Follow the instructions to sign in with your account holding both subscriptions, then close and relaunch PowerPoint. The Copilot button should now be accessible. [Generating an Entire Presentation](https://www.byteswifts.com/2024/06/Create-PowerPoint-Presentations-Using-Copiltot.html)
byteswiftdigital
1,892,416
My first website check it out
it is a simple payment form made using html and css only link...
0
2024-06-18T13:10:12
https://dev.to/gurnoor_singh55/my-first-website-check-it-out-5b92
webdev, beginners, css, html
It is a simple payment form made using HTML and CSS only. Link below: https://gurnoor926.github.io/payment-form/
gurnoor_singh55
1,892,417
Hire Laravel Developers in Just 48 Hours [40-Hour Free Trial]
Want to hire Laravel developers? Hire our expert developers quickly and test their skills with a...
0
2024-06-18T13:14:59
https://dev.to/websoptimization_92/hire-laravel-developers-in-just-48-hours-40-hour-free-trial-1e1l
laraveldevelopers, laravelconsultants
Want to [hire Laravel developers](https://www.websoptimization.com/hire-laravel-developers.html)? Hire our expert developers quickly and test their skills with a 40-hour free trial! Get the right fit for your project in just 48 hours. #HireLaravelDevelopers #HireDedicatedLaravelDevelopers #HireLaravelDeveloper #LaravelConsultants #hireLaravelProgrammers #hireLaravelProgrammer #Laraveldevelopersforhire #hireremoteLaraveldevelopers #hireremoteLaraveldeveloper #hirelaraveldevelopersindia
websoptimization_92
1,892,413
1.6 - Semicolons, positioning, and indentation practices
Semicolons and positioning In Java, the semicolon is a separator used to terminate a...
0
2024-06-18T13:14:53
https://dev.to/devsjavagirls/16-ponto-e-virgula-posicionamento-e-praticas-de-recuo-3mhh
java
**Semicolons and positioning** In Java, the semicolon is a separator used to terminate a statement. Each individual statement must end with a semicolon. The semicolon marks the end of a logical entity. Example: ``` x = y; y = y + 1; System.out.println(x + " " + y); ``` is the same, in Java, as: ``` x = y; y = y + 1; System.out.println(x + " " + y); ``` Java does not treat the end of a line as a terminator. Statements can be placed anywhere on a line. Example: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1z4r5jgdvdaosh56fa7m.png) Splitting long lines this way helps create more readable programs. It also keeps long lines from spilling onto the next line. Reminder: A block is a set of logically connected statements delimited by braces. A block is not terminated with a semicolon; its end is indicated by the closing brace }. **Indentation practices** Java is a free-form language, allowing statements to be placed anywhere on a line. Indentation is a common, accepted practice for improving readability. Recommendations: Indent one level after each opening brace. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yzh844xeyqrq9oyrtxo7.png) Certain statements call for additional indentation to ease reading. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/syumbsyg94pszdmrt3uo.png)
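To make the semicolon and free-form layout rules concrete, here is a small runnable sketch; the class and method names are my own illustration, not from the article:

```java
// Semicolons terminate statements; line breaks do not.
public class SemicolonDemo {
    // Several statements packed onto one line — legal, but hard to read.
    static String compact() {
        int x; int y = 1; x = y; y = y + 1;
        return x + " " + y;
    }

    // The same statements, one per line — Java treats both forms identically.
    static String spread() {
        int x;
        int y = 1;
        x = y;
        y = y + 1;
        return x + " " + y;
    }

    public static void main(String[] args) {
        System.out.println(compact());  // prints: 1 2
        System.out.println(spread());   // prints: 1 2
    }
}
```

Both methods compile to the same logic, which is exactly why consistent indentation matters: the compiler does not care about layout, but readers do.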
devsjavagirls
1,892,404
1.5 - Code blocks
Code block Groups two or more statements. Statements are enclosed in braces {}. They can be...
0
2024-06-18T13:14:38
https://dev.to/devsjavagirls/15-blocos-de-codigo-44cc
java
**Code block** Groups two or more statements. Statements are enclosed in braces {}. A block can be used anywhere a single statement could be. - Example: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dmn79k8o2yr9ubzelhmc.png) If w is less than h, both statements are executed. - Benefits of code blocks: They let you logically bind two or more statements together. They make it easier to implement algorithms clearly and efficiently. - Example: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m6yzq9mv4o63blppv7o5.png) Program output: i does not equal zero j / i is 2.0 - If i is set to zero, the entire block is skipped. - Question: Does using a code block introduce any run-time inefficiency? In other words, does Java actually execute { and }? No — code blocks introduce no run-time inefficiency; Java does not actually execute { and }. Code blocks add no overhead and often improve speed and efficiency. Code blocks simplify the coding of certain algorithms.
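The block-skipping behavior described above can be sketched as runnable Java; the variables i and j mirror the article's example, while the class and method names are my own:

```java
// A code block groups statements so they execute (or are skipped) together.
public class BlockDemo {
    static double divide(int i, int j) {
        double result = 0;
        if (i != 0) {            // both statements run only when i is nonzero
            System.out.println("i does not equal zero");
            result = (double) j / i;
        }                        // if i == 0, the entire block is skipped
        return result;
    }

    public static void main(String[] args) {
        System.out.println("j / i is " + divide(2, 4));  // prints: j / i is 2.0
        System.out.println("j / i is " + divide(0, 4));  // block skipped: 0.0
    }
}
```

The braces cost nothing at run time; they only tell the compiler which statements belong to the `if`.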
devsjavagirls
1,883,663
1.4 - Control statements (if and for)
IF Java's conditional statement: if It works like the IF statement in other languages Its simplest...
0
2024-06-18T13:14:23
https://dev.to/devsjavagirls/14-instrucoes-de-controle-if-e-for-25jp
java
**IF** Java's conditional statement: if It works like the IF statement in other languages. Simplest form: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjtrrdreuftugr2ssmht.png) condition is a boolean expression. If it is true, the statement is executed. If it is false, the statement is skipped. - Example when true ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ao2aonn2w576jw8r9woc.png) - Example when false. The statement will not be executed. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdzgqx7a3jln1p13e2wo.png) **Relational operators for conditional expressions** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtebixj4r0rmej1f89ir.png) Example: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izjl234t9mbk346h2bpo.png) You can declare more than one variable of the same type by separating them with commas (int a, b, c). **The FOR loop** Simplest form of the for loop: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nogilangmsgq8bz6po83.png) Parts of the for loop: - Initialization: Sets a loop control variable to an initial value - Condition: A boolean expression that tests the loop control variable If true, the loop keeps iterating If false, the loop terminates - Iteration: Determines how the loop control variable changes on each iteration Example: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4qza63jhgq4nbjfbnnny.png) - count is the loop control variable Initialization: count is set to zero - Condition: count < 5 True: executes println() and the loop iteration (incrementing count) False: execution resumes after the loop - Loop iteration: Increments count by one Continues until the condition is false Increment operator (++): Increases its operand by one Replaces count = count + 1 with count++ Decrement operator (--): Decreases its operand by one Replaces count = count - 1 with count--
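A runnable sketch of the if statement and for loop described above; count and the loop bound mirror the article's example, while the class and method names are my own:

```java
// if tests a boolean condition; for repeats while its condition holds.
public class ControlDemo {
    static int countIterations(int limit) {
        int total = 0;
        // initialization; condition; iteration — the three parts of a for loop
        for (int count = 0; count < limit; count++) {
            total++;  // count++ is shorthand for count = count + 1
        }
        return total;
    }

    public static void main(String[] args) {
        int a = 2, b = 3;  // several variables of one type, comma-separated
        if (a < b) {
            System.out.println("a is less than b");
        }
        System.out.println("loop ran " + countIterations(5) + " times");
    }
}
```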
devsjavagirls
1,892,391
1.3 - Other data types
An int variable can only hold whole numbers. It cannot be used for numbers with fractional...
0
2024-06-18T13:13:41
https://dev.to/devsjavagirls/13-outro-tipo-de-dados-52f4
java
An int variable can only hold whole numbers; it cannot be used for numbers with fractional components. Example: an int variable can hold 18, but not 18.3. Java defines other data types besides int. For numbers with fractional components, Java defines the types float and double. - Float (single precision) Size: Occupies 4 bytes (32 bits) of memory. Precision: Approximately 7 decimal digits. Use: Suitable when memory is at a premium and high precision is not required. Example: Can store values such as 3.1415927 or 1.234567. - Double (double precision) Size: Occupies 8 bytes (64 bits) of memory. Precision: Approximately 15 decimal digits. Use: The most commonly used floating-point type in Java; ideal for scientific and financial applications and any situation where precision is crucial. Example: Can store values such as 3.141592653589793 or 1.23456789012345. - Example ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g2gtc1u9wqsqbclf5sem.png) The program's output is: var after division: 2 x after division: 2.5 When var is divided by 4, the result is an integer (2) and the fractional component is lost. When x (a floating-point type) is divided by 4, the fractional component is preserved. - Why Java has different data types: Java has separate types for integers and floating-point values so programs can be efficient. Integer arithmetic is faster than floating-point computation. Different data types require different amounts of memory, making better use of system resources.
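The division behavior shown in the article's example image can be verified with a short program; the names var and x follow the article, while the class layout and helper methods are my own illustrative sketch:

```java
// int division truncates; double division keeps the fractional part.
public class DivisionDemo {
    static int intDiv(int v)       { return v / 4; }  // fractional part is lost
    static double doubleDiv(double v) { return v / 4; }  // fraction preserved

    public static void main(String[] args) {
        int var = 10;
        double x = 10.0;
        System.out.println("var after division: " + intDiv(var));  // var after division: 2
        System.out.println("x after division: " + doubleDiv(x));   // x after division: 2.5
    }
}
```

The same numerator gives different results purely because of the operand types, which is the efficiency/precision trade-off the article describes.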
devsjavagirls
1,892,414
Building Header and Footer for Your E-Commerce Website with Nuxt.js
Check this post in my web notes! We established the fundamental layout structure for our...
27,540
2024-06-18T13:08:19
https://webcraft-notes.com/blog/building-header-and-footer-for-your-ecommerce
nuxt, vue, javascript, tutorial
![Building Header and Footer for Your E-Commerce Website with Nuxt.js](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73esf5uf5gi63py0wljo.png) > Check [this post](https://webcraft-notes.com/blog/building-header-and-footer-for-your-ecommerce) in [my web notes](https://webcraft-notes.com/blog/)! We established the fundamental [layout structure](https://webcraft-notes.com/blog/enhancing-your-ecommerce-site-custom-fonts-global) for our e-commerce store in our previous article. However, without necessary elements like the Header and Footer, our journey wouldn't be complete. Today we'll build these crucial components to continue constructing our Nuxt app. Our main goal will be to create a responsive header that can change its layout, collapsing into a compact menu on smaller displays. We'll also create an elegant Footer that links to important website pages and outside resources, improving the e-commerce platform's overall user experience. ## 1. Crafting Your E-Commerce Store's Header Your e-commerce website's header acts as a central point for navigation, pointing visitors to key areas like account settings, product categories, and search capabilities. It makes important features and information easily accessible, improving user experience and enabling smooth site navigation. The header also helps to strengthen your brand identity. So let's separate our header into 3 sections: - main links to shop, about us, or home; - awesome logo; - additional service links to log in, cart, or favorite products. Okay, we need to create a new folder "navigation" inside the "components" folder and create an AppHeader.vue component. Then we will add a <header> tag and 3 sections inside: 2 with a <nav> tag and one <div> with the logo. I do not have a logo, so I will add a simple title with the name of our store. Also, we will add "cart" and "heart" icons; you can download them from the Fontawesome service. 
``` <template> <header class="header"> <nav class="header__nav header__nav--left"> <ul class="nav__list"> <li class="nav__list--item"> <NuxtLink to="/" class="nav__list--link"> Home </NuxtLink> </li> <li class="nav__list--item"> <NuxtLink to="/" class="nav__list--link"> Shop </NuxtLink> </li> </ul> </nav> <div class="header__logo"> <p class="header__logo--text"> <NuxtLink to="/" class="header__logo--link"> Sherman CV </NuxtLink> </p> </div> <nav class="header__nav header__nav--right"> <ul class="nav__list"> <li class="nav__list--item"> <NuxtIcon name="cart-shopping-solid" size="20" class="nav__icon"/> <NuxtLink to="/" class="nav__list--link"> <p>Cart</p> </NuxtLink> <p class="link__number">(0)</p> </li> <li class="nav__list--item"> <NuxtLink to="/" class="nav__list--link"> <NuxtIcon name="heart-regular" size="20" class="nav__icon"/> </NuxtLink> <p class="link__number">(0)</p> </li> </ul> </nav> </header> </template> ``` For the CSS part, I'm pretty sure you can handle this on your own if not you can get the [source code here](https://buymeacoffee.com/webcraft.notes/e/257947). You simply need to position those elements in one line and add space between them like here: ``` display: flex; justify-content: space-between; align-items: center; width: 100%; height: 70px; background-color: #fff; ``` Nice, but we need to remember about responsiveness and update our header so that users have the best experience by using our commerce store. First, we need to add a <menu> section after the last <nav>, inside that menu we will add a button that will toggle our responsive menu and the content of that menu itself. Also, we will catch the click event from the button and change the showMenu state. 
``` <menu class="menu"> <button class="menu__button" @click.prevent="showMenu = !showMenu"> <NuxtIcon name="bars-solid" size="20" class="menu__button--icon"/> </button> <transition name="slide-fade"> <div class="menu__content" v-if="showMenu"> <ul class="menu__list"> <li class="menu__list--item"> <NuxtLink to="/" class="menu__list--link"> Home </NuxtLink> </li> <li class="menu__list--item"> <NuxtLink to="/" class="menu__list--link"> Shop </NuxtLink> </li> <li class="menu__list--item"> <NuxtIcon name="cart-shopping-solid" size="20" class="menu__list--icon"/> <NuxtLink to="/" class="menu__list--link"> <p>Cart</p> </NuxtLink> <p class="link__number">(0)</p> </li> <li class="menu__list--item"> <NuxtLink to="/" class="menu__list--link"> <NuxtIcon name="heart-regular" size="20" class="menu__list--icon"/> </NuxtLink> <p class="link__number">(0)</p> </li> </ul> </div> </transition> </menu> ``` Yes, I forgot to mention <transition> for the smooth appearance of our menu. ``` .slide-fade-enter-active { transition: all 0.3s ease-out; } .slide-fade-leave-active { transition: all 0.8s cubic-bezier(1, 0.5, 0.8, 1); } .slide-fade-enter-from, .slide-fade-leave-to { transform: translateX(20px); opacity: 0; } ``` That's it, we have created our Header, and now we can move forward to our Footer. ## 2. Developing Your E-Commerce Footer The footer, which is normally located at the bottom of a website page, aids in navigation and gives users access to vital information. Links to important pages such as privacy policies, terms of service, copyright notices, and contact details are frequently included. By facilitating trust, providing quick access to essential resources, and enhancing website usability, a footer improves user experience. As with a header, with our footer the same story. We will create an AppFooter.vue component inside the "navigation" folder. 
In the Footer, we will have 2 sections, the top that will show links to different information related to our store and the bottom with the famous phrase "All rights reserved". ``` <template> <footer class="footer"> <section class="footer__top"> <nav class="footer__top--nav footer__top--nav-left"> <h6>Services</h6> <ul> <li> <NuxtLink to="/" class="footer__top--link"> Shop & Contact </NuxtLink> </li> <li> <NuxtLink to="/" class="footer__top--link"> Return & Refund </NuxtLink> </li> <li> <NuxtLink to="/" class="footer__top--link"> Online Store </NuxtLink> </li> <li> <NuxtLink to="/" class="footer__top--link"> Terms & Conditions </NuxtLink> </li> </ul> </nav> <nav class="footer__top--nav footer__top--nav-center"> ... </nav> <nav class="footer__top--nav footer__top--nav-center"> ... </nav> </section> <section class="footer__bottom"> <p>&copy; 2024 Sherman CV. All Rights Reserved</p> </section> </footer> </template> ``` So simple, now we will create a "pages" folder and index.vue that will represent our landing page. In the previous article, we already added our Header and Footer into the default layout, so we are ready to start our project again and check the result. with the command: "npm run dev". I'm pretty sure that you were developing your store with a dev server and have already seen the result but we need some intrigue. ![building header and footer with Nuxt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1sthhqlc37gfcvywsnkh.png) Awesome! We are ready to move, and in the next article, we will put in live our most important pages. We've made significant progress in our Nuxt.js e-commerce journey by establishing two essential elements: the Header and Footer. These components are essential for both brand identity and user navigation. We're enhancing our platform's professionalism and user experience with a responsive Header and an educational Footer. Now that our Header and Footer are complete, we can go on to the next phase. 
In order to achieve our e-commerce objectives, we'll concentrate on making our key pages come to life in the upcoming piece. If you do not want to wait for the next article from this series and want to move on, you can find the whole list of articles in [my web notes](https://webcraft-notes.com/series/building-an-e-commerce-store-with-nuxt). Also, if you need a source code for this tutorial you can get it [here](https://buymeacoffee.com/webcraft.notes/e/257947).
webcraft-notes
1,892,411
Everything You Need to Know About GPT-4o
OpenAI’s GPT-4o is the third major iteration of their popular large multimodal model, expanding the...
0
2024-06-18T13:06:29
https://dev.to/mohith/everything-you-need-to-know-about-gpt-4o-29cm
openai, ai, chatgpt, gpt4o
OpenAI’s GPT-4o is the third major iteration of their popular large multimodal model, expanding the capabilities of GPT-4 with Vision. This new model integrates talking, seeing, and interacting with users more seamlessly than previous versions through the ChatGPT interface. In the GPT-4o announcement, OpenAI focused on the model’s ability for "much more natural human-computer interaction." This article will discuss what GPT-4o is, how it differs from previous models, evaluate its performance, and explore its use cases. ### What is GPT-4o? OpenAI’s GPT-4o, where the “o” stands for omni (meaning ‘all’ or ‘universally’), was released during a live-streamed announcement and demo on May 13, 2024. It is a multimodal model with text, visual, and audio input and output capabilities, building on the previous iteration of OpenAI’s GPT-4 with Vision model, GPT-4 Turbo. The power and speed of GPT-4o come from being a single model handling multiple modalities. Previous GPT-4 versions used multiple single-purpose models (voice to text, text to voice, text to image) and created a fragmented experience of switching between models for different tasks. Compared to GPT-4T, OpenAI claims it is twice as fast, 50% cheaper across both input tokens ($5 per million) and output tokens ($15 per million), and has five times the rate limit (up to 10 million tokens per minute). GPT-4o has a 128K context window and has a knowledge cut-off date of October 2023. Some of the new abilities are currently available online through ChatGPT, the ChatGPT app on desktop and mobile devices, the OpenAI API, and Microsoft Azure. ### What’s New in GPT-4o? While the release demo only showed GPT-4o’s visual and audio capabilities, the release blog contains examples that extend far beyond the previous capabilities of GPT-4 releases. Like its predecessors, it has text and vision capabilities, but GPT-4o also has native understanding and generation capabilities across all its supported modalities, including video. 
As Sam Altman points out in his personal blog, the most exciting advancement is the speed of the model, especially when the model is communicating with voice. This is the first time there is nearly zero delay in response, and you can engage with GPT-4o similarly to how you interact in daily conversations with people. Less than a year after releasing GPT-4 with Vision, OpenAI has made meaningful advances in performance and speed which you don’t want to miss. ### Text Evaluation of GPT-4o For text, GPT-4o features slightly improved or similar scores compared to other large multimodal models like previous GPT-4 iterations, Anthropic's Claude 3 Opus, Google's Gemini, and Meta's Llama3, according to self-released benchmark results by OpenAI. Note that in the text evaluation benchmark results provided, OpenAI compares the 400b variant of Meta’s Llama3. At the time of publication of the results, Meta had not finished training its 400b variant model. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e8s58bp8vpc3o62vy0fr.png) ### Video Capabilities of GPT-4o **Understanding Video:** GPT-4o has enhanced capabilities for both viewing and understanding videos. According to the API release notes, the model supports video (without audio) via its vision capabilities. Videos need to be converted to frames (2-4 frames per second, either sampled uniformly or via a keyframe selection algorithm) to input into the model. You can refer to the OpenAI cookbook for vision to better understand how to use video as input and the limitations of this release. **Demonstrations and Capabilities:** During the initial demo, GPT-4o showcased its ability to view and understand both video and audio from an uploaded video file and generate short videos. It was frequently asked to comment on or respond to visual elements. 
However, similar to our initial observations of Gemini, the demo didn’t clarify if the model was receiving continuous video or triggering an image capture whenever it needed to “see” real-time information. One demo moment stood out where GPT-4o noticed a person making bunny ears behind Greg Brockman. This suggests that GPT-4o might use a similar approach to video as Gemini, where audio is processed alongside extracted image frames of a video. https://www.youtube.com/watch?v=MirzFk_DSiI&t=173s ### Audio Capabilities of GPT-4o **Ingesting and Generating Audio:** GPT-4o can ingest and generate audio files. It demonstrates impressive control over generated voice, including changing communication speed, altering tones, and even singing on demand. GPT-4o can also understand input audio as additional context for any request. Demos have shown GPT-4o providing tone feedback for someone speaking Chinese and feedback on the speed of someone's breath during a breathing exercise. **Performance:** According to benchmarks, GPT-4o outperforms OpenAI’s previous state-of-the-art automatic speech recognition (ASR) model, Whisper-v3, and excels in audio translation compared to models from Meta and Google. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qj0zynjhxh55wmvcf1jx.png) ### Image Generation with GPT-4o GPT-4o has strong image generation abilities, capable of one-shot reference-based image generation and accurate text depictions. OpenAI's demonstrations included generating images with specific words transformed into alternative visual designs, showcasing its ability to create custom fonts. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ficuppb4atb2quccvv0.png) **Visual Understanding:** Visual understanding in GPT-4o has been improved, achieving state-of-the-art results across several visual understanding benchmarks compared to GPT-4T, Gemini, and Claude. 
Roboflow maintains a less formal set of visual understanding evaluations, showing real-world vision use cases for open-source large multimodal models. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ckg5c2amjml81biyrzml.png) ### Evaluating GPT-4o for Vision Use Cases **Optical Character Recognition (OCR):** GPT-4o performs well in OCR tasks, returning visible text from images in text format. For example, when prompted to "Read the serial number" or "Read the text from the picture," GPT-4o answered correctly. In evaluations on real-world datasets, GPT-4o achieved a 94.12% average accuracy (10.8% higher than GPT-4V), a median accuracy of 60.76% (4.78% higher than GPT-4V), and an average inference time of 1.45 seconds. This 58.47% speed increase over GPT-4V makes GPT-4o the leader in speed efficiency (a metric of accuracy given time, calculated by accuracy divided by elapsed time). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/48v5q7fvaoqlzcrt4lle.png) In summary, GPT-4o's advancements in video, audio, and image capabilities, along with its improved performance and efficiency, make it a significant leap forward in AI technology. Whether you're looking to leverage its capabilities for customer support, content creation, education, or healthcare, GPT-4o offers a robust and versatile tool to meet your needs. ### Document Understanding with GPT-4o **Key Information Extraction:** Next, we evaluate GPT-4o’s ability to extract key information from images with dense text. When prompted with “How much tax did I pay?” referring to a receipt, and “What is the price of Pastrami Pizza?” in reference to a pizza menu, GPT-4o answers both questions correctly. This marks an improvement over GPT-4 with Vision, which struggled with extracting tax information from receipts. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8zybgo8zlccbe7910n08.png) **Visual Question Answering with GPT-4o:** Next, we put GPT-4o through a series of visual question and answer prompts. When asked to count coins in an image containing four coins, GPT-4o initially answers five but correctly responds upon retry. This inconsistency in counting is similar to issues seen in GPT-4 with Vision, highlighting the need for performance monitoring tools like GPT Checkup. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50la3nz3tnrxvq5zdcko.png) Despite this, GPT-4o correctly identifies scenes, such as recognizing an image from the movie Home Alone. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g3gku3xtehg0j7j8chm6.png) **Object Detection with GPT-4o:** Object detection remains a challenging task for multimodal models. In our tests, GPT-4o, like Gemini, GPT-4 with Vision, and Claude 3 Opus, failed to generate accurate bounding boxes for objects. Two instances of GPT-4o responding with incorrect object detection coordinates were noted, illustrating the model's limitations in this area. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/omk39dscxk1a9yw11pww.png) ### GPT-4o Use Cases As OpenAI continues to expand GPT-4o's capabilities and prepares for future models like GPT-5, the range of use cases is set to grow exponentially. GPT-4o makes image classification and tagging simple, similar to OpenAI’s CLIP model, but with added vision capabilities that allow for more complex computer vision pipelines. **Real-time Computer Vision Use Cases:** With speed improvements and enhanced visual and audio capabilities, GPT-4o is now viable for real-time use cases. This includes applications like navigation, translation, guided instructions, and interpreting complex visual data in real-time. 
Interacting with GPT-4o at the speed of human conversation reduces the time spent typing and allows for more seamless integration with the world around you. **One-device Multimodal Use Cases:** GPT-4o’s ability to run on devices such as desktops, mobiles, and potentially wearables like Apple VisionPro, allows for a unified interface to troubleshoot tasks. Instead of typing text prompts, you can show your screen or pass visual information while asking questions. This integrated experience reduces the need to switch between different screens and models. **General Enterprise Applications:** With improved performance and multimodal integration, GPT-4o is suitable for many enterprise application pipelines that do not require fine-tuning on custom data. Although it is more expensive than running open-source models, GPT-4o’s speed and capabilities can be valuable for prototyping complex workflows quickly. You can use GPT-4o in conjunction with custom models to augment its knowledge or decrease costs, enabling more efficient and effective enterprise applications. ### What Can GPT-4o Do? At its release, GPT-4o was the most capable of all OpenAI models in terms of functionality and performance. Here are some key features: **Key Features of GPT-4o:** - **Real-time Interactions:** Engage in real-time verbal conversations without noticeable delays. - **Knowledge-based Q&A:** Answer questions using its extensive knowledge base, similar to prior GPT-4 models. - **Text Summarization and Generation:** Execute tasks like text summarization and generation efficiently. - **Multimodal Reasoning and Generation:** Process and respond to text, voice, and vision data, understanding and generating responses across these modalities. - **Language and Audio Processing:** Handle more than 50 different languages with advanced capabilities. - **Sentiment Analysis:** Understand user sentiment in text, audio, and video. 
- **Voice Nuance:** Generate speech with emotional nuances, suitable for sensitive communication.
- **Audio Content Analysis:** Analyze and generate spoken language for applications like voice-activated systems and interactive storytelling.
- **Real-time Translation:** Support real-time translation between languages.
- **Image Understanding and Vision:** Analyze and explain visual content, including images and videos.
- **Data Analysis:** Analyze data in charts and create data charts based on analysis or prompts.
- **File Uploads:** Support file uploads for specific data analysis.
- **Memory and Contextual Awareness:** Remember previous interactions and maintain context over long conversations.
- **Large Context Window:** Maintain coherence over longer conversations or documents with a 128,000-token context window.
- **Reduced Hallucination and Improved Safety:** Minimize incorrect or misleading information and ensure outputs are safe and appropriate.

### How to Use GPT-4o

**ChatGPT Free:** Available to free users of OpenAI's ChatGPT chatbot, but with restricted message access and limited features.

**ChatGPT Plus:** Paid users get full access to GPT-4o without feature restrictions.

**API Access:** Developers can integrate GPT-4o into applications via OpenAI's API.

**Desktop Applications:** Integrated into desktop applications, including a new app for macOS.

**Custom GPTs:** Organizations can create custom versions of GPT-4o tailored to specific needs via OpenAI's GPT Store.

**Azure OpenAI Service:** Explore GPT-4o's capabilities in a preview mode within Microsoft Azure OpenAI Studio, designed to handle multimodal inputs.

**Key Features of GPT-4o:**

- **Multimodal Capabilities:** GPT-4o is not just a language model; it understands and generates content across text, images, and audio. This makes it exceptionally versatile, processing and responding to queries that require a nuanced understanding of different data types. For instance, it can analyze a document, recognize objects in an image, and understand spoken commands all within the same workflow.
- **Increased Processing Speed and Efficiency:** Engineered for speed, GPT-4o's improvements are crucial for real-time applications such as digital assistants, live customer support, and interactive media, where response time is critical for user satisfaction and engagement.
- **Enhanced Capacity for Users:** GPT-4o supports a higher number of simultaneous interactions, allowing more users to benefit from its capabilities at once. This is particularly beneficial for businesses that require heavy usage without compromising performance, such as customer service bots or data analysis tools.
- **Improved Safety Features:** With advanced AI-driven algorithms, GPT-4o manages the risks associated with generating harmful content, ensuring safer interactions and compliance with regulatory standards. These measures are vital for maintaining trust and reliability as AI becomes more integrated into critical processes.

Overall, GPT-4o represents a significant leap forward in AI technology, promising to enhance how businesses and individuals interact with machine intelligence. The integration of these advanced capabilities positions OpenAI to remain a leader in the AI technology space, potentially outpacing competitors in creating more adaptable, efficient, and safer AI systems.

**Competitor Analysis of OpenAI's ChatGPT with the New GPT-4o Update**

OpenAI's latest release, GPT-4o, has set a new benchmark in the world of artificial intelligence with its advanced multimodal capabilities. This section provides a comprehensive analysis of how GPT-4o stacks up against its competitors, focusing on Anthropic's Claude 3 Opus, Google's Gemini, and Meta's Llama 3.

**1. Anthropic's Claude 3 Opus**

Strengths:

- Ethical AI: Claude 3 Opus is designed with a strong emphasis on ethical AI and safety. Anthropic has developed robust frameworks to minimize harmful outputs, which is a significant selling point for applications in sensitive fields like healthcare and finance.
- Human-like Interaction: Known for its human-like interaction quality, Claude 3 Opus excels in generating empathetic and contextually appropriate responses.

Weaknesses:

- Speed and Efficiency: While Claude 3 Opus provides high-quality responses, it lags behind GPT-4o in processing speed and efficiency. GPT-4o's real-time interaction capabilities offer a more seamless user experience.
- Multimodal Integration: Claude 3 Opus lacks the extensive multimodal integration that GPT-4o offers. GPT-4o's ability to process and generate text, images, and audio in a unified manner gives it a distinct edge.

**Comparative Analysis:** GPT-4o outperforms Claude 3 Opus with its faster processing speeds and comprehensive multimodal capabilities. However, Claude 3 Opus remains a strong contender in applications where ethical considerations and human-like interaction are paramount.

**2. Google's Gemini**

Strengths:

- Data Integration: Gemini benefits from Google's extensive data resources and integration capabilities. It excels at understanding and leveraging vast datasets to provide accurate and contextually rich responses.
- Continuous Improvement: Google's continuous updates and improvements ensure that Gemini remains a top competitor in the AI field.

Weaknesses:

- Real-time Interaction: While Gemini is highly capable, its real-time interaction and response speed do not match the nearly instantaneous response time of GPT-4o.
- Audio Processing: Gemini's audio processing capabilities are less advanced than GPT-4o's, which excels in real-time audio interaction and nuanced voice generation.

**Comparative Analysis:** GPT-4o's edge lies in its real-time interaction capabilities and superior audio processing. However, Gemini remains highly competitive due to its robust data integration and continuous improvement from Google's extensive research and development efforts.

**3. Meta's Llama 3**

Strengths:

- Open Source Flexibility: Llama 3's open-source nature allows for greater customization and flexibility, making it an attractive option for developers looking to tailor the model to specific needs.
- Cost Efficiency: Meta's focus on cost efficiency makes Llama 3 a viable option for applications requiring scalable AI solutions without significant financial investment.

Weaknesses:

- Multimodal Capabilities: Llama 3 does not match GPT-4o's multimodal capabilities. GPT-4o's ability to handle text, image, and audio inputs and outputs provides a more versatile and powerful tool.
- Performance Metrics: In raw performance, GPT-4o outperforms Llama 3 in benchmarks related to speed, accuracy, and context window size.

**Comparative Analysis:** While Llama 3's open-source flexibility and cost efficiency are notable strengths, GPT-4o's advanced multimodal capabilities and superior performance metrics make it the preferred choice for applications requiring high versatility and processing power.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/socz2vvzxtcd78mkl83s.png)

**Overall Comparative Insights**

**Speed and Efficiency:** GPT-4o leads in processing speed and efficiency, providing nearly instantaneous responses that significantly enhance user experience.

**Multimodal Integration:** The comprehensive multimodal integration of GPT-4o, capable of handling text, image, and audio inputs and outputs, sets it apart from competitors like Claude 3 Opus, Gemini, and Llama 3.

**Customization and Flexibility:** While GPT-4o offers extensive capabilities out of the box, Meta's Llama 3 provides more flexibility through its open-source nature, allowing for greater customization.

**Ethical AI and Safety:** Anthropic's Claude 3 Opus shines in the realm of ethical AI, with robust safety measures that ensure responsible AI usage, though GPT-4o also emphasizes improved safety protocols.

**Cost and Accessibility:** GPT-4o's cost efficiency, with reduced input and output token costs, makes it more accessible than previous models, although Meta's Llama 3 still holds an edge in overall cost efficiency due to its open-source model.

### How to Access GPT-4o

**Subscription Plans and API Access:** OpenAI offers free use of GPT-4o with limited capability, and extended usage through various subscription tiers catering to different user needs. Individual developers can start with the ChatGPT Plus plan, while businesses can opt for customized plans for enhanced access.

**Integrating GPT-4o via OpenAI API:** Developers need an API key from OpenAI. They should consult the API documentation and use development libraries (Python, Node.js, Ruby) to integrate GPT-4o into their applications.

**Using OpenAI Playground:** For non-coders, the OpenAI Playground provides a user-friendly interface to experiment with GPT-4o's capabilities. Users can input text, images, or audio and see real-time responses.

**Educational Resources and Support:** OpenAI offers extensive resources, including tutorials, webinars, and a dedicated support team to assist with technical questions and integration challenges.

### Advanced Features of GPT-4o

**Enhanced Multimodal Capabilities:** GPT-4o processes and synthesizes information across text, images, and audio inputs, making it useful for sectors like healthcare, media, and customer service.

**Real-Time Processing:** Critical for applications requiring immediate responses, such as interactive chatbots and real-time monitoring systems.

**Expanded Contextual Understanding:** GPT-4o remembers and refers back to earlier points in conversations or data streams, which is beneficial for complex problem-solving.

**Advanced Safety Protocols:** Improved content filters and ethical guidelines ensure safe and trustworthy AI interactions.

**Customization and Scalability:** Developers can fine-tune the model for specific tasks, supporting scalable deployment from small operations to enterprise-level solutions.

### How Businesses Can Benefit from the GPT-4o Update

**Automation of Complex Processes:** Automate tasks like document analysis, risk assessment, and diagnostic assistance in the finance, legal, and healthcare sectors.

**Enhanced Customer Interaction:** Power sophisticated customer service chatbots that handle inquiries with human-like understanding, reducing operational costs.

**Personalization at Scale:** Analyze customer data to offer personalized recommendations, enhancing the shopping experience and increasing sales.

**Innovative Marketing Solutions:** Generate promotional materials and engaging multimedia content, making marketing campaigns more effective.

**Improved Decision Making:** Integrate GPT-4o into business intelligence tools to gain deeper insights and support strategic decisions.

**Training and Development:** Provide personalized learning experiences and real-time feedback in employee training and development.

**Risk Management:** Monitor and analyze communications and transactions for anomalies, helping to mitigate risks.

### Current Challenges and Future Trends

**Challenges:**

- **Scalability:** Ensuring consistent performance at scale.
- **Data Privacy and Security:** Managing the privacy and security of extensive data.
- **Bias and Fairness:** Addressing inherent biases in training data.
- **Regulatory Compliance:** Navigating an uncertain regulatory environment.

**Future Trends:**

- **Greater Multimodal Capabilities:** Enhanced understanding and processing of a wider array of sensory data.
- **AI and IoT Convergence:** Dynamic interaction with the physical world.
- **Ethical AI Development:** A continued push towards ethical AI development.
- **Autonomous Decision-Making:** Handling more complex decision-making tasks.
- **Collaborative AI:** AI evolving to collaborate more effectively with humans.

### Conclusion

GPT-4o is a significant advancement in AI, offering powerful tools to enhance operations and services. Its integration across different platforms and ease of use further solidify its place as a versatile AI model for individual users and organizations. As AI continues to evolve, GPT-4o represents a leap forward, setting the stage for future innovations in artificial intelligence.

### Frequently Asked Questions

**What makes GPT-4o different from previous versions of GPT?**
GPT-4o extends earlier functionality by integrating multimodal capabilities, processing and understanding a combination of text, image, and audio data.

**How can businesses implement GPT-4o?**
Through OpenAI's API, GPT-4o can be integrated into existing systems for automating customer service, enhancing content creation, and streamlining operations.

**Is GPT-4o safe and ethical to use?**
GPT-4o is designed with improved safety protocols to handle sensitive information carefully and minimize biases.

**What are the costs associated with using GPT-4o?**
Costs vary based on the scale and scope of the application. OpenAI offers various pricing tiers, from individual developers to large enterprises.

**Can GPT-4o be customized for specific tasks?**
Yes, GPT-4o is highly customizable, allowing developers to fine-tune the model for specialized applications.

**What future developments can we expect from OpenAI in the AI field?**
Future developments may include enhanced multimodal capabilities, improvements in AI safety and ethics, and more complex task handling.

**Is GPT-4o free?**
GPT-4o offers limited free interaction. Extended use, especially for commercial purposes, typically requires a subscription to a paid plan.

By leveraging these advanced capabilities, GPT-4o is poised to drive innovation, enhance user interactions, and streamline operations across various industries. Happy Coding.
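To make the "API Access" route described above concrete, here is a minimal sketch using the official `openai` Python client. The helper name `build_chat_request`, the system message, and the prompt are my own illustrative assumptions, not from the article; `"gpt-4o"` is the published model identifier, and the network call only runs when an API key is configured.

```python
import os

def build_chat_request(prompt: str) -> dict:
    # Assemble the body the Chat Completions endpoint expects.
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# Guarded so the sketch is harmless without credentials.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        **build_chat_request("Summarize GPT-4o in one sentence.")
    )
    print(response.choices[0].message.content)
```

The same request shape works from Node.js or raw HTTPS; only the client wrapper changes.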
mohith
1,892,409
Lock / Mutex to a CS undergraduate (Difficulty 2)
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-18T13:04:25
https://dev.to/sauravshah31/lock-mutex-to-a-cs-undergraduate-59me
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Locks prevent issues when multiple entities access a resource simultaneously (like multiple apps writing to a file), which can cause unexpected behavior (like overwritten writes). Google Docs uses them to avoid overwritten edits during collaboration.

## Additional Context

I am planning to post 5 submissions explaining "Lock/Mutex" at 5 levels of difficulty. This is Difficulty 2.

A first-year Computer Science student has probably not heard of a mutex, but might have heard of problems involving race conditions. An explanation that covers what a mutex is, along with its usage and an example, is useful.

For more about explaining the term at 5 levels of difficulty, refer to the post below. It's interesting!

{% embed https://dev.to/sauravshah31/computer-science-challenge-lets-make-it-interesting-lai %}

[Previous explanation for Difficulty 1](https://dev.to/sauravshah31/lock-mutex-to-an-8th-grader-378a)

[Next explanation for Difficulty 3](https://dev.to/sauravshah31/lock-mutex-to-a-software-engineer-5hm8)

**Cheers🎉**
~ [sauravshah31](https://x.com/sauravshah31)
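As an illustration of the explainer above (my own sketch, not part of the original submission), here is the overwritten-update problem and its fix in Python: two threads bump a shared counter, and `threading.Lock` makes each read-modify-write step atomic.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n: int) -> None:
    global counter
    for _ in range(n):
        # Without the lock, two threads could read the same old value
        # of `counter` and one update would overwrite the other.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; may be less without it
```

Deleting the `with lock:` line turns this into exactly the race condition the explainer describes.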
sauravshah31
1,892,389
Building a SQL Report Generator using Gemini AI + ToolJet 📊
Introduction This tutorial will guide you through the process of building an AI-driven SQL...
0
2024-06-18T13:00:56
https://blog.tooljet.com/building-a-sql-report-generator-using-gemini-ai-tooljet/
lowcode, gemini, javascript, ai
## Introduction

This tutorial will guide you through the process of building an AI-driven SQL custom report generator using [ToolJet](https://github.com/ToolJet/ToolJet), a low-code visual app builder, and the Gemini API, a powerful natural language processing API. The resulting application will enable users to input requests in plain English, which will then be translated into custom reports.

We'll use ToolJet's visual app builder to create a user-friendly UI, and ToolJet's low-code query builder to connect it to the Gemini API endpoints and our data sources. The final product will enable users to preview generated reports and download them in PDF, Excel, or CSV formats.

-------------------------------------------------------------

## Prerequisites:

- **ToolJet** (https://github.com/ToolJet/ToolJet): An open-source, low-code business application builder. [Sign up](https://www.tooljet.com/signup) for a free ToolJet cloud account or [run ToolJet on your local machine](https://docs.tooljet.com/docs/setup/try-tooljet/) using Docker.
- **Gemini API Key**: Log into [Google AI Studio](https://aistudio.google.com/app/apikey) using your existing Google credentials. Within the AI Studio interface, you can locate and copy your API key.

Here is a quick preview of our final application:

![SQL Report Builder Preview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xb7brsesfnu9t6ny61h1.png)

-------------------------------------------------------------

Login to your [ToolJet account](https://app.tooljet.com/). Navigate to the ToolJet dashboard and click on the **Create new app** button in the top left corner. ToolJet comes with 45+ built-in components, which will let us set up our UI in no time.

## Building our UI

- Drag and drop the **Container** component onto the canvas from the component library on the right side. Adjust the height and width of the **Container** component appropriately.
- Similarly, drag and drop the **Icon** and three **Text** components inside your Container. We'll use these **Text** components for our header and label texts.
- For the **Icon** component, navigate to the properties panel on the right and select the appropriate icon under the **Icon** property.
- Change the colour of the **Icon** and **Text** components according to your preference.
- Change the font size and content of the **Text** components appropriately.
- Drag and drop the **Textarea** component inside your Container. We'll use this component as the input for our text query.
- Rename the **Textarea** component to _textPrompt_.
- Next, drag and drop the **Table** component onto the Container. We'll use this component to display a preview of our report. The **Table** component comes with built-in functionality to download the displayed data, which will allow us to download our generated report in PDF, Excel, or CSV formats.
- Now let's add a **Button** component that initiates the report generation process. Change its colour, size, and content appropriately.

![SQL Report Builder UI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5b1r1pifxlt1kj74xe0e.png)

-------------------------------------------------------------

## Setting up Queries

Apart from its built-in database and data sources, ToolJet allows you to connect to various external data sources, including databases, external APIs, and services. For this tutorial, we'll be using ToolJet's built-in PostgreSQL sample data source; the queries we'll set up apply to an external PostgreSQL data source as well. We'll also be using the REST API query feature to connect with the **Gemini** API endpoints.

- In the query panel, click the **+ Add** button and choose the **Sample data source** option.
- Rename the query to _getDatabaseSchema_.
- In the dropdown, choose the SQL mode and enter the code below. This will fetch all the table names in our database along with their column names.

```
SELECT table_name, string_agg(column_name, ', ') AS columns
FROM information_schema.columns
WHERE table_schema = 'public'
GROUP BY table_name
```

- To ensure that the query runs every time the application loads, enable the **Run this query on application load?** toggle.

Now, let's create another query that will connect to the Gemini AI API and generate our custom SQL report query.

- Using ToolJet's [Workspace Constants](https://docs.tooljet.com/docs/org-management/workspaces/workspace_constants/) feature, create a new constant named **GEMINI_API_KEY** with your Gemini API key.
- In the query panel, click on the **+ Add** button and choose the **REST API** option.
- Rename the query to _getSqlQuery_.
- In the Request parameter, choose POST as the Method from the drop-down and paste the following URL.

```
https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:generateContent?key={{constants.GEMINI_API_KEY}}
```

- Navigate to the Body section of _getSqlQuery_. Toggle on Raw JSON and enter the following code:

```
{{ `{ "contents": [{ "parts": [{ "text": "Data Schema: ${JSON.stringify(queries.getDatabaseSchema.data.map(item => ({ ...item, table_name: "public." + item.table_name }))).replace(/"([^"]+)":/g, '$1:').replace(/"/g, '\\"')}, Text Prompt: Write a standard SQL query for a custom SQL report that will ${components.textPrompt.value.replaceAll("\n"," ")}. Return without formatting and without any code highlighting and any backticks" },], },], }` }}
```

Let's add our final query, which will retrieve the data from the sample data source that we need for our custom report.

- Similarly, create another **Sample data source** query, rename it to _getReportData_, and enter the code below:

```
{{queries.getSqlQuery.data.candidates[0].content.parts[0].text}}
```

-------------------------------------------------------------

## Binding Queries to the UI Components

Now that we have successfully built our UI and queries, the next step is to integrate them.

- Select the **Button** component and navigate to the properties panel on the right. Click on the **+ New event handler** button. Change the **Action** to **Run query** and select the _getSqlQuery_ query.
- Next, navigate to the _getSqlQuery_ query and click on the **+ New event handler** button. Change the **Action** to **Run query** and select the _getReportData_ query.
- Next, select the **Table** component. In the properties panel on the right, enter the following code in the Data field.

```
{{queries.getReportData.data}}
```

We have successfully integrated our queries into our UI. Now let's test the application with the prompt below:

_list the names of customers along with the products they have ordered, including the order date and the total quantity ordered for each product._

![SQL Report Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pbdvss1cd8b7mjucowym.png)

You can click on the **+** button in the **Table** footer to download this report in PDF, Excel, or CSV formats.

-------------------------------------------------------------

## Conclusion

Congratulations on successfully building an AI-powered SQL report generator using ToolJet and the Gemini API. You can now input prompts in plain English and generate reports across multiple tables in your PostgreSQL instance.

To learn and explore more about ToolJet, check out the [ToolJet docs](https://docs.tooljet.com/docs/) or connect with us and post your queries on [Slack](https://join.slack.com/t/tooljet/shared_invite/zt-2ij7t3rzo-qV7WTUTyDVQkwVxTlpxQqw).
amanregu
1,892,403
Understanding the Singleton Pattern in TypeScript
Hello everyone, السلام عليكم و رحمة الله و بركاته Introduction Design patterns are...
0
2024-06-18T12:54:21
https://dev.to/bilelsalemdev/understanding-the-singleton-pattern-in-typescript-4kep
javascript, typescript, designpatterns, programming
Hello everyone, as-salamu alaykum wa rahmatullahi wa barakatuh (peace, mercy, and blessings of God be upon you).

#### Introduction

Design patterns are essential tools for solving common problems in software design. One of the most widely used patterns is the Singleton pattern. This pattern ensures that a class has only one instance and provides a global point of access to it. This article will take a deep dive into the Singleton pattern, explore its implementation in TypeScript, and provide real-world examples where this pattern proves to be highly beneficial, such as in `Socket.IO` connections and database connections.

#### What is the Singleton Pattern?

The Singleton pattern restricts the instantiation of a class to one "single" instance. This is useful when exactly one object is needed to coordinate actions across the system. The Singleton pattern ensures that a class has only one instance and provides a global point of access to it.

#### Implementing Singleton in TypeScript

To implement a Singleton in TypeScript, you need to follow these steps:

1. **Private Constructor:** Ensure that the class cannot be instantiated from outside the class.
2. **Static Method:** Provide a static method that returns the instance of the class.
3. **Private Static Variable:** Hold the single instance of the class.

Here's a basic implementation:

```typescript
class Singleton {
  private static instance: Singleton;

  private constructor() {
    // private constructor to prevent direct instantiation
  }

  public static getInstance(): Singleton {
    if (!Singleton.instance) {
      Singleton.instance = new Singleton();
    }
    return Singleton.instance;
  }

  public someBusinessLogic() {
    // business logic here
  }
}

// Usage
const singletonInstance = Singleton.getInstance();
singletonInstance.someBusinessLogic();
```

#### Real-World Examples

Now, let's look at how the Singleton pattern can be applied in real-world scenarios.

##### 1. Singleton Pattern with Socket.IO

`Socket.IO` is a library that enables real-time, bidirectional, and event-based communication. When dealing with sockets, it's often crucial to maintain a single connection instance throughout the application to ensure consistent and efficient communication.

Here's how you can implement a Singleton for a `Socket.IO` connection in TypeScript:

```typescript
import { io, Socket } from 'socket.io-client';

class SocketSingleton {
  private static instance: Socket;

  private constructor() {}

  public static getInstance(): Socket {
    if (!SocketSingleton.instance) {
      SocketSingleton.instance = io('http://localhost:3000');
    }
    return SocketSingleton.instance;
  }
}

// Usage
const socket = SocketSingleton.getInstance();
socket.emit('event', { data: 'some data' });
socket.on('response', (data) => {
  console.log(data);
});
```

##### 2. Singleton Pattern with Database Connections

Managing database connections efficiently is crucial in any application to avoid the overhead of creating multiple connections and to manage resources properly. Using the Singleton pattern for database connections ensures that only one connection instance is used throughout the application.

Here's an example using a hypothetical database client:

```typescript
import { DatabaseClient } from 'some-database-client';

class DatabaseConnection {
  private static instance: DatabaseClient;

  private constructor() {}

  public static getInstance(): DatabaseClient {
    if (!DatabaseConnection.instance) {
      DatabaseConnection.instance = new DatabaseClient({
        host: 'localhost',
        user: 'root',
        password: 'password',
        database: 'my_db'
      });
    }
    return DatabaseConnection.instance;
  }
}

// Usage
const db = DatabaseConnection.getInstance();
db.query('SELECT * FROM users', (err, results) => {
  if (err) throw err;
  console.log(results);
});
```

#### What Happens if We Don't Use the Singleton Pattern?

##### Socket.IO Connections

Without the Singleton pattern, multiple instances of `Socket.IO` connections could be created. This can lead to:

- **Inconsistent Communication:** Each instance would manage its own connection, resulting in inconsistent state and data.
- **Increased Resource Usage:** Multiple connections consume more memory and CPU resources, leading to inefficient resource management.
- **Event Duplication:** Events may be emitted and received multiple times, causing unexpected behavior and bugs.

##### Database Connections

Without the Singleton pattern for database connections, the following issues might arise:

- **Connection Overhead:** Each request might open a new database connection, leading to high overhead in managing these connections.
- **Resource Exhaustion:** Databases have a limited number of connections they can handle simultaneously. Multiple instances can quickly exhaust the available connections.
- **Inconsistent State:** Different parts of the application might work with different instances, leading to inconsistencies and potential data integrity issues.

#### Advantages of Using the Singleton Pattern

- **Controlled Access:** It provides controlled access to the single instance.
- **Reduced Namespace Pollution:** It avoids the need for global variables, reducing namespace pollution.
- **Lazy Initialization:** The instance is created only when it's needed, optimizing resource use.
- **Consistency:** Ensures that a class has only one instance, providing consistent data across the application.

#### Conclusion

The Singleton pattern is a powerful tool in software design, especially useful for managing resources such as database connections and socket communications. Its implementation in TypeScript is straightforward, and its application in real-world scenarios like `Socket.IO` connections and database management demonstrates its practical utility. By understanding and utilizing the Singleton pattern, you can avoid the pitfalls associated with multiple instances.
bilelsalemdev
1,892,402
Why You Should Use Local AI Instead of ChatGPT?
Privacy Privacy is very important. If you use ChatGPT, admins can see your conversations....
0
2024-06-18T12:52:34
https://dev.to/qui/why-you-should-use-local-ai-instead-chatgpt-48md
## Privacy

Privacy is very important. If you use ChatGPT, its admins can see your conversations. With a local AI, they can't, because nobody else is the admin of your AI.

## Freedom

For example, you can't write a RAT with ChatGPT, but with a local AI you can (the AI model must be uncensored). I used a RAT as the example, but there are many others like it.

## Security

With ChatGPT, your data can be leaked. With a local AI, it cannot.
qui
1,892,400
Activating Windows 10/11
Hello guys. This is qui. Today I will make a tutorial about activating Windows 10/11 for free. This...
0
2024-06-18T12:51:41
https://dev.to/qui/activating-windows-1011-fj2
Hello guys. This is qui. Today I will make a tutorial about activating Windows 10/11 for free. This tutorial doesn't work with LTSC versions.

First, run the script with this command in PowerShell (Administrator):

`irm https://get.activated.win | iex`

Then select HWID. If it's not working, you can use KMS.

Thank you for reading this post. Good bye!
qui
1,888,857
25 funded projects you can contribute in open source
The reputation of funded projects is very strong because they have secured substantial funds and are...
0
2024-06-18T12:49:28
https://dev.to/taipy/25-funded-projects-you-can-contribute-in-open-source-40lh
opensource, programming, discuss, webdev
The reputation of funded projects is very strong because they have secured substantial funds and are backed by ventures. There are so many projects that are open source, and you should definitely contribute to those especially because their credibility is way higher. There might be a chance where you can get a direct job offer, after all, you don't really know who is watching you in open source! I've kept only active projects (last commit under 2 months) so it will be useful. Let's keep it short and straight. --- ## 1. [Taipy](https://github.com/Avaiga/taipy) - Data and AI algorithms into production level web apps. ![taipy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wd10iiofzmt4or4db6ej.png) &nbsp; Taipy is the perfect Python library for easy, end-to-end application development, featuring what-if analyses, smart pipeline execution, built-in scheduling, and deployment tools. It's used for creating a GUI interface for Python-based Data & AI applications and improving data flow management. The key is performance and Taipy is the perfect choice for that especially when compared to Streamlit. You can read the detailed comparison of [Taipy vs Streamlit](https://www.marktechpost.com/2024/03/15/taipy-vs-streamlit-navigating-the-best-path-to-build-python-data-ai-web-applications-with-multi-user-capability-large-data-support-and-ui-design-flexibility/) by Marktechpost. - 💰 Secured a total funding of $5M. - 🚀 Primary language used is Python. Taipy has almost 10k stars on GitHub and is on the `v3.1` release. {% cta https://github.com/Avaiga/taipy %} Star Taipy ⭐️ {% endcta %} --- ## 2. [Hoppscotch](https://github.com/hoppscotch/hoppscotch) - API Development Ecosystem. ![Hoppscotch](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/75cjol6454uvrnth524y.png) &nbsp; Hoppscotch is a lightweight, web-based API development suite. It was built from the ground up with ease of use and accessibility in mind. 
Hoppscotch is very similar to Postman but provides a few different features. This is what the dashboard looks like and you can test things live at [hoppscotch.io](https://hoppscotch.io/). ![Hoppscotch](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n2f6ck92qdpd99in6wav.png) Postman requires you to be online even for testing local APIs. With Hoppscotch, you can work with your APIs without an internet connection. Even the web app operates offline by caching locally and functioning as a PWA, allowing you to test APIs anywhere, anytime! Hoppscotch also provides a private workspace. See the [list of complete features](https://github.com/hoppscotch/hoppscotch?tab=readme-ov-file#features). ![features](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d36kmr72z11h71nnvhf5.png) The best part and necessary one is that they provide complete [docs](https://docs.hoppscotch.io/) which includes guides, articles, support, and a changelog so you can see all of the stuff here. ![2023 wrapped](https://hoppscotch.com/images/blog-hoppscotch-wrapped-2023.png) - 💰 Secured a total funding of $3M. - 🚀 Primary language used is TypeScript. Hoppscotch has 60k+ stars on GitHub with 300+ active issues and 200+ contributors. {% cta https://github.com/hoppscotch/hoppscotch %} Star Hoppscotch ⭐️ {% endcta %} --- ## 3. [Daily](https://github.com/dailydotdev/daily) - the homepage every developer deserves. ![daily](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvzd1auk8wet7vv5ev37.png) &nbsp; It's a professional network where you can read articles and personalize news feeds related to the developer ecosystem. They aggregate valuable posts from various topics across many organizations like Hacker News, Dev, Hashnode, and many more. You can upvote, bookmark, and even create your own squad. 
![squads](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqqkl42pja53ssywltyl.png) I'm a fan of some of the features, and it would take me hours if I explain everything so it's better to check it out. This is one of my personal favorite projects that I've contributed to open source. You can check my [daily profile](https://app.daily.dev/anmolbaranwal). - 💰 Secured a total funding of $11M. - 🚀 Primary language used is TypeScript. Dailydotdev has 17k+ stars on GitHub. {% cta https://github.com/dailydotdev/daily %} Star Daily ⭐️ {% endcta %} --- ## 4. [Requestly](https://github.com/requestly/requestly) - HTTP Interceptor for browsers. ![requestly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1jnfzpqe827qxm11a1tm.png) &nbsp; Requestly was built to save developers time by intercepting and modifying HTTP Requests. Requestly helps front-end developers with essential tooling & integrations that help them write, test & debug their code 10x faster. Requestly reduces dependency on backend devs and environments for development & testing needs. Using Requestly, devs can create mock, test, validate & override API responses, modify request & response headers, set up redirects (Map local, Map remote), and use Requestly sessions for faster debugging. You can see the list of [complete features](https://github.com/requestly/requestly?tab=readme-ov-file#-features). - 💰 Secured a seed funding of $500k. - 🚀 Primary language used is TypeScript. Requestly has 1.8k+ stars on GitHub and is growing at a rapid pace. {% cta https://github.com/requestly/requestly %} Star Requestly ⭐️ {% endcta %} --- ## 5. [Resend](https://github.com/resend) - email for developers. ![resend](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a7diqqs4n4yrshxf22l3.png) &nbsp; Email might be the most important medium for people to communicate. However, we need to stop developing emails like in 2010 and rethink how email can be done in 2022 and beyond. 
It should be modernized for the way we build web apps today. They provide a lot of different repositories corresponding to the tech stack we are using. Feel free to explore each of these.

![resend integrations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jmx7q5i4wrsnwgcuvwk.png)

- 💰 Secured a seed funding of $3.5M.
- 🚀 Primary language used is TypeScript (React email).

Resend (React email) has 12.5k+ stars on GitHub and is used by 7.5k+ developers.

{% cta https://github.com/resend %} Star Resend ⭐️ {% endcta %}

---

## 6. [Buildship](https://github.com/rowyio/buildship/) - Low-code Visual Backend Builder powered by AI.

![buildship](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzlrynz5xephv4t9layd.png)

&nbsp;

For the apps you are building, with no-code app builders (FlutterFlow, Webflow, Framer, Adalo, Bubble, BravoStudio...) or frontend frameworks (Next.js, React, Vue...), you need a backend to support scalable APIs, secure workflows, automation, and more.

BuildShip gives you a completely visual way to build these backend tasks scalably in an easy-to-use, fully hosted experience. This means you don't need to wrangle or deploy things on a cloud platform or perform DevOps. Just Build and Ship, instantly 🚀

They even collaborated with TypeSense, and it's growing very fast!

![buildship](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6oc3rc713mjg9cwqj7d4.png)

I have tried Buildship, and it's powerful.

- 💰 Private funding (backed by Google, Vercel, Figma, and more).
- 🚀 Primary language used is TypeScript.

BuildShip has 260+ stars on GitHub; it's built by the team behind Rowy, which has 5.8k stars.

{% cta https://github.com/rowyio/buildship/ %} Star BuildShip ⭐️ {% endcta %}

---

## 7. [Cal](https://github.com/calcom/cal.com) - Scheduling infrastructure for absolutely everyone.

![cal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ccagnexb805xzpewfy5.png)

&nbsp;

This is one of the most active projects of all time.
I've seen a bunch of paid gigs via Algora by Cal as well. Earlier, I used Calendly, but I switched to Cal, especially because it provides more flexibility in the kinds of links you can create. For instance, I have a collab link where people can choose the duration of the meeting, while my other links have fixed timings. You can attach it to almost every app, like Google Meet or Zoom, and even sync payments if you want to take paid sessions. The [total options](https://cal.com/apps) for app integrations are almost unbelievable :)

![integrations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/anesuu0ux6ejz886irnt.png)

You can do a lot of stuff, including automating workflows, so just check it out.

![workflow dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kc0x5vq54joov98wwq9h.png)

- 💰 Secured total funding (series A) of $32.4M.
- 🚀 Primary language used is TypeScript.

Cal has 29k+ stars on GitHub and has more than 600 contributors.

{% cta https://github.com/calcom/cal.com %} Star Cal ⭐️ {% endcta %}

---

## 8. [Penpot](https://github.com/penpot/penpot) - design tool for perfect collaboration.

![penpot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mooryn8zodod2mkpzefn.png)

&nbsp;

Penpot is the first open-source design tool for design and code collaboration. Designers can create stunning designs, interactive prototypes, and design systems at scale, while developers enjoy ready-to-use code that makes their workflow easy and fast. And all of this with no handoff drama.

Completely free and works with open standards (SVG, CSS, and HTML). See the list of [libraries & templates](https://penpot.app/libraries-templates) and [features](https://penpot.app/features) in one go. Watch the video below to experience `Penpot 2.0`.

- 💰 Secured a total funding of $8M.
- 🚀 Primary language used is Clojure.

Penpot has 28.5k+ stars on GitHub and is on the `v2.0` release.

{% cta https://github.com/penpot/penpot %} Star Penpot ⭐️ {% endcta %}

---

## 9.
[Appsmith](https://github.com/appsmithorg/appsmith) - Platform to build admin panels, internal tools, and dashboards.

![appsmith](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rt7s0r3wz2leec83cl17.png)

&nbsp;

Admin panels and dashboards are among the most common parts of any software idea (in most cases), and I've tried building them from scratch, which is a lot of pain and unnecessary hard work. You've probably seen organizations build internal applications such as dashboards, database GUIs, admin panels, approval apps, customer support dashboards, and more to help their teams perform day-to-day operations. As I said, Appsmith is an open source tool that enables the rapid development of these internal apps.

For starters, watch this YouTube video that explains Appsmith in 100 seconds.

{% embed https://www.youtube.com/watch?v=NnaJdA1A11s %}

They provide drag-and-drop widgets to build the UI. You can use 45+ customizable widgets to create beautiful, responsive UIs in minutes without writing a single line of HTML/CSS. Find the [complete list of widgets](https://www.appsmith.com/widgets).

![button clicks widgets](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kqpnnslvsvjl4gifseon.png)

![validations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/489fly7tvknz2uv2mgei.png)

You can read the [docs](https://docs.appsmith.com/) and use any of these [20+ templates](https://www.appsmith.com/templates) to quickly get started.

- 💰 Secured a seed funding of $5M.
- 🚀 Primary language used is TypeScript.

Appsmith has 32k+ stars on GitHub with 200+ releases.

{% cta https://github.com/appsmithorg/appsmith %} Star Appsmith ⭐️ {% endcta %}

---

## 10. [Twenty](https://github.com/twentyhq/twenty) - the modern alternative to Salesforce.
![twenty](https://framerusercontent.com/images/oclg8rdRgBnzeLnSJOfettLFjI.webp) &nbsp; We’ve spent thousands of hours grappling with traditional CRMs like Pipedrive and Salesforce to align them with our business needs, only to end up frustrated — customizations are complex and the closed ecosystems of these platforms can feel restrictive. Twenty is a modern, powerful, affordable platform to manage your customer relationships. You can read the [user guide](https://twenty.com/user-guide). ![twenty](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tucrt5pk9piyswnt9q77.png) - 💰 Secured a seed funding of $759k. - 🚀 Primary language used is TypeScript. Twenty has 14.5k+ stars on GitHub with 200+ contributors. {% cta https://github.com/twentyhq/twenty %} Star Twenty ⭐️ {% endcta %} --- ## 11. [Continue](https://github.com/continuedev/continue) - AI code assistant. ![continue gif](https://github.com/continuedev/continue/raw/main/docs/static/img/understand.gif) &nbsp; Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) & [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension). > Tab to autocomplete code suggestions. ![autocomplete gif](https://github.com/continuedev/continue/raw/main/docs/static/img/autocomplete.gif) > Refactor functions where you are coding. ![refactor image](https://github.com/continuedev/continue/raw/main/docs/static/img/inline.gif) > Ask questions about your codebase. ![codebase](https://github.com/continuedev/continue/raw/main/docs/static/img/codebase.gif) > Quickly use documentation as context ![documentation context gif](https://github.com/continuedev/continue/raw/main/docs/static/img/docs.gif) Read the [quickstart guide](https://docs.continue.dev/quickstart). - 💰 Secured a seed funding of $2.1M. - 🚀 Primary language used is TypeScript. 
Continue has 12k+ stars on GitHub and is on the `v0.8` release.

{% cta https://github.com/continuedev/continue %} Star Continue ⭐️ {% endcta %}

---

## 12. [Refine](https://github.com/refinedev/refine) - open source Retool for Enterprise.

![refine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wsti2yfikrhc9nggov5.png)

&nbsp;

Refine is a meta React framework that enables the rapid development of a wide range of web applications. From internal tools to admin panels, B2B apps, and dashboards, it serves as a comprehensive solution for building any type of CRUD application, such as DevOps dashboards, e-commerce platforms, or CRM solutions.

![e-commerce](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xry9381y4s36emgb9psr.png)

You can set it up with a single CLI command in under a minute. It has connectors for 15+ backend services, including Hasura, Appwrite, and more. But the best part is that Refine is `headless by design`, thereby offering unlimited styling and customization options. You can see the [templates](https://refine.dev/templates/).

![templates](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87vbx5tqyicb9gmgirka.png)

- 💰 Secured a total funding of $3.8M.
- 🚀 Primary language used is TypeScript.

They have around 25k+ stars on GitHub and are used by 3k+ developers.

{% cta https://github.com/refinedev/refine %} Star Refine ⭐️ {% endcta %}

---

## 13. [Revideo](https://github.com/redotvideo/revideo) - Create Videos with Code.

![revideo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttwzahj6kfgllj0aknt1.png)

&nbsp;

Revideo is an open source framework for programmatic video editing. It is forked from the amazing Motion Canvas editor to turn it from a standalone application into a library that developers can use to build entire video editing apps. Revideo lets you create video templates in TypeScript and deploy an API endpoint to render them with dynamic inputs.
It also provides a React player component to preview changes in the browser in real time.

- 💰 Secured a total funding of $5M.
- 🚀 Primary language used is TypeScript.

Revideo has 1.2k stars on GitHub and very few active issues. In short, a perfect, less-crowded project to contribute to.

{% cta https://github.com/redotvideo/revideo %} Star Revideo ⭐️ {% endcta %}

---

## 14. [Million](https://github.com/aidenybai/million) - make your React 70% faster.

![million](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afs9dm1eujmajxn0rng9.png)

&nbsp;

Million.js is an extremely fast and lightweight optimizing compiler that makes components up to 70% faster. Explore it yourself!

![features](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vozcs5gd57rwlp3jjmr4.png)

- 💰 Secured a total funding of $500k.
- 🚀 Primary language used is TypeScript.

Million has 15.5k+ stars on GitHub and is used by 3k+ developers.

{% cta https://github.com/aidenybai/million %} Star Million ⭐️ {% endcta %}

---

## 15. [FlowiseAI](https://github.com/FlowiseAI/Flowise) - Drag & drop UI to build your customized LLM flow.

![flowiseai](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r5bp43nil764fhe4a05z.png)

&nbsp;

Flowise is an open source visual UI tool to build your customized LLM orchestration flows & AI agents.

![integrations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ahk2ovjrpq1qk3r5pfot.png)

You can read the [docs](https://docs.flowiseai.com/).

![flowise AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/trkltpn5lk1y1pte0smd.png)

- 💰 Secured funding from YCombinator (don't know how much).
- 🚀 Primary language used is TypeScript.

FlowiseAI has 26.5k+ stars on GitHub and more than 13k forks, so it has a good overall ratio.

{% cta https://github.com/FlowiseAI/Flowise %} Star FlowiseAI ⭐️ {% endcta %}

---

## 16. [Trigger](https://github.com/triggerdotdev/trigger.dev) - background jobs platform.
![trigger](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iaoox3qwmc397x9ckmw4.png)

&nbsp;

Trigger.dev v3 makes it easy to write reliable long-running tasks without timeouts. Create Jobs where they belong: in your codebase. Version control, localhost, test, review, and deploy like you're already used to. You can choose to use Trigger Cloud or self-host Trigger on your own infrastructure.

Read the [quickstart guide](https://trigger.dev/docs/v3/quick-start) in the docs.

- 💰 Secured a total funding of $3M.
- 🚀 Primary language used is TypeScript.

Trigger has 7.5k stars on GitHub and is on the `v3.1` release.

{% cta https://github.com/triggerdotdev/trigger.dev %} Star Trigger ⭐️ {% endcta %}

---

## 17. [Tiptap](https://github.com/ueberdosis/tiptap) - headless rich text editor framework.

![tiptap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdtmi7do65ks6f2mpsjd.png)

&nbsp;

The Tiptap Editor is a headless, framework-agnostic rich text editor that's customizable and extendable through extensions. Its headless nature means it comes without a set user interface, offering full design freedom (for a jumpstart, they offer UI templates). Tiptap is based on the highly reliable ProseMirror library.

The Tiptap Editor is complemented by Hocuspocus, its open-source collaboration backend. Both the Editor and Hocuspocus form the foundation of the Tiptap Suite. I recommend reading the [docs](https://tiptap.dev/docs/editor/introduction) along with the [examples](https://tiptap.dev/docs/editor/examples/default) and their detailed code.

- 💰 Secured a total funding of $2.6M.
- 🚀 Primary language used is TypeScript.

![tiptap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20c83ios6ugr1q6blqfq.png)

Tiptap has 24k+ stars on GitHub with 300+ contributors.

{% cta https://github.com/ueberdosis/tiptap %} Star Tiptap ⭐️ {% endcta %}

---

## 18. [Infisical](https://github.com/Infisical/infisical) - secret management platform.
![infisical](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jrolzjdnkky1r694h9av.png)

&nbsp;

Infisical is the open source secret management platform that teams use to centralize their secrets like API keys, database credentials, and configurations. They are making secret management more accessible to everyone, not just security teams, and that means redesigning the entire developer experience from the ground up.

![Infisical](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3eu288l470du91b66pd.png)

Infisical also provides a set of tools to automatically prevent secret leaks to git history. This functionality can be set up at the level of the Infisical CLI using pre-commit hooks, or through direct integration with platforms like GitHub.

You can read the [docs](https://infisical.com/docs/documentation/getting-started/introduction) and check how to [install the CLI](https://infisical.com/docs/cli/overview), which is the best way to use it. Do check their [license](https://github.com/Infisical/infisical/blob/main/LICENSE) before using the whole source code: most of the code is free to use under the MIT (Expat) license, but some enterprise-level code is protected under a separate license.

- 💰 Secured a total funding of $2.9M.
- 🚀 Primary language used is TypeScript.

They have 12.5k+ stars on GitHub with 130+ releases. Plus, the Infisical CLI has been installed more than 5.4M times, so it's very trustworthy.

{% cta https://github.com/Infisical/infisical %} Star Infisical ⭐️ {% endcta %}

---

## 19. [HyperDX](https://github.com/hyperdxio/hyperdx) - observability platform unifying session replays, logs, metrics, traces, and errors.

![hyperdx](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e6r38lckflg0wmwlq6i4.png)

&nbsp;

HyperDX helps engineers quickly figure out why production is broken by centralizing and correlating logs, metrics, traces, exceptions, and session replays in one place. An open source and developer-friendly alternative to Datadog and New Relic.
Read the [docs](https://www.hyperdx.io/docs).

![hyperdx](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9g83r7408vr2oawc8s8p.png)

- 💰 Secured a total funding of $500k.
- 🚀 Primary language used is TypeScript.

HyperDX has 6k+ stars on GitHub.

{% cta https://github.com/hyperdxio/hyperdx %} Star HyperDX ⭐️ {% endcta %}

---

## 20. [Highlight](https://github.com/highlight/highlight) - full-stack monitoring platform.

![highlight](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3p2ecjnrwbtskuqrkjv7.png)

&nbsp;

highlight.io is a monitoring tool for the next generation of developers (like you!). Unlike the age-old, outdated tools out there, they aim to build a cohesive, modern, and fully-featured monitoring solution.

![support frameworks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afaoao8954hobs7d2igw.png)

- 💰 Secured a total funding of $8.5M.
- 🚀 Primary language used is TypeScript.

Highlight has 7k+ stars on GitHub.

{% cta https://github.com/highlight/highlight %} Star Highlight ⭐️ {% endcta %}

---

## 21. [Panora](https://github.com/panoratech/Panora) - add an integration catalog to your SaaS product in minutes.

![panora](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jzhhyl8t0xy2ueln8d4t.png)

&nbsp;

Panora helps you put your product at the core of your customers' daily workflows. Your customers expect all of their tools to work well together. Panora saves your team from spending hundreds of hours building and maintaining integrations instead of working on your core product.

Check out the [quickstart guide](https://docs.panora.dev/quick-start).

- 💰 Secured a total funding of $500k (might be more).
- 🚀 Primary language used is TypeScript.

Panora has 300+ stars on GitHub and is at a very early stage.

{% cta https://github.com/panoratech/Panora %} Star Panora ⭐️ {% endcta %}

---

## 22. [Fleet](https://github.com/fleetdm/fleet) - platform for IT, security, and infrastructure teams.
![fleet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d47vi9uyn2hq3kx6s2h6.png)

&nbsp;

Open source platform for IT and security teams managing thousands of computers. Designed for APIs, GitOps, webhooks, YAML, and humans. Organizations like Fastly and Gusto use Fleet for vulnerability reporting, detection engineering, device management (MDM), device health monitoring, posture-based access control, managing unused software licenses, and more.

![fleet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vfn75mjk5rfhb4bfjwp8.png)

- 💰 Secured a total funding of $25M.
- 🚀 Primary language used is Go.

Fleet has 2.5k stars on GitHub.

{% cta https://github.com/fleetdm/fleet %} Star Fleet ⭐️ {% endcta %}

---

## 23. [Ballerine](https://github.com/ballerine-io/ballerine) - infrastructure and data orchestration platform for risk decisions.

![Ballerine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tnnrhyf6oj3dexdeyyuf.png)

&nbsp;

Ballerine is an open-source risk management infrastructure that helps global payment companies, marketplaces, and fintechs automate their decisions for merchants, sellers, and users throughout the customer lifecycle, from account opening (KYC, KYB) to underwriting and transaction monitoring, using a flexible rules & workflow engine, a 3rd-party plugin system, a manual review back office, and document & information collection frontend flows.

- 💰 Secured a total funding of $5.5M.
- 🚀 Primary language used is TypeScript.

Ballerine has 2k stars on GitHub with 700+ releases.

{% cta https://github.com/ballerine-io/ballerine %} Star Ballerine ⭐️ {% endcta %}

---

## 24. [Tooljet](https://github.com/ToolJet/ToolJet) - Low-code platform for building business applications.

![tooljet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhipvjl2wnthjccgrpij.png)

&nbsp;

We all build frontends, but it is generally quite complex, and a lot of factors are involved. ToolJet can save a lot of that hassle.
ToolJet is an open-source low-code framework to build and deploy internal tools with minimal engineering effort. ToolJet's drag-and-drop frontend builder allows you to create complex, responsive frontends within minutes. You can integrate various data sources, including databases like PostgreSQL, MongoDB, and Elasticsearch; API endpoints with OpenAPI spec and OAuth2 support; SaaS tools such as Stripe, Slack, Google Sheets, Airtable, and Notion; as well as object storage services like S3, GCS, and Minio, to fetch and write data. Everything :) This is how Tooljet works. ![tooljet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r6vv09z7ioma1ce2ttei.png) You can develop multi-step workflows in ToolJet to automate business processes. In addition to building and automating workflows, ToolJet allows for easy integration of these workflows within your applications. ![workflow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eh2vk3kih9fhck6okf67.png) You can read the [docs](https://docs.tooljet.com/docs/) and see the [How to guides](https://docs.tooljet.com/docs/how-to/use-url-params-on-load). - 💰 Secured a total funding of $6.2M (one of the investors is GitHub). - 🚀 Primary language used is JavaScript. Tooljet has 27.8k+ stars on GitHub and 500+ contributors. {% cta https://github.com/ToolJet/ToolJet %} Star Tooljet ⭐️ {% endcta %} --- ## 25. [mattermost](https://github.com/mattermost/mattermost) - secure collaboration across the entire software development lifecycle. ![mattermost](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/43p8f052h71ryhavrkms.png) &nbsp; Mattermost is an open source platform for secure collaboration across the entire software development lifecycle. This repo is the primary source for core development on the Mattermost platform; it's written in Go and React and runs as a single Linux binary with MySQL or PostgreSQL. A new compiled version is released under an MIT license every month on the 16th. 
- 💰 Secured a total funding of $73.5M.
- 🚀 Primary language used is TypeScript.

Mattermost has 28.4k+ stars on GitHub with 600+ active issues and 900+ contributors.

{% cta https://github.com/mattermost/mattermost %} Star Mattermost ⭐️ {% endcta %}

---

I'm very surprised that so many funded projects use TypeScript over JavaScript. Are you? If you know of any other funded projects, or if you want me to make a part 2, let me know in the comments along with your favorite from this list.

Have a great day! Till next time.

You can join my community for developers and tech writers at [dub.sh/opensouls](https://dub.sh/opensouls).

| If you like this kind of stuff, <br /> please follow me for more :) | <a href="https://twitter.com/Anmol_Codes"><img src="https://img.shields.io/badge/Twitter-d5d5d5?style=for-the-badge&logo=x&logoColor=0A0209" alt="profile of Twitter with username Anmol_Codes" ></a> <a href="https://github.com/Anmol-Baranwal"><img src="https://img.shields.io/badge/github-181717?style=for-the-badge&logo=github&logoColor=white" alt="profile of GitHub with username Anmol-Baranwal" ></a> <a href="https://www.linkedin.com/in/Anmol-Baranwal/"><img src="https://img.shields.io/badge/LinkedIn-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white" alt="profile of LinkedIn with username Anmol-Baranwal" /></a> |
|------------|----------|

Follow Taipy for more content like this.

{% embed https://dev.to/taipy %}
anmolbaranwal
1,892,334
Learning React: Creating a Roll a Dice App - R1
It's not too late to learn react so, here is my journey. I will try to post everyday about the...
0
2024-06-18T12:47:21
https://dev.to/ivewor/learning-react-creating-a-roll-a-dice-app-r1-13d4
tutorial, programming, beginners, react
It's not too late to learn React, so here is my journey. I will try to post every day about the progress I make. I tried to learn React last year but wasn't able to finish it, and now I'm a noob again. So, this is my 2nd shot at learning React. I haven't touched the basics; instead, I will try to learn by building some projects.

The first project I built is Roll a Dice. It's a very simple one and gives a good idea of how the file structure and other basics work in React. Also, I'm using Next.js to build this small web application, because I want to get comfortable with both of these things at the same time.

## What I learned

- React.js and Next.js file structure
- Running and creating a React application
- Some JSX
- HTML and CSS in React
- useState Hook
- and other basic stuff

Now, let's build.

## Roll a Dice Web App in React.js and Next.js

### Create repo

This is not necessary, but it's a good idea to keep everything in a GitHub repo for various reasons. Mine are to have some commits every day on my profile and to learn every system I'll need in a real-world job. I then cloned that repo to my system, opened the terminal, and cd'd into the project directory.

### Creating the project

Since I'm using Next.js, the command to create the project is: `npx create-next-app@latest .`

The `.` at the end means it won't create another directory inside the project folder and takes the base directory name as-is. You can then select your preferred configuration, like JS or TS. I'm using JS; for Tailwind, say no, since it's a very small project and we don't need it.

After the installation completes, run this command to test that it's working: `npm run dev`

Visit localhost:3000, and you should see the Vercel/Next.js landing page. If everything is working up to this point, congratulations: we have now learned how to create a React project using Next.js.
### Clean up the files

The next step is to clean some files and remove the unwanted code from our project. You can read about the [project structure here](https://nextjs.org/docs/getting-started/project-structure); I won't cover it in this post. Remove everything except page.js and layout.js from the src/app directory.

#### layout.js

The default code should look like this:

```
import { Inter } from "next/font/google";
import "./globals.css";

const inter = Inter({ subsets: ["latin"] });

export const metadata = {
  title: "Create Next App",
  description: "Generated by create next app",
};

export default function RootLayout({ children }) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```

We're going to remove some of it and keep or add what we need for this project. Remove the second `import` (the CSS one) or just rename it to `style.css`. We can keep the fonts. Next, we have the exported metadata. Instead of removing it, we're going to change it to:

```
export const metadata = {
  title: "Roll a Dice",
  description: "A simple roll a dice web app!",
};
```

Then keep the remaining RootLayout function. The final code of the file should look like this:

```
import { Inter } from "next/font/google";
import "./style.css";

const inter = Inter({ subsets: ["latin"] });

export const metadata = {
  title: "Roll a Dice",
  description: "A simple roll a dice web app!",
};

export default function RootLayout({ children }) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```

Next, create a `style.css` file under the same directory, `src/app`, because we need one style file and we imported it at the top of the layout.js file.

#### page.js

In this file, let's remove everything and write it from scratch. Let's first have a function to render our homepage with a simple heading:
```
export default function Home() {
  return (
    <main className="page">
      <h1>Roll a Dice</h1>
    </main>
  );
}
```

In this code, we have a simple React function that returns a simple `h1` in our application. Next, we need the dice image and a button to roll it. We want the image to randomly change to another one upon clicking that button.

#### Images

Before that, let's collect some images for the dice. I have already downloaded them from the web, so you can copy these [6 images from the repo](https://github.com/IVEWOR/roll-dice/tree/main/public). Store them in the public folder. Make sure it is present in the root directory, at the same level as the `src` directory.

```
- roll-dice
-- src/app
-- public/all-your-images.png
```

Make sure the dice images are named like this:

```
dice-1.png
dice-2.png
...
dice-6.png
```

You can name them whatever you want; just adjust the code accordingly.

#### page.js

Now, let's create the image and the button we talked about earlier. For the image, we need to import the `Image` component from Next.js, since we're using Next.js. Note that the `alt` prop is required on the Next.js `Image` component.

```
import Image from "next/image";

export default function Home() {
  return (
    <main className="page">
      <div className="img-side">
        <Image src="/dice-1.png" width="400" height="400" alt="dice" />
      </div>
      <div className="text-side">
        <h1>Roll a Dice</h1>
        <p>
          Roll a Dice is a user-friendly web app that simulates dice rolls.
          Perfect for games or decision-making, it offers quick, random results
          with a single click.
        </p>
        <button>Roll the dice</button>
      </div>
    </main>
  );
}
```

Now, our web app looks like this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ztiu6fxlqthvb18232qo.png)

#### style.css

Let's add some styling to it. I won't talk about this much.
Here's the CSS:

```
body {
  background-color: #f0f0f0;
  padding: 10px;
}

body, * {
  margin: 0;
}

.page {
  max-width: 850px;
  padding: 20px;
  margin: 0 auto;
  background-color: #fff;
  border-radius: 28px;
  display: grid;
  gap: 20px;
  align-items: center;
}

.img-side img {
  width: 350px;
  height: auto;
  max-width: 100%;
}

.text-side h1 {
  margin-bottom: 10px;
}

.text-side button {
  padding: 12px 28px;
  line-height: 1;
  border: solid 2px #222;
  background-color: #222;
  color: #fff;
  font-size: 17px;
  letter-spacing: 0.5px;
  border-radius: 18px;
  margin-top: 30px;
  cursor: pointer;
  transition: cubic-bezier(0.6, -0.28, 0.735, 0.045) 0.2s all;
}

.text-side button:hover {
  opacity: 0.9;
}

@media (min-width: 850px) {
  .page {
    grid-template-columns: 1fr 1fr;
  }
}
```

Now, the web app looks like this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gawj9gbe8jnhi7zz3py2.png)

#### Adding the functionality

Again in `page.js`, we're going to import `useState`, which is a React hook, using `import { useState } from "react";`. Since hooks only run on the client, we also need to add the `"use client";` directive at the very top of the file. First, let's do the destructuring and initialize the variable with 1.

```
"use client";

import Image from "next/image";
import { useState } from "react";

export default function Home() {
  const [num, setNum] = useState(1);
```

Then we're going to create a random number generator function inside our `Home` function, which is going to return a number from 1 to 6.

```
"use client";

import Image from "next/image";
import { useState } from "react";

export default function Home() {
  const [num, setNum] = useState(1);

  const randomNumberInRange = (min, max) => {
    return Math.floor(Math.random() * (max - min + 1)) + min;
  };

  ....rest of the code
```

Now, we need a function that fires our `randomNumberInRange` function.
```
"use client";

import Image from "next/image";
import { useState } from "react";

export default function Home() {
  const [num, setNum] = useState(1);

  const randomNumberInRange = (min, max) => {
    return Math.floor(Math.random() * (max - min + 1)) + min;
  };

  const handleClick = () => {
    setNum(randomNumberInRange(1, 6));
  };

  return (
    ....rest of the code
```

Now, we have to assign the `handleClick` function to the button so it works on click. Let's do that:

```
<button onClick={handleClick}>Roll the dice</button>
```

We're mostly done with our web app. We just need to make some adjustments in the `Image` tag so it can use the random `num` from our function. To do that, modify the `Image` `src` to `<Image src={"/dice-" + num + ".png"} width="400" height="400" alt="dice" />`.

Now, the `page.js` code should look like this:

```
"use client";

import Image from "next/image";
import { useState } from "react";

export default function Home() {
  const [num, setNum] = useState(1);

  const randomNumberInRange = (min, max) => {
    return Math.floor(Math.random() * (max - min + 1)) + min;
  };

  const handleClick = () => {
    setNum(randomNumberInRange(1, 6));
  };

  return (
    <main className="page">
      <div className="img-side">
        <Image src={"/dice-" + num + ".png"} width="400" height="400" alt="dice" />
      </div>
      <div className="text-side">
        <h1>Roll a Dice</h1>
        <p>
          Roll a Dice is a user-friendly web app that simulates dice rolls.
          Perfect for games or decision-making, it offers quick, random results
          with a single click.
        </p>
        <button onClick={handleClick}>Roll the dice</button>
      </div>
    </main>
  );
}
```

Now, if you click the button, the image should change as if you're rolling a dice. And that's it for this web app, at least for now. If you have any suggestions or feedback, please share!

[The repo is here](https://github.com/IVEWOR/roll-dice)
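As a quick sanity check (my own addition, separate from the app code), the `randomNumberInRange` helper is plain JavaScript, so you can verify its bounds outside React, for example with Node.js:

```javascript
// Same formula as in page.js: returns an integer between min and max, inclusive.
const randomNumberInRange = (min, max) => {
  return Math.floor(Math.random() * (max - min + 1)) + min;
};

// Roll 1000 times and confirm every result stays within the 1..6 dice range.
const rolls = Array.from({ length: 1000 }, () => randomNumberInRange(1, 6));
console.log(rolls.every((n) => n >= 1 && n <= 6)); // true
```

Because `Math.random()` returns a value in `[0, 1)`, multiplying by `(max - min + 1)` and flooring yields each of the 6 faces with equal probability.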
ivewor
1,892,388
How to Learn Python FREE: 8-Week Learning Plan (80/20 Rule)
Unlock the power of Python programming in just 8 weeks with our streamlined 80/20 learning plan! This...
0
2024-06-18T12:45:27
https://dev.to/proflead/how-to-learn-python-free-8-week-learning-plan-8020-rule-3142
python, programming, tutorial, softwaredevelopment
Unlock the power of Python programming in just 8 weeks with our streamlined 80/20 learning plan! This video offers a concise yet comprehensive guide to mastering the essential aspects of Python, ensuring you spend your time on the concepts that deliver the most significant impact. Whether you're a beginner eager to dive into the world of programming or looking to refresh your skills, this plan is designed to optimize your learning journey, focusing on core syntax, practical applications, and impactful libraries. The link to the full learning guide and a learning tracking sheet: [https://proflead.dev/posts/how-to-learn-python-effectively/](https://proflead.dev/posts/how-to-learn-python-effectively/)
proflead
1,892,398
Exploring the Top Open Source Projects of 2024: Innovations and Opportunities
Open source projects have always been a cornerstone of technological innovation, offering developers...
0
2024-06-18T12:44:25
https://dev.to/matin_mollapur/exploring-the-top-open-source-projects-of-2024-innovations-and-opportunities-56h0
webdev, javascript, programming, opensource
> Open source projects have always been a cornerstone of technological innovation, offering developers the opportunity to collaborate, learn, and create impactful software. As we move further into 2024, several open source projects are making waves in the developer community. This article delves into some of the most promising open source projects you should be aware of this year.

## 1. CopilotKit

CopilotKit stands out with its AI-driven text editor that enhances traditional elements with features like auto-completion and context-aware editing. It supports both frontend and backend runtimes for in-app copilots, making it a versatile tool for developers looking to integrate AI capabilities into their projects. The project's [GitHub repository](https://github.com/CopilotKit/CopilotKit) has garnered significant attention, reflecting its utility and popularity.

## 2. Shadcn UI

Shadcn UI offers a comprehensive set of components that significantly speed up frontend development. Its high level of customization and top-notch accessibility make it a favorite among developers. The project's ease of use (no installation required, just copy and paste the components) has driven its popularity, as evidenced by its impressive GitHub star count. Check out the project on [GitHub](https://github.com/shadcn/ui).

## 3. Docusaurus

Docusaurus is a Facebook project designed for building, deploying, and maintaining open source project websites. With a strong focus on ease of use and extensive documentation, it helps developers create and manage project documentation efficiently. Its significant following on GitHub underscores its value in the open source community. Explore more about Docusaurus on their [website](https://docusaurus.io/).

## 4. Mermaid

Mermaid enables the generation of diagrams like flowcharts and sequence diagrams from text, such as markdown.
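To illustrate the idea, here is a minimal Mermaid flowchart definition (my own example, not from the Mermaid docs) — the diagram is rendered from nothing more than this plain text:

```mermaid
graph TD
  A[Write a text definition] --> B{Mermaid renders it}
  B --> C[Flowchart]
  B --> D[Sequence diagram]
```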
This tool simplifies the creation of visual documentation, making it invaluable for technical writers and developers who need to communicate complex processes visually. Visit the [Mermaid website](https://mermaid.js.org/) for more details.

## 5. Reactive Resume

Reactive Resume is a privacy-focused, customizable resume builder that's completely open source and free. It offers a user-friendly interface and robust features, making it an excellent tool for job seekers looking to create professional resumes without compromising their data privacy. Find more about Reactive Resume on their [website](https://rxresu.me/).

## 6. Blitz

Blitz is a fullstack toolkit for Next.js, providing developers with a set of conventions and libraries to build and scale applications efficiently. It enhances the capabilities of Next.js, making it easier to develop complex applications with less boilerplate code. Learn more on the [Blitz website](https://blitzjs.com/).

## 7. RoomGPT

RoomGPT is an AI-powered tool that transforms photos of your room into your dream space. This innovative project uses TypeScript and has gained traction for its unique approach to interior design, allowing users to visualize and plan their living spaces creatively. Explore RoomGPT on their [website](https://www.roomgpt.io/).

## 8. Refine

Refine is a versatile framework for building React-based applications with a focus on simplicity and efficiency. It provides a set of tools and components that accelerate development, making it ideal for both beginners and experienced developers. Check out Refine on their [GitHub](https://github.com/pankod/refine).

## 9. Rocket.Chat

Rocket.Chat is an open source communication platform that allows teams to collaborate in real-time. It supports text, voice, and video communication and integrates with various other tools and services. Learn more about Rocket.Chat on their [website](https://rocket.chat/).

## 10. Focalboard

Focalboard is an open source project management tool that helps teams organize tasks and projects visually. It's a great alternative to tools like Trello and Asana, offering a high degree of customization and flexibility. Discover more about Focalboard on their [GitHub](https://github.com/mattermost/focalboard).

## Conclusion

The open source projects highlighted above are just a few examples of the exciting developments in the tech community this year. Whether you're looking to enhance your development workflow with tools like CopilotKit and Shadcn UI, manage your documentation with Docusaurus, or create visually appealing diagrams with Mermaid, there's something for everyone. These projects not only showcase the creativity and collaboration inherent in the open source community but also provide valuable resources that can significantly impact your work.

**Stay updated with these and other trending open source projects to keep your skills sharp and contribute to the vibrant ecosystem of open source development.**
matin_mollapur