| column | type | min | max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
1,763,774
And flowers for the most precious
I love you so much, my princess
0
2024-02-17T03:42:21
https://dev.to/romero28/y-flores-para-la-mas-preciosa-2jjn
codepen
I love you so much, my princess {% codepen https://codepen.io/Thomas-Romero-Serrato/pen/YzgBBGz %}
romero28
1,763,809
A good, open source tool to share files between computer and mobile devices.
I often have the need to send files from mobile phone to my desktop computer. And vice versa...
0
2024-02-17T06:03:31
https://dev.to/davychxn/a-good-open-source-tool-to-share-files-between-computer-and-mobile-devices-149j
I often need to send files from my mobile phone to my desktop computer, and vice versa (computer -> mobile phone). Some people recommend a tool called "localsend". It's an open-source project, and it works on multiple platforms. GitHub: https://lnkd.in/gm2Qm-Yh Official website: https://localsend.org/ I tried it on Win11 & Android. It has a nicely designed logo, clean user interfaces, and a discussion area. And it works as well as I expected! 😊 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4pe9z739z1nfk6zeu78q.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/474eivs7ixeqtea76g0g.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ngjolbfz149u8o5b873.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy9ll47pic15tqaymz5v.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nsvb2r5zje0crvao7xlq.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bcbblsv28wtg73zdkp2m.jpg)
davychxn
1,763,822
Proton Keto Gummies Diabetes (Urgent MEDICAL Warning!):
Introduction In the ever-evolving world of health and wellness, individuals are constantly seeking...
0
2024-02-17T06:53:34
https://dev.to/protonketoacvg/proton-keto-gummies-diabetes-urgent-medical-warning-39jb
webdev, javascript, beginners, tutorial
Introduction In the ever-evolving world of health and wellness, individuals are constantly seeking innovative ways to support their fitness goals and enhance their overall well-being. Proton Keto ACV Gummies have emerged as a popular choice among those looking to incorporate the benefits of apple cider vinegar (ACV) and ketogenic principles into their daily routine. In this comprehensive review, we will delve into the key aspects of Proton Keto ACV Gummies, exploring their ingredients, potential benefits, and how they fit into a ketogenic lifestyle. Understanding the Ingredients Proton Keto ACV Gummies boast a carefully curated blend of ingredients designed to synergistically support health and wellness. Let's take a closer look at the key components: Apple Cider Vinegar (ACV): Renowned for its myriad health benefits, ACV has long been celebrated for its potential to aid in weight management, support digestion, and regulate blood sugar levels. The acetic acid in ACV is believed to contribute to these effects. BHB (Beta-Hydroxybutyrate): As a staple in many ketogenic supplements, BHB is an exogenous ketone that helps induce a state of ketosis in the body. Ketosis is a metabolic state where the body relies on ketones for energy instead of glucose, potentially aiding in fat loss. Medium-Chain Triglycerides (MCTs): Extracted from coconut oil, MCTs are fats that are easily absorbed and converted into ketones, providing a quick and efficient energy source. MCTs have been associated with increased energy levels and cognitive function. Garcinia Cambogia: This tropical fruit extract is often included in weight loss supplements due to its potential to inhibit fat production and control appetite. It contains hydroxycitric acid (HCA), which may contribute to these effects. Green Tea Extract: Known for its antioxidant properties, green tea extract may support metabolism and fat oxidation. It also contains catechins, which have been linked to various health benefits. 
Official website: https://www.deccanherald.com/brandspot/featured/proton-keto-acv-gummies-reviews-updated-proton-keto-gummies-proton-keto-acv-gummies-kelly-clarkson-kelly-clarkson-keto-gummies-2892377 https://www.theweek.in/focus/health-and-wellness/2024/02/13/proton-keto-gummies-reviews-fraud-warning-exposed-kelly-clarkson-gummies-must-read-before-buying.html Social Media: - Pinterest: https://www.pinterest.com/Protonketo/ Tumblr: https://www.tumblr.com/protonketoacvgummies Twitter: https://twitter.com/protonketoacv
protonketoacvg
1,763,863
A buzzword-free Solana crash course for founders, devs, and investors
I'm gonna explain everything you need to know about Solana and I'll try doing it without using...
0
2024-02-21T10:40:16
https://dev.to/almostefficient/a-buzzword-free-solana-crash-course-for-founders-devs-and-investors-4ab7
solana, webdev, blockchain, web3
I'm gonna explain everything you need to know about Solana and I'll try doing it without using buzzwords. There are three parts to this, I recommend skimming parts that you feel comfortable with! **Who is this for?** Someone who's never used a blockchain and doesn’t understand what Solana, Bitcoin, or Ethereum are (even a little bit). ## What is Solana? Solana is a platform for building applications on top of. Practically, it’s a network of computers that run common software that lets you use them for running code and storing data. Anyone can run these computers and there’s thousands of them worldwide. People have built and deployed thousands of applications on Solana. Anyone can use them by visiting their websites, like regular apps. Alongside all of these computers and the applications that run on top of them, Solana is the millions of humans that form communities to push forward their goals, innovate, and have fun. #### The network Solana is an open network, meaning anyone can join or use it without asking for permission or requesting access. This also means anyone can run the code or read the data that’s already on the network. This is permissionlessness. Many apps publish their code and document how you can interact with them using your code, so you can build apps on top of existing programs on the network. This is composability. While frowned upon as it’s against the open-source spirit, it **is** possible to publish code/data on Solana that is private: if you don’t share **how** to interact with it, no one can. #### The applications Blockchain networks are mainly used for financial applications. The idea (and reality) is that a shared digital space for confirmable transactions is more efficient and fairer than traditional financial spaces. Imagine a banking system that doesn’t close on the weekends, lets you move your money without restrictions, and guarantees that your money won’t disappear (unless you make it). 
Solana allows blockchains to expand beyond financial apps to anything that can benefit from its unique qualities — games, art, social media, physical infrastructure. #### The communities When everyone can participate and has control and ownership of what's being built, communities naturally form. There’s a bunch of tools and services on Solana to help communities coordinate, govern, and fundraise. You can easily combine resources and vote on decisions, so the only limits are human coordination. ### How does Solana work? I’ll keep it short: a large number of computers with specific hardware requirements talk to each other to validate and transmit messages. The messages are called “transactions”, which are a verifiable message format, that tell the computers what users are doing with their data. If you broadcast a transaction message stating “I transfer 1 USD to Raza”, the computers running the network check if you have 1 USD to transfer, and if you do, they deduct 1 USD from your balances and add it to mine. All of this adds up to a platform you can build apps on. As computing infrastructure, Solana is similar to AWS/GCP/Azure/Vercel. You write and deploy code, and users interact with it from a client (website, mobile app). The main differences are: **Decentralization** - Vercel can decide that your app ain’t vibin with their terms of service and kick you off. Solana can’t. The network is run by thousands of individual computers owned by me and you, not a CEO. The network as a collective decides what happens. **Composability** - You’ve got a program on AWS that I want to interact with. This isn’t possible unless you explicitly expose it to me. On Solana, everything can interact, so software compounds faster. **Openness** - Twitter/Reddit increased their API costs a stupid amount. All of your data on these services is now inaccessible. Data on Solana is public and permanent — if you know how to read it, you’ll never lose it. 
**Pay per action** - Solana is more like public transport than a car you rent. You only pay for what you use. Every time you write, update, or delete data on the network, you pay a small fee based on the computational intensity of your actions. ### Why build on Solana? Web2 platforms (AWS/GCP/Azure/Vercel) are great for duplication and an infinite number of something. Solana is better for things that need scarcity, like currencies. Here’s a bunch of qualities that are built in to Solana, at the platform level: - **Scarcity** - it’s impossible to duplicate assets. - **Sovereignty** - users can self-custody and they get to own their data. - **Trustless** - no need to trust anyone, everything is guaranteed with math (cryptography). - **Security** - no one can change your data or update your programs (unless you leave a hole somewhere). - **Composability** - build on top of existing resources, save time and money. - **Performance** - Recreating these things faster and cheaper (while staying decentralized) than Solana is virtually impossible. **Does your app/service need Solana?** To answer that, ask yourself how easy it would be to build these on your own versus using the ones available on Solana. Most use-cases don’t benefit from being on the blockchain. If you don’t need two or more of these qualities, you probably don’t need to build on Solana. Build on Solana because you want to push forward and do something that's never been done before. Build on Solana because you know you can get rich by creating better products. **Example 1: Payments** If you use Stripe or Paypal, you’ll lose at least 3% in payment processor fees. Solana fees are $0.0006 per transaction (paid by users). This isn’t theoretical — you can set up a Shopify store that accepts virtual USD payments on Solana right now using the [Solana Pay Shopify app](https://apps.shopify.com/solana-pay). 
**Example 2: Physical infrastructure** If you don’t have an infinite budget, the best way to build an alternative to Google Maps is by letting regular people contribute map data. Users buy hardware, contribute data, and get rewarded. Coordination, payouts, and payment for usage of map data is all done on the Solana network. ### The Solana ecosystem The bigger apps and communities in the ecosystem are financial - exchanges, trading, banking, stablecoins. While there’s a bunch of infrastructure and B2B companies, the ethos of the ecosystem is to build consumer apps and solve real problems, not sell shovels in a gold rush. The range of apps is limited to the people that want to build them and we’re seeing games, social apps, and more come up. Here’s a map of the big ecosystem players from Messari: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hq7ke0pseh3uadyak623.png) It’s normal if some of these labels don’t make sense. Each of these uses one or more of the previously mentioned qualities of Solana. I suggest picking one area you’re interested in and digging into it. A Google/Twitter search will get you far enough to be able to navigate yourself. ## Getting started on Solana First, create an identity. This is normally an email address and a password. On Solana, you’ll generate a wallet, which consists of a public key and a private key — these are similar to an email and password, except if you leak your private key, everything you own in that identity is gone. Wallet apps secure and control your public/private keypair. The most popular one is [Phantom wallet](https://phantom.app/). To do anything on Solana, you need the SOL token, the native currency of the network, for paying transaction fees. $1 USD worth of SOL will let you pay for 1000+ transactions. Instead of buying crypto, I recommend earning it via bounties: [earn.superteam.fun](https://earn.superteam.fun/). What now? Buy, trade, invest, shop, create, whatever you want. 
## Terms, topics, and buzzwords Can’t run from buzzwords forever. Here I’ve got the most common terms explained in casual language. **Transaction** - messages users send to Solana network computers to change account data, like asset transfers, purchases, and complex actions like buy orders. **Instruction** - the atomic unit of a transaction. One transaction can do multiple things. Each action in the transaction is an instruction. **On-chain** - short for “on the blockchain”. When you do things on the Solana network (vs on a private database), you have to pay, and those things are a lot more secure. **Off-chain** - Not on the blockchain. Example: your salary in your bank account. To get it on-chain, transfer from your bank account to an exchange bank account, and receive on-chain money. **Validator** - a computer on Solana that validates (checks/verifies/confirms) and adds transactions to the ledger (list of previous transactions). They do this cause they get paid. **Signature** - a string of text used to verify that an action came from a specific identity. Just like a checkbook, these are used to make sure that other people can’t change your data/assets. Think of them like a fingerprint scan; only you can unlock your phone. **Stake** - anyone can join the network and submit false data (e.g. taking 69 USD from my account 😠). To prevent this, validators must lock up SOL tokens to participate. If you submit false data, the protocol will seize your SOL. **Oracle** - services that bring off-chain data (like weather, Gold price) on-chain. These are a middleman, but they’re run trustlessly. **Mint** - create new assets on chain. Example: new virtual dollars (USDC) are minted when you transfer money from your bank to the bank of the company that runs USDC. **Program/smart contract** - this is code that lives on the Solana network that’s used to do things. Think of it like backend logic. **Account** - where data is stored on Solana. 
This data can be program code or user data. Similar to a database entry or Excel row. **Mainnet/testnet/devnet** - there are three Solana networks. Mainnet is for real money and all the apps. Testnet and devnet are for developers with fake money to test things. ## Developer overview You can learn to build Solana apps in as little as a weekend. The majority of Solana development is building clients that interact with programs already deployed on the blockchain (web and mobile). There’s plenty of libraries and SDKs that make your job easier. If you want to build something truly novel that hasn’t been done before, you’ll write Solana programs (smart contracts) in Rust. I suggest starting with building clients (even just Node.js scripts) to get a feel for Solana and then move to Rust. Solana offloads data formatting (among other things) to developers so the network can process faster. Data you get from the chain directly is in bytes. SDKs handle deserialization (converting it into text from 1’s and 0’s, aka decoding). You also need to encode data before sending to the chain. ### RPC Nodes To read/write from the blockchain, you’ll send API requests to an RPC node - a computer in the network that doesn't participate in validation/consensus, but only receives and sends data. It takes your transactions and submits them to the rest of the network. ### Reading blockchain data All you need is the account address of where it’s stored and how the data is formatted. Send an API request to an RPC node and you’ll get back a response if your query was valid. Create a connection, fetch the data, decode it, interpret it. ### Writing blockchain data You write to Solana by interacting with existing on-chain programs or publishing new ones. To publish a new program, write it in Rust and use tools like Solana CLI or Anchor CLI to deploy it on-chain. To interact with existing deployed programs on Solana, you can either: - use an SDK (like Solana web3.js). 
These handle data formatting and fill in boilerplate/configs. or - do all of the encoding and structuring yourself if you’re working with a custom program that doesn’t have an SDK. ## Start building ### Developers Solana developer portal: Solana.com/developers Full-stack Solana development course: soldev.app/course Solana for Javascript developers: https://www.youtube.com/watch?v=9ayz-5-h_vY A deep dive on Solana transactions: https://www.youtube.com/watch?v=cu5GNWnN7IU ### Founders The Solana Foundation provides equity-free grants to get you started. They also have a venture arm — Solana Ventures, that offers advice, funding, and intros when you’re ready to scale. Solana Foundation grants: [https://solana.org/grants](https://solana.org/grants) YC request for startups: [Stablecoin Finance](https://www.ycombinator.com/rfs#:~:text=software%20to%20build.-,STABLECOIN%20FINANCE,-%2D%20Brad%20Flora) Superteam ideas to build: [https://build.superteam.fun/](https://build.superteam.fun/) Superteam grants: [https://earn.superteam.fun/grants/](https://earn.superteam.fun/grants/) Solana focused accelerator: https://www.colosseum.org/accelerator The tech of the future is here. Go out and build what you want to see exist.
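One footnote to the developer overview: the "reading blockchain data" flow (send a request to an RPC node, get data back, decode it) is plain JSON-RPC 2.0 over HTTP. Here's a minimal Python sketch of building such a request — `getBalance` is a standard Solana RPC method, but the endpoint URL and the account address below are placeholders, not values from this article:

```python
import json
import urllib.request

RPC_URL = "https://api.devnet.solana.com"  # a public devnet RPC endpoint

def rpc_request(method, params):
    """Build a JSON-RPC 2.0 payload in the shape Solana RPC nodes expect."""
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

# The address here is a placeholder; substitute a real base58 account address.
payload = rpc_request("getBalance", ["<account address>"])

# Sending it is an ordinary HTTP POST (left commented so the sketch
# carries no network dependency):
# req = urllib.request.Request(
#     RPC_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read()))
```

An SDK like Solana web3.js wraps exactly this kind of call, plus the byte-level decoding of the response.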
almostefficient
1,763,900
Memoization in Python; an alternative to Recursion
Introduction One of the most effective recipes for solving Dynamic Programming problems is...
0
2024-02-17T09:37:48
https://dev.to/ckorley4/memoization-in-python-an-alternative-to-recursion-30km
python
**Introduction**

One of the most effective recipes for solving Dynamic Programming problems is Memoization. Memoization is the process of storing the result of a subproblem and reusing it when the same subproblem comes up again. This reduces the time complexity of the problem. If we do not use memoization, similar subproblems are repeatedly solved, which can lead to exponential time complexities. Memoization has one major advantage over conventional computation techniques: it can drastically cut down on the time and resources needed to compute a function’s output. This is especially helpful when dealing with functions that perform repetitive tasks or have a high computational cost. By caching the output of these functions, the program avoids the overhead of repeatedly computing the same result and can execute more quickly.

Memoization is useful in various Python programming applications, such as:

- Recursive functions that call themselves with the same input values
- Computationally intensive functions, such as mathematical functions
- Functions that retrieve data from a remote source or database

**Factorial**

Let's look at a simple example of recursion. The example below uses recursion to compute the factorial of a number, e.g. 200.

```python
def factorial(n):
    if n < 2:
        return 1
    return n * factorial(n - 1)

print(factorial(200))
```

**Memoization**

Let's look at the same example with memoization:

```python
memo = {}

def memoized(n):
    if n in memo:
        return memo[n]
    if n < 2:
        return 1
    result = memoized(n - 1) * n  # reuse the cached subproblem result
    memo[n] = result
    return result

print(memoized(200))
```
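Python's standard library also ships this pattern ready-made: `functools.lru_cache` memoizes a function without an explicit dictionary. A minimal sketch of the same factorial example:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache the result for every distinct argument
def factorial(n):
    if n < 2:
        return 1
    return n * factorial(n - 1)

print(factorial(200))  # later calls for any n <= 200 are cache hits
```

The decorator handles storage and lookup for you, and `factorial.cache_info()` lets you inspect hits and misses.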
ckorley4
1,763,924
multipart/formdata: why is it recommended
Multipart/formdata is a way of transferring form data from a client to a server. It was first...
0
2024-02-17T11:02:28
https://blog.aurelmegnigbeto.dev/multipartformdata-why-is-it-recommended
http, fileupload, json
Multipart/formdata is a way of transferring form data from a client to a server. It was first described in 1995 in [RFC 1867](https://www.rfc-editor.org/rfc/rfc1867), which makes it one of the oldest specifications for transferring large data between computers. Let's see why it is the recommended approach for transferring data in a client/server architecture.

*TLDR* *Here are some reasons why it is the recommended approach for implementing file upload:*

* *Possibility to transfer different kinds of data inside the same request*
* *Efficiency compared to bare text sending*
* *Supported on all internet browsers*

Let’s dive into each reason:

### **Possibility to transfer different kinds of data inside the same request**

The multipart/formdata specification allows sending multiple types of content in the same request. It means that you can transfer simple text, JSON data, and binary data in the payload of the request. Each part is separated by a **boundary** and identified by a name and a content-type. In the following example, you can see that we are sending:

* form-data
* text/plain
* and image/gif

```http
Content-Type: multipart/form-data; boundary=AaB03x

--AaB03x
Content-Disposition: form-data; name="submit-name"

Larry
--AaB03x
Content-Disposition: form-data; name="files"
Content-Type: multipart/mixed; boundary=BbC04y

--BbC04y
Content-Disposition: file; filename="file1.txt"
Content-Type: text/plain

... contents of file1.txt ...
--BbC04y
Content-Disposition: file; filename="file2.gif"
Content-Type: image/gif
Content-Transfer-Encoding: binary

...contents of file2.gif...
--BbC04y--
--AaB03x--
```

### **Efficiency compared to bare text sending**

[https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4](https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4)

> The content type "application/x-www-form-urlencoded" is inefficient for sending large quantities of binary data or text containing non-ASCII characters. The content type "multipart/form-data" should be used for submitting forms that contain files, non-ASCII data, and binary data.

* **application/x-www-form-urlencoded** and **application/json** are designed to handle text data, not binary data. If you want to send binary data, you need to encode it in base64, which is inefficient in terms of output size.
* **Browsers handle content negotiation:** sending files via JSON is not standard, so you would need to implement the scheme yourself on both the client and the server before benefiting from it. With multipart, the browser and the server already know how to negotiate and handle the content transfer for you.

### Supported on all internet browsers

[https://www.rfc-editor.org/rfc/rfc1867](https://www.rfc-editor.org/rfc/rfc1867)

**multipart/formdata** has been around for decades. It has been the norm for transferring files to a backend, and all the internet relies on it.

**multipart/formdata** is the way to transfer files and form data from a client to a server efficiently. There are some alternatives, but they add implementation overhead and may not be as beneficial for your development. In the **next post**, we will dig deeper into how **multipart/formdata** is structured to send different kinds of data to your server.
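To make the boundary mechanics concrete, here is a minimal Python sketch that assembles a multipart/form-data body by hand. The helper name `encode_multipart` and the sample values are illustrative, not from any library — in practice the browser or an HTTP client builds this for you:

```python
import uuid

def encode_multipart(fields, files):
    """Assemble a multipart/form-data body.

    fields: {name: text_value}
    files:  {name: (filename, text_content, content_type)}
    """
    boundary = uuid.uuid4().hex  # any string not occurring in the payload
    lines = []
    for name, value in fields.items():
        lines += [f"--{boundary}",
                  f'Content-Disposition: form-data; name="{name}"',
                  "",               # blank line separates part headers from body
                  value]
    for name, (filename, content, ctype) in files.items():
        lines += [f"--{boundary}",
                  f'Content-Disposition: form-data; name="{name}"; filename="{filename}"',
                  f"Content-Type: {ctype}",
                  "",
                  content]
    lines += [f"--{boundary}--", ""]  # closing boundary ends the body
    body = "\r\n".join(lines)
    return body, f"multipart/form-data; boundary={boundary}"

body, content_type = encode_multipart(
    {"submit-name": "Larry"},
    {"file": ("file1.txt", "contents of file1.txt", "text/plain")},
)
```

The sketch only shows where the boundary, part name, and per-part content-type from the example above end up in the raw request body.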
aurelmegn
1,763,931
ESP Embedded Rust: Ping CLI App Part 1
Introduction In the last blog post, I demonstrated the basic usage of ping::EspPing. Also...
0
2024-02-17T11:36:49
https://dev.to/theembeddedrustacean/esp-embedded-rust-ping-cli-app-part-1-g73
cli, esp32, tutorial, rust
## Introduction

In the last blog post, I demonstrated the basic usage of `ping::EspPing`. Also, in the [post](https://apollolabsblog.hashnode.dev/esp-embedded-rust-command-line-interface) of the week before that, I went through the process of creating a command line interface over UART. I figured, why not combine both to create a Ping CLI app replica?! As a result, in this post, a replica of a CLI ping application will be created. So that it's not overwhelming, the process is going to be broken up into two posts. In the first post, the basic framework of the app will be created. Next week, in the second post, more features will be added.

#### If you find this post useful, and if Embedded Rust interests you, stay in the know by subscribing to The Embedded Rustacean newsletter:

{% cta http://www.theembeddedrustacean.com/subscribe %} Subscribe Now to The Embedded Rustacean {% endcta %}

### **📚 Knowledge Pre-requisites**

The content of this post is heavily dependent on the following past posts:

* [ESP Embedded Rust: Command Line Interface](https://apollolabsblog.hashnode.dev/esp-embedded-rust-command-line-interface)
* [Edge IoT with Rust on ESP: Ping!](https://apollolabsblog.hashnode.dev/edge-iot-with-rust-on-esp-ping)
* [Edge IoT with Rust on ESP: Connecting WiFi](https://apollolabsblog.hashnode.dev/edge-iot-with-rust-on-esp-connecting-wifi)

### **💾 Software Setup**

All the code presented in this post is available on the [**apollolabs ESP32C3**](https://github.com/apollolabsdev/ESP32C3) git repo. Note that if the code on the git repo is slightly different, it was modified to enhance the code quality or accommodate any HAL/Rust updates. Additionally, the full project (code and simulation) is available on Wokwi [**here**](https://wokwi.com/projects/389975860067518465). 
### **🛠 Hardware Setup**

#### Materials

* [**ESP32-C3-DevKitM**](https://docs.espressif.com/projects/esp-idf/en/latest/esp32c3/hw-reference/esp32c3/user-guide-devkitm-1.html)

![ESP32 Devkit](https://cdn.hashnode.com/res/hashnode/image/upload/v1681796942083/da81fb7b-1f90-4593-a848-11c53a87821d.jpeg)

## **👨‍🎨 Software Design**

This is a description of the app we're going to build:

```plaintext
Ping is a utility that sends ICMP Echo Request packets to a specified network host
(either identified by its IP address or hostname) to test connectivity and measure
round-trip time.

Usage:
  ping [options] <hostname/IP>

Options:
  -c, --count <number>      Number of ICMP Echo Request packets to send (default is 4).
  -i, --interval <seconds>  Set the interval between successive ping packets in seconds.
  -t, --timeout <seconds>   Specify a timeout value for each ping attempt.
  -s, --size <bytes>        Set the size of the ICMP packets.

Examples:
  ping 192.168.1.1                 # Ping the IP address 192.168.1.1
  ping example.com                 # Ping the hostname 'example.com'
  ping -c 10 google.com            # Send 10 ping requests to google.com
  ping -i 0.5 -s 100 example.com   # Ping with interval of 0.5 seconds and packet size of 100 bytes to 'example.com'
```

This description is what will appear when `help ping` is entered. Also, the following is the type of output we want to recreate:

![pingout](https://cdn.hashnode.com/res/hashnode/image/upload/v1708101110294/048fd335-3691-49ed-a4c4-0ac01a57a045.png)

To create the application, the following steps are followed:

### 🐾 Step 1: Setup CLI Root Menu & Callback

In this first step, the following tasks will be accomplished:

1. The `ping` command `Item` needs to be added to the root menu `Menu` struct.
2. The callback function for `ping` needs to be set up.

### 🐾 Step 2: CLI Start-Up

The following are the actions that the application needs to take before spawning the CLI interface and invoking any commands:

1. Configure, instantiate, and connect to WiFi.
2. Configure and instantiate UART.
3. Instantiate and run the CLI runner with the root menu.

### 🐾 Step 3: Create Ping App Logic

The app logic will be contained in the `ping` command callback function. This is the logic that will be executed when the `ping` command is invoked. The following are the app logic steps:

1. Retrieve & process CLI input
2. Instantiate `EspPing`
3. Set up the `EspPing` configuration
4. Perform `ping` and update the CLI

For this week's post, in step 1, the only input that will be processed is the IP address. Hostname and option processing capability will be added in the next post. Additionally, `ping` stats printing will be added next week.

## **👨‍💻 Code Implementation**

### **📥 Crate Imports**

In this implementation, the following crates are required:

* The `esp_idf_hal` crate to import the peripherals needed for `uart`.
* The `esp_idf_svc` crate to import the device services needed for `wifi` and `ping`.
* The `menu` crate for creating the CLI.
* The `std::str` module to import the `FromStr` abstraction.

```rust
use esp_idf_hal::delay::BLOCK;
use esp_idf_hal::gpio;
use esp_idf_hal::peripherals::Peripherals;
use esp_idf_hal::prelude::*;
use esp_idf_hal::uart::*;
use esp_idf_svc::eventloop::EspSystemEventLoop;
use esp_idf_svc::ipv4::Ipv4Addr;
use esp_idf_svc::nvs::EspDefaultNvsPartition;
use esp_idf_svc::ping::{Configuration as PingConfiguration, EspPing};
use esp_idf_svc::wifi::{AuthMethod, BlockingWifi, ClientConfiguration, Configuration, EspWifi};
use menu::*;
use std::fmt::Write;
use std::str::FromStr;
```

### 🐾 Step 1: Setup CLI Root Menu & Callback

**1️⃣ Add the `ping` command `Item` to the root menu:** similar to what was done in the CLI post with the `hw` command for the hello app, a new `&Item` is added for `ping` in the `ROOT_MENU`. Note that the callback function name is `ping_app`, and there's only one `parameter_name` that will be recognized, which is `"hostname/IP"`. Also, the `help` message details how the command will work. 
Be mindful that the options are included in the description although not supported yet.

```rust
&Item {
    item_type: ItemType::Callback {
        function: ping_app,
        parameters: &[Parameter::Mandatory {
            parameter_name: "hostname/IP",
            help: Some("IP address or hostname"),
        }],
    },
    command: "ping",
    help: Some(
        "
Ping is a utility that sends ICMP Echo Request packets to a specified network host
(either identified by its IP address or hostname) to test connectivity and measure
round-trip time.

Usage:
  ping [options] <hostname/IP>

Options:
  -c, --count <number>      Number of ICMP Echo Request packets to send (default is 4).
  -i, --interval <seconds>  Set the interval between successive ping packets in seconds.
  -t, --timeout <seconds>   Specify a timeout value for each ping attempt.
  -s, --size <bytes>        Set the size of the ICMP packets.
  -h, --help                Display this help message and exit.

Examples:
  ping 192.168.1.1                 # Ping the IP address 192.168.1.1
  ping example.com                 # Ping the hostname 'example.com'
  ping -c 10 google.com            # Send 10 ping requests to google.com
  ping -i 0.5 -s 100 example.com   # Ping with interval of 0.5 seconds and packet size of 100 bytes to 'example.com'
",
    ),
}
```

**2️⃣ Create the callback function:** The callback function named `ping_app` for the `ping` command specified in the `ROOT_MENU` `Item` is implemented as follows:

```rust
fn ping_app<'a>(
    _menu: &Menu<UartDriver>,
    item: &Item<UartDriver>,
    args: &[&str],
    context: &mut UartDriver,
) {
    // App code goes here
}
```

We're going to keep it empty for now, and fill in the logic implementation in the last step.

### 🐾 Step 2: CLI Start-Up

**1️⃣ Configure, Instantiate, and Connect to WiFi:** This involves the same steps taken in the [wifi post](https://apollolabsblog.hashnode.dev/edge-iot-with-rust-on-esp-connecting-wifi) to connect to WiFi. 
Inside the `main` function, the following code is added: ```rust let peripherals = Peripherals::take().unwrap(); let sysloop = EspSystemEventLoop::take()?; let nvs = EspDefaultNvsPartition::take()?; let mut wifi = BlockingWifi::wrap( EspWifi::new(peripherals.modem, sysloop.clone(), Some(nvs))?, sysloop, )?; wifi.set_configuration(&Configuration::Client(ClientConfiguration { ssid: "Wokwi-GUEST".try_into().unwrap(), bssid: None, auth_method: AuthMethod::None, password: "".try_into().unwrap(), channel: None, }))?; // Start Wifi wifi.start()?; // Connect Wifi wifi.connect()?; // Wait until the network interface is up wifi.wait_netif_up()?; println!("Wifi Connected"); ``` **2️⃣ Configure and Instantiate UART:** This and the following step are identical to what was accomplished in the [CLI post](https://apollolabsblog.hashnode.dev/esp-embedded-rust-command-line-interface). Here is the code associated with this step: ```rust // Configure UART // Create handle for UART config struct let config = config::Config::default().baudrate(Hertz(115_200)); // Instantiate UART let mut uart = UartDriver::new( peripherals.uart0, peripherals.pins.gpio21, peripherals.pins.gpio20, Option::<gpio::Gpio0>::None, Option::<gpio::Gpio1>::None, &config, ) .unwrap(); ``` **3️⃣ Instantiate and run the CLI runner with the root menu**: Again, following the the [CLI post](https://apollolabsblog.hashnode.dev/esp-embedded-rust-command-line-interface), this is the associated code: ```rust // Create a buffer to store CLI input let mut clibuf = [0u8; 64]; // Instantiate CLI runner with root menu, buffer, and uart let mut r = Runner::new(ROOT_MENU, &mut clibuf, uart); loop { // Create single element buffer for UART characters let mut buf = [0_u8; 1]; // Read single byte from UART r.context.read(&mut buf, BLOCK).unwrap(); // Pass read byte to CLI runner for processing r.input_byte(buf[0]); } ``` ### 🐾 Step 3: Create Ping App Logic **1️⃣ Retrieve & Process CLI input:** In the `ping_app` callback function, 
the first order of action will be to recover the user input. This is done using the `argument_finder` function in the `menu` crate. For now, the only entry supported is an IP address. Given that the retrieved IP address entry is a `&str` type, it needs to be converted to an `EspPing` compatible `Ipv4Addr` type using the `from_str` associated method, which returns a `Result`. Finally, in case the user enters incorrect input, this can be handled by doing a pattern match on the `from_str` `Result`. Here is the code:

```rust
// Retrieve CLI Input
let ip_str = argument_finder(item, args, "hostname/IP").unwrap().unwrap();

// Process Input - Convert &str type to Ipv4Addr
let ip = Ipv4Addr::from_str(ip_str);

// Process Input - Make sure address format is correct
let addr = match ip {
    Ok(addr) => addr,
    Err(_) => {
        writeln!(context, "Address error, try again").unwrap();
        return;
    }
};
```

**2️⃣ Instantiate** `EspPing`: This is a single-line action, similar to what was done before:

```rust
let mut ping = EspPing::new(0_u32);
```

**3️⃣ Setup** `EspPing` **Configuration**: For this post, a default configuration is going to be used. In next week's post, the configuration will be modified to accommodate any user-entered options.

```rust
let ping_config = &PingConfiguration::default();
```

**4️⃣ Perform** `ping` **and update CLI**: This is the part where the output of the ping app is replicated. First, we need to print `Pinging [IP address] with [bytes sent] bytes of data`. The IP address is the one entered by the user and the number of bytes sent is inside the `ping_config` struct.

```rust
// Update CLI
// Pinging {IP} with {x} bytes of data
writeln!(
    context,
    "Pinging {} with {} bytes of data\n",
    ip_str, ping_config.data_size
)
.unwrap();
```

Afterward, a ping needs to be performed 4 times and the output of each ping is reported in the format `Reply from [IP Address]: bytes=[bytes received] time=[response duration] TTL=[timeout duration]`.
Here's the associated code:

```rust
// Ping 4 times and print results
for _n in 1..=4 {
    let summary = ping.ping(addr, ping_config).unwrap();
    writeln!(
        context,
        "Reply from {}: bytes = {}, time = {:?}, TTL = {:?}",
        ip_str, summary.received, summary.time, ping_config.timeout
    )
    .unwrap();
}
```

That's it!

## **🧪 Testing**

Since the current version supports only IP addresses, for the sake of testing, local network addresses can be pinged if using physical hardware. If you desire to test with internet addresses or on Wokwi, some possible addresses to ping include the following:

* **OpenDNS**: 208.67.222.222 and 208.67.220.220

* **Cloudflare**: 1.1.1.1 and 1.0.0.1

* **Google DNS**: 8.8.8.8 and 8.8.4.4

## **📱Full Application Code**

Here is the full code for the implementation described in this post. You can additionally find the full project and others available on the [**apollolabs ESP32C3**](https://github.com/apollolabsdev/ESP32C3) git repo. Also, the Wokwi project can be accessed [**here**](https://wokwi.com/projects/389975860067518465).
```rust use esp_idf_hal::delay::BLOCK; use esp_idf_hal::gpio; use esp_idf_hal::peripherals::Peripherals; use esp_idf_hal::prelude::*; use esp_idf_hal::uart::*; use esp_idf_svc::eventloop::EspSystemEventLoop; use esp_idf_svc::ipv4::Ipv4Addr; use esp_idf_svc::nvs::EspDefaultNvsPartition; use esp_idf_svc::ping::{Configuration as PingConfiguration, EspPing}; use esp_idf_svc::wifi::{AuthMethod, BlockingWifi, ClientConfiguration, Configuration, EspWifi}; use menu::*; use std::fmt::Write; use std::str::FromStr; // CLI Root Menu Struct Initialization const ROOT_MENU: Menu<UartDriver> = Menu { label: "root", items: &[ &Item { item_type: ItemType::Callback { function: hello_name, parameters: &[Parameter::Mandatory { parameter_name: "name", help: Some("Enter your name"), }], }, command: "hw", help: Some("This is the help for the hello, name hw command!"), }, &Item { item_type: ItemType::Callback { function: ping_app, parameters: &[Parameter::Mandatory { parameter_name: "hostname/IP", help: Some("IP address or hostname"), }], }, command: "ping", help: Some(" Ping is a utility that sends ICMP Echo Request packets to a specified network host (either identified by its IP address or hostname) to test connectivity and measure round-trip time. Usage: ping [options] <hostname/IP> Options: -c, --count <number> Number of ICMP Echo Request packets to send (default is 4). -i, --interval <seconds> Set the interval between successive ping packets in seconds. -t, --timeout <seconds> Specify a timeout value for each ping attempt. -s, --size <bytes> Set the size of the ICMP packets. -h, --help Display this help message and exit. 
Examples:
ping 192.168.1.1      # Ping the IP address 192.168.1.1
ping example.com      # Ping the hostname 'example.com'
ping -c 10 google.com # Send 10 ping requests to google.com
ping -i 0.5 -s 100 example.com # Ping with interval of 0.5 seconds and packet size of 100 bytes to 'example.com'
"),
        },
    ],
    entry: None,
    exit: None,
};

fn main() -> anyhow::Result<()> {
    // Take Peripherals
    let peripherals = Peripherals::take().unwrap();
    let sysloop = EspSystemEventLoop::take()?;
    let nvs = EspDefaultNvsPartition::take()?;

    let mut wifi = BlockingWifi::wrap(
        EspWifi::new(peripherals.modem, sysloop.clone(), Some(nvs))?,
        sysloop,
    )?;

    wifi.set_configuration(&Configuration::Client(ClientConfiguration {
        ssid: "Wokwi-GUEST".try_into().unwrap(),
        bssid: None,
        auth_method: AuthMethod::None,
        password: "".try_into().unwrap(),
        channel: None,
    }))?;

    // Start Wifi
    wifi.start()?;

    // Connect Wifi
    wifi.connect()?;

    // Wait until the network interface is up
    wifi.wait_netif_up()?;

    println!("Wifi Connected");

    // Configure UART
    // Create handle for UART config struct
    let config = config::Config::default().baudrate(Hertz(115_200));

    // Instantiate UART
    let mut uart = UartDriver::new(
        peripherals.uart0,
        peripherals.pins.gpio21,
        peripherals.pins.gpio20,
        Option::<gpio::Gpio0>::None,
        Option::<gpio::Gpio1>::None,
        &config,
    )
    .unwrap();

    // This line is for Wokwi only so that the console output is formatted correctly
    uart.write_str("\x1b[20h").unwrap();

    // Create a buffer to store CLI input
    let mut clibuf = [0u8; 64];
    // Instantiate CLI runner with root menu, buffer, and uart
    let mut r = Runner::new(ROOT_MENU, &mut clibuf, uart);

    loop {
        // Create single element buffer for UART characters
        let mut buf = [0_u8; 1];
        // Read single byte from UART
        r.context.read(&mut buf, BLOCK).unwrap();
        // Pass read byte to CLI runner for processing
        r.input_byte(buf[0]);
    }
}

// Callback function for hw command
fn hello_name<'a>(
    _menu: &Menu<UartDriver>,
    item: &Item<UartDriver>,
    args: &[&str],
    context: &mut UartDriver,
) {
    // Print to console passed "name" argument
    writeln!(
        context,
        "Hello, {}!",
        argument_finder(item, args, "name").unwrap().unwrap()
    )
    .unwrap();
}

// Callback function for ping command
fn ping_app<'a>(
    _menu: &Menu<UartDriver>,
    item: &Item<UartDriver>,
    args: &[&str],
    context: &mut UartDriver,
) {
    // Retrieve CLI Input
    let ip_str = argument_finder(item, args, "hostname/IP").unwrap().unwrap();

    // Process Input - Convert &str type to Ipv4Addr
    let ip = Ipv4Addr::from_str(ip_str);

    // Process Input - Make sure address format is correct
    let addr = match ip {
        Ok(addr) => addr,
        Err(_) => {
            writeln!(context, "Address error, try again").unwrap();
            return;
        }
    };

    // Create EspPing instance
    let mut ping = EspPing::new(0_u32);

    // Setup Ping Config
    let ping_config = &PingConfiguration::default();

    // Update CLI
    // Pinging {IP} with {x} bytes of data
    writeln!(
        context,
        "Pinging {} with {} bytes of data\n",
        ip_str, ping_config.data_size
    )
    .unwrap();

    // Ping 4 times and print results
    // Reply from {IP}: bytes={summary.received} time={summary.time} TTL={summary.timeout}
    for _n in 1..=4 {
        let summary = ping.ping(addr, ping_config).unwrap();
        writeln!(
            context,
            "Reply from {}: bytes = {}, time = {:?}, TTL = {:?}",
            ip_str, summary.received, summary.time, ping_config.timeout
        )
        .unwrap();
    }
}
```

## Conclusion

In this post, a ping CLI application replica was built on an ESP32C3 using Rust and the supporting `std` library crates. The current version is limited to pinging IP addresses only. In the next blog post, hostname and options support will be added. Additionally, ping statistics will be reported as part of the output. Have any questions? Share your thoughts in the comments below 👇.

#### If you found this post useful, and if Embedded Rust interests you, stay in the know by subscribing to The Embedded Rustacean newsletter:

{% cta http://www.theembeddedrustacean.com/subscribe %} Subscribe Now to The Embedded Rustacean{% endcta %}
theembeddedrustacean
1,764,011
EGYPT TOUR PACKAGES
Tourists from all corners of the world visit Egypt and book their Egypt Tour Packages online to...
0
2024-02-17T13:18:52
https://dev.to/nileholiday/egypt-tour-packages-3of4
Tourists from all corners of the world visit Egypt and book their Egypt Tour Packages online to explore the various tombs, beautiful temples, ancient ruins, and striking structures of an era long gone. The country is not just huge tracts of desert, as it may seem to the uninitiated; plenty of water activities await you in Egypt, like surfing, sailing, and diving.

Our [Egypt Tour Packages](https://nileholiday.com/tour-packages) are designed to provide our customers with value for money. Our Egypt Tours ensure our customers enjoy their holiday to the fullest. Whether you are looking for well-priced last-minute Egypt Classic Tours or exclusive Egypt Luxury Tours, we cater to a variety of preferences and budgets.

**Egypt Budget Tours**

Our Egypt Budget Tours are organized at a low cost with high-quality services. You can experience and explore Egypt's historical splendour and beauty on our Egypt Tours. Our Egypt Tour Packages start by visiting one of the Seven Wonders of the Ancient World, the incredible Pyramids of Giza. Then you head down to enjoy beautiful lodging on board a Nile cruise between Aswan and Luxor. You can visit the Luxor Temple, Karnak Temple, the Valley of the Kings, and more! Book our Budget Egypt Tours now or tailor-make your trip to suit your requirements.

**Egypt Classic Tours**

With our classic Egypt Vacation Packages, you will find the true definition of class in Egypt. Enjoy a variety of customized Egypt Classic Tours and vacations to satisfy all your dreams about ancient Egypt, the cradle of ancient civilizations. Live the fascinating experiences of Egypt Tours that you will never forget. Explore some of our Classic Egypt Tour Packages that will allow you to see and enjoy the magnificent landscapes where the ancient pharaohs lived with our organized trips to Egypt.

**Egypt Land Tours**

Our Egypt Land Tours delve deep into the ancient history of The Land of The Pharaohs.
With our comprehensive itineraries, our Egypt Travel Packages promise an exciting adventure. From sightseeing tours to accommodation arrangements and transportation formalities, every aspect of your Egypt Trip Packages will be carefully hand-picked to cater to your taste and interest. Our Egypt Holiday Packages take you across the major tourist attractions of the country to offer an authentic taste of the history and culture of the place.

**Egypt Luxury Tours**

Our Egypt Luxury Tours are designed to provide our customers with the utmost luxurious experience. Our Luxury Tours in Egypt will walk you through exceptional destinations such as Cairo and Luxor. Unlock the country's timeless wonders and treasured beauties, from Cairo to the banks of the Nile River. Our Luxury Egypt Tours will immerse you in the proud, affable culture of the ancient land of the pharaohs.

To help you plan your Egypt trip, here are some of our Egypt Tour Packages you can choose from based on your preferences and budget. You can see all of Egypt's top sights while enjoying our affordable deals that help you save on your travel allowance. Want to explore Egypt in all its ancient glory? Get in touch with [Nile Holiday now](https://nileholiday.com/tour-packages)!
nileholiday
1,764,076
How to scrape Stackoverflow
How to scrape Stackoverflow easily with scraper
0
2024-02-17T15:18:29
https://crawlbase.com/blog/scrape-stackoverflow-questions/
stackoverflow, scrapestackoverflow, webscraping
--- title: How to scrape Stackoverflow published: true description: How to scrape Stackoverflow easily with scraper cover_image: https://crawlbase.com/blog/scrape-stackoverflow-questions/scrape-stackoverflow-questions.jpg canonical_url: https://crawlbase.com/blog/scrape-stackoverflow-questions/ tags: StackOverflow, scrapeStackoverflow, webscraping # cover_image: # Use a ratio of 100:42 for best results. # published_at: 2024-02-17 14:53 +0000 --- This blog was originally posted to [Crawlbase Blog](https://crawlbase.com/blog/scrape-stackoverflow-questions/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution) **Stack Overflow**, an active site for programming knowledge, offers a wealth of information that can be extracted for various purposes, from research to staying updated on the latest trends in specific programming languages or technologies. <!-- more --> This tutorial will focus on the targeted extraction of questions and answers related to a specific tag. This approach allows you to tailor your data collection to your interests or requirements. Whether you're a developer seeking insights into a particular topic or a researcher exploring trends in a specific programming language, this guide will walk you through efficiently scraping [Stack Overflow](https://crawlbase.com/how-to-scrape-stackoverflow/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution) questions with your chosen tags. Join us on this educational journey, where we simplify the art of web scraping using JavaScript and Crawlbase APIs. This guide helps you understand the ins and outs of data extraction and lets you appreciate the collaborative brilliance that makes Stack Overflow an invaluable resource for developers. ## Table of Contents [**I. Why Scrape Stack Overflow**](#I-Why-Scrape-Stack-Overflow) [**II. Understanding Stack Overflow Questions Page Structure**](#II-Understanding-Stack-Overflow-Questions-Page-Structure) [**III. 
Prerequisites**](#III-Prerequisites) [**IV. Setting Up the Project**](#IV-Setting-Up-the-Project) [**V. Scrape using Crawlbase Scraper API**](#V-Scrape-using-Crawlbase-Scraper-API) [**VI. Custom Scraper Using Cheerio**](#VI-Custom-Scraper-Using-Cheerio) [**VII. Conclusion**](#VII-Conclusion) [**VIII. Frequently Asked Questions**](#VIII-Frequently-Asked-Questions) ## I. Why Scrape Stack Overflow Scraping Stack Overflow can be immensely valuable for several reasons, particularly due to its status as a dynamic and comprehensive knowledge repository for developers. Here are some compelling reasons to consider scraping Stack Overflow: 1. **Abundance of Knowledge:** Stack Overflow hosts extensive questions and answers on various programming and development topics. With millions of questions and answers available, it serves as a rich source of information covering diverse aspects of software development. 2. **Developer Community Insights:** Stack Overflow is a vibrant community where developers from around the world seek help and share their expertise. Scraping this platform allows you to gain insights into current trends, common challenges, and emerging technologies within the developer community. 3. **Timely Updates:** The platform is continually updated with new questions, answers, and discussions. By scraping Stack Overflow, you can stay current with the latest developments in various programming languages, frameworks, and technologies. 4. **Statistical Analysis:** Extracting and analyzing data from Stack Overflow can provide valuable statistical insights. This includes trends in question frequency, popular tags, and the distribution of answers over time, helping you understand the evolving landscape of developer queries and solutions. As of 2020, Stack Overflow attracts approximately [25 million visitors](https://insights.stackoverflow.com/survey/2020), showcasing its widespread popularity and influence within the developer community. 
This massive user base ensures that the content on the platform is diverse, reflecting a wide range of experiences and challenges developers encounter globally.

![stackoverflow stats](https://crawlbase.com/blog/scrape-stackoverflow-questions/stackoverflow-stats.jpg)

[source](https://www.usesignhouse.com/blog/stack-overflow-stats)

Moreover, with more than [33 million answers](https://stackexchange.com/sites?view=list#users) available on Stack Overflow, the platform has become an expansive repository of solutions to programming problems. Scraping this vast database can provide access to a wealth of knowledge, allowing developers and researchers to extract valuable insights and potentially discover patterns in the responses provided over time.

![StackOverflow answers stats](https://crawlbase.com/blog/scrape-stackoverflow-questions/stackoverflow-answers-stats.jpg)

[source](https://www.usesignhouse.com/blog/stack-overflow-stats)

## II. Understanding Stack Overflow Questions Page Structure

Understanding the structure of the Stack Overflow Questions page is crucial when building a scraper because it allows you to identify and target the specific HTML elements that contain the information you want to extract. Here's an overview of the key elements on the target URL [https://stackoverflow.com/questions/tagged/javascript](https://stackoverflow.com/questions/tagged/javascript) and why understanding them is essential for building an effective scraper:

![StackOverflow questions page](https://crawlbase.com/blog/scrape-stackoverflow-questions/stackoverflow-questions-page.jpg)

1. **Page Title:**

- **Importance:** The page title provides a high-level context for the content on the page. Understanding it helps in categorizing and organizing the scraped data effectively.
- **HTML Element:** Typically found within the `<head>` section of the HTML document, identified with the `<title>` tag.

2. **Page Description:**

- **Importance:** The page description often contains additional information about the content on the page. It can help provide more context to users and is valuable metadata.
- **HTML Element:** Typically found within the `<head>` section, identified with the `<meta>` tag and the `name="description"` attribute.

3. **Questions List:**

A. **Question Title:**

- **Importance:** The title of each question provides a concise overview of the topic. It's a critical piece of information that helps users and scrapers categorize and understand the content.
- **HTML Element:** Typically found within an `<h2>` (or similar) tag and often within a specific container element.

B. **Question Description:**

- **Importance:** The detailed description of a question provides more context and background information. Extracting this content is crucial for obtaining the complete question content.
- **HTML Element:** Usually located within a `<div>` or similar container, often with a specific class or ID.

C. **Author Name:**

- **Importance:** Knowing who authored a question is vital for attribution and potentially understanding the expertise level of the person seeking help.
- **HTML Element:** Often located within a specific container, sometimes within a `<span>` or other inline element with a class or ID.

D. **Question Link:**

- **Importance:** The link to the individual question allows users to navigate directly to the full question and answer thread. Extracting this link is essential for creating references.
- **HTML Element:** Typically found within an `<a>` (anchor) tag with a specific class or ID.

E. **Number of Votes, Views, and Answers:**

- **Importance:** These metrics provide quantitative insights into the popularity and engagement level of a question.
- **HTML Element:** Each of these numbers is often located within a specific container, such as a `<span>`, with a unique class or ID.
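To make the mapping concrete, the elements above can be collected into a small selector table. The class names below are the ones used by the Cheerio scraper later in this guide, but they reflect Stack Overflow's markup at the time of writing and may change, so treat them as illustrative rather than guaranteed. Question links are often relative, so a tiny helper to absolutize them is sketched as well:

```javascript
// Illustrative map from the page elements described above to CSS selectors.
// These selectors mirror the ones used in the custom scraper later in this
// guide; verify them in your browser's dev tools before relying on them.
const SELECTORS = {
  pageTitle: '.fs-headline1', // e.g. "Questions tagged [javascript]"
  pageDescription: 'div.mb24 p', // tag description paragraph
  questionSummary: '#questions .js-post-summary', // one block per question
  questionTitle: '.s-post-summary--content-title',
  questionExcerpt: '.s-post-summary--content-excerpt',
  authorName: '.s-user-card--link',
  questionLink: '.s-link', // often a relative href like /questions/123/...
  stats: '.js-post-summary-stats .s-post-summary--stats-item', // votes/answers/views
};

// Question links are frequently relative, so absolutize them before storing:
function toAbsoluteLink(href) {
  return href.includes('https://') ? href : `https://stackoverflow.com${href}`;
}
```

The same absolutizing logic reappears in the full Cheerio scraper further down, where each question's `href` is normalized before being pushed into the results array.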
By understanding the structure of the Stack Overflow Questions page and the placement of these elements within the HTML, you can design a scraper that precisely targets and extracts the desired information from each question on the page. This ensures the efficiency and accuracy of your scraping process. In the upcoming section of this guide, we will apply this understanding in practical examples. ## III. Prerequisites Before jumping into the coding phase, let's ensure that you have everything set up and ready. Here are the prerequisites you need: 1. **Node.js installed on your system** - **Why it's important:** Node.js is a runtime environment that allows you to run JavaScript on your machine. It's crucial for executing the web scraping script we'll be creating. - **How to get it:** Download and install Node.js from the official website: [Node.js](https://nodejs.org/) 2. **Basic knowledge of JavaScript:** - **Why it's important:** Since we'll be using JavaScript for web scraping, having a fundamental understanding of the language is essential. This includes knowledge of variables, functions, loops, and basic DOM manipulation. - **How to acquire it:** If you're new to JavaScript, consider going through introductory tutorials or documentation available on platforms like [Mozilla Developer Network](https://developer.mozilla.org/en-US/docs/Web/JavaScript) (MDN) or [W3Schools](https://www.w3schools.com/js/). 3. **Crawlbase API Token:** - **Why it's important:** We'll be utilizing the Crawlbase APIs for efficient web scraping. The API token is necessary for authenticating your requests. - **How to get it:** Visit the [Crawlbase website](https://crawlbase.com/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution), sign up for an account, and obtain your API tokens from your account settings. These tokens will serve as the key to unlock the capabilities of the Crawling API and the Scraper API. 
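As a small aside that goes beyond the original post: rather than pasting the token directly into your script, you may prefer to read it from an environment variable so it stays out of source control. The variable name `CRAWLBASE_TOKEN` below is purely an illustrative choice for this sketch, not something the Crawlbase library requires:

```javascript
// Hypothetical helper: read the Crawlbase token from an environment variable
// instead of hard-coding it. CRAWLBASE_TOKEN is an arbitrary name chosen
// for this sketch.
function getCrawlbaseToken() {
  const token = process.env.CRAWLBASE_TOKEN;
  if (!token) {
    throw new Error('Set the CRAWLBASE_TOKEN environment variable first');
  }
  return token;
}

// Later, in index.js, the token could then be supplied as:
// const api = new ScraperAPI({ token: getCrawlbaseToken() });
```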
![crawlbase docs](https://crawlbase.com/blog/scrape-stackoverflow-questions/crawlbase-docs.jpg)

## IV. Setting Up the Project

To kick off our scraping project and establish the necessary environment, follow these step-by-step instructions:

1. **Create a New Project Folder:**
   - Open your terminal and type: `mkdir stackoverflow_scraper`
   - This command creates a new folder named "stackoverflow_scraper" to neatly organize your project files.
2. **Navigate to the Project Folder:**
   - Move into the project folder using: `cd stackoverflow_scraper`
   - This command takes you into the newly created "stackoverflow_scraper" folder, setting it as your working directory.
3. **Create a JavaScript File:**
   - Generate a JavaScript file with: `touch index.js`
   - This command creates a file named "index.js," where you'll be crafting your scraping code to interact with Stack Overflow's Questions page.
4. **Install Crawlbase Dependency:**
   - Install the Crawlbase package by running: `npm install crawlbase`
   - This command installs the necessary library for web scraping using Crawlbase. It ensures that your project has the essential tools to communicate effectively with the Crawling API.

Executing these commands will initialize your project and set up the foundational environment required for successful scraping on Stack Overflow. The next steps will involve writing your scraping code within the "index.js" file, utilizing the tools and dependencies you've just established. Let's proceed to the exciting part of crafting your web scraper.

## V. Scrape using Crawlbase Scraper API

Now, let's walk through the process of leveraging the Crawlbase [Scraper API](https://crawlbase.com/scraper-api-auto-parse-web-data/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution) to scrape content from Stack Overflow pages.
It's important to note that while the Scraper API streamlines the scraping process, it comes with the limitation of providing pre-built scraping configurations for general purposes. As a result, customization is limited compared to a more tailored approach. Nevertheless, for many use cases, the Scraper API is a powerful and convenient tool to get a scraped response in JSON format with minimal coding effort.

Open your `index.js` file and write the following code:

```javascript
// Import the ScraperAPI class from the crawlbase library
const { ScraperAPI } = require('crawlbase');

// Create a new instance of ScraperAPI with your Scraper API token
const api = new ScraperAPI({ token: 'Crawlbase_Token' });

const stackoverflowURL = 'https://stackoverflow.com/questions/tagged/javascript';

// Make a GET request to the specified URL; the Scraper API parses the response automatically
api
  .get(encodeURI(stackoverflowURL))
  .then((res) => {
    // Log the scraped data to the console
    console.log(res.json.body, 'Scraped Data');
  })
  .catch(console.error);
```

Make sure to replace `"Crawlbase_Token"` with your actual Scraper API token, then run the script from your terminal with the command below:

```bash
node index.js
```

This will execute your script, sending a GET request to the specified Stack Overflow URL, and logging the scraped data in JSON format to the console.

![json result](https://crawlbase.com/blog/scrape-stackoverflow-questions/json-result.jpg)

The response showcases overall page details such as the page title, metadata, images, and more. In the upcoming section of this guide, we will take a more hands-on approach that provides greater control over the scraping process, enabling us to tailor our scraper to meet specific requirements. Let's dive into the next section to further refine our web scraping skills.

## VI. Custom Scraper Using Cheerio

Unlike the automated configurations of the Scraper API, [Cheerio](https://cheerio.js.org/), with the help of the [Crawling API](https://crawlbase.com/crawling-api-avoid-captchas-blocks/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution), offers a more manual and fine-tuned approach to web scraping. This approach gives us greater control and customization, enabling us to specify and extract precise data from the Stack Overflow Questions page. Cheerio's advantage lies in its ability to provide hands-on learning, targeted extraction, and a deeper understanding of HTML structure.

To install Cheerio in a Node.js project, you can use npm, the Node.js package manager. Run the following command to install it as a dependency for your project:

```bash
npm install cheerio
```

Once done, copy the code below and place it in the `index.js` file we created earlier. It is also important to study the code to see how we extract the specific elements we want from the complete HTML code of the target page.
```javascript // Import required modules const { CrawlingAPI } = require('crawlbase'); const cheerio = require('cheerio'); const fs = require('fs'); // Initialize CrawlingAPI with the provided token const api = new CrawlingAPI({ token: 'Crawlbase_Token' }); // Replace it with your Crawlbase Token const stackoverflowURL = 'https://stackoverflow.com/questions/tagged/javascript'; // Make a request to the specified URL api .get(encodeURI(stackoverflowURL)) .then((response) => { // Parse the HTML content using Cheerio and extract relevant information const parsedData = getParsedData(response.body); // Write the parsed data to a JSON file fs.writeFileSync('response.json', JSON.stringify({ parsedData }, null, 2)); }) // Handle errors if the request fails .catch(console.error); // Function to parse the HTML content and extract relevant information function getParsedData(html) { // Load HTML content with Cheerio const $ = cheerio.load(html), // Initialize an object to store parsed data parsedData = { title: '', description: '', totalQuestions: 0, questions: [], currentPage: 0, }; // Extract main information about the page parsedData['title'] = $('.fs-headline1').text().replace(/\s+/g, ' ').trim(); parsedData['description'] = $('div.mb24 p').text().replace(/\s+/g, ' ').trim(); parsedData['totalQuestions'] = $('div[data-controller="se-uql"] .fs-body3').text().replace(/\s+/g, ' ').trim(); parsedData['currentPage'] = $('.s-pagination.float-left .s-pagination--item.is-selected') .text() .replace(/\s+/g, ' ') .trim(); // Extract data for each question on the page $('#questions .js-post-summary').each((_, element) => { // Extract other properties for the question const question = $(element).find('.s-post-summary--content-title').text().replace(/\s+/g, ' ').trim(), authorName = $(element).find('.s-user-card--link').text().replace(/\s+/g, ' ').trim(), link = $(element).find('.s-link').attr('href'), authorReputation = $(element).find('.s-user-card--rep').text().replace(/\s+/g, ' 
').trim(), questionDescription = $(element).find('.s-post-summary--content-excerpt').text().replace(/\s+/g, ' ').trim(), time = $(element).find('.s-user-card--time').text().replace(/\s+/g, ' ').trim(), votes = $(element) .find('.js-post-summary-stats .s-post-summary--stats-item:first-child') .text() .replace(/\s+/g, ' ') .trim(), answers = $(element).find('.js-post-summary-stats .has-answers').text().replace(/\s+/g, ' ').trim() || '0 answers', views = $(element) .find('.js-post-summary-stats .s-post-summary--stats-item:last-child') .text() .replace(/\s+/g, ' ') .trim(), tags = $(element).find('.js-post-tag-list-item').text(); // Push question data to the parsedData array parsedData['questions'].push({ question, authorName, link: link.includes('https://') ? link : `https://stackoverflow.com${link}`, authorReputation, questionDescription, time, votes, answers, views, tags, }); }); // Return the parsed data object return parsedData; } ``` Execute the code above using the command below: ```bash node index.js ``` The JSON response provides parsed data from the Stack Overflow Questions page tagged with "javascript". ```json { "parsedData": { "title": "Questions tagged [javascript]", "description": "For questions about programming in ECMAScript (JavaScript/JS) and its different dialects/implementations (except for ActionScript). Note that JavaScript is NOT Java. Include all tags that are relevant to your question: e.g., [node.js], [jQuery], [JSON], [ReactJS], [angular], [ember.js], [vue.js], [typescript], [svelte], etc.", "totalQuestions": "2,522,888 questions", "questions": [ { "question": "How to add a data in Tabulator using addRow method as well as AJAX?", "authorName": "Ashok Ananthan", "link": "https://stackoverflow.com/questions/77871776/how-to-add-a-data-in-tabulator-using-addrow-method-as-well-as-ajax", "authorReputation": "30", "questionDescription": "I'm utilizing Tabulator version 5.5.4 in my application. 
I aim to incorporate data using the addRow method under specific conditions, and alternatively, I want to add data through AJAX in certain ...", "time": "asked 1 min ago", "votes": "0 votes", "answers": "0 answers", "views": "5 views", "tags": "javascripttabulator" }, { "question": "Shopify fulfillment of orders without tracking using JSON (in Javascript)", "authorName": "Buddy", "link": "https://stackoverflow.com/questions/77871735/shopify-fulfillment-of-orders-without-tracking-using-json-in-javascript", "authorReputation": "25", "questionDescription": "I’m trying to update an order in Shopify as fulfilled. I’m using Javascript code. I tried using both the order and the filfillment IDs. Each order has multiple line items but I want to update the ...", "time": "asked 9 mins ago", "votes": "0 votes", "answers": "0 answers", "views": "9 views", "tags": "javascriptshopifyshopify-api" }, { "question": "Argument type __Event is not assignable to parameter type Event", "authorName": "Alex Gusev", "link": "https://stackoverflow.com/questions/77871732/argument-type-event-is-not-assignable-to-parameter-type-event", "authorReputation": "1,646", "questionDescription": "This is my JavaScript code: class Dispatcher extends EventTarget {} const dsp = new Dispatcher(); dsp.addEventListener('SOME_EVENT', function (event) { console.log(event); }); const evt = new ...", "time": "asked 9 mins ago", "votes": "0 votes", "answers": "0 answers", "views": "4 views", "tags": "javascripttypescripttype-conversiontype-definition" }, { "question": "Saving the text from an input in an array [duplicate]", "authorName": "CaossM3n", "link": "https://stackoverflow.com/questions/77871721/saving-the-text-from-an-input-in-an-array", "authorReputation": "1", "questionDescription": "I want to copy and save the text of 2 inputs to an array. After saving, I want a button that can display the 2 texts from the inputs. 
I'm really new to JavaScript and I'm trying to find something on ...", "time": "asked 12 mins ago", "votes": "0 votes", "answers": "0 answers", "views": "15 views", "tags": "javascriptarrayssafearray" }, { "question": "Electron Forge with React doesn't render html success", "authorName": "William Hu", "link": "https://stackoverflow.com/questions/77871689/electron-forge-with-react-doesnt-render-html-success", "authorReputation": "15.7k", "questionDescription": "I'm following this link https://www.electronforge.io/guides/framework-integration/react-with-typescript which is adding React into Electron Forge project. The main codes are: index.html <body> ...", "time": "asked 17 mins ago", "votes": "0 votes", "answers": "0 answers", "views": "9 views", "tags": "javascriptreactjselectronelectron-forge" }, { "question": "React setState not updating the state object", "authorName": "juanlazy", "link": "https://stackoverflow.com/questions/77871630/react-setstate-not-updating-the-state-object", "authorReputation": "37", "questionDescription": "Debugging: Inside the functions declared in AuthProvider, I coded a console.log() and its working. The setState that updates the state object is not updating its value. Why? 
Seems like I'm missing ...", "time": "asked 27 mins ago", "votes": "0 votes", "answers": "2 answers", "views": "26 views", "tags": "javascriptreactjsreact-context" }, { "question": "Testing a modal with DETOX that is not in the code of the app", "authorName": "kristijan k", "link": "https://stackoverflow.com/questions/77871607/testing-a-modal-with-detox-that-is-not-in-the-code-of-the-app", "authorReputation": "1", "questionDescription": "So im trying to add some e2e test with Detox and Jest in react native app made with expo i have some problem with a modal that pops out when the app is lunched describe('player app Activation screen', ...", "time": "asked 31 mins ago", "votes": "0 votes", "answers": "0 answers", "views": "6 views", "tags": "javascriptreact-nativejestjsexpodetox" }, { "question": "How to test hooks with react router navigation?", "authorName": "Rumpelstinsk", "link": "https://stackoverflow.com/questions/77871585/how-to-test-hooks-with-react-router-navigation", "authorReputation": "3,147", "questionDescription": "I'm having some problems testing hooks with renderHook utility when the hook has some navigation logic. I'm not able to simulate a navigation on the test. For example lets take this sample hook export ...", "time": "asked 35 mins ago", "votes": "1 vote", "answers": "1 answer", "views": "18 views", "tags": "javascriptreactjsreact-hooksreact-routerreact-testing-library" }, { "question": "Uncaught SyntaxError: Unexpected token '<' (at App.js:28:5) [duplicate]", "authorName": "Andre Korosh Kordasti", "link": "https://stackoverflow.com/questions/77871562/uncaught-syntaxerror-unexpected-token-at-app-js285", "authorReputation": "409", "questionDescription": "I am getting a Uncaught SyntaxError: Unexpected token '<' (at App.js:28:5) when trying to call my React - App.js file from my chat.html using Firebase. 
They are in different directories and are ...", "time": "asked 40 mins ago", "votes": "-2 votes", "answers": "0 answers", "views": "21 views", "tags": "javascripthtmlreactjsfirebase" }, { "question": "Error UnknownAction: Cannot parse action at /api/auth/session", "authorName": "Mohammad Miras", "link": "https://stackoverflow.com/questions/77871561/error-unknownaction-cannot-parse-action-at-api-auth-session", "authorReputation": "576", "questionDescription": "I after update package.jsonencountered this error in the project pacage.json changes I get this error when I run the project: import { serverAuth$ } from '@builder.io/qwik-auth' import type { ...", "time": "asked 40 mins ago", "votes": "-2 votes", "answers": "0 answers", "views": "14 views", "tags": "javascriptqwik" }, { "question": "Cannot get the value of state in React Js", "authorName": "Suman Bhattacharya", "link": "https://stackoverflow.com/questions/77871553/cannot-get-the-value-of-state-in-react-js", "authorReputation": "1", "questionDescription": "I tried to code a contactList web-app by using react js, but I'm stuck with just one issue. I'm using useLocation hook for sending data from Cards.js to Profile.js, but I cannot get the state in ...", "time": "asked 42 mins ago", "votes": "-1 votes", "answers": "0 answers", "views": "17 views", "tags": "javascriptreactjswildwebdeveloper" }, { "question": "Password Pattern Feedback [closed]", "authorName": "rioki", "link": "https://stackoverflow.com/questions/77871539/password-pattern-feedback", "authorReputation": "6,056", "questionDescription": "I am using supergenpass mobile for a while and am fascinated by the little password feedback image pattern used to give you visual feedback if you typed the password correctly. 
I would like to use ...", "time": "asked 46 mins ago", "votes": "-1 votes", "answers": "0 answers", "views": "33 views", "tags": "javascriptdynamic-image-generation" }, { "question": "What is the right way of use CloudKit JS for a React web app?", "authorName": "Eduardo Giadans", "link": "https://stackoverflow.com/questions/77871491/what-is-the-right-way-of-use-cloudkit-js-for-a-react-web-app", "authorReputation": "1", "questionDescription": "everyone! I'm currently working in creating a new web app that will replicate the functionalities of an existing iOS and Mac app. However, since those apps rely on CloudKit to manage all user ...", "time": "asked 54 mins ago", "votes": "0 votes", "answers": "0 answers", "views": "9 views", "tags": "javascriptreactjscloudkitcloudkit-js" }, { "question": "javascript - discord.js slash command builder not registering commands or displaying commands in discord", "authorName": "aarush v", "link": "https://stackoverflow.com/questions/77871469/javascript-discord-js-slash-command-builder-not-registering-commands-or-displa", "authorReputation": "1", "questionDescription": "I already have another slash command not using the slash command builder that works, so I know all the authorization scopes are fine. When I try to register the command with the slash command builder, ...", "time": "asked 59 mins ago", "votes": "0 votes", "answers": "1 answer", "views": "16 views", "tags": "javascriptdiscorddiscord.js" }, { "question": "Firebase/React Native: App crashes on Android when attempting to generate a uploading tasks", "authorName": "JAD I.", "link": "https://stackoverflow.com/questions/77871444/firebase-react-native-app-crashes-on-android-when-attempting-to-generate-a-uplo", "authorReputation": "11", "questionDescription": "I'm implementing the image upload feature in a React Native app with Firebase. The code works well on iPhone; however, upon exporting the APK, the app crashes on the uploading screen. 
After some ...", "time": "asked 1 hour ago", "votes": "0 votes", "answers": "0 answers", "views": "8 views", "tags": "javascriptandroidtypescriptreact-nativefirebase-storage" } ], "currentPage": "1" } } ``` This structured JSON response provides comprehensive information about each question on the page, facilitating easy extraction and analysis of relevant data for further processing or display. ## VII. Conclusion Congratulations on navigating through the ins and outs of web scraping with JavaScript and Crawlbase! You've just unlocked a powerful set of tools to dive into the vast world of data extraction. The beauty of what you've learned here is that it's not confined to Stack Overflow – you can take these skills and apply them to virtually any website you choose. Now, when it comes to choosing your scraping approach, it's a bit like picking your favorite tool. The Scraper API is like the trusty swiss army knife – quick and versatile for general tasks. On the flip side, the Crawling API paired with Cheerio is more like a finely tuned instrument, giving you the freedom to play with the data in a way that suits your needs. 
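To show how the structured response above can be consumed, here is a minimal sketch (not part of the original tutorial); the `parsedData` object below is a small hand-made sample that mirrors the JSON shape shown, not live API output:

```javascript
// Minimal sketch: post-processing the parsed question data.
// NOTE: `parsedData` here is a tiny hand-made sample mirroring the
// JSON structure shown above, not a live Crawlbase response.
const parsedData = {
  title: 'Questions tagged [javascript]',
  questions: [
    {
      question: 'How to add a data in Tabulator using addRow method as well as AJAX?',
      votes: '0 votes',
      answers: '0 answers',
      views: '5 views',
    },
    {
      question: 'How to test hooks with react router navigation?',
      votes: '1 vote',
      answers: '1 answer',
      views: '18 views',
    },
  ],
};

// Keep only questions that already have at least one answer.
// The counts are strings like "1 answer", so parseInt() extracts the number.
const answered = parsedData.questions.filter(
  (q) => parseInt(q.answers, 10) > 0
);

// Build one compact report line per answered question.
const report = answered.map((q) => `${q.question} (${q.votes}, ${q.views})`);

console.log(report.join('\n'));
```

From here the same pattern extends naturally to sorting by votes or exporting the rows to CSV.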
If you wish to explore more projects like this guide, we recommend browsing the following links:

📜 [How to Scrape Bing SERP](https://crawlbase.com/blog/scrape-bing-search-results/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution)

📜 [How to Scrape Flipkart Products](https://crawlbase.com/blog/scrape-flipkart/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution)

📜 [How to Scrape Yelp](https://crawlbase.com/blog/scrape-yelp/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution)

📜 [How to Scrape Target.com](https://crawlbase.com/blog/scrape-target/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution)

📜 [How to Scrape Bloomberg](https://crawlbase.com/blog/scrape-bloomberg/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution)

Should you find yourself in need of assistance or have burning questions, our [support team](https://crawlbase.com/dashboard/support/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution) is here to help. Feel free to reach out, and happy scraping!

## VIII. Frequently Asked Questions

### Q: What is the difference between Scraper API and Crawling API?

**A:** [Scraper API](https://crawlbase.com/scraper-api-auto-parse-web-data/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution) is designed for a specific purpose – to retrieve the scraped response of any given page. It excels at simplifying the process of obtaining data from websites, providing a straightforward output tailored for quick integration. However, the key distinction lies in its limitation to delivering only the scraped response.

On the other hand, [Crawling API](https://crawlbase.com/crawling-api-avoid-captchas-blocks/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution) is a versatile tool crafted for general-purpose website crawling.
It offers a broader spectrum of customization options, allowing users to tailor the response according to their specific needs. Unlike Scraper API, Crawling API enables users to enhance their scraping capabilities by incorporating third-party parsers such as Cheerio. This flexibility makes Crawling API well-suited for a range of scraping scenarios, where customization and control over the response are essential.

### Q: Why should I use the Scraper API and Crawling API if I can build a scraper using Cheerio for free?

**A:** While Cheerio allows you to build scrapers for free, it comes with limitations, especially in handling bot detection imposed by websites. Scraping websites and sending numerous requests in a short timeframe can lead to IP bans, hindering the scraping process.

This is where the Crawlbase APIs, including Scraper API and Crawling API, shine. Both APIs are built on top of thousands of residential and datacenter proxies, providing the crucial benefit of anonymity while crawling. This not only safeguards your IP from potential blocks but also saves you considerable time and costs that would otherwise be required for setting up and managing massive IP servers independently.

In essence, the Scraper API and Crawling API offer a hassle-free solution for efficient and anonymous scraping, making them invaluable tools for projects where reliability and scale are crucial.

### Q. Is it legal to scrape Stack Overflow?

**A:** Yes, but it's important to be responsible about it. Think of web scraping like a tool – you can use it for good or not-so-good things. Whether it's okay or not depends on how you do it and what you do with the info you get. If you're scraping stuff that's not public and needs a login, that can be viewed as unethical and possibly illegal, depending on the specific situation.

In essence, while web scraping is legal, it must be done responsibly.
Always adhere to the website's terms of service, respect applicable laws, and use web scraping as a tool for constructive purposes. Responsible and ethical web scraping practices ensure that the benefits of this tool are utilized without crossing legal boundaries.
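One concrete habit that supports the responsible scraping described above is spacing out your requests instead of firing them all at once. The helper below is a generic sketch (it is not part of the Crawlbase libraries) that runs queued tasks one at a time with a fixed delay between them:

```javascript
// Generic sketch: run async tasks sequentially with a polite delay
// between them, so a target site is not hammered with rapid requests.
// (Illustrative only; not part of the Crawlbase SDK or Scraper API.)
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runPolitely(tasks, delayMs) {
  const results = [];
  for (const task of tasks) {
    results.push(await task()); // run one task at a time
    await sleep(delayMs);       // wait before starting the next one
  }
  return results;
}

// Example usage with stand-in "requests":
const tasks = [
  async () => 'page 1 fetched',
  async () => 'page 2 fetched',
];

runPolitely(tasks, 100).then((results) => console.log(results));
```

Combined with respecting `robots.txt` and the site's terms, a simple throttle like this goes a long way toward keeping a scraper on the ethical side.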
crawlbase
1,764,188
Single Responsibility Principle (SRP) By Using PHP : SOLID Principle
Introduction: Welcome to my article where we’ll explore a super important concept in...
0
2024-02-17T18:23:36
https://dev.to/razabangi/single-responsibility-principle-srp-by-using-php-solid-principle-14lm
oop, singleresponsibility, solidprinciples, php
## Introduction:

Welcome to my article where we’ll explore a super important concept in programming: the Single Responsibility Principle (SRP). It’s a key concept that helps keep our code neat and easy to understand. So, let’s dive in and make sure you’ve got it all figured out!

Robert C. Martin describes it:

> A class should have one and only one reason to change, meaning that a class should have only one job.

## What is the Single Responsibility Principle?

So, what’s this SRP thing all about? Well, think of it like this: imagine you have a bunch of toys in your room. Each toy should have its own job, right? Like, a ball is for bouncing, and a teddy bear is for cuddling. In programming, the SRP says that each part of your code should have just one job to do. It’s like keeping your toys organized so you can find them easily when you need them!

## SRP in Action: A Practical Example

Now, let’s dive into a real-life example to see how SRP works in action. We’ll look at a piece of code that’s not following the SRP and compare it to a version that does. Trust me, you’ll see a big difference!
```php
class Product {
    private string $name;
    private string $color;
    private int $price;

    public function __construct($name, $color, $price)
    {
        $this->name = $name;
        $this->color = $color;
        $this->price = $price;
    }

    public function getPrice()
    {
        return $this->price;
    }

    public function getProductDetails(): void
    {
        printf(
            "Product name: %s,\nColor: %s,\nPrice: %d",
            $this->name,
            $this->color,
            $this->price
        );
    }
}

class Invoice {
    private Product $product;
    private int $quantity;

    public function __construct(Product $product, int $quantity)
    {
        $this->product = $product;
        $this->quantity = $quantity;
    }

    public function calculateTotal(): int
    {
        return $this->product->getPrice() * $this->quantity;
    }

    public function invoicePrint(): void
    {
        echo "Invoice print.";
    }

    public function saveToDB(): void
    {
        echo "Invoice saved in DB.";
    }
}
```

Here we have a simple PHP code snippet with two classes: `Product` and `Invoice`. The `Product` class holds details about a product like its name, color, and price. Pretty straightforward, right? It's like having a label on a box that tells you exactly what's inside.

Now, let’s talk about the `Invoice` class. Its job is to handle invoices, but here's the catch: it's doing more than it should! According to the Single Responsibility Principle (SRP) by Robert C. Martin, each class should have just one reason to change. But in our case, the `Invoice` class is handling three things: calculating the total, printing the invoice, and saving invoice data to the database. It’s like having one person trying to do three different jobs at once — it’s bound to get messy!

To follow SRP, we’d want to split up these responsibilities into separate classes. That way, each class can focus on doing one thing really well, making our code easier to understand and maintain. By refactoring our code to follow SRP, we’ll end up with classes that are simpler, more flexible, and easier to work with. And that’s a win-win for everyone involved!
🚀💻

## Now the Single Responsibility Principle Comes into Action:

```php
class Product {
    private string $name;
    private string $color;
    private int $price;

    public function __construct($name, $color, $price)
    {
        $this->name = $name;
        $this->color = $color;
        $this->price = $price;
    }

    public function getPrice()
    {
        return $this->price;
    }

    public function getProductDetails(): void
    {
        printf(
            "Product name: %s,\nColor: %s,\nPrice: %d",
            $this->name,
            $this->color,
            $this->price
        );
    }
}

class Invoice {
    private Product $product;
    private int $quantity;

    public function __construct(Product $product, int $quantity)
    {
        $this->product = $product;
        $this->quantity = $quantity;
    }

    public function calculateTotal(): int
    {
        return $this->product->getPrice() * $this->quantity;
    }
}

class InvoicePrinter {
    private Invoice $invoice;

    public function __construct(Invoice $invoice)
    {
        $this->invoice = $invoice;
    }

    public function print(): void
    {
        echo "Invoice print";
    }
}

class InvoiceDB {
    private Invoice $invoice;

    public function __construct(Invoice $invoice)
    {
        $this->invoice = $invoice;
    }

    public function save(): void
    {
        echo "Invoice saved in DB.";
    }
}
```

In this updated code snippet, we’ve successfully applied the Single Responsibility Principle (SRP), keeping our classes focused and organized.

The `Product` class is responsible for holding product details such as name, color, and price. It's like a neat little package that tells us everything we need to know about a product.

Then, we have the `Invoice` class, which now has a single responsibility: calculating the total amount based on the product price and quantity. It's like a dedicated accountant crunching the numbers to give us the final bill.

But wait, there’s more! We’ve introduced two new classes: `InvoicePrinter` and `InvoiceDB`. These classes each handle a specific task related to invoices. The `InvoicePrinter` class is responsible for printing invoices, while the `InvoiceDB` class takes care of saving invoice data to the database.
By breaking down responsibilities into separate classes, we’ve made our code cleaner, easier to understand, and more flexible. Now, each class can focus on its own job without getting tangled up in unrelated tasks. It’s like having a well-oiled machine where each part does its job smoothly and efficiently.

With our code structured this way, future changes and updates will be a breeze. And that’s what following SRP is all about — making our codebase more robust and maintainable for the long haul. 🌟💻

## Conclusion:

In conclusion, by following the Single Responsibility Principle (SRP), we’ve made our code easier to work with and understand, kind of like organizing your toys into different boxes so you can find them easily when you need them! Breaking down responsibilities into smaller, focused tasks has made our codebase more flexible and adaptable, just like having a toolbox with specific tools for different jobs.

Whether you’re a beginner just starting out or an advanced developer, SRP helps keep our code neat and tidy, making it easier for everyone to collaborate and contribute. So, remember, when in doubt, keep it simple and focused — your future self (and your teammates) will thank you for it!

Happy coding! 🚀💻

Thank you for your precious time to read :)
razabangi
1,764,228
live free fis world cup halfpipe calgary snow rodeo 2024 live free streaming **5tth
In 2024, the world of live streaming is set to offer an abundance of free entertainment, with an...
0
2024-02-17T20:49:01
https://dev.to/bairdnee911/live-free-fis-world-cup-halfpipe-calgary-snow-rodeo-2024-live-free-streaming-5tth-4a2e
In 2024, the world of live streaming is set to offer an abundance of free entertainment, with an array of exciting features. Users can anticipate unlimited access to free live TV streams, providing a diverse range of content for viewers. Additionally, the year brings opportunities for free gift cards and random codes, making it an enticing period for those seeking complimentary rewards. The prospect of free gift cards adds a delightful element to the streaming experience, creating an avenue for users to enjoy more perks without any cost. CLICK HERE GET FREE➤GIFT CARD https://t.co/rQJp9Pey5t CLICK HERE GET FREE➤GIFT CARD https://t.co/rQJp9Pey5t The allure of free card random codes further enhances the excitement, offering users unexpected surprises and bonuses. With these enticing incentives, 2024 promises a unique blend of entertainment and rewards for streaming enthusiasts. As technology continues to advance, the streaming landscape evolves, presenting viewers with innovative and cost-free options. Embrace the upcoming year with open arms, as unlimited live free streaming TV, free gift cards, free card random codes, and much more await, transforming the digital entertainment experience for all. Get ready for a year filled with endless entertainment and surprises, all at the convenience of your fingertips.
bairdnee911
1,764,232
live track cycling occ oceanian championships 2024 live streaming free tv **#$5
In 2024, the world of live streaming is set to offer an abundance of free entertainment, with an...
0
2024-02-17T20:56:32
https://dev.to/bairdnee911/live-track-cycling-occ-oceanian-championships-2024-live-streaming-free-tv-5-gl6
In 2024, the world of live streaming is set to offer an abundance of free entertainment, with an array of exciting features. Users can anticipate unlimited access to free live TV streams, providing a diverse range of content for viewers. Additionally, the year brings opportunities for free gift cards and random codes, making it an enticing period for those seeking complimentary rewards. The prospect of free gift cards adds a delightful element to the streaming experience, creating an avenue for users to enjoy more perks without any cost. CLICK HERE GET FREE➤GIFT CARD https://t.co/rQJp9Pey5t CLICK HERE GET FREE➤GIFT CARD https://t.co/rQJp9Pey5t The allure of free card random codes further enhances the excitement, offering users unexpected surprises and bonuses. With these enticing incentives, 2024 promises a unique blend of entertainment and rewards for streaming enthusiasts. As technology continues to advance, the streaming landscape evolves, presenting viewers with innovative and cost-free options. Embrace the upcoming year with open arms, as unlimited live free streaming TV, free gift cards, free card random codes, and much more await, transforming the digital entertainment experience for all. Get ready for a year filled with endless entertainment and surprises, all at the convenience of your fingertips.
bairdnee911
1,764,242
How Laravel loads .env files
It is no secret that during Laravel’s bootstrapping it is loading its environment variables. But how?...
0
2024-02-17T21:45:32
https://dev.to/blitzcry/how-laravel-loads-env-files-4c78
laravel, php, webdev, programming
It is no secret that Laravel loads its environment variables during bootstrapping. But how? And where?

That "where" is in `Illuminate\Foundation\Bootstrap\LoadEnvironmentVariables`.

And the how? Pretty straightforward, actually. As we can see in the following snippet, there are a couple of steps that Laravel takes. Let’s follow them one by one.

```php
class LoadEnvironmentVariables
{
    /**
     * Bootstrap the given application.
     *
     * @param  \Illuminate\Contracts\Foundation\Application  $app
     * @return void
     */
    public function bootstrap(Application $app)
    {
        if ($app->configurationIsCached()) {
            return;
        }

        $this->checkForSpecificEnvironmentFile($app);

        try {
            $this->createDotenv($app)->safeLoad();
        } catch (InvalidFileException $e) {
            $this->writeErrorAndDie($e);
        }
    }
}
```

## Is the configuration cached?

This happens in the Application Contract:

```php
/**
 * Determine if the application configuration is cached.
 *
 * @return bool
 */
public function configurationIsCached()
{
    return is_file($this->getCachedConfigPath());
}

/**
 * Get the path to the configuration cache file.
 *
 * @return string
 */
public function getCachedConfigPath()
{
    return $this->normalizeCachePath('APP_CONFIG_CACHE', 'cache/config.php');
}
```

As you can see, `getCachedConfigPath()` returns the path to the `config.php` file, which will be located in the `cache/` directory, and `configurationIsCached()` checks whether that file exists. If it finds it there, the program is done and it moves on with its life…

## What if it’s not cached?

Next, it will run:

`$this->checkForSpecificEnvironmentFile($app);`

Doing a series of checks:

```php
protected function checkForSpecificEnvironmentFile($app)
{
    if ($app->runningInConsole() &&
        ($input = new ArgvInput)->hasParameterOption('--env') &&
        $this->setEnvironmentFilePath($app, $app->environmentFile().'.'.$input->getParameterOption('--env'))) {
        return;
    }

    $environment = Env::get('APP_ENV');

    if (! 
$environment) {
        return;
    }

    $this->setEnvironmentFilePath(
        $app, $app->environmentFile().'.'.$environment
    );
}
```

If the app is running in the console and we specify the env file it should use, it will set the environment file to what we specified in the command.

From the docs:

> Before loading your application’s environment variables, Laravel determines if an APP_ENV environment variable has been externally provided or if the --env CLI argument has been specified. If so, Laravel will attempt to load an .env.[APP_ENV] file if it exists. If it does not exist, the default .env file will be loaded.

If no `--env` CLI argument or `APP_ENV` value was provided, it sets the environment file path to the default one.

Now what? Do we have the config?

Next, it creates an instance of the Dotenv component!

> Laravel Dotenv is a component of the Laravel framework designed to simplify the management of environment variables within Laravel applications. It is based on the PHP dotenv library and allows developers to define configuration settings for their applications in a .env file.

```php
try {
    $this->createDotenv($app)->safeLoad();
} catch (InvalidFileException $e) {
    $this->writeErrorAndDie($e);
}
```

The `createDotenv` method is simple:

```php
protected function createDotenv($app)
{
    return Dotenv::create(
        Env::getRepository(),
        $app->environmentPath(),
        $app->environmentFile()
    );
}
```

In my case, the arguments it received in the create call are:

- Dotenv\Repository\AdapterRepository
- “/home/user/example-app”
- “.env”

The creation of Dotenv is a bit more complicated, but my goal is not to dive into it, but rather into how it will load the config after we provide it!

This happens in the `safeLoad()` part, which is executed in the Illuminate Application file:

```php
public function safeLoad()
{
    try {
        return $this->load();
    } catch (InvalidPathException $e) {
        // suppressing exception
        return [];
    }
}
```

Woah, just calls load?
```php
public function load()
{
    $entries = $this->parser->parse($this->store->read());

    return $this->loader->load($this->repository, $entries);
}
```

The only part of this code that can throw the `InvalidPathException` is the `store->read()` method. So we are only suppressing the errors in cases in which the file is no longer there. Who knows why? Maybe your CI/CD did something during your deploy 👀

At least we handle the `InvalidFileException` by displaying an error and exiting.

```php
protected function writeErrorAndDie(InvalidFileException $e)
{
    $output = (new ConsoleOutput)->getErrorOutput();

    $output->writeln('The environment file is invalid!');
    $output->writeln($e->getMessage());

    http_response_code(500);

    exit(1);
}
```

It’s using the `ConsoleOutput` class from `Symfony\Component\Console\Output`, nothing out of the ordinary here, as most things in Laravel are based on Symfony.

## What now?

Well, that’s it. That’s how Laravel reads your .env file. But this is not the end, rather just the beginning, as there are many other things Laravel will want to bootstrap: Configurations, Exception Handlers, Facades, Service Providers, and so on.

I’ll explain them one by one in the future :)
blitzcry
1,764,323
Unleashing the Power of CSS for Mind-Blowing 3D Effects
Introduction: In the world of web design, using 3D effects has become super popular for creating...
0
2024-02-18T01:07:58
https://dev.to/ackomjnr/unleashing-the-power-of-css-for-mind-blowing-3d-effects-1dpn
webdev, programming, css, web3
<h2>Introduction:</h2>

In the world of web design, using 3D effects has become super popular for creating awesome user experiences. With CSS 3D transformations, designers can make their websites look cooler by adding depth, perspective, and interactivity. This article will show you how to create advanced 3D effects using CSS, teaching you how to make complex shapes, realistic perspectives, and cool animations. Let's dive into the world of CSS and create some amazing web experiences!

<h2>Table of Contents:</h2>

<h3>1. Introduction to Advanced CSS Transformations for Creating 3D Effects</h3>
<h3>2. Understanding the Perspective Property and Its Impact on Transformations</h3>
<h3>3. Using the rotateX, rotateY, and rotateZ Functions for Dynamic Effects</h3>
<h3>4. Applying the Scale and Skew Functions to Manipulate Elements in 3D Space</h3>
<h3>5. Creating Realistic Depth with the translateZ Function</h3>
<h3>6. Combining Multiple Transformations to Achieve Complex 3D Effects</h3>
<h3>7. Utilizing Keyframes and Animations for Interactive 3D Designs</h3>
<h3>8. Tips for Optimizing Performance When Using Advanced CSS Transformations</h3>
<h3>9. Examples of Stunning 3D Effects Created with CSS Transformations</h3>
<h3>10. Conclusion and Resources for Further Learning About Advanced CSS Techniques</h3>

<h2>Understanding the Perspective Property and Its Impact on Transformations:</h2>

The `perspective` property in CSS helps create the illusion of depth in 3D transformations. By adding `perspective` to a parent element, all its children will look like they're in 3D space.

```css
.parent {
  perspective: 1000px; /* Define the perspective point */
}

.child {
  transform: rotateY(45deg);
}
```

<h2>Using the rotateX, rotateY, and rotateZ Functions for Dynamic Effects:</h2>

These functions let you rotate elements around different axes, creating cool 3D effects. For example, `rotateX` rotates around the X-axis, `rotateY` around the Y-axis, and `rotateZ` around the Z-axis.
```css
.box {
  transform: rotateX(45deg) rotateY(45deg) rotateZ(45deg);
}
```

<h2>Applying the Scale and Skew Functions to Manipulate Elements in 3D Space:</h2>

The `scale` function changes the size of an element along different axes, while the `skew` function tilts elements. Mixing scale and skew with other transformations can make unique 3D effects.

```css
.element {
  transform: scale(1.5, 0.8) skew(10deg, 20deg);
}
```

<h2>Creating Realistic Depth with the translateZ Function:</h2>

The `translateZ` function moves an element along the Z-axis, creating depth effects. This can make elements look closer or farther away from the viewer.

```css
.object {
  transform: translateZ(100px);
}
```

<h2>Combining Multiple Transformations to Achieve Complex 3D Effects:</h2>

By using different transformation functions together, you can make intricate 3D effects. Trying out different combinations can lead to endless creative possibilities.

```css
.shape {
  transform: rotateX(45deg) rotateY(45deg) translateZ(100px);
}
```

<h2>Utilizing Keyframes and Animations for Interactive 3D Designs:</h2>

CSS keyframes and animations help create dynamic 3D designs. By setting keyframes at different points, elements can smoothly transition between states.

```css
@keyframes rotate {
  0% {
    transform: rotateY(0deg);
  }
  100% {
    transform: rotateY(360deg);
  }
}
```

<h2>Tips for Optimizing Performance When Using Advanced CSS Transformations:</h2>

While CSS transformations look cool, they can slow down a website if overused. To keep things running smoothly, try not to transform too many elements and avoid complex transformations on big elements.

<h2>Examples of Stunning 3D Effects Created with CSS Transformations:</h2>

1. Interactive 3D carousel
2. Parallax scrolling effects
3. 3D card flips
4. Dynamic 3D navigation menus
5. Transforming logos and icons

<h2>Conclusion:</h2>

Learning advanced CSS transformations lets you create amazing 3D effects on the web.
By understanding perspective, using transformation functions, and adding keyframes and animations, you can make immersive user experiences. Experiment, be creative, and remember to optimize performance to take your 3D designs to the next level!

<h2>Resources for Further Learning About Advanced CSS Techniques:</h2>

- [MDN Web Docs: CSS Transforms](https://developer.mozilla.org/en-US/docs/Web/CSS/transform)
- [CSS-Tricks: Understanding CSS 3D Transforms](https://css-tricks.com/almanac/properties/t/transform/)
- [Codrops: CSS 3D Transforms Techniques](https://tympanus.net/codrops/)
ackomjnr
1,764,334
How to Solve Music Teacher or Musician Breadcrumb SEO
Here is the proven Breadcrumb Code Snippets solution for Music Teachers or musicians so I request to...
0
2024-02-18T02:44:10
https://dev.to/seosiri/how-to-solve-music-teacher-or-musician-breadcrumb-seo-5dbk
seo, breadcrub, webdev, code
Here is a proven Breadcrumb Code Snippets solution for music teachers or musicians, so please don't worry about Breadcrumb SEO: this post is a practical, problem-solving walkthrough with 100% crawlable multiple-breadcrumb-trail code (follow the coding box). Multiple breadcrumb trail: You can specify multiple breadcrumb trails for a single page if there are multiple ways to navigate to your site. Here's one breadcrumb trail that leads to a page for music teacher Shafique Ahmed and includes 3 items from three different platforms. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x730ye30tinwea3lpv8t.png) Breadcrumb Code Snippets test result after inputting the Breadcrumb SEO code for the desired items appearing on Google's SERPs: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6hl8610das7v3mi40lt.png) > "Breadcrumb combines content and UX with the search intent (search query matching for ML bots/search crawl bots) and the searcher's interest." -Momenul Ahmad That means: decide what you want to show in the SERPs to navigate searchers to what they desire, but keep in mind that the content should be consistent and unique; otherwise the benefits of Breadcrumb SEO go to waste. Let's evaluate the benefits of breadcrumbing content: A breadcrumb trail on a page can help users understand and explore a site. It can also help search engines understand the site's hierarchy and how the pages connect. Here are some benefits of breadcrumbs for SEO: #seo #seosiri #momenulahmad #breadcrumbseo #musicseo #youtubeseo #musicsiteseo #musicianseo #musicteacher for Shafique Ahmed's #website #breadcrumbseo- nazrulsangeet.com [Read more](https://www.seosiri.com/2024/02/musician-breadcrumb-seo.html)
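Since the post's code snippets appear only as screenshots, here is a hedged sketch of what multiple crawlable breadcrumb trails look like as schema.org `BreadcrumbList` JSON-LD. The names and example.com URLs are placeholders, not the actual nazrulsangeet.com markup:

```html
<script type="application/ld+json">
[
  {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
      { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
      { "@type": "ListItem", "position": 2, "name": "Music Teachers", "item": "https://example.com/music-teachers/" },
      { "@type": "ListItem", "position": 3, "name": "Shafique Ahmed" }
    ]
  },
  {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
      { "@type": "ListItem", "position": 1, "name": "Artists", "item": "https://example.com/artists/" },
      { "@type": "ListItem", "position": 2, "name": "Shafique Ahmed" }
    ]
  }
]
</script>
```

Each `BreadcrumbList` is one trail; the final `ListItem` may omit `item` because it represents the current page.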
seosiri
1,764,420
🚀 Join the crypto revolution with Binance! 🌟
Are you ready to dive into the exciting world of cryptocurrency trading? Look no further than...
0
2024-02-18T06:25:03
https://dev.to/louisl247/join-the-crypto-revolution-with-binance-9lb
cryptocurrency, trading, investing, binance
Are you ready to dive into the exciting world of cryptocurrency trading? Look no further than Binance, the leading global cryptocurrency exchange platform! Whether you're a seasoned trader or just starting out, Binance offers a user-friendly interface, a wide range of cryptocurrencies to trade, and cutting-edge security features to keep your assets safe. Use my referral code [CPA_000NF6BA8C] when signing up, and we both can earn rewards together! It's a win-win situation – you get started on your crypto journey with a bonus, and I get a little something too. Let's grow our portfolios together! Here's how to get started: http://tinyurl.com/3kdknh8n Sign up for a Binance account using my referral code: [CPA_000NF6BA8C]. Start trading and exploring the vast world of cryptocurrencies. Enjoy the benefits of being part of one of the largest and most trusted crypto exchanges worldwide. Don't miss out on this opportunity to join the crypto community and potentially earn some extra rewards along the way. See you on Binance! [Join Now](http://tinyurl.com/3kdknh8n)
louisl247
1,764,541
Decoding Influencers and Influencer Marketing in 2024
First things first, who are these influencers? Think beyond just celebrities and reality stars....
0
2024-02-18T10:54:16
https://dev.to/kiranraotweets/decoding-influencers-and-influencer-marketing-in-2024-4o99
First things first, who are these influencers? Think beyond just celebrities and reality stars. Today’s influencers come in all shapes and sizes, from micro-influencers with passionate niche followings to mega-influencers commanding millions of eyeballs. They specialize in specific topics, from fashion and beauty to gaming and travel, fostering genuine connections with their communities. The numbers speak for themselves: 84% of marketers leverage influencer marketing in some form (Influencer Marketing Hub, 2023). The influencer marketing industry is expected to reach a staggering $16.4 billion by 2023 (Insider Intelligence, 2022). 72% of consumers trust recommendations from influencers over traditional advertising (Mediakix, 2023). Influencer: An influencer is an individual on social media or other online platforms who has built a reputation and following around a particular niche or topic. They have the ability to sway the opinions and purchasing decisions of their audience. Influencer Marketing: This is a form of marketing that involves collaborating with influencers to promote your brand or product. It relies on the trust and credibility that influencers have built with their audience to reach new customers and achieve marketing goals. How it Works: Identify Your Target Audience: Understand who you want to reach with your marketing campaign. Find Relevant Influencers: Research and identify influencers who align with your brand, target audience, and niche. Set Campaign Goals: Decide what you want to achieve with your campaign, like brand awareness, website traffic, or sales. Develop a Partnership: Contact influencers and propose a collaboration that benefits both parties. Content Creation: Work with the influencer to create engaging content that promotes your brand. Track and Analyze Results: Monitor campaign performance and measure the impact of the influencer collaboration. 
Hiring Influencer Marketing: Platforms: Several platforms can help you connect with influencers, like AspireIQ, Buzzsumo, and HypeAuditor. Agencies: You can partner with influencer marketing agencies that handle the entire process for you. Direct Outreach: Reach out to potential influencers directly via email or social media. Advantages of Digital Marketing: Targeted Reach: Reach specific demographics and interests more effectively than traditional marketing. Measurable Results: Track and analyze campaign performance in real-time. Cost-Effective: Reach a large audience for less compared to traditional advertising. Interactive Engagement: Connect and interact with your audience directly. Global Reach: Target customers worldwide without geographical limitations. Disadvantages of Digital Marketing: Competition: High competition means standing out can be challenging. Constant Change: Keeping up with evolving platforms and trends requires flexibility. Negative Feedback: Publicly exposed to potential criticism and negative reviews. Technical Expertise: Requires some technical knowledge for effective implementation. Attribution Challenges: Measuring the ROI of individual campaigns can be complex. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygzg02umyla9320daekx.png)
kiranraotweets
1,764,704
Exploring the Wonders of Diffusion Technology
Exploring the Wonders of Diffusion Technology Unveiling the Mysteries Behind...
0
2024-02-18T15:20:22
https://dev.to/shreyaslyzr/exploring-the-wonders-of-diffusion-technology-3ml5
diffusiontechnology, innovation, sustainablesolutions, futuretech
# Exploring the Wonders of Diffusion Technology ## Unveiling the Mysteries Behind Modern Innovation ### **What is Diffusion Technology?** Diffusion technology refers to the process by which molecules move from an area of high concentration to an area of low concentration. This simple yet fascinating process is the cornerstone of various breakthroughs in sectors ranging from healthcare to environmental science. ### **Applications in Everyday Life** - **Healthcare:** In drug delivery systems, diffusion technology enables controlled release of medications within the body. - **Water Purification:** Utilizes diffusion to remove contaminants, providing clean drinking water. - **Electronics:** Critical in semiconductor manufacturing, where it's used to deposit materials onto silicon wafers. ### **Future Prospects** The potential of diffusion technology is boundless. With continued research and development, it promises to bring more efficient, sustainable solutions to some of the world's most pressing problems. --- While Markdown does not inherently support the creation of visually intricate designs or embedded CSS styling, when translated to an HTML document, you can add classes and IDs to Markdown elements and then style them using CSS. For instance: ```html <h1 class="title">Exploring the Wonders of Diffusion Technology</h1> ``` You would then use a separate CSS file or a `<style>` tag within your HTML to add styles: ```css .title { color: #007BFF; text-align: center; } ``` **Hashtags for Social Sharing:** - #DiffusionTechnology - #Innovation - #SustainableSolutions - #FutureTech Converting this into a fully styled HTML blog would involve more extensive HTML and CSS knowledge, far beyond the scope of this Markdown demonstration. I hope this provides a clear illustration of how to draft an engaging blog post on diffusion technology using Markdown, with a nod towards how it might be further styled with HTML for a beautiful web presence.
shreyaslyzr
1,764,733
Bounce Animation
This is a bounce animation created by me using purely HTML &amp; CSS and it contains no JavaScript
0
2024-02-18T16:22:21
https://dev.to/huzaifaakhtar2/bounce-animation-1ino
codepen
<p>This is a bounce animation created by me using purely HTML &amp; CSS and it contains no JavaScript</p> {% codepen https://codepen.io/HuzaifaAkhtar2/pen/abMXXEx %}
huzaifaakhtar2
1,764,754
C# System.Text.Json
Introduction When working with json using strongly typed classes and perfect json using...
22,100
2024-02-19T22:28:35
https://dev.to/karenpayneoregon/c-systemtextjson-37m1
csharp, dotnetcore, json, codenewbie
![razor pages](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uy58j3q0bcfmxtu3ifz.png) ## Introduction When working with json using strongly typed classes and well-formed json, [System.Text.Json](https://learn.microsoft.com/en-us/dotnet/api/system.text.json?view=net-8.0) functionality is for the most part easy to use, although there can be roadblocks, which this article will address. ## Official documentation Microsoft has documented working with json in the following two links, [serialize](https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/how-to) and [deserialize](https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/deserialization) json, which are well worth taking some time to review. These two links are where things started for the following topics. Even with great documentation there are still things that need to be drilled into. {% cta https://github.com/karenpayneoregon/json-samples %} GitHub Source code {% endcta %} ## JsonSerializerOptions [JsonSerializerOptions](https://learn.microsoft.com/en-us/dotnet/api/system.text.json.jsonserializeroptions?view=net-8.0) is a class that provides a way to specify various serialization and deserialization behaviors. For ASP.NET Core ```csharp services.AddControllers() .AddJsonOptions(options => { ... }); ``` Example ```csharp builder.Services.AddControllers().AddJsonOptions(options => { options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter()); options.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase; }); ``` Both of the above will be discussed later. In the code samples provided, some have options defined in the method for ease of experimentation, while others use options from a class. 
Example with options in the method ## Working with lowercased property names ```csharp public static void CasingPolicy() { JsonSerializerOptions options = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase, WriteIndented = true }; List<Product>? list = JsonSerializer.Deserialize<List<Product>>(json, options); } ``` While the proper way would be ```csharp public static void CasingPolicy() { List<Product>? list = JsonSerializer.Deserialize<List<Product>>(json, JsonHelpers.LowerCaseOptions); } ``` JsonHelpers is a class in a separate class project with several predefined configurations. ```csharp public class JsonHelpers { public static JsonSerializerOptions CaseInsensitiveOptions = new() { PropertyNameCaseInsensitive = true }; public static readonly JsonSerializerOptions WebOptions = new(JsonSerializerDefaults.Web) { WriteIndented = true }; public static JsonSerializerOptions WithWriteIndentOptions = new() { WriteIndented = true }; public static JsonSerializerOptions WithWriteIndentAndIgnoreReadOnlyPropertiesOptions = new() { WriteIndented = true, IgnoreReadOnlyProperties = true }; public static JsonSerializerOptions EnumJsonSerializerOptions = new() { Converters = { new JsonStringEnumConverter() }, WriteIndented = true }; public static JsonSerializerOptions LowerCaseOptions = new() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase, WriteIndented = true }; public static List<T>? Deserialize<T>(string json) => JsonSerializer.Deserialize<List<T>>(json, WebOptions); public class LowerCaseNamingPolicy : JsonNamingPolicy { public override string ConvertName(string name) => name.ToLower(); } } ``` For desktop apps these are typically set up at class level as static read-only properties. ## Class property casing Most times when deserializing json, property names are in the following format: Id, FirstName, LastName, BirthDate etc. But what if the json uses id, firstname, lastname, birthdate? 
For this, we are working with the following model ```csharp public class Product { public int Id { get; set; } public string Name { get; set; } } ``` And are receiving the following json. ```json [ { "name": "iPhone max", "id": 1 }, { "name": "iPhone case", "id": 2 }, { "name": "iPhone ear buds", "id": 3 } ] ``` **Code** - SerializeLowerCasing method generates the json shown above - DeserializeLowerCasing method deserializes the json above and displays the json in a console window **Important** The deserialization options must, in this case, match the options used when serializing; think of it as matching the options from an external source. ```csharp public static void CasingPolicy() { var json = SerializeLowerCasing(); DeserializeLowerCasing(json); } public static string SerializeLowerCasing() { return JsonSerializer.Serialize( new List<Product> { new() { Id = 1, Name = "iPhone max"}, new() { Id = 2, Name = "iPhone case" }, new() { Id = 3, Name = "iPhone ear buds" } }, JsonHelpers.LowerCaseOptions); } public static void DeserializeLowerCasing(string json) { List<Product>? products = JsonSerializer.Deserialize<List<Product>>(json, JsonHelpers.LowerCaseOptions); WriteOutJson(json); Console.WriteLine(); Console.WriteLine(); foreach (var product in products) { Console.WriteLine($"{product.Id,-3}{product.Name}"); } } ``` ![results of serialize and deserialize operations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4r3ggjngx8zudf6814sh.png) ## Working with Enum It's commonplace to use an Enum to represent options for a property; for this there is the [JsonStringEnumConverter](https://learn.microsoft.com/en-us/dotnet/api/system.text.json.serialization.jsonstringenumconverter?view=net-8.0) class, which converts enumeration values to and from strings. Example using the following model. 
```csharp public class PersonWithGender { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public Gender Gender { get; set; } public override string ToString() => $"{Id,-4}{FirstName,-12} {LastName}"; } ``` The option for this is in the JsonHelpers class. ```csharp public static JsonSerializerOptions EnumJsonSerializerOptions = new() { Converters = { new JsonStringEnumConverter() }, WriteIndented = true }; ``` Usage ```csharp public static void WorkingWithEnums() { List<PersonWithGender> people = CreatePeopleWithGender(); var json = JsonSerializer.Serialize(people, JsonHelpers.EnumJsonSerializerOptions); WriteOutJson(json); List<PersonWithGender>? list = JsonSerializer.Deserialize<List<PersonWithGender>>( json, JsonHelpers.EnumJsonSerializerOptions); Console.WriteLine(); Console.WriteLine(); list.ForEach(Console.WriteLine); } ``` Results ![output from working with an enum](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/785g8ocpikrt2aygsihi.png) Note: if deserialization is missing the options, a runtime exception is thrown. ![deserialization missing options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zp8ophb93ao6ef3wakie.png) If options are not defined for serialization, the numeric values are provided, not the actual Enum members. ![serialization missing options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdi7jkf94udwz12ssxu7.png) With ASP.NET Core and Razor Pages, using the same model, we can serialize and deserialize as done on the desktop. 
```csharp public class JsonSamples { public List<PersonWithGender> CreatePeopleWithGender() => [ new() { Id = 1, FirstName = "Anne", LastName = "Jones", Gender = Gender.Female }, new() { Id = 2, FirstName = "John", LastName = "Smith", Gender = Gender.Male }, new() { Id = 3, FirstName = "Bob", LastName = "Adams", Gender = Gender.Unknown } ]; public void WorkingWithEnums() { var options = new JsonSerializerOptions { Converters = { new JsonStringEnumConverter() }, WriteIndented = true }; List<PersonWithGender> people = CreatePeopleWithGender(); var json = JsonSerializer.Serialize(people); List<PersonWithGender>? list = JsonSerializer.Deserialize<List<PersonWithGender>>(json); } } ``` Or use the options from the JsonHelpers class ```csharp public void WorkingWithEnums() { List<PersonWithGender> people = CreatePeopleWithGender(); var json = JsonSerializer.Serialize(people, JsonHelpers.EnumJsonSerializerOptions); List<PersonWithGender>? list = JsonSerializer.Deserialize<List<PersonWithGender>>(json, JsonHelpers.EnumJsonSerializerOptions); } ``` To get the following. ```json [ { "Id": 1, "FirstName": "Anne", "LastName": "Jones", "Gender": "Female" }, { "Id": 2, "FirstName": "John", "LastName": "Smith", "Gender": "Male" }, { "Id": 3, "FirstName": "Bob", "LastName": "Adams", "Gender": "Unknown" } ] ``` The other option is adding options through WebApplicationBuilder in Program.cs ```csharp builder.Services.AddControllers().AddJsonOptions(options => { options.JsonSerializerOptions.PropertyNameCaseInsensitive = true; options.JsonSerializerOptions.WriteIndented = true; options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter()); }); ``` Then alter the last method. ```csharp public void WorkingWithEnums(JsonOptions options) { List<PersonWithGender> people = CreatePeopleWithGender(); var json = JsonSerializer.Serialize(people, options.JsonSerializerOptions); List<PersonWithGender>? 
list = JsonSerializer.Deserialize<List<PersonWithGender>>(json, options.JsonSerializerOptions); } ``` In this case, in index.cshtml.cs we set up the options using dependency injection. ```csharp public class IndexModel : PageModel { private readonly IOptions<JsonOptions> _options; public IndexModel(IOptions<JsonOptions> options) { _options = options; } ``` ## Spaces in property names There may be cases where json data has properties with spaces. ```json [ { "Id": 1, "First Name": "Mary", "LastName": "Jones" }, { "Id": 2, "First Name": "John", "LastName": "Burger" }, { "Id": 3, "First Name": "Anne", "LastName": "Adams" } ] ``` For this, specify the property name from json with [JsonPropertyNameAttribute](https://learn.microsoft.com/en-us/dotnet/api/system.text.json.serialization.jsonpropertynameattribute?view=net-8.0) as shown below. ```csharp public class Person { public int Id { get; set; } [JsonPropertyName("First Name")] public string FirstName { get; set; } [JsonPropertyName("Last Name")] public string LastName { get; set; } public string FullName => $"{FirstName} {LastName}"; /// <summary> /// Used for demonstration purposes /// </summary> public override string ToString() => $"{Id,-4}{FirstName, -12} {LastName}"; } ``` C# Code (from provided code in a GitHub repository) ```csharp public static void ReadPeopleWithSpacesInPropertiesOoops() { var people = JsonSerializer.Deserialize<List<Person>>(File.ReadAllText("Json\\people2.json")); foreach (var person in people) { Console.WriteLine(person); } } ``` For more details on this and more, like hyphens in property names, see the following well-written [Microsoft documentation](https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/customize-properties?pivots=dotnet-8-0). ## Read string values as int There may be cases where a json file provides string values while your model expects an int. 
```json [ { "Id": "1", "Name": "iPhone max" }, { "Id": "2", "Name": "iPhone case" }, { "Id": "3", "Name": "iPhone ear buds" } ] ``` Model where Id is an int but in json a string. ```csharp public class Product { public int Id { get; set; } public string Name { get; set; } } ``` For this, use the following in desktop projects, or see below for another option using an attribute on the Id property. ```csharp var jsonOptions = new JsonSerializerOptions() { NumberHandling = JsonNumberHandling.AllowReadingFromString }; ``` [JsonSerializerOptions.NumberHandling](https://learn.microsoft.com/en-us/dotnet/api/system.text.json.jsonserializeroptions.numberhandling?view=net-8.0) _gets or sets an object that specifies how number types should be handled when serializing or deserializing_. In this ASP.NET Core sample ```csharp public async Task<List<Product>> ReadProductsWithIntAsStringFromWeb() { var json = await Utilities.ReadJsonAsync( "https://raw.githubusercontent.com/karenpayneoregon/jsonfiles/main/products.json"); ProductsStringAsInt = json; return JsonSerializer.Deserialize<List<Product>>(json, JsonHelpers.WebOptions)!; } ``` JsonHelpers.WebOptions uses JsonSerializerDefaults.Web, which by default reads string-quoted numbers as numeric. ![Shows JsonSerializerDefaults.Web defaults](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3bdllmizvgh2rsxpdapd.png) Another method is to set an attribute, for desktop or web. ```csharp public class Product { [JsonNumberHandling(JsonNumberHandling.AllowReadingFromString)] public int Id { get; set; } public string Name { get; set; } } ``` ## Save decimal as two decimal places Given the following, created using the Bogus NuGet package. 
![screen shot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2zep2gqks289y81y4l9b.png) We want ```json [ { "ProductId": 12, "ProductName": "Awesome Fresh Salad", "UnitPrice": "2.99", "UnitsInStock": 5 }, { "ProductId": 19, "ProductName": "Awesome Wooden Pizza", "UnitPrice": "7.28", "UnitsInStock": 3 }, { "ProductId": 15, "ProductName": "Ergonomic Concrete Gloves", "UnitPrice": "2.38", "UnitsInStock": 2 } ] ``` This is done using a custom converter as follows. ```csharp // Author https://colinmackay.scot/tag/system-text-json/ public class FixedDecimalJsonConverter : JsonConverter<decimal> { public override decimal Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) { string? stringValue = reader.GetString(); return string.IsNullOrWhiteSpace(stringValue) ? default : decimal.Parse(stringValue, CultureInfo.InvariantCulture); } public override void Write(Utf8JsonWriter writer, decimal value, JsonSerializerOptions options) { string numberAsString = value.ToString("F2", CultureInfo.InvariantCulture); writer.WriteStringValue(numberAsString); } } ``` The following code gets a list and saves it to a json file, producing the output above. ```csharp List<ProductItem> results = products .Select<Product, ProductItem>(container => container).ToList(); if (results.Any()) { // process checked File.WriteAllText("Products.json", JsonSerializer.Serialize(results, new JsonSerializerOptions { WriteIndented = true, Converters = { new FixedDecimalJsonConverter() } })); } ``` Since the property UnitPrice is stored as a string, we use the same technique already shown by setting NumberHandling = JsonNumberHandling.AllowReadingFromString. 
```csharp // process checked File.WriteAllText("Products.json", JsonSerializer.Serialize(results, new JsonSerializerOptions { WriteIndented = true, Converters = { new FixedDecimalJsonConverter() } })); var jsonOptions = new JsonSerializerOptions() { NumberHandling = JsonNumberHandling.AllowReadingFromString }; var json = File.ReadAllText("Products.json"); List<Product>? productsFromFile = JsonSerializer.Deserialize<List<Product>>(json, jsonOptions); ``` ## Ignore property There may be times when a property should not be included in serialization or deserialization. Use the [JsonIgnore](https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/ignore-properties) attribute; here BirthDate will be ignored. ```csharp public class PersonIgnoreProperty : Person1 { [JsonIgnore] public DateOnly BirthDate { get; set; } } ``` Which can be controlled with the [JsonIgnoreCondition Enum](https://learn.microsoft.com/en-us/dotnet/api/system.text.json.serialization.jsonignorecondition?view=net-8.0) ```csharp public class PersonIgnoreProperty : Person1 { [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingDefault)] public DateOnly BirthDate { get; set; } } ``` ## Immutable types By default, System.Text.Json uses the default public parameterless constructor. However, you can tell it to use a parameterized constructor, which makes it possible to deserialize an immutable class or struct. The following demonstrates deserializing a struct where the constructor is decorated with the [JsonConstructor](https://learn.microsoft.com/en-us/dotnet/api/system.text.json.serialization.jsonconstructorattribute?view=net-8.0) attribute. 
```csharp public readonly struct PersonStruct { public int Id { get; } public string FirstName { get; } public string LastName { get; } [JsonConstructor] public PersonStruct(int id, string firstName, string lastName) => (Id, FirstName, LastName) = (id, firstName, lastName); /// <summary> /// Used for demonstration purposes /// </summary> public override string ToString() => $"{Id, -4}{FirstName,-12}{LastName}"; } ``` Deserializing from static json. ```csharp public static void Immutable() { var json = """ [ { "Id": 1, "FirstName": "Mary", "LastName": "Jones" }, { "Id": 2, "FirstName": "John", "LastName": "Burger" }, { "Id": 3, "FirstName": "Anne", "LastName": "Adams" } ] """; var options = new JsonSerializerOptions(JsonSerializerDefaults.Web); List<PersonStruct>? peopleReadOnly = JsonSerializer.Deserialize<List<PersonStruct>>(json, options); peopleReadOnly.ForEach(peep => Console.WriteLine(peep)); } ``` ![Results](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ktllqhqbcw26jiur358s.png) ## Ignore null property values You can ignore properties on serialization and deserialization using the [JsonIgnoreCondition Enum](https://learn.microsoft.com/en-us/dotnet/api/system.text.json.serialization.jsonignorecondition?view=net-8.0). Suppose json data has a primary key which should be ignored when populating a database table using EF Core, or that a gender property is not needed at all. The following first shows not ignoring properties Id and Gender, while the second ignores Id and Gender. 
**Models** ```csharp public class PersonWithGender { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public Gender Gender { get; set; } public override string ToString() => $"{Id,-4}{FirstName,-12} {LastName}"; } public class PersonWithGender1 { [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingDefault)] public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingDefault)] public Gender Gender { get; set; } public override string ToString() => $"{Id,-4}{FirstName,-12} {LastName}"; } ``` Serializing data. ```csharp public static void IgnoreNullValues() { PersonWithGender person1 = new() { FirstName = "Karen", LastName = "Payne", Gender = Gender.Female}; var data1 = JsonSerializer.Serialize(person1, JsonHelpers.EnumJsonSerializerOptions); PersonWithGender1 person2 = new() { FirstName = "Karen", LastName = "Payne" }; var data2 = JsonSerializer.Serialize(person2, JsonHelpers.WithWriteIndentOptions); } ``` Results ```json { "Id": 0, "FirstName": "Karen", "LastName": "Payne", "Gender": "Female" } { "FirstName": "Karen", "LastName": "Payne" } ``` ## Serializing to a dictionary In this sample, data is read from the Microsoft NorthWind database table Employees into a dictionary with first and last name as the key and the primary key as the value. Dapper is used for the read operation. **Important** Before running this code, create the database and populate it with populate.sql in the script folder of the ReadOddJsonApp project. 
```csharp internal class DapperOperations { private IDbConnection db = new SqlConnection(ConnectionString()); public void GetDictionary() { const string statement = """ SELECT EmployeeID as Id, FirstName + ' ' + LastName AS FullName, LastName FROM dbo.Employees ORDER BY LastName; """; Dictionary<string, int> employeeDictionary = db.Query(statement).ToDictionary( row => (string)row.FullName, row => (int)row.Id); Console.WriteLine(JsonSerializer.Serialize(employeeDictionary, JsonHelpers.WithWriteIndentOptions)); } } ``` ![Serialize to dictionary data from a database table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tg4tzj2q26yezxnwtwe5.png) ## Reuse JsonSerializerOptions instances Microsoft's docs [indicate](https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/configure-options?pivots=dotnet-8-0#reuse-jsonserializeroptions-instances): If you use JsonSerializerOptions repeatedly with the same options, don't create a new JsonSerializerOptions instance each time you use it. Reuse the same instance for every call. Much of the code provided here violates that guidance because the samples are standalone; follow the guidance above in real applications. ## Summary This article provides information on handling unusual and normal json formats, serving as a resource with code samples located in a GitHub repository. ## Source code Clone the following [GitHub repository](https://github.com/karenpayneoregon/json-samples). ## Resources - [System.Text.Json Namespace](https://learn.microsoft.com/en-us/dotnet/api/system.text.json?view=net-8.0) - [How to write custom converters for JSON serialization](https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/converters-how-to?pivots=dotnet-8-0) - [What’s new in System.Text.Json in .NET 8](https://devblogs.microsoft.com/dotnet/system-text-json-in-dotnet-8/)
karenpayneoregon
1,764,778
Chocolatey: The Easiest Way to Install and Manage Windows Software
Working on Windows has its advantages when it comes to software and installing with most programs...
0
2024-02-18T17:57:37
https://reprodev.com/how-to-install-chocolatey-and-automate-windows-software-package-installs/
automation, tutorial, codenewbie, opensource
Working on Windows has its advantages when it comes to software and installing, with most programs having a Windows version available for download. Sometimes that sponsored Google Search link you just clicked is actually the wrong software or, worse yet, a malicious version of it. <img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExNTZyZW8xamhyd3NkZWc3aWVtMDZqcTlpdG93M2szejh2MDZuNXo3bCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oEjHCF6kGlXK0ofsY/giphy.gif"> On Linux, you can use an inbuilt package manager like Apt in Ubuntu to download the latest version from a trusted centralised source. You can install and update it using just the command line. That’s where the Chocolatey package manager can help to give you that Linux experience on Windows with a tastier way to do things. ![](https://reprodev.com/content/images/2024/02/07-Choco-Intro-01.png) ___ In this guide and overview, we’ll be installing Chocolatey using Windows PowerShell on Windows 11 to start installing our first packages. We’ll be looking at - What is Chocolatey? - Install Chocolatey using Windows PowerShell - Where and how to search for packages? - Install our first package (Google Chrome) - Upgrade our first package - Uninstall our first package - Install several packages at once (Google Chrome, Notepad++ and 7zip) - Upgrade several packages at once (Google Chrome, Notepad++ and 7zip) - Upgrade all packages installed by Chocolatey - Uninstall all packages installed by Chocolatey - Check the packages currently installed by Chocolatey - Uninstall and remove Chocolatey - What about Winget? - Chocolatey for Automated Windows Virtual Machine Deployment and Configuration ___ ## What is Chocolatey? Chocolatey is a package manager for Windows that, once installed, lets you download and install programs in just a few words using only the command line, without you having to do much else. 
Instead of having to open a browser, go to a website to download a program like Google Chrome and then work through the whole setup process manually, Chocolatey can automate this. ![](https://reprodev.com/content/images/2024/02/07-Choco-Intro-03.png) Programs are called packages in Chocolatey, and you’ll find most programs have a package in the community repository, some of them maintained by the developers themselves and verified to be the latest working versions. ___ ## Install Chocolatey on Windows The install for Chocolatey is similar to the installation of any other package and is done using a one-line command in Windows PowerShell. [Chocolatey.org Install Link](https://chocolatey.org/install) ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-01..png) - Open Windows PowerShell and “**Run as administrator**” ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-02.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-03.png) - Copy the one-line command for the Chocolatey installation script below ```powershell Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')) ``` ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-04.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-05.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-06.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-07.png) - Once it completes, it will bring you back to the PowerShell command line - Type the **choco** command below to check the install is working; it calls the program and prints the current version `choco` ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-08-1.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Install-09.png) 
Congratulations, you’ve now installed the Chocolatey package manager and can start managing packages using the command line on Windows. ___ ## Where and How to search for packages? Chocolatey is a useful tool because of the large number of packages in its repository. There is one for nearly every piece of software you can think of, and you’ll usually be able to find it quite quickly. You can search for the package you need either by using the command line or, if that doesn't work, by searching the website repository in a browser. ___ ### Searching for packages using the command line We’re going to search for 7zip, which we'll be using later on in the guide - Open a Windows PowerShell terminal and “Run as administrator” ![](https://reprodev.com/content/images/2024/02/07-Choco-Software-Search-01..png) - Type the below command to search the repository for 7zip `choco search 7zip` ![](https://reprodev.com/content/images/2024/02/07-Choco-Software-Search-02.png) - This will bring up a list of search results for the program, and if you scroll back up to the top you’ll find it’s usually the first result, with a version number after it. ![](https://reprodev.com/content/images/2024/02/07-Choco-Software-Search-03.png) - These will also have notes on the right-hand side telling you whether they are approved, broken or have been replaced by a new package. - We can scroll back up the list to see the whole list of packages. - In our case, the first hit is 7zip, so that’s the name of the package we would need to use with Chocolatey. ![](https://reprodev.com/content/images/2024/02/07-Choco-Software-Search-04.png) We'll go over the other way to search for these when we install our first package, Google Chrome. ___ ## Install our first package (Google Chrome) For our first package, we’re going to pick a piece of software most people install as part of setting up a new Windows machine. We’re going to install the **Google Chrome** browser. 
We’re going to start by getting the correct package name for **Google Chrome** so we can get it installed - Let’s search for **Google Chrome** in the repository using the search command ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-01.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-02.png) `choco search chrome` ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-03.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-04.png) - This brings up **169** packages to choose from. - Scrolling up to the top and it doesn’t look like any of these are the correct one so instead we’ll check on the **Community Repository** page. ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-05.png) - Open a web browser and go to the **Community Repository** ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-06.png) - If we scroll down we can see this as one of the top 5 hits on the page and it’s actually listed as **googlechrome** ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-07.png) - We can find this from a search by using that name ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-11.png) - Click to get through to the individual project page for this application - This page will give you more information on our package for Google Chrome, the last time it was updated and the code we need to paste into our terminal. 
![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-09.png) - Go back to your **PowerShell Terminal** to paste the command from this page - Run the below command to start the installation `choco install googlechrome` ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-12.png) - It will now start the download of **Google Chrome** but will stop to ask if you are sure you want to complete the installation ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-14.png) - To confirm the install and run the script you need to type **y** ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-15.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-16.png) - Close your **PowerShell Terminal** to exit or press **Ctrl+C** to cancel, as we're going to do something different ___ ### Automating the Single Package install - As we want to automate this further, we’re going to add one more thing to the choco command - Adding the **\-y** switch to the end will answer y automatically to as many of the prompts as it can for you - Open a new PowerShell window as Administrator so we can try this again - Type in the below command and Press Enter `choco install googlechrome -y` ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-17.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-18.png) - Time to grab a cup of coffee while Chocolatey does its thing ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-19.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-20.png) - Once it's finished the install script, you'll find that a Desktop Shortcut to Google Chrome has been added automatically ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-21.png) - It's also been added to our Start Menu ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-22.png) 
- Opening it up for the first time, we're greeted with the normal first-time run for **Google Chrome**, with no noticeable or functional difference from a regular install. ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Install-23.png) Congratulations, you’ve now installed your first package using Chocolatey. Next up, even though we’ve just installed it, let’s look at how to upgrade it. ___ ## Upgrade our first package In most cases you won’t have to upgrade your packages manually using Chocolatey, as they will tend to update as normal when you open them up. - To upgrade Google Chrome manually using Chocolatey we can use the below command `choco upgrade googlechrome` - Add the **\-y** switch to automate this further `choco upgrade googlechrome -y` ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Upgrade-01.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Upgrade-02.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Upgrade-03.png) This will download the latest version from the Community Repository and upgrade the program if there’s a newer one available. I would suggest closing the program before you do this or the upgrade may fail. ___ ## Uninstall our first package Just like installing programs, uninstalling programs can be quite a long process on Windows, and here Chocolatey can automate the process for you again. We're going to uninstall Google Chrome from this machine. - To uninstall Google Chrome using Chocolatey we can use the below command `choco uninstall googlechrome -y` ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Uninstall-01.png) - This will uninstall the package, in most cases without you having to do anything else. 
![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Uninstall-02.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Uninstall-03.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Uninstall-04.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Uninstall-05.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Package-Uninstall-06.png) > There is a chance with some applications that the uninstall from here won't fully remove all the files. For example, there may still be a Program Files folder with your user data. If this happens, you’ll be prompted to use the program's own uninstaller. You may then have to manually delete those folders if needed for a fully clean uninstallation. Congratulations, we’ve now covered the basics of Chocolatey and you’ve managed one package, but it’s time to start managing multiple packages at once. ___ ## Install several packages at once (Google Chrome, Notepad++ and 7zip) Installing one program at a time using choco commands is quite useful, but we can use those same commands to manage more than just one package. We’ll be installing Google Chrome, Notepad++ and 7zip all in one go using just one line of code. We’ve already seen how we can get the package names from the repository, so we’re going to skip ahead to the install. We can chain as many package names after each other in the install command, and add the -y switch at the end to answer y throughout the installation and make it unattended. 
- Open up a new Windows PowerShell terminal and “Run as administrator” ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-00.png) - Run the below command to start the installation `choco install googlechrome notepadplusplus 7zip -y` ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-01.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-02.png) - This will take 5 to 10 minutes and is definitely time to go and grab a coffee while Chocolatey gets to work on the download and install of the programs ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-03.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-04.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-05.png) If we take a look at our Desktop, we now have the Google Chrome shortcut, and it's in our Start Menu just as we normally would expect. ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-10.png) - We can also see 7zip has installed with shortcuts in the Start Menu ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-11.png) - Google Chrome is in the Start Menu ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-13.png) - Finally, we can see that Notepad++ has also been installed as normal and can be found in the Start Menu as well ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Install-14.png) Congratulations, you just downloaded and installed 3 programs at once using just one line of code and Chocolatey. The best part was that it did this unattended, so you could do something else thanks to good old-fashioned automation. ___ ## Upgrade several packages at once (Google Chrome, Notepad++ and 7zip) When you're working with more than one package and only need to upgrade a few, you can use the same principle and name the packages you want to upgrade. 
Use the below command to upgrade only the packages we just installed `choco upgrade googlechrome notepadplusplus 7zip -y` This will search out any upgrades and apply them if needed, all at once. ___ ## Uninstall several packages at once (Google Chrome, Notepad++ and 7zip) Just like the choco uninstall command for one package, you can uninstall as many packages as you want by adding the package names to the command. > Please Note: You may need to input some commands when using the choco uninstall command, as not all of it can be automated. In those rare cases, like in the example below, you’ll have to wait to see if you’re prompted and follow the instructions in the terminal - Open a PowerShell window using "Run as Administrator" - Run the below command `choco uninstall googlechrome notepadplusplus 7zip -y` ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-03.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-04.png) - In this case, Notepad++ has another component called a metapackage that’s been installed to help run it. - Chocolatey will ask if we want to uninstall that Notepadplusplus.service too. - We only have 20 seconds for each answer before it fails, so type **Y** and press **Enter** twice to start the uninstallation of Notepad++ and continue. ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-05.png) - In this example, 7zip has a metapackage as well, so we'll have to type **Y** and press **Enter** twice to uninstall **7zip** and continue. 
![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-06.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-07.png) - Google Chrome can be uninstalled without any further intervention, so it will start to uninstall itself now ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-08.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-09.png) - This should take about 5 minutes to fully complete, and in this example you won't be prompted again, but watch out for those packages that do prompt. ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-10.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-11.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-12.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-13.png) ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-14.png) - We’ve now uninstalled all of the packages, and we should see a list of the main packages, not the metapackages we removed - If we go to our Start Menu, all of our installed programs have now been removed ![](https://reprodev.com/content/images/2024/02/07-Choco-Multi-Package-Uninstall-15.png) ___ ## Check current installed packages We can check the packages currently installed by Chocolatey by using the below command `choco list` ___ ## Upgrade all packages Use at your own risk, and make sure you're not in the middle of something important if you decide to run the below command. It'll upgrade every package you've currently got installed without any confirmation, which can be useful but isn't the best way of doing this. 
`choco upgrade all -y` ___ ## Uninstall Chocolatey and All Packages We’ve had a lot of fun playing around with Chocolatey and installing packages from the command line, but what if you just want to go back to the way it was? Well, we can use the **choco uninstall** command to remove Chocolatey and most of the software it installed for us > Beware: Be absolutely sure you want to remove all the packages you’ve installed up till now. Once you’ve hit Enter after typing in this command you won’t be able to interrupt it without causing issues to some of the programs and your overall Windows installation. `choco uninstall all` > You can use the -y switch if you are sure that you don’t want to be prompted per package. Remember that some packages will still require you to press Yes, so you may want to stick around for this mass uninstall. `choco uninstall all -y` ___ ## What about Winget? Winget was introduced by Microsoft as the package manager for Windows 10 upwards in 2021. [Use the winget tool to install and manage applications](https://learn.microsoft.com/en-us/windows/package-manager/winget/) It does what Chocolatey does but relies on the Microsoft Store for its repository of packages and programs. As it uses the Microsoft Store, it currently doesn't have as many packages to choose from as Chocolatey does. This does seem to be the future direction from Microsoft, but you’ll find that right now the package you want often won’t be available using Winget when it will be when using Chocolatey. It's useful in its current state but needs a little time to mature and for more packages to be added to the Microsoft Store. When this happens, it will be a major competitor to the long-standing dominance of Chocolatey in the Windows package management space. For me, though, Chocolatey still wins out. 
___ ## Chocolatey for Automated Windows VM Configs Overall, I’m a big fan of Chocolatey, and using one line to automate the installation of my most useful apps makes my life much simpler, especially for Windows virtual machines. We’re going to look at a few bigger use cases in a future guide, when we use it as part of the automated setup and deployment of a Windows 11 virtual machine running on ESXi 8.0. There's a lot more inspiration to be found in the Chocolatey subreddit too [https://www.reddit.com/r/chocolatey/](https://www.reddit.com/r/chocolatey/?ref=reprodev.com) ___ Congratulations on getting through this overview and guide to Chocolatey. It wasn't the shortest blog, so well done for making it to the end. I hope I've given you a few ideas of what you can do with this and how it might save you a lot of time on some of the boring system admin you have to do. <img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExOW5tMGs3OG5qemR1ZW45ZjZvbTQ5YTR6c3J2ZDllanB6Z2ZwbnhoNCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/Tyyr7XJh3NKco/giphy.gif"> ___ Don't forget to explore the rest of our website as we build out more content. Stay tuned for more tutorials, tips, and tricks to help you make tech work for you. If you want to stay up-to-date with regular updates, make sure to subscribe to our free mailing list. ___
reprodev
1,764,851
Exploring the Magic of Spring Boot: Simplifying Java Development
In the vast ecosystem of Java development, Spring Boot shines as a powerful and efficient framework...
0
2024-02-18T18:52:26
https://dev.to/rukundoprince/exploring-the-magic-of-spring-boot-simplifying-java-development-4bad
In the vast ecosystem of Java development, Spring Boot shines as a powerful and efficient framework that streamlines the process of building robust and scalable applications. Whether you're a seasoned developer or just stepping into the world of Java, understanding the magic behind Spring Boot can significantly enhance your development experience. In this blog post, we'll delve into the fascinating world of Spring Boot, exploring its key features, benefits, and why it has become a cornerstone in modern Java development. What is Spring Boot? Spring Boot, an extension of the Spring Framework, is an open-source Java-based framework used to create standalone, production-grade Spring-based applications with minimal effort. It provides a comprehensive infrastructure for developing Java applications rapidly, offering a convention-over-configuration approach that reduces boilerplate code and simplifies the setup of new projects. Key Features of Spring Boot: 1. **Starter Dependencies**: Spring Boot offers a wide range of starter dependencies that streamline the inclusion of common libraries and frameworks. These starters automatically configure the necessary components, reducing the need for manual setup and enabling developers to focus on writing business logic. 2. **Auto-Configuration**: One of the most powerful features of Spring Boot is its auto-configuration capability. It intelligently configures Spring beans based on the classpath and the dependencies present in the project, eliminating the need for explicit configuration in many cases. This greatly simplifies setup and reduces configuration overhead. 3. **Embedded Servers**: Spring Boot includes embedded servers such as Tomcat, Jetty, and Undertow, allowing developers to package their applications as standalone JAR files with embedded web servers. 
This approach simplifies deployment and eliminates the need for external server configuration, making it easier to build and deploy microservices and cloud-native applications. 4. **Actuator**: Spring Boot Actuator provides production-ready features to monitor and manage applications. It offers endpoints for health checks, metrics, environment details, and more, allowing developers to gain insights into the application's runtime behavior and performance. 5. **Spring Boot CLI**: For rapid prototyping and scripting, Spring Boot provides a Command-Line Interface (CLI) that enables developers to create Spring-based applications using Groovy scripts. This can be particularly useful for quickly building small utilities or prototypes without the need for a full-fledged project setup. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/koe3d1jt3thh7ok011ra.jpg) Benefits of Using Spring Boot: 1. **Rapid Development**: By automating configuration and providing starter dependencies, Spring Boot accelerates the development process, allowing developers to focus on delivering business value rather than dealing with infrastructure concerns. 2. **Simplified Configuration**: With convention-over-configuration and auto-configuration, Spring Boot reduces the amount of boilerplate code and configuration files required, making the development experience more intuitive and efficient. 3. **Scalability and Flexibility**: Spring Boot is well-suited for building scalable and flexible applications, whether it's a monolithic application or a distributed microservices architecture. Its modular design and extensive ecosystem of libraries and integrations enable developers to adapt to evolving requirements with ease. 4. **Community and Ecosystem**: Spring Boot benefits from a large and active community of developers, which contributes to its ongoing evolution and the availability of resources, tutorials, and third-party integrations. 
This vibrant ecosystem ensures that developers have access to the tools and support they need to succeed. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vrnwnkev3jn415rhmsl8.jpg) Spring Boot has revolutionized Java development by simplifying the process of building robust, scalable, and maintainable applications. Its powerful features, such as starter dependencies, auto-configuration, embedded servers, and production-ready monitoring, empower developers to focus on writing code that matters while minimizing the overhead of setup and configuration. Whether you're building a small prototype or a complex enterprise application, Spring Boot provides the tools and capabilities you need to succeed in today's fast-paced development landscape. Embrace the magic of Spring Boot, and unlock new possibilities in Java development.
rukundoprince
1,764,873
Corporate entertainment in the IT industry
Corporate entertainment is a term that refers to the events and activities that businesses organize...
0
2024-02-18T19:25:27
https://dev.to/eac783/corporate-entertainment-in-the-it-industry-2i8m
Corporate entertainment is a term that refers to the events and activities that businesses organize for their employees, clients, or stakeholders. Corporate entertainment can range from small-scale gatherings such as team building workshops, holiday parties, or retreats, to large-scale events such as conventions, conferences, or product launches. Corporate entertainment can have various benefits for a business, such as: - Boosting employee morale and engagement - Strengthening relationships with clients and partners - Enhancing brand awareness and reputation - Generating leads and sales opportunities - Celebrating achievements and milestones However, planning and executing successful corporate entertainment can also pose some challenges, such as: - Finding the right venue and catering - Choosing the appropriate entertainment and theme - Balancing the budget and expectations - Ensuring the safety and satisfaction of the attendees - Measuring the return on investment and impact Therefore, it is important to have a clear goal and strategy for corporate entertainment, and to seek professional help from experts in the field. Some of the factors to consider when planning corporate entertainment are: - The purpose and objective of the event - The target audience and their preferences - The type and style of entertainment - The duration and timing of the event - The location and accessibility of the venue - The feedback and evaluation of the event There are many options and ideas for corporate entertainment, depending on the nature and size of the event. 
Some of the most popular and effective corporate entertainment ideas are: - Live music and bands - Comedy and magic shows - Celebrity impersonators and speakers - Photo booths and interactive games - Dance and circus performers - Virtual and online events To find out more about corporate entertainment and how to book the best acts for your event, you can visit some of the websites listed below https://www.edwardcrawford.co.uk/ https://www.edwardcrawford.co.uk/magician_services/corporate-magician/
eac783
1,764,885
Introduction to Object Orientation
Summary What is Object Orientation Classes Fields Constructor Methods Hierarchy...
0
2024-02-18T20:06:30
https://dev.to/gustavocesarsantos/introducao-a-orientacao-a-objetos-5bh6
beginners, oop, backend
## Summary 1. [What is Object Orientation](#1-o-que-é-orientação-a-objetos) 2. [Classes](#2-classes) - [Fields](#21-campos) - [Constructor](#22-construtor) - [Methods](#23-métodos) 3. [Class Hierarchy](#3-hierarquia-das-classes) - [Superclass and Subclasses](#31-superclasse-e-subclasses) - [Subclasses and Sub-subclasses](#32-subclasses-e-sub-subclasses) 4. [Interfaces](#4-interfaces) --- ### 1. What is Object Orientation <a name="1-o-que-é-orientação-a-objetos"> Object orientation is a programming paradigm. It is based on the creation of classes and on the way those classes relate to each other, helping with organization and reuse and reducing rewritten code. Below I'll talk more about classes and how we can work with them. ### 2. Classes <a name="2-classes"> A class is the basis for creating an object. We can compare it to a cake recipe: in it we define what should be built, how it should be built, and what its characteristics are. The anatomy of a class is made up of three items: #### 2.1. Fields <a name="21-campos"> These are the default characteristics/attributes that the class has. E.g. imagine a **Post** on a blog: what would its attributes be? We could imagine things like a **title**, **creation date**, **author**, **content**, and so on. **Post** is our **class**, and the attributes we listed above are the **fields** of the class. These fields can have different access levels. They can be **public** (accessible by anyone), **private** (accessible only by the class itself), or **protected** (accessible by the class itself and its subclasses). ```typescript class Post { public titulo: string; private autor: string; protected conteudo: string; createdAt: Date; //If no access modifier is specified, the default is public access. updatedAt: Date; //Constructor and methods omitted } ``` #### 2.2. 
Constructor <a name="22-construtor"> As the name says, it is used when constructing the class. In it we can set the initial values of the fields, and we can declare methods that should run every time the class is instantiated. Even if we don't write a constructor, every class has one by default; if we write nothing, that constructor will be empty. ```typescript class Foo { bar: string = ""; constructor(bar: string) { this.bar = bar; } //Methods omitted } class Bar { foo: string = ""; } const foo = new Foo("teste"); const bar = new Bar(); console.log(foo.bar); //teste console.log(bar.foo); // ``` #### 2.3. Methods <a name="23-métodos"> These are the "actions" the class can perform; the acts carried out by methods are called behaviors. Just like fields, methods also have different access levels. They can be **public** (accessible by anyone) or **private** (accessible only by the class itself). Besides access levels, methods also come in different "kinds". They can be **abstract** (methods without an implementation, where we only declare the method name and parameters, and subclasses are required to implement them; note that a class containing an abstract method must itself be declared abstract) or **static** (methods that have no access to the class's instance fields and can be executed without needing to instantiate the class). E.g. the **Gato** (Cat) class has the act of **jumping** (pular) as one of its methods. ```typescript abstract class Gato { //Fields and constructor omitted public pular(altura: number): void { //Put the jumping logic here } private dormir(): void { //Put the sleeping logic here } abstract miar(som: string): void; static andar(): void { //Put the walking logic here } } ``` ### 3. 
Class hierarchy <a name="3-hierarquia-das-classes"> As said at the beginning of the article, the way classes relate to each other is extremely important. In OO, a relationship is established by using the **extends** keyword; with the relationship established, we can work with the class hierarchy. #### 3.1. Superclass and Subclasses <a name="31-superclasse-e-subclasses"> A superclass, or parent class, is the class that will be extended by other classes. It has the same structure as any other class, but its subclasses will inherit its constructor, fields and methods. Subclasses, or child classes, are the classes that extend a superclass. By making this extension, the subclass inherits the constructor, fields and methods of the superclass, and can access and use them as needed. ```typescript class Animal {} class Gato extends Animal {} class Cachorro extends Animal {} ``` #### 3.2. Subclasses and Sub-subclasses <a name="32-subclasse-e-sub-subclasses"> Besides the structure mentioned above, where there is 1 superclass and N subclasses, there is another kind of hierarchy: subclasses being superclasses of other classes further down the hierarchy, something like: ```typescript class Animal {} class Gato extends Animal {} class Cachorro extends Animal {} class Persa extends Gato {} class Pitbull extends Cachorro {} class Bulldog extends Cachorro {} ``` As the structure grows vertically, the subclasses at the lowest level gain access to the superclasses above, even without extending them directly. In our example the lowest level is made up of the classes "Persa", "Bulldog" and "Pitbull"; these classes can access the constructor, fields and methods of their direct superclasses Gato and Cachorro, but can also access the Animal class. ### 4. 
Interfaces <a name="4-interfaces">

Naturally, when we talk about OO, classes come to mind. However, the overuse of classes brings a series of problems that would not exist without them, such as subclasses gaining access to methods that make no sense for them, or identical methods that must be written several times in different classes to cover access from different objects. To avoid these problems and still produce functional code, we use interfaces.

An interface can be read as a contract. In this contract we declare everything that must exist and be done by whoever signs it. In technical terms, in an interface we declare fields and/or methods, but without implementing them: for fields we declare the name and type; for methods we declare the name, parameters, and return type. The implementation, as well as when to use those fields and methods, is left to the classes that use the interface.

To use an interface we use the keyword **implements**; a class can implement N interfaces.

```typescript
interface foo {
  bar(param1: string): string;
}

interface bar {
  foo: string;
}

interface baz {
  fuu(zoo: number): void;
}

class xpto1 implements foo {
  bar(param1: string): string {
    return param1.toLowerCase();
  }
}

class xpto2 implements bar {
  foo: string = "field coming from the interface";
}

class xpto3 implements foo, bar, baz {
  foo: string = "field required because of the bar interface";

  bar(param1: string): string {
    return param1.toUpperCase();
  }

  fuu(zoo: number): void {
    zoo + 1;
  }
}
```
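To tie inheritance and interfaces together, here is a small illustrative sketch in the spirit of the examples above. The `Falante` interface and the method bodies are my own additions for illustration, not part of the original examples:

```typescript
// An interface acts as a contract shared by otherwise unrelated classes.
interface Falante {
  falar(): string;
}

abstract class Animal {
  // Parameter property: declares and assigns the "nome" field in one step.
  constructor(public nome: string) {}
}

class Gato extends Animal implements Falante {
  falar(): string {
    return `${this.nome}: miau`;
  }
}

class Cachorro extends Animal implements Falante {
  falar(): string {
    return `${this.nome}: au au`;
  }
}

// Polymorphism: different subclasses handled through the same interface.
const animais: Falante[] = [new Gato("Felix"), new Cachorro("Rex")];
animais.forEach((a) => console.log(a.falar())); // Felix: miau / Rex: au au
```

Because both classes sign the `Falante` contract, the array can mix them freely, and new speaking animals can be added without touching the code that iterates over the list.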
gustavocesarsantos
1,765,091
Software defined Network in Automotive
Hello Readers, My name is Rajesh M, and I work at Luxoft India as a Junior Software Developer....
0
2024-02-19T04:42:20
https://dev.to/rajeshm1/software-defined-network-in-automotive-388c
Hello Readers, My name is Rajesh M, and I work at Luxoft India as a Junior Software Developer. Luxoft has given me several opportunities to work on various projects, which inspired me to learn the essential processes involved in developing AUTOSAR modules and add-ons for Software Defined Networking in automotive.

**Introduction**

SDN stands for Software Defined Networking, a networking architecture approach. It allows the network to be managed and controlled using software applications. With SDN, the behavior of the entire network and its devices is programmed in a centrally managed way through software applications using open APIs.

To understand software-defined networks, we need to understand the different planes involved in networking:

- Data Plane
- Control Plane

**Data plane:** All the activities involving, and resulting from, data packets sent by the end user belong to this plane. This includes:

- Forwarding of packets.
- Segmentation and reassembly of data.
- Replication of packets for multicasting.

**Control plane:** All activities necessary to carry out data plane activities, but that do not involve end-user data packets, belong to this plane. In other words, this is the brain of the network. The activities of the control plane include:

- Building routing tables.
- Setting packet handling policies.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fs4htvbr5p0b68bwbzjo.png)

**Why is SDN important?**

**Better network connectivity:** SDN provides better network connectivity for sales, services, and internal communications. SDN also enables faster data sharing.

**Better deployment of applications:** Deployment of new applications, services, and many business models can be accelerated using Software Defined Networking.

**Better security:** A software-defined network provides better visibility throughout the network. 
Operators can create separate zones for devices that require different levels of security. SDN networks give operators more freedom.

**Better control with high speed:** Software-defined networking offers higher speed than other networking types by using an open, standard, software-based controller.

**How does SDN work?**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9plka3xaivry73me61re.png)

To better understand how SDN works, it helps to outline the basic components that make up the network environment. The components used to build a software-defined network may or may not be located in the same physical area. These include:

- **Applications:** Tasked with relaying information about the network, or requests for specific resource availability or allocation.
- **SDN controllers:** Handle communication with the applications to decide the destination of data packets. The controllers are the load balancers of SDN.
- **Networking devices:** Receive instructions from the controllers on how to route the packets.
- **Open-source technologies:** Programmable networking protocols, such as OpenFlow, direct traffic among network devices in an SDN network. The Open Networking Foundation (ONF) helped to standardize the OpenFlow protocol and other open-source SDN technologies.

By combining these components, companies get a simpler, centralized way to manage networks. SDN separates the routing and packet forwarding capabilities, referred to as the control plane, from the data plane, or underlying infrastructure. SDN then implements controllers, considered the brain of the SDN network, and layers them above the network hardware in the cloud or on premises. This lets organizations use policy-based control, a type of automation, to manage the network directly.

SDN controllers tell switches where to send packets. 
In some cases, virtual switches embedded in software or hardware will replace the physical switches. This consolidates their functions into a single, intelligent switch that can check data packets and their virtual machine destinations to make sure there are no problems before moving the packets along.

**Benefits of software-defined networks**

SDN reduces network operations complexity and cost while accelerating the resolution of network issues and outages. These capabilities enable the following advantages:

- **Simplified operations:** SDN provides a single pane of glass to manage the network as a whole, and eliminates the time and manual errors associated with managing each device independently.
- **Open infrastructure:** Open APIs and standard overlay tunneling connect public and private clouds for smooth workload portability and business agility.
- **Operational economics:** SDN controllers combine virtual and physical networks, allowing administrators to choose hardware forwarding planes that are application-optimized based on cost, performance, latency, and scale.
- **Greater network uptime:** Faster resolution of network issues increases the network's availability and improves user experiences.

**Role of SDN in network automation**

In a traditional network, moving devices and applications calls for manual oversight: network reachability and security require remapping devices' IP addresses. SDN automates this process, which is essential for handling transient workloads and virtualized services spun up dynamically to meet short-term demands. In an SDN, workloads and services are created with network membership, reachability, and security rules automatically assigned and enforced, simplifying operations and improving security.
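The control/data plane split described above can be sketched in a few lines of Python. This is an illustrative toy, not a real OpenFlow implementation: the controller centrally programs a flow table, and the "switch" only looks up forwarding decisions in it.

```python
# Toy sketch of the SDN control/data plane split (illustrative only,
# not real OpenFlow): the controller decides, the switch forwards.

class Controller:
    """Control plane: builds the flow rules centrally."""
    def __init__(self):
        self.flow_table = {}  # destination IP -> output port

    def install_rule(self, dst_ip, out_port):
        self.flow_table[dst_ip] = out_port


class Switch:
    """Data plane: forwards packets using rules pushed by the controller."""
    def __init__(self, controller):
        self.controller = controller

    def forward(self, packet):
        # Look up the destination in the controller-programmed flow table;
        # drop the packet if no rule matches.
        return self.controller.flow_table.get(packet["dst"], "drop")


controller = Controller()
controller.install_rule("10.0.0.2", "port-1")
controller.install_rule("10.0.0.3", "port-2")

switch = Switch(controller)
print(switch.forward({"dst": "10.0.0.2"}))  # port-1
print(switch.forward({"dst": "10.0.0.9"}))  # drop
```

The point of the sketch is the separation of concerns: the switch contains no routing logic of its own, so changing network behavior means updating the controller's table, not reconfiguring each device.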
rajeshm1
1,765,100
Busting Myths: The Truth About Software Development
In the ever-evolving landscape of technology, software development stands as a cornerstone, driving...
0
2024-02-19T05:01:59
https://dev.to/nitin-rachabathuni/busting-myths-the-truth-about-software-development-e6n
webdev, beginners, javascript, programming
In the ever-evolving landscape of technology, software development stands as a cornerstone, driving innovation and solving complex problems. Yet, it's surrounded by myths that often mislead or intimidate newcomers and outsiders. Today, we're setting the record straight by debunking some of these myths with insights and coding examples.

Myth 1: You Must Be a Math Genius

The Truth: While math can be beneficial, especially in fields like data science or machine learning, the day-to-day work of most software developers focuses more on problem-solving skills and logical thinking.

Example: Consider the task of implementing a feature that filters a list of users based on a specific condition, such as being over 18 years old. This problem requires understanding of basic programming concepts, not advanced mathematics.

```python
users = [{'name': 'Alice', 'age': 25}, {'name': 'Bob', 'age': 17}, {'name': 'Charlie', 'age': 22}]
adult_users = [user for user in users if user['age'] > 18]
print(adult_users)
```

Myth 2: More Lines of Code Mean Better Software

The Truth: Quality and efficiency often come from writing less, but more effective, code. Clean, readable, and maintainable code is far more valuable than a large quantity of complex, convoluted code.

Example: Refactoring a verbose function into a more concise and readable one without sacrificing functionality illustrates this point well.

```python
# Before
def add_numbers_verbose(list_of_numbers):
    result = 0
    for number in list_of_numbers:
        result += number
    return result

# After
def add_numbers(list_of_numbers):
    return sum(list_of_numbers)
```

Myth 3: Software Development Is Just About Writing Code

The Truth: Coding is just one aspect of software development. Understanding user needs, planning, collaborating with team members, and testing are equally important.

Example: Writing a simple unit test for a function can illustrate the importance of testing in ensuring code quality. 
```python
def multiply(x, y):
    return x * y

def test_multiply():
    assert multiply(2, 3) == 6
    print("Test passed!")

test_multiply()
```

Myth 4: Developers Work Alone

The Truth: Software development is highly collaborative. Developers often work in teams, contributing to different parts of a project and relying on each other's expertise to solve problems.

Example: Collaborative tools like version control systems (e.g., Git) facilitate teamwork. Here's a basic example of how a developer might push changes to a shared repository.

```bash
git add .
git commit -m "Add new feature"
git push origin main
```

Myth 5: You Need to Know Every Programming Language

The Truth: It's more important to have a strong grasp of fundamental concepts and one or two languages deeply than to have a superficial understanding of many languages.

Example: Learning the principles of Object-Oriented Programming (OOP) can be more beneficial than jumping between languages. These principles apply across many languages, demonstrating the importance of understanding concepts over syntax.

```python
class User:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def greet(self):
        return f"Hello, my name is {self.name} and I am {self.age} years old."

alice = User("Alice", 25)
print(alice.greet())
```

Conclusion

Software development is a field rife with myths that can obscure the truth about what it means to be a developer. By debunking these myths, we hope to shed light on the reality of software development - a challenging, rewarding, and dynamic field that requires a blend of technical skills, teamwork, and continuous learning. Whether you're just starting out or looking to deepen your expertise, remember that the journey is as important as the destination.

---

Thank you for reading my article! For more updates and useful information, feel free to connect with me on LinkedIn and follow me on Twitter. I look forward to engaging with more like-minded professionals and sharing valuable insights.
nitin-rachabathuni
1,765,184
The Power of Expert Web Design in Calgary
In the bustling digital landscape of Calgary, effective web design is not just a luxury but a...
0
2024-02-19T07:20:23
https://dev.to/devbion/the-power-of-expert-web-design-in-calgary-3gj
website, development
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9x93k0uod07r4geqd45.jpg) In the bustling digital landscape of Calgary, effective web design is not just a luxury but a necessity for businesses striving to stand out. With the online sphere becoming increasingly competitive, the importance of a well-crafted website cannot be overstated. In this article, we delve into the significance of web design in Calgary and how it can propel your online presence to new heights. ## Understanding the Calgary Digital Landscape: **[Calgary](https://devbion.com/web-design/calgary/)** boasts a vibrant economy and a diverse business landscape, making it a hub for entrepreneurial endeavors. In such a dynamic environment, having a visually appealing and user-friendly website is crucial for businesses to captivate their target audience. With consumers relying heavily on digital platforms for their purchasing decisions, a professionally designed website can be the differentiating factor that sets your brand apart from the competition. ## The Impact of Web Design on User Experience: A well-designed website goes beyond aesthetics; it is the cornerstone of a positive user experience. In Calgary, where consumers are discerning and tech-savvy, intuitive navigation, fast loading speeds, and mobile responsiveness are paramount. A seamless user experience not only fosters customer satisfaction but also increases engagement and conversions, driving business growth and success. ## Optimizing for Local SEO: In a city as bustling as Calgary, local SEO plays a pivotal role in ensuring your website ranks prominently in search engine results. Expert web design incorporates local elements seamlessly, optimizing content and metadata to enhance visibility within the Calgary market. By aligning your website with local search intent and leveraging relevant keywords such as "web design Calgary," you can attract qualified leads and boost your online visibility. 
## Building Trust and Credibility: In the digital realm, first impressions are everything. A professionally designed website instills trust and credibility in your brand, reassuring potential customers of your professionalism and commitment to quality. In Calgary's competitive business landscape, establishing a strong online presence can make all the difference in cultivating lasting relationships with your audience and fostering brand loyalty. ## Harnessing the Power of Visual Storytelling: Calgary is a city teeming with creativity and innovation, and your website should reflect that ethos. Expert web design harnesses the power of visual storytelling to captivate visitors and convey your brand's narrative effectively. Through compelling imagery, engaging multimedia elements, and cohesive branding, you can create a memorable online experience that resonates with your target audience and leaves a lasting impression. ## Conclusion: In the digital age, your **[Calgary web design](https://devbion.com/web-design/calgary/)** serves as the cornerstone of your online presence. In Calgary's competitive business landscape, investing in expert web design is not just advantageous but essential for success. By prioritizing user experience, optimizing for local SEO, and leveraging visual storytelling, you can elevate your brand above the noise and position yourself as a leader in your industry. Embrace the power of web design in Calgary and unlock the full potential of your online presence.
devbion
1,765,193
Today's Anniversary and Bio, Net Worth, Age - Married Biography
"Married Biography: Your ultimate hub for celebrity relationships, bios, net worth, height,...
0
2024-02-19T07:38:34
https://dev.to/marriedbiograph/todays-anniversary-and-bio-net-worth-age-married-biography-488o
"<a href="https://marriedbiography.com/">Married Biography</a>: Your ultimate hub for celebrity relationships, bios, <a href="https://marriedbiography.com/">net worth</a>, height, lifestyle, and the latest updates. Dive deep into your favorite stars' lives with in-depth insights, uncovering their journeys to success. Stay informed with the latest news on Hollywood's power couples and iconic figures. Your go-to source for <a href="https://marriedbiography.com/">everything celebrity</a>!"
marriedbiograph
1,765,210
How do we foster a culture of continuous learning and professional development?
Empowering Growth: Cultivating a Culture of Continuous Learning! 📚🌱 ...
0
2024-02-19T08:02:44
https://dev.to/yagnapandya9/how-do-we-foster-a-culture-of-continuous-learning-and-professional-development-7e0
javascript, programming, devops, aws
## Empowering Growth: Cultivating a Culture of Continuous Learning! 📚🌱

## Introduction

In today's rapidly evolving business landscape, the importance of continuous learning and professional development cannot be overstated. Organizations that prioritize learning and development not only attract and retain top talent but also foster innovation, adaptability, and resilience. In this comprehensive article, we will explore the key components of fostering a culture of continuous learning and professional development within organizations, examine strategies for implementation, and discuss the benefits of investing in employee growth and skill enhancement.

## Understanding Continuous Learning and Professional Development

Continuous learning and professional development refer to the ongoing process of acquiring new knowledge, skills, and competencies to adapt to changing work environments, industry trends, and technological advancements. This process encompasses formal training programs, informal learning opportunities, and self-directed learning initiatives aimed at enhancing individual and organizational performance.

## Creating a Learning Culture

[Leadership Commitment:](https://fxdatalabs.com/) Fostering a culture of continuous learning starts at the top, with leaders demonstrating a commitment to lifelong learning and professional development. Leaders should lead by example, actively participating in learning initiatives, supporting employee development efforts, and recognizing and rewarding learning achievements.

[Open Communication:](https://fxdatalabs.com/) Encourage open communication and dialogue around the importance of learning and development within the organization. Create channels for employees to share knowledge, exchange ideas, and provide feedback on learning programs and initiatives. Foster a culture of collaboration and knowledge sharing that encourages continuous growth and improvement. 
[Learning Resources:](https://fxdatalabs.com/) Provide access to a variety of learning resources and tools to support employee development, including online courses, workshops, seminars, and educational materials. Invest in a learning management system (LMS) or digital learning platform that centralizes learning resources, tracks progress, and facilitates collaboration and peer-to-peer learning. ## Empowering Employees to Drive Their Own Development [Personalized Learning Plans:](https://fxdatalabs.com/) Encourage employees to take ownership of their learning journey by creating personalized learning plans that align with their career goals, interests, and development needs. Offer guidance and support in setting learning objectives, identifying relevant learning opportunities, and tracking progress towards skill enhancement and career advancement. [Self-Directed Learning:](https://fxdatalabs.com/) Promote self-directed learning initiatives that empower employees to pursue learning opportunities outside of formal training programs. Encourage employees to explore new technologies, methodologies, and industry trends, and provide resources and support to facilitate self-directed learning initiatives, such as access to online courses, learning communities, and professional networks. [Continuous Feedback and Reflection:](https://fxdatalabs.com/) Encourage employees to seek feedback from peers, mentors, and managers on their learning progress and performance. Foster a culture of continuous feedback and reflection that encourages employees to identify areas for improvement, set goals, and adapt their learning strategies based on feedback and self-assessment. ## Integrating Learning into Everyday Work [On-the-Job Learning Opportunities:](https://fxdatalabs.com/) Embed learning into everyday work experiences by providing on-the-job learning opportunities, such as job rotations, stretch assignments, and cross-functional projects. 
Encourage employees to learn from their experiences, take on new challenges, and apply newly acquired skills and knowledge in real-world situations. [Mentorship and Coaching:](https://fxdatalabs.com/) Establish mentorship and coaching programs that pair employees with experienced mentors who can provide guidance, support, and advice on career development and skill enhancement. Encourage mentors to share their knowledge, insights, and experiences with mentees, fostering a culture of mentorship and continuous learning within the organization. [Learning Communities and Networks:](https://fxdatalabs.com/) Foster learning communities and networks within the organization where employees can connect, collaborate, and share knowledge and expertise. Encourage employees to participate in communities of practice, professional associations, and industry events where they can learn from peers, experts, and thought leaders in their field. ## Measuring and Evaluating Learning Impact [Key Performance Indicators (KPIs):](https://fxdatalabs.com/) Establish key performance indicators (KPIs) to measure the impact and effectiveness of learning and development initiatives, such as employee engagement, retention rates, skill acquisition, and performance improvement. Regularly track and analyze KPIs to assess the return on investment (ROI) of learning programs and identify areas for improvement. [Feedback and Evaluation:](https://fxdatalabs.com/) Gather feedback from employees through surveys, focus groups, and one-on-one discussions to evaluate the effectiveness of learning programs and initiatives. Solicit input on program design, content, delivery, and impact to ensure that learning initiatives meet the needs and expectations of employees and drive positive outcomes. 
[Continuous Improvement:](https://fxdatalabs.com/) Foster a culture of continuous improvement by regularly reviewing and evaluating learning programs and initiatives to identify strengths, weaknesses, and opportunities for enhancement. Use feedback and evaluation data to refine and optimize learning strategies, resources, and processes to better support employee development and organizational goals. ## Conclusion: Investing in Growth and Development Cultivating a culture of continuous learning and professional development is essential for organizations seeking to stay competitive, innovative, and adaptable in today's fast-paced business environment. By prioritizing learning, empowering employees to drive their own development, integrating learning into everyday work, and measuring learning impact, organizations can create a culture of lifelong learning that fosters individual growth, organizational excellence, and long-term success. As we embrace the future of work, let us invest in the growth and development of our greatest asset—our people—to build a workforce that is resilient, agile, and future-ready. For more insights into AI|ML and Data Science Development, please write to us at: contact@htree.plus| [F(x) Data Labs Pvt. Ltd.](https://fxdatalabs.com/) #ContinuousLearning #ProfessionalDevelopment #GrowthMindset #EmpowermentCulture 🌱💼
yagnapandya9
1,765,382
Determining the number of "if" statements in your codebase
Think of conditional statements as if they are bombs waiting to go off. It might be a great metric...
0
2024-02-19T10:51:43
https://dev.to/shailennaidoo/determining-the-number-of-if-statements-in-your-codebase-5g6o
discuss, cicd, coding, programming
Think of conditional statements as if they are bombs waiting to go off. It can be a useful metric for the team to see how many conditional statements are being introduced into the system, or how many have been removed.

I wrote a simple one-liner using the popular `grep` and `wc` tools, found on most Unix-like systems, to scan my source code and count the conditional statements.

```bash
grep -rni --exclude-dir="node_modules" "if (" * | wc -l
```

Having a **number** for people to see gives them an idea of the general complexity, or number of branches, within an application. I believe that reducing this number should be a general goal of a software development team.
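As a rough extension of the same idea, the pattern can be wrapped in a small function and broadened to other branching constructs. This is a sketch, not a parser: it is a plain text match, so it will also count the text inside comments and strings, and it misses styles like `if(` without a space. Treat the numbers as a trend indicator, not an exact count.

```bash
#!/bin/sh
# Count branching constructs in the current source tree (rough text match).
count() {
    pattern="$1"
    grep -rni --exclude-dir="node_modules" "$pattern" . | wc -l
}

echo "if statements:     $(count 'if (')"
echo "switch statements: $(count 'switch (')"
```

Running this in CI and printing the numbers on every build makes it easy to spot a pull request that quietly adds a pile of new branches.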
shailennaidoo
1,765,404
Installing Boomi Atom runtime on Docker
In today's digital tide, businesses collect data like sponges, but it's all trapped in different...
0
2024-02-19T11:17:10
https://dev.to/eyer-ai/installing-boomi-atom-runtime-on-docker-4dn2
ipaas, integration, docker, boomi
In today's digital tide, businesses collect data like sponges, but it's all trapped in different systems, creating silos that stifle collaboration and growth. [Boomi](https://boomi.com/), a leading integration platform, bridges these data silos, connecting your systems and automating workflows to unleash the power of your data for seamless collaboration and real-time insights across your organization. Moreover, Boomi goes beyond just seamless integration; by leveraging its support for incredible tools like [Eyer](https://www.eyer.ai/), it ensures that your integrations are seamless and [intelligently monitored and optimized](https://dev.to/eyer-ai/how-to-unlock-ai-powered-observability-insights-in-your-boomi-integration-2edd). Now, while Boomi promises ease of use, leveraging Docker takes it further. This containerization expert bundles everything — operating system, requirements, and the Boomi Atom itself — into a single, readily installable package. This simplifies both installations and ensures consistent software behavior across any environment. No more configuration hassles or compatibility concerns — Docker streamlines the process, and this article shows you how. ## Prerequisites To get the most out of this article, you must have the following: * A Boomi platform account: if you do not have one, create a 30-day free trial account * A Docker version of at least 19.03.8. Run the docker -v command to check your current version. If you are running an older version, download and install the latest Docker from the official [Get Docker documentation.](https://www.docker.com/products/docker-desktop/) * A basic understanding of Docker and its commands ## Installing a Boomi Atom The [Boomi Atom](https://help.boomi.com/docs/atomsphere/integration/getting%20started/c-atm-atoms_aa350919-15e5-4ec5-a11a-ee308fddd087/) is a lightweight, dynamic runtime engine where your Boomi processes can be executed. 
After deploying your integration processes into an Atom, it contains all the components required to run your processes end-to-end, including connectors, transformation rules, decision handling, and processing logic.

There are two different options for installing a Boomi Atom, depending on your integration needs:

* **Local installation**: This option is for you if your integration connects to resources within your network, like databases, file systems, or other on-premise applications. You need to install the Atom on a computer that can access all these resources. This article will guide you through the local installation process.
* **In the cloud**: You can also set up your processes to run virtually on an Atom within a Boomi Atom Cloud (or in another Atom Cloud if you have access to one) if your integration scenario requires only internet-accessible applications or data locations.

**Creating a new Boomi environment**

First, create a new Boomi environment to host your Atom. Log in to your Boomi account and navigate to the **Integrations** page.

![Boomi Landing page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skrb2bauxlxf6xc4fnbx.png)

![Integration page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fq6q41lyqzzymb51ven1.png)

Next, navigate to the **Manage** tab and select **Atom Management** from the dropdown menu.

![Manage Dropdown](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bdsrv2wnaccpzxxs4sof.png)

Selecting **Atom Management** will direct you to the **Environments** page.

![Environment Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lf0iatfvq8s3kg8l4pmc.png)

To create a new environment, click the "**+**" sign. Provide a **name** and select your desired environment classification before clicking **Save**. 
![Open modal to create a new Boomi Environment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/up2y1c4bv8ncch6u9ng3.png) Once you've saved your new environment, head to its dedicated page by clicking on it. There, locate and copy the **Environment ID**. You'll need this ID to tell the Boomi Atom exactly where to be created. If not explicitly specified, the Atom appears in the Unattached Atoms list within Atom Management. ![Environment Dedicated Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ledwsbd9hhxotgdqfctw.png) In your new Boomi environment, navigate to the **+New** button at the top of the page. From the available options, select the **Atom** tab to begin creating a Boomi Atom. ![the +New Dropdown](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4zjz96wp9umhgpxhlxt.png) Selecting **Atom** opens up the **Atom setup** modal. ![Atom setup modal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3cdf069zym9hziit30gl.png) Select **Local** for the **Setup Preference**, as this guide covers local Boomi installation. Expand the **Security Options** section in the **Atom Setup** and click **Generate Token**. Copy the provided **Atom Installer Token**. You will need this token later to authenticate your Docker container. ![Atom setup modal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hpyvsnvfg2q4ik4fctdy.png) ![Generate Token modal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bve89zh1eowuldl68ax1.png) Once your environment is configured, the next step is to install the Boomi Atom using its Docker image. ### **Pulling the Boomi image** A [Docker image](https://hub.docker.com/_/docker) is a read-only template containing all the instructions and components needed to create a Docker container, an isolated environment running an application or service. 
The Boomi Atom Docker image is no different, as it contains the following components:

* A minimal Red Hat Enterprise Linux Universal Base Image (RHEL UBI)
* The latest Boomi-supported version of Java
* Code to retrieve, set up, and invoke the Atom 64-bit installer

To prevent compatibility issues, ensure you download the latest Boomi Docker image. As of this writing, the most recent version is [5.0.2-rhel](https://hub.docker.com/layers/boomi/atom/5.0.2-rhel/images/sha256-3ac4c1962efdd8ef353818a72a29e1d8d5ed5e2f6a5f056d6d98d55696943c3c?context=explore). You can always verify the latest release on [Docker Hub](https://hub.docker.com/r/boomi/atom/tags).

**Setting up your app**

Before pulling or downloading the Docker image, create a project directory in which your image will live.

```bash
mkdir <name of your project> # make a new project directory
cd <name of your project>    # change the current directory to the project directory
```

**Pulling the Docker Boomi Atom Image**

To download the Boomi Atom Docker image, run the following command, replacing `<image_tag>` with the specific version you want:

```bash
docker pull boomi/atom:<image_tag>
```

![Terminal console after pulling the Boomi docker image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1flblrscwa4jlrmfyrsg.png)

**Creating a docker-compose.yml file**

Next, open up your project's directory in your text editor. 
In your text editor, create a `docker-compose.yml` file and paste the following code while updating the placeholders with your values:

```yml
services:
  atom:
    image: boomi/atom:5.0.2-rhel
    container_name: <your name of the container>
    volumes:
      - <your host directory>:/mnt/boomi:Z
    environment:
      - BOOMI_ATOMNAME=<your atom name>
      - INSTALL_TOKEN=<your atom installer token>
      - BOOMI_ENVIRONMENTID=<your environment ID>
      - ATOM_LOCALHOSTID=<your atom name>
    ports:
      - "9090:9090"
```

These fields in the code block configure your Boomi Docker container:

* `image`: This specifies the base Docker image for creating the container
* `container_name`: This allows you to personalize the container name, making it easier to identify and manage
* `- <your host directory>:/mnt/boomi:Z`: This mounts a host directory onto the container's /mnt/boomi directory with the `Z` SELinux relabeling option. Make sure the host directory exists and has the necessary permissions
* `BOOMI_ATOMNAME`: This allows you to give your installed Atom a custom name. This name will show up on the AtomSphere Atom Management page
* `INSTALL_TOKEN`: This field specifies the unique **Atom Installer Token** you copied from your Atom Setup security options. It authorizes the installation process
* `BOOMI_ENVIRONMENTID`: This field holds the Boomi environment ID that you copied after creating your Boomi environment
* `ATOM_LOCALHOSTID`: Specifies a unique and persistent local host ID independent of any assigned IP address. It ensures consistent identification within the container

Once you've finished creating your `docker-compose.yml` file, launch the container by running `docker compose up -d`. If successful, you should find a new Atom listed within your Boomi environment on the Boomi website.

![New Docker Atom](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7h74ioiogp6o4rhhk12r.png)

You've successfully installed the Boomi Atom runtime on Docker!
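If you find yourself repeating this setup, the placeholder substitution above can be scripted. Below is a minimal Python sketch (not part of the Boomi tooling) that renders the `docker-compose.yml` from your values; all of the argument values shown are hypothetical placeholders:

```python
from string import Template

# Sketch: render the docker-compose.yml described above from your own values.
# Every value passed in below is a hypothetical placeholder -- substitute your
# real container name, host directory, Atom name, installer token, and
# environment ID.
COMPOSE_TEMPLATE = Template("""\
services:
  atom:
    image: boomi/atom:5.0.2-rhel
    container_name: $container_name
    volumes:
      - $host_dir:/mnt/boomi:Z
    environment:
      - BOOMI_ATOMNAME=$atom_name
      - INSTALL_TOKEN=$install_token
      - BOOMI_ENVIRONMENTID=$environment_id
      - ATOM_LOCALHOSTID=$atom_name
    ports:
      - "9090:9090"
""")

def render_compose(container_name, host_dir, atom_name, install_token, environment_id):
    """Return the docker-compose.yml contents as a string."""
    return COMPOSE_TEMPLATE.substitute(
        container_name=container_name,
        host_dir=host_dir,
        atom_name=atom_name,
        install_token=install_token,
        environment_id=environment_id,
    )

if __name__ == "__main__":
    print(render_compose("my-boomi-atom", "./boomi", "docker-atom",
                         "atom-token-123", "env-id-456"))
```

Writing the returned string to `docker-compose.yml` and running `docker compose up -d` gives the same result as editing the file by hand.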
Head to the [Boomi documentation](https://help.boomi.com/docs/Atomsphere/Integration/Getting%20started/int-First_Boomi_Integration_process_eb8e485d-0d81-4cfb-8a0e-10ddd6fecee9) for the next steps to learn how to create your first integration process.

## Conclusion

Seamless data integration is not a convenience but a necessity for businesses striving to stay ahead. As you've seen throughout this article, Boomi offers a powerful solution for tackling data silos, acting as a bridge that seamlessly connects disparate applications and databases. But the benefits don't stop there; by harnessing the power of Docker, you can further streamline your integration journey with simplified deployment, enhanced portability, and resource efficiency.

This article showed you how to get started, exploring the intricacies of installing the Boomi Atom runtime on Docker, from creating environments to pulling the Docker image and configuring the runtime environment. Check out the [Boomi website](https://boomi.com/) to learn more about what Boomi offers.
amaraiheanacho

---
title: Supercharge Your VSCode Experience: 10 Essential Extensions for Basic Productivity — part2
published: 2024-02-19
canonical_url: https://dev.to/sadanandgadwal/supercharge-your-vscode-experience-10-essential-extensions-for-basic-productivity-part2-4pid
tags: vscode, extensions, productivity, sadanandgadwal
---
Visual Studio Code (VSCode) has become the go-to choice for developers across various domains due to its versatility, extensibility, and user-friendly interface. One of the key features that make VSCode stand out is its vast ecosystem of extensions, which can significantly enhance your coding experience. In this article, we’ll explore some essential extensions that can streamline your development workflow and boost productivity.

1) ESLint

ESLint is a powerful tool for ensuring code quality and consistency in JavaScript projects. This extension integrates ESLint directly into VSCode, providing real-time feedback on potential errors and stylistic issues as you write code. With customizable rules and quick-fix suggestions, ESLint helps maintain code readability and adherence to best practices.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p0cl50y9qq0ehvzhsrrs.png)

2) ES7+ React/Redux/React-Native Snippets

For developers working with React, Redux, or React Native, this extension is a time-saving gem. It offers a collection of snippets for common code patterns and boilerplate code, allowing you to quickly scaffold components, actions, reducers, and more. With support for ES7+ syntax, you can efficiently write React applications with minimal typing.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w2coxhwo5yq2qqp9wtet.png)

3) indent-rainbow

Keeping track of nested code blocks can be challenging, especially in large codebases. indent-rainbow enhances code readability by colorizing indentation levels, making it easier to visually distinguish different scopes and structures. This extension improves code comprehension and reduces the likelihood of indentation-related errors.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k7jhw6lwmlblao5x927v.png)

4) Import Cost

Import Cost is a handy tool for assessing the impact of imported libraries on your project’s bundle size.
It displays the size of each imported package directly in the editor, allowing you to make informed decisions about dependencies and optimize your application’s performance. By visualizing import costs, you can prioritize lightweight alternatives and minimize unnecessary bloat.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/659j0zha8n03bjsvjwln.png)

5) Better Comments

Comments play a crucial role in code documentation and understanding. Better Comments enriches your commenting experience by introducing different comment styles with customizable syntax highlighting. You can categorize comments based on their significance, such as “TODO,” “FIXME,” or “IMPORTANT,” making it easier to identify action items and prioritize tasks within your codebase.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhodkik9ft92qben9e8w.png)

6) Code Time

Code Time is a productivity tool that provides insights into your coding habits and time spent on different projects. It tracks metrics such as coding activity, keystrokes, and overall productivity, helping you identify areas for improvement and maintain a healthy work-life balance. With customizable goals and performance benchmarks, Code Time empowers you to optimize your coding workflow and achieve greater efficiency.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/65zjp8s5lim9izwac2e8.png)

7) Auto Import

Managing imports manually can be tedious and error-prone, especially in large projects with numerous dependencies. Auto Import simplifies the import process by automatically adding import statements as you reference new symbols in your code. This extension supports various programming languages and frameworks, reducing boilerplate and streamlining the development experience.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmnp92tz4qodnrljr6d0.png)

8) Auto Close Tag

Writing HTML or XML code often involves typing matching opening and closing tags — a repetitive task that can disrupt your flow. Auto Close Tag automatically generates closing tags as you type, eliminating the need for manual tag completion and reducing syntax errors. This extension improves code consistency and accelerates markup development in VSCode.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/80cfmkrszv301e3nntw5.png)

9) Auto Rename Tag

Changing the name of an HTML tag or XML element usually requires updating the corresponding closing tag to maintain consistency. Auto Rename Tag simplifies this process by automatically renaming matching opening and closing tags as you edit the tag name. With seamless tag synchronization, you can refactor code more efficiently and ensure structural integrity across your documents.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5l2u4qwp69qriw9q7av1.png)

10) Code Runner

Code Runner is a versatile tool for executing code snippets or entire files directly within VSCode. It supports multiple programming languages and provides a convenient way to test and debug code without switching to external terminals or IDEs. With customizable execution settings and output visibility, Code Runner enhances your coding workflow and facilitates rapid prototyping and experimentation.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/40rx00bc5g0e4xcx8dv3.png)

Bonus Extension: Version Lens

Version Lens is a feature-rich extension that enhances dependency management in package.json files. It displays the latest available versions of npm packages directly within the editor, allowing you to stay up-to-date with the latest releases and security patches.
With integrated version information and changelogs, Version Lens enables informed decision-making and simplifies the process of updating dependencies in your projects.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebd89kb8iv8rclxft5pm.png)

In conclusion, these extensions represent just a fraction of the vast ecosystem available in Visual Studio Code. By leveraging the power of extensions, you can customize your development environment to suit your workflow preferences and maximize productivity. Whether you’re a seasoned developer or just getting started, exploring and integrating these tools into your VSCode setup can significantly enhance your coding experience and help you write better software faster.

🌟 Stay Connected! 🌟

Hey there, awesome reader! 👋 Want to stay updated with my latest insights? Follow me on social media!

[🐦](https://twitter.com/sadanandgadwal) [📸](https://www.instagram.com/sadanand_gadwal/) [📘](https://www.facebook.com/sadanandgadwal7) [💻](https://github.com/Sadanandgadwal) [🌐](https://sadanandgadwal.me/) [💼](https://www.linkedin.com/in/sadanandgadwal/)

[Sadanand Gadwal](https://dev.to/sadanandgadwal)
sadanandgadwal

---
published: 2024-02-19
canonical_url: https://dev.to/briancaffey/rocket-league-botchat-powered-by-tensorrt-llm-my-submission-for-nvidias-generative-ai-on-rtx-pcs-developer-contest-2oao
---
---
title: Rocket League BotChat powered by TensorRT-LLM: My submission for NVIDIA's Generative AI on RTX PCs Developer Contest
published: true
description: A Rocket League plugin that lets bots chat!
tags: nvidia,rtx,llm,rocketleague
---

![Rocket League BotChat](https://briancaffey.github.io/img/rlbc/cover.png)

## tl;dr

This article is about my submission to NVIDIA's Generative AI on RTX PCs Developer Contest: Rocket League BotChat. Rocket League BotChat is a BakkesMod plugin for Rocket League that allows bots to send chat messages based on in-game events. It is designed to be used with a local LLM service optimized and accelerated with NVIDIA's TensorRT-LLM library.

Here's my project submission post on 𝕏:

{% twitter 1760529251072118901 %}

Here's a link to the [Rocket League BotChat GitHub repository](https://github.com/briancaffey/RocketLeagueBotChat).

## NVIDIA's Gen AI Developer Contest

The following email caught my attention last month:

> Generative AI on RTX PCs Developer Contest: Build your next innovative Gen AI project using NVIDIA TensorRT or TensorRT-LLM on Windows PC with NVIDIA RTX systems

The part about “on Windows PC” made me think: why would a developer contest focus on a particular operating system? I use all three of the major operating systems: macOS, Ubuntu and Windows 11, but most of the development work I do is on macOS and Ubuntu. I discovered WSL (Windows Subsystem for Linux) a few years ago and really enjoy using that for development as well, but I had never considered doing development work on Windows outside of WSL. I had also never used any of the Windows-specific development frameworks like .NET or Visual Studio.

My experience with Windows goes back to 2016 when I built my first PC with an NVIDIA GeForce GTX 1080 graphics card. When I built another personal computer last year in 2023, getting the NVIDIA GeForce RTX 4090 graphics card was a big step up. I bought two NVMe drives in order to dual boot into both Windows and Ubuntu operating systems.
Switching between the operating systems requires turning off the computer, going into the BIOS settings, changing the boot order and restarting the computer.

Last year I started learning more about AI image generation using Stable Diffusion with programs like Automatic1111, InvokeAI and ComfyUI. I set up everything on my PC's Ubuntu operating system, and frequently had to switch between using Ubuntu for working with Stable Diffusion and Windows for gaming and other Windows-specific software. The friction of having to constantly switch operating systems pushed me to move my Stable Diffusion software workflows to Windows. All of my models and images are stored on external drives, so moving things over to Windows was pretty easy. I learned PowerShell and got more familiar with how Windows works as a development machine. Environment variables and system variables are one example of how Windows does things differently compared to Linux-based operating systems. And just like that, I became a Windows developer! This experience got me interested in coming up with an idea for the NVIDIA Generative AI on NVIDIA RTX PCs Developer Contest.

![winfetch](https://briancaffey.github.io/img/rlbc/winfetch.png)

## Coming up with an Idea

The contest description and some related NVIDIA articles about the contest helped me with brainstorming:

> Whether it’s a RAG-based chatbot, a plug-in for an existing application, or a code generation tool, the possibilities are endless.

> Many use cases would benefit from running LLMs locally on Windows PCs, including gaming, creativity, productivity, and developer experiences.

This contest is focused on NVIDIA's consumer hardware line: GeForce RTX. It has a diverse set of use cases including gaming, crypto mining, VR, simulation software, creative tools and new AI techniques including image generation and LLM (Large Language Model) inference.
![A stacked bar chart showing the composition of Nvidia's revenue each quarter going back to fiscal 2019](https://g.foolcdn.com/image/?url=https%3A%2F%2Fg.foolcdn.com%2Feditorial%2Fimages%2F764886%2Fnvda_revenue_bar.png&op=resize&w=700)

Gaming seemed like an interesting avenue for me to explore. PC gaming is still an industry that develops primarily for Windows operating systems, and the gaming segment has been the largest revenue driver for NVIDIA in recent years, only recently surpassed by the data center segment. GPUs are needed to render graphics of enormous open-world environments. Some story-driven games include huge amounts of dialogue and can be considered huge literary works in their own right. Red Dead Redemption and Genshin Impact are two massively popular games of this type. There might be an interesting project idea that could use LLMs and RAG (retrieval augmented generation), but I don't play these types of games and it didn't seem practical for a project that would be built in just over a month.

I thought about trying to build something for a simpler game that I already know. Rocket League is a vehicular soccer game that is played on both game consoles and on PCs. It is an eSport with a very high skill ceiling and a massive player base (85 million active players in the last 30 days). I started playing it during the pandemic with some of my friends and we all got hooked. We also came to learn that Rocket League's in-game chat varies from entertaining to annoying to toxic, and in some cases is even sportsmanlike.

One other thing I learned about Rocket League is that it has an active modding community. Developers create plugins for the game for all different purposes, such as coaching, practice drills, capturing replays, tracking player statistics, etc. Most Rocket League mods are written in a popular framework called BakkesMod (developed by Andreas "bakkes" Bakke, a Norwegian software engineer).
Rocket League's in-game chat inspired the idea for my submission to NVIDIA's Generative AI Developer Contest: Rocket League BotChat. The idea for my project is to build a plugin with BakkesMod that allows Rocket League bots to send chat messages based on game events, using an LLM accelerated and optimized by TensorRT-LLM (more on TensorRT-LLM soon!).

Bots are built into the Rocket League game and you can play with or against them in offline matches. However, the built-in bots are not very good. Another 3rd-party project called RLBot allows players to play against community-developed AI bots that are built with machine learning frameworks like TensorFlow and PyTorch. These bots are very good, but they are not infallible.

My contest project idea was now clear: develop a plugin for Rocket League capable of sending messages from bot players. This idea seemed to check the boxes for the large language model category of NVIDIA's developer contest: develop a project in a Windows environment for a Windows-specific program, and use an LLM powered by TensorRT-LLM.

![RLBot](https://briancaffey.github.io/img/rlbc/bot.png)

## Putting together the puzzle pieces

With this idea in mind, I looked into the project's feasibility. I really had no idea if this would work. I looked through the BakkesMod documentation and found some helpful resources that gave me confidence that I could pull something together for at least a proof of concept:

- The BakkesMod Plugin Wiki: https://wiki.bakkesplugins.com/
- HttpWrapper for sending HTTP requests from BakkesMod
- StatEvents that allow for running custom code when specific event functions are triggered in the game (such as scoring a goal, or making a save)
- The BakkesMod plugin template: https://github.com/Martinii89/BakkesmodPluginTemplate
  - This provides a great starting-off point for developing BakkesMod plugins.
Plugins for BakkesMod are written in C++, and this repo provides an organized file structure that allows you to get started quickly.
- Plugin Tutorial: https://wiki.bakkesplugins.com/plugin_tutorial/getting_started/
- Open-source chat-related BakkesMod plugins on GitHub:
  - BetterChat: https://github.com/JulienML/BetterChat
  - Translate: https://github.com/0xleft/trnslt

Starting with the Plugin Template, I wrote a simple console command that, when triggered, sends an HTTP request to localhost:8000/hello. I set up a Hello World Flask app running on localhost:8000 and I was able to get a response from my Hello World server! There didn't seem to be any network or permission errors that would prevent my game code from communicating with other applications on my PC.

Next I started looking into how to build and run optimized LLMs with NVIDIA's TensorRT-LLM library, the software that this contest is promoting. The contest announcement included an interesting building block that I thought could be very useful: an example repo showing how to run `CodeLlama-13b-instruct-hf` optimized by TensorRT-LLM to provide inference for a VSCode extension called Continue (Continue.dev).

- `CodeLlama-13b-instruct-hf` is an open source model from Meta that is trained on code and can help with code generation tasks
- TensorRT-LLM is a Python library that accelerates and optimizes inference performance of large language models. It takes a large language model like `CodeLlama-13b-instruct-hf` and generates an engine that can be used for doing inference
- VSCode is an open source code editor developed by Microsoft with a large number of community plugins
- Continue.dev is a startup backed by Y Combinator that is developing an open-source autopilot (code assistant) for VSCode and JetBrains that works with local LLMs or paid services like ChatGPT

To get the coding assistant project working I needed to build the TensorRT-LLM engine.
Building TensorRT-LLM engines on Windows can be done in one of two ways:

- using a "bare-metal" virtual environment on Windows (with PowerShell)
- using WSL

At the time of writing, building a TensorRT-LLM engine on Windows can only be done with version v0.6.1 of the TensorRT-LLM repo and version v0.7.1 of the tensorrt_llm Python package. With WSL you can use the up-to-date versions of the TensorRT-LLM repo (main branch). The engines produced by Windows and WSL (Ubuntu) are not interchangeable, and you will get errors if you try to use an engine created with one operating system on another operating system.

Once the engines are built you can use them to run the example from the trt-llm-as-openai-windows repo. The example repo exposes an OpenAI-compatible API locally that can do chat completions. You then need to configure the Continue.dev extension to use the local LLM service:

```json
{
  "title": "CodeLlama-13b-instruct-hf",
  "apiBase": "http://192.168.5.96:5000/",
  "provider": "openai",
  "apiKey": "None",
  "model": "gpt-4"
}
```

The Continue.dev extension using CodeLlama-13b-instruct-hf accelerated and optimized by TensorRT-LLM is very fast. According to this post on Continue.dev's blog, C++ is a "first tier" language:

> C++ has one of the largest presences on GitHub and Stack Overflow. This shows up in its representation in public LLM datasets, where it is one of the languages with the most data. Its performance is near the top of the MultiPL-E, BabelCode / TP3, MBXP / Multilingual HumanEval, and HumanEval-X benchmarks. However, given that C++ is often used when code performance and exact algorithm implementation is very important, many developers don’t believe that LLMs are as helpful for C++ as some of the other languages in this tier.

Most of the time I'm working with either Python or TypeScript. I had read about C++ but hadn't used it for anything before doing this project.
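Because the local server speaks the OpenAI chat-completions dialect, any client can talk to it with a standard request body. Here is a minimal Python sketch of assembling such a payload; `build_chat_request` is a hypothetical helper (not part of the example repo), and it assumes the server exposes the usual `/v1/chat/completions` route:

```python
import json

# Sketch of an OpenAI-style chat-completion request body for the local
# TensorRT-LLM server. build_chat_request is a hypothetical helper, and the
# endpoint path below is assumed, not taken from the example repo.
def build_chat_request(messages, model="gpt-4", temperature=0.7):
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }

messages = [
    {"role": "system", "content": "You are an elite AI player in the car soccer game Rocket League."},
    {"role": "user", "content": "You just scored a goal against the human player!"},
]
body = json.dumps(build_chat_request(messages))
# POST `body` to http://localhost:5000/v1/chat/completions and read
# response["choices"][0]["message"]["content"] from the reply.
```

This mirrors what both the Continue.dev extension and the game plugin send over the wire; only the message contents differ.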
I primarily used Microsoft Visual Studio to build the plugin, but VSCode with the Continue.dev autopilot extension was helpful for tackling smaller problems in a REPL-like environment. For example, I used Continue.dev in VSCode to figure out how to handle JSON. Coming from the Python and JavaScript languages, I found the nlohmann/json JSON library syntax to be somewhat different. For example, here is how to add a message to `messages` in the body of an OpenAI API request:

```cpp
messages.push_back({ {"role", role}, {"content", content } });
```

In Python the code for appending a message to a list of messages would be written differently:

```python
messages.append({"role": role, "content": content})
```

## Development environment

While working on different projects using web technologies and frameworks in the Python and JavaScript ecosystems, I developed an appreciation for well-structured development environments that are easy to use. "Development environment" refers to the tools and processes by which a developer can make a change to source code and see those changes reflected in some version of the application running locally. The local environment (the developer's computer) should be a close proxy for the production environment where the code will ultimately be deployed for end users. For this project the local development environment is the PC itself, which simplifies things. A development environment should support hot reloading so incremental changes can be run to test functionality, offering a tight feedback loop.

I really like the development environment for this project. Here's a screenshot that shows the different parts of the development environment I used for working on Rocket League BotChat:

![Screenshot of Rocket League BotChat development environment](https://briancaffey.github.io/img/rlbc/devenv2.png)

- Rocket League (running with the `-dev` flag turned on).
The console is helpful for viewing log messages, and the plugin settings panel can be used to view and change plugin configuration values. BakkesMod also needs to be running in order to inject plugin code into the game engine
- Visual Studio for working on the plugin code. `Control`+`Shift`+`B` rebuilds the code and automatically reloads the plugin in the game
- OpenAI-compatible LLM server powered by TensorRT-LLM (using `Llama-2-13b-chat-hf` with AWQ INT4 quantization) running in a Docker container on Ubuntu in WSL
- VSCode for debugging C++ code with the Continue.dev extension powered by TensorRT-LLM (using `CodeLlama-13b-instruct-hf` with AWQ INT4 quantization) running in a virtual environment on Windows

### Building the TensorRT-LLM engines

I was able to build and run the TensorRT-LLM engines for my game plugin's inference and the Continue.dev extension's inference both in Python virtual environments on Windows and on Ubuntu in WSL. For building the Llama-2-13b-chat-hf model with INT4 AWQ quantization on Windows 11 I used this command:

```
(.venv) PS C:\Users\My PC\GitHub\TensorRT-LLM\examples\llama> python build.py --model_dir D:\llama\Llama-2-13b-chat-hf\ --quant_ckpt_path D:\llama\Llama-2-13b-chat-hf\llama_tp1_rank0.npz --dtype float16 --use_gpt_attention_plugin float16 --use_gemm_plugin float16 --use_weight_only --weight_only_precision int4_awq --per_group --enable_context_fmha --max_batch_size 1 --max_input_len 3500 --max_output_len 1024 --output_dir D:\llama\Llama-2-13b-chat-hf\single-gpu\ --vocab_size 32064
```

### Running the TensorRT-LLM engines

Using Windows PowerShell to start the CodeLlama server for Continue.dev:

```
(.venv) PS C:\Users\My PC\GitHub\trt-llm-as-openai-windows> python .\app.py --trt_engine_path "D:\llama\CodeLlama-13b-Instruct-hf\trt_engines\1-gpu\" --trt_engine_name llama_float16_tp1_rank0.engine --tokenizer_dir_path "D:\llama\CodeLlama-13b-Instruct-hf\" --port 5000 --host 0.0.0.0
```

Tip: Adding `--host 0.0.0.0` isn't
required here, but it allows me to use the CodeLlama/TensorRT-LLM server with VSCode on any computer on my local network, using my PC's local IP address in the Continue.dev configuration.

Using Docker in WSL to start the Llama-2-13b-chat-hf LLM server:

```
root@0a5b5b75f079:/code/git/TensorRT-LLM/examples/server/flask# python3 app.py --trt_engine_path /llama/Llama-2-13b-chat-hf/trt_engines/1-gpu/ --trt_engine_name llama_float16_t_rank0.engine --tokenizer_dir_path /llama/Llama-2-13b-chat-hf/ --port 5001 --host 0.0.0.0
```

Note: Here I also add `--host 0.0.0.0`, but this is required in order for the service in the Docker container to be reached from WSL by the game running on Windows.

BakkesMod includes a console window that came in handy for debugging errors during development.

At the beginning of this developer contest on January 9, NVIDIA announced Chat with RTX. This is a demo program for Windows that automates a lot of the processes needed to set up a TensorRT-LLM-powered LLM running on your PC. Keep an eye on this project, as it may become the best way to install and manage large language models on Windows PCs.

![Chat with RTX](https://briancaffey.github.io/img/rlbc/chat_with_rtx.jpeg)

## How it works

Here's a quick look at key parts of the plugin source code (https://github.com/briancaffey/RocketLeagueBotChat).

### Hooking events

Hooking events is the core of how this plugin works. StatTickerMessage events cover most of the events that are triggered in Rocket League, such as scoring a goal, making a save or demolishing a car.
```cpp
// Hooks different types of events that are handled in onStatTickerMessage
// See https://wiki.bakkesplugins.com/functions/stat_events/
gameWrapper->HookEventWithCallerPost<ServerWrapper>("Function TAGame.GFxHUD_TA.HandleStatTickerMessage",
    [this](ServerWrapper caller, void* params, std::string eventname) {
        onStatTickerMessage(params);
    });
```

### Handling events and building the prompt

We can unpack values from the event to determine the player to which the event should be attributed. The code then translates the game event and related data into an English sentence. This is appended to a vector of message objects with the `appendToPrompt` method.

```cpp
// handle different events like scoring a goal or making a save
if (statEvent.GetEventName() == "Goal") {
    // was the goal scored by the human player or the bot?
    if (playerPRI.memory_address == receiver.memory_address) {
        appendToPrompt("Your human opponent just scored a goal against you! " + score_sentence, "user");
    }
    else {
        appendToPrompt("You just scored a goal against the human player! " + score_sentence, "user");
    }
}
```

### Making requests and handling responses

The last main part of the code makes a request to the LLM server with the prompt that we have formed above based on game messages. This code should look familiar to anyone who has worked with OpenAI's API.

```cpp
std::string message = response_json["choices"][0]["message"]["content"];
```

The `LogToChatbox` method is used to send a message to the in-game chat box with the name of the bot that is sending the message. Since messages could possibly be longer than the limit of 120 characters, I send messages to the chatbox in chunks of 120 characters at a time.

```cpp
gameWrapper->LogToChatbox(messages[i], this->bot_name);
```

That's it! The code isn't that complicated. I had to sanitize the message so that it would not include emoji or the stop character that the LLM server would include in messages (`</s>`).
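The plugin does this sanitizing and chunking in C++; the logic can be sketched in Python like so (helper names here are hypothetical, not the plugin's actual function names):

```python
# Sketch of the sanitize-and-chunk step described above. The plugin implements
# this in C++; this Python equivalent uses hypothetical helper names.
def sanitize(message: str) -> str:
    """Strip the stop token and non-ASCII characters (e.g. emoji)."""
    message = message.replace("</s>", "")
    return "".join(ch for ch in message if ord(ch) < 128).strip()

def chunk(message: str, limit: int = 120) -> list[str]:
    """Split a message into chatbox-sized pieces of at most `limit` chars."""
    return [message[i:i + limit] for i in range(0, len(message), limit)]

# Each chunk would then be passed to LogToChatbox in turn, as shown above.
```

Filtering to ASCII is a blunt but simple way to drop emoji, which is why the system prompt alone wasn't relied on to keep them out.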
Oddly, I had a hard time getting the LLM to avoid emoji even when I instructed it not to use emoji in the system prompt.

## Rocket League BotChat UI

Most BakkesMod plugins for Rocket League have UIs that allow for controlling settings. Here's what the UI for Rocket League BotChat looks like:

![Rocket League BotChat Plugin UI](https://briancaffey.github.io/img/rlbc/rlbcui.png)

### System prompt

The system prompt instructs the bot on how it should reply. This is an important part of the prompt engineering for this project, and I used Postman to experiment with lots of different types of instructions. Here's the default prompt that I used:

```cpp
std::string ai_player = "You are an elite AI player in the car soccer game Rocket League. ";
std::string one_v_one = "You are playing a 1v1 match against a human player. ";
std::string instructions = "You will send short chat messages to your human opponent in response to what happens in the game. ";
std::string details = "Respond to the human player with brief messages no more than 12 words long.";

// initial system prompt
std::string initial_system_prompt = ai_player + one_v_one + instructions + details;
```

The last part, about messages being no more than 12 words long, was the most effective way of controlling the length of responses from the LLM. I tried changing the `max_output_len` when building the TensorRT engine, but this degraded the quality of the responses. The system prompt can be changed by the user, and experimenting with different system prompts was a lot of fun!

### Temperature and Seed

These values are included in the body of the request to the LLM, but I didn't have much luck with them. Early on I had issues with getting sufficient variation in the responses from the LLM, so I tried using random values for seed and temperature, but this didn't really work.

### Messages

This section of the UI displays the messages that are used in requests to the LLM.
In order to keep the prompt within the context window limit, I only used the most recent six messages sent from the "user" (which are messages about game events) and the "assistant" (which are LLM responses from the bot). Whenever the user changes the system prompt, the messages vector is reset to only include the new system prompt.

## Demo Video for Contest Submission

I used Blender's sequence editor to create a demo video for my contest submission. I don't edit a lot of videos, but it is a fun process and I learned a lot about Blender and non-linear video editing along the way. Here's how I approached creating the demo video for my project.

![Blender video sequence editor UI used to create my project video](https://briancaffey.github.io/img/rlbc/blender.png)

- Structure the video in three main parts: introduction to my project and the contest, description of how it works, demo of my project in action
- Find an upbeat song from playlists included in Rocket League with no vocals to use as background music. I used "Dads in Space" by Steven Walking
- Get stock Rocket League footage from YouTube with youtube-dl (this is an amazing tool!). I mostly used footage from the RLCS 2023 Winter Major Trailer. This video was uploaded at 24 fps, and my Blender video project frame rate was set to 29.97, so I used ffmpeg to convert the video from 24 fps to 29.97 fps
- Record myself playing Rocket League with my plugin enabled using NVIDIA Share. Miraculously, I was able to score against the Nexto bot!
- Use ComfyUI to animate some of the images used in the contest description and use these in my video

![ComfyUI workflow for animating images using img2vid model](https://briancaffey.github.io/img/rlbc/comfyui.png)

- Use ElevenLabs to narrate a simple voice-over script that describes the video content. This turned out a lot better than I expected.
I paid $1 for the ElevenLabs creator plan and got lots of tokens to experiment with different settings for voice generation using a clone of my voice.

![Eleven Labs Voice Generation Web UI](https://briancaffey.github.io/img/rlbc/elevenlabs.png)

## Shortcomings of my project

This plugin is a proof of concept and it has some shortcomings. One issue is that some events that my plugin listens to can happen in rapid succession. This results in "user" and "assistant" prompts getting out of order, which breaks assertions in the trt-llm-as-openai-windows repo. It would make more sense to have the bot send messages not immediately after the events are triggered, but on a different type of schedule that allows for multiple events to happen before sending the prompt to the LLM.

There are lots of events that would be interesting for the bot to react to, but I decided not to prompt on every event since the above situation would be triggered frequently. For example, suppose I listen for events like taking a shot on goal and scoring a goal. If the goal is scored immediately after the shot is taken, then the second prompt is sent before the response for the first prompt comes back. For this reason I decided to simply not listen to events like "shot on goal" to avoid prompt messages getting out of order. This could also be addressed with more code logic.

Prompt engineering is something that can always be improved. It is hard to measure, and testing it is subjective. I am pleased with the results I was able to capture for the demo video, but the quality of the LLM responses can vary depending on what happens during gameplay. One idea I had to address this would be to provide multiple English translations for any given event, and then select one at random. This might help improve the variety of responses.

I also faced some limitations that are built in to the game itself.
For example, it is not possible for a player to send messages to the in-game chat in offline matches, which makes sense! I built a backdoor for doing this through the BakkesMod developer console, so you can send messages to the bot by typing something like `SendMessage Good shot, bot!`, for example.

## What's next?

Participating in this contest was a great opportunity to learn more about LLMs and how to use them to extend programs in a Windows environment. It was also a lot of fun to build something by putting together new tools like TensorRT-LLM. Seeing the bot send me chat messages was very satisfying when I first got it to work!

Overall it is a pretty simple implementation, but this idea could be extended to produce useful applications. I could imagine a "Rocket League Coach" plugin that expands on this idea to give helpful feedback based on higher-level data, statistical trends, training goals, etc.

I think the gaming industry's adoption of LLMs for new games will be BIG, and it will present a huge opportunity for LLM optimization and acceleration software like TensorRT-LLM that I was able to use in my Rocket League BotChat. This is not to discredit the work of writers, who play an important role in game development.

I'm excited to see what other developers have built for this contest, especially submissions that are building mods for games using TensorRT-LLM. Thanks NVIDIA and the TensorRT and TensorRT-LLM teams for organizing this contest! Keep on building!!
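The batching fix suggested in the shortcomings section (collect events for a short window, then send one combined prompt) could be sketched like this; everything here, including the `EventBatcher` name and the two-second window, is an illustrative assumption rather than the plugin's code:

```python
import time

BATCH_WINDOW = 2.0  # seconds to collect events before sending one prompt (illustrative)

class EventBatcher:
    """Collect rapid-fire game events and flush them as a single combined prompt."""

    def __init__(self, window=BATCH_WINDOW, now=time.monotonic):
        self.window = window
        self.now = now  # injectable clock makes the sketch testable
        self.pending = []
        self.last_flush = now()

    def add(self, event):
        self.pending.append(event)

    def maybe_flush(self):
        """Return one combined prompt if the window has elapsed, else None."""
        if self.pending and self.now() - self.last_flush >= self.window:
            prompt = "; ".join(self.pending)
            self.pending = []
            self.last_flush = self.now()
            return prompt
        return None

# Usage: a shot followed immediately by a goal becomes one prompt, not two
t = [0.0]
batcher = EventBatcher(window=2.0, now=lambda: t[0])
batcher.add("The human took a shot on goal")
batcher.add("The human scored a goal")
t[0] = 2.5
print(batcher.maybe_flush())
# The human took a shot on goal; The human scored a goal
```

This keeps "user" and "assistant" turns strictly alternating, since at most one request is in flight per window.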
briancaffey
1,765,507
Calculate Age/Prediction app using Html, CSS & Javascript TFjs
Age Prediction JavaScript is an app that uses OpenCV and TensorFlow.js machine learning models...
0
2024-03-03T18:47:07
https://pratikpathak.com/age-prediction-using-javascript/
ai, html, javascript
--- 
title: Calculate Age/Prediction app using Html, CSS & Javascript TFjs
published: true
date: 2024-02-19 09:52:44 UTC
tags: AI,HTML,Javascript,JS
canonical_url: https://pratikpathak.com/age-prediction-using-javascript/
---

Age Prediction JavaScript is an app that uses OpenCV and TensorFlow.js machine learning models to predict the age of a person using face recognition. It is built entirely on an open-source TF.js model.

## Code for Age Prediction using JS

This project is a web-based application that uses OpenCV.js and TensorFlow.js for real-time image and video processing.

[Live Preview](https://zpratikpathak.github.io/25-Javascript-Projects-for-beginner/03-age-prediction-tfjs) | [Source Code](https://github.com/zpratikpathak/25-Javascript-Projects-for-beginner)

**index.html**

```
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Opencv JS</title>
    <script src="js/utils.js"></script>
    <script async src="js/opencv.js" onload="openCvReady();"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@2.0.0/dist/tf.min.js"></script>
    <style>
      canvas {
        position: absolute;
      }
      h2 {
        position: relative;
        top: -250px;
        right: 350px;
      }
      body {
        margin: 0;
        background-color: whitesmoke;
        padding: 0;
        width: 100vw;
        height: 100vh;
        display: flex;
        justify-content: center;
        align-items: center;
      }
    </style>
  </head>
  <body>
    <video id="cam_input" height="680" width="900"></video>
    <canvas id="canvas_output"></canvas>
    <script type="text/JavaScript" src="script.js"></script>
    <!-- <h2 id="output">Initializing</h2> -->
  </body>
</html>
```

**script.js**

```
let model;

// openCvReady is the function that will be executed when the opencv.js file is loaded
function openCvReady() {
  cv["onRuntimeInitialized"] = () => {
    // The variable video grabs the video element ("cam_input" is the id of the video tag)
    let video = document.getElementById("cam_input");
    // navigator.mediaDevices.getUserMedia is used to access the webcam
    navigator.mediaDevices
      .getUserMedia({ video: true, audio: false })
      .then(function (stream) {
        video.srcObject = stream;
        video.play();
      })
      .catch(function (err) {
        console.log("An error occurred! " + err);
      });
    // src and dst hold the source and destination image matrices
    let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
    let dst = new cv.Mat(video.height, video.width, cv.CV_8UC1);
    // gray holds the grayscale image of src
    let gray = new cv.Mat();
    // cap holds the current frame of the video
    let cap = new cv.VideoCapture(cam_input);
    // RectVector is used to hold the vectors of different faces
    let faces = new cv.RectVector();
    let predictions = "Detecting...";
    // classifier holds the classifier object
    let classifier = new cv.CascadeClassifier();
    let utils = new Utils("errorMessage");
    // crop holds the ROI of the face
    let crop = new cv.Mat(video.height, video.width, cv.CV_8UC1);
    let dsize = new cv.Size(48, 48);
    // Loading the Haar cascade face detector
    let faceCascadeFile = "haarcascade_frontalface_default.xml"; // path to xml
    utils.createFileFromUrl(faceCascadeFile, faceCascadeFile, () => {
      classifier.load(faceCascadeFile); // in the callback, load the cascade from file
    });
    // Loading the model asynchronously, as loading may take a few milliseconds;
    // the IIFE takes and returns nothing, and `model` holds the loaded model
    (async () => {
      model = await tf.loadLayersModel("./model/model.json");
      console.log(model);
    })();
    const FPS = 30;
    // processVideo will be executed recursively
    function processVideo() {
      let begin = Date.now();
      cap.read(src);
      src.copyTo(dst);
      cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0); // converting to grayscale
      try {
        classifier.detectMultiScale(gray, faces, 1.1, 3, 0); // detecting the faces
        console.log(faces.size());
      } catch (err) {
        console.log(err);
      }
      // iterating over all the detected faces
      for (let i = 0; i < faces.size(); ++i) {
        let face = faces.get(i);
        // filtering out the boxes with an area of less than 40000
        if (face.width * face.height < 40000) {
          continue;
        }
        let point1 = new cv.Point(face.x, face.y);
        let point2 = new cv.Point(face.x + face.width, face.y + face.height);
        // creating the bounding box
        cv.rectangle(dst, point1, point2, [51, 255, 255, 255], 3);
        // creating a rect element that can be used to extract the ROI
        let cutrect = new cv.Rect(face.x, face.y, face.width, face.height);
        // extracting the ROI
        crop = gray.roi(cutrect);
        cv.resize(crop, crop, dsize, 0, 0, cv.INTER_AREA);
        // converting the image matrix to a 4D tensor
        const input = tf.tensor4d(crop.data, [1, 48, 48, 1]).div(255);
        // making the prediction and adding it to the output canvas
        predictions = model.predict(input).dataSync(0);
        console.log(predictions);
        // adding the text above the bounding box
        cv.putText(
          dst,
          String(parseInt(predictions)),
          { x: face.x, y: face.y - 20 },
          1,
          3,
          [255, 128, 0, 255],
          4,
        );
      }
      // showing the final output
      cv.imshow("canvas_output", dst);
      let delay = 1000 / FPS - (Date.now() - begin);
      setTimeout(processVideo, delay);
    }
    // schedule the first one
    setTimeout(processVideo, 0);
  };
}
```

25+ more JavaScript projects for beginners: [click here to learn more](https://pratikpathak.com/top-javascript-projects-with-source-code-github/)

## More about index.html

The HTML structure includes a `<video>` element for displaying a video stream (from a webcam, given the id "cam_input") and a `<canvas>` element for displaying the processed output. The project uses several JavaScript files:

- `utils.js` is a utility script that provides helper functions used in the project.
- `opencv.js` is the main OpenCV.js library, which provides a wide range of image and video processing functions.
- `tf.min.js` is the TensorFlow.js library, which provides machine learning capabilities.
- `script.js` is the main script for the application, which uses the above libraries to implement the specific functionality of the project.

The CSS within the `<style>` tags positions the canvas and video elements and styles the body of the page.

## More about script.js

The `script.js` file provides the functionality for a web-based application that uses OpenCV.js and TensorFlow.js for real-time face detection and age prediction from a webcam feed. Here's a breakdown of the JavaScript code:

- `openCvReady()` is the function that will be executed when the OpenCV.js library is loaded.
- Inside `openCvReady()`, the `cv["onRuntimeInitialized"]` function is defined to run once the OpenCV.js runtime is initialized.
- The video from the webcam is accessed using `navigator.mediaDevices.getUserMedia` and displayed in the "cam_input" video element.
- Several OpenCV.js objects are created to hold the video frames (`src`, `dst`, `gray`, `cap`), detected faces (`faces`), and the face classifier (`classifier`).
- The Haar cascade face detector is loaded from an XML file using `utils.createFileFromUrl` and `classifier.load`.
- The TensorFlow.js model is loaded from a JSON file using `tf.loadLayersModel`.
- `processVideo()` is a function that is called repeatedly to process each frame of the video. It reads a frame from the video, converts it to grayscale, detects faces in the frame using the Haar cascade classifier, and for each detected face, it extracts the region of interest (ROI), resizes it, and feeds it to the TensorFlow.js model for age prediction. The predicted age is then drawn on the frame, and the processed frame is displayed in the "canvas_output" canvas element.
- `processVideo()` is scheduled to run repeatedly at a rate of 30 frames per second (FPS).
This script enables the application to perform real-time face detection and age prediction from a webcam feed.
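For illustration, the per-face preprocessing the script performs (crop the ROI, resize to 48×48, scale pixels to [0, 1], reshape to a (1, 48, 48, 1) tensor) can be sketched in plain Python; `preprocess_face` is a hypothetical helper, and the nearest-neighbour resize stands in for the `cv.INTER_AREA` resize in the real code:

```python
# Pure-Python sketch of the script's face preprocessing pipeline:
# crop ROI -> resize to 48x48 -> scale to [0, 1] -> shape (1, 48, 48, 1).
# preprocess_face is an illustrative name, not from the project.

def preprocess_face(gray, x, y, w, h, size=48):
    """Crop a face ROI from a grayscale frame and shape it for model.predict()."""
    roi = [row[x:x + w] for row in gray[y:y + h]]
    # nearest-neighbour resize (the real code uses cv.INTER_AREA)
    small = [[roi[r * h // size][c * w // size] for c in range(size)]
             for r in range(size)]
    # normalize to [0, 1] and add batch + channel dims: (1, size, size, 1)
    return [[[[px / 255.0] for px in row] for row in small]]

# A fake 480x640 grayscale frame, all mid-gray
frame = [[128] * 640 for _ in range(480)]
tensor = preprocess_face(frame, x=100, y=50, w=200, h=200)

print(len(tensor), len(tensor[0]), len(tensor[0][0]), len(tensor[0][0][0]))  # 1 48 48 1
```

The shape (1, 48, 48, 1) means a batch of one 48×48 image with a single grayscale channel, matching the `tf.tensor4d(crop.data, [1, 48, 48, 1]).div(255)` call in `script.js`.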
pratikpathak
1,765,627
gap -- CSS
The goal of this content is to provide a useful resource for developers who want to learn and...
0
2024-02-19T14:37:01
https://dev.to/vitoriobsb/gap-css-1l1a
codepen
<p>The goal of this content is to provide a useful resource for developers who want to learn and master the use of the CSS gap property to create more efficient and responsive layouts.</p> {% codepen https://codepen.io/cbvitorio/pen/jOJJJyv %}
vitoriobsb
1,771,933
Unveiling the Essence of Machine Learning: A Comprehensive Exploration for Data Science Enthusiasts
Introduction: In the realm of data science, where the deluge of information continues to expand...
0
2024-02-26T03:54:08
https://dev.to/brianmk/unveiling-the-essence-of-machine-learning-a-comprehensive-exploration-for-data-science-enthusiasts-2ll
machinelearning, datascience, ai
Introduction: In the realm of data science, where the deluge of information continues to expand exponentially, machine learning (ML) has emerged as a beacon of hope and innovation. This comprehensive exploration aims to dissect the intricate layers of machine learning, unraveling its principles, methodologies, applications, and the ever-evolving landscape within the domain of data science. Foundations of Machine Learning: At the heart of machine learning lies the amalgamation of statistical analysis, computer science algorithms, and domain expertise, fostering systems' abilities to learn from data and enhance their performance iteratively without explicit programming. The essence of ML is rooted in its capability to discern patterns, glean insights, and facilitate data-driven decision-making processes. Understanding the Spectrum of Machine Learning: Machine learning encompasses a spectrum of methodologies, broadly categorized into supervised, unsupervised, and reinforcement learning paradigms. Supervised learning involves training models on labeled datasets, enabling algorithms to map inputs to corresponding outputs with precision. In contrast, unsupervised learning explores unlabeled data, striving to uncover latent structures or patterns intrinsic to the dataset. Reinforcement learning thrives on feedback mechanisms, where agents learn optimal strategies through interactions with dynamic environments. Delving into Methodologies and Techniques: Within each paradigm, a plethora of methodologies and techniques flourish, catering to diverse data science tasks and challenges. Supervised learning methods include regression for predicting continuous outcomes and classification for discerning data into predefined categories. Unsupervised learning techniques encompass clustering algorithms such as k-means and dimensionality reduction methods like principal component analysis (PCA). 
Reinforcement learning algorithms, including Q-learning and deep reinforcement learning, delve into the realm of autonomous decision-making under uncertainty. Applications Pervading Industries: The applications of machine learning permeate various sectors, reshaping industries and revolutionizing business landscapes. In healthcare, ML algorithms drive advancements in disease diagnosis, drug discovery, and personalized treatment recommendations, thereby fostering precision medicine. Financial institutions leverage ML for fraud detection, risk assessment, and algorithmic trading, optimizing operational efficiency and mitigating risks. E-commerce platforms harness ML-powered recommendation systems to enhance user experience, increase customer engagement, and drive sales. Marketing strategies are bolstered through predictive analytics, customer segmentation, and sentiment analysis, enabling organizations to tailor campaigns and optimize marketing spend effectively. Navigating Challenges and Considerations: Despite its transformative potential, machine learning encounters multifaceted challenges and ethical considerations. Data quality issues, biased algorithms, model interpretability, scalability concerns, and ethical dilemmas surrounding algorithmic decision-making pose significant hurdles. Adhering to robust validation methodologies, promoting transparency and fairness, and integrating ethical frameworks are imperative to mitigate risks and foster trust in machine learning systems. Future Horizons and Emerging Trends: As the landscape of machine learning continues to evolve, propelled by advancements in technology and innovative research endeavors, several emerging trends shape the trajectory of the field. The fusion of machine learning with other domains such as natural language processing, computer vision, and reinforcement learning heralds new frontiers in AI research and applications. 
Federated learning, edge computing, and explainable AI are poised to redefine the landscape of machine learning, addressing scalability, privacy, and interpretability concerns. Conclusion: Machine learning stands as the linchpin of modern data science, empowering organizations to unlock the latent potential of data, glean actionable insights, and drive innovation across diverse domains. By delving into the intricacies of machine learning, understanding its principles, methodologies, and applications, data science enthusiasts can navigate the complexities of the digital age, harnessing the transformative power of ML to forge a path towards a data-driven future.
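As a concrete taste of the reinforcement learning methods named above, the tabular Q-learning update Q(s,a) ← Q(s,a) + α[r + γ·max Q(s',a') − Q(s,a)] fits in a few lines of Python (an illustrative sketch with arbitrary α, γ, and state encoding, not tied to any particular library):

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9, n_actions=4):
    """One tabular Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(next_state, a)] for a in range(n_actions))
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
    return Q[(state, action)]

Q = defaultdict(float)  # unseen (state, action) pairs default to 0.0
# One transition: from state 0, action 2 earned reward 1.0, landing in state 1
new_value = q_update(Q, state=0, action=2, reward=1.0, next_state=1)
print(new_value)  # 0.1  (= 0 + 0.1 * (1.0 + 0.9 * 0 - 0))
```

Repeating this update over many interactions is what lets the agent "learn optimal strategies through interactions with dynamic environments," as described above.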
brianmk
1,773,334
Build CLI Apps with Ease using Python's Click Library
Command-line interface (CLI) applications are essential tools in the software development world,...
0
2024-02-27T09:09:30
https://developer-service.blog/build-cli-apps-with-ease-using-pythons-click-library/
python, programming, cli
Command-line interface (CLI) applications are essential tools in the software development world, providing a simple way to interact with applications, scripts, or services. Python, known for its simplicity and readability, offers excellent libraries for building CLI applications, one of which is Click.

[Click](https://github.com/pallets/click/) is a Python package for creating beautiful command-line interfaces in a composable way with as little code as necessary. It's a modern alternative to argparse, providing a cleaner API and more functionality out of the box.

In this example, we'll create a basic CLI application using Click to demonstrate its ease of use and flexibility.

---

## Setting Up

First, ensure you have Click installed. If not, you can install it using pip:

```
pip install click
```

---

## A Simple CLI Application

We'll create a simple CLI tool that greets the user. The application will accept the user's name as an argument and an optional flag to shout the greeting.

```
import click

@click.command()
@click.option('--shout', is_flag=True, help="Shout the greeting.")
@click.argument('name')
def greet(name, shout):
    """Simple program that greets NAME."""
    message = f"Hello, {name}!"
    if shout:
        click.echo(message.upper())
    else:
        click.echo(message)

if __name__ == '__main__':
    greet()
```

---

Full article at: https://developer-service.blog/build-cli-apps-with-ease-using-pythons-click-library/
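For contrast with the Click version above, here is roughly the same tool written with the standard library's argparse; it needs noticeably more wiring for the same behavior (a sketch that returns the message instead of echoing it, so it is easy to test):

```python
import argparse

def greet(argv=None):
    """argparse equivalent of the Click `greet` command above."""
    parser = argparse.ArgumentParser(description="Simple program that greets NAME.")
    parser.add_argument("name")
    parser.add_argument("--shout", action="store_true", help="Shout the greeting.")
    args = parser.parse_args(argv)  # argv=None means: read sys.argv
    message = f"Hello, {args.name}!"
    return message.upper() if args.shout else message

print(greet(["World"]))             # Hello, World!
print(greet(["World", "--shout"]))  # HELLO, WORLD!
```

Click's decorators replace all of the parser setup, generate the `--help` text, and `click.echo` additionally handles encoding and piping edge cases that a plain `print` does not.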
devasservice
1,774,854
Rockwin Casino Review: Your Ultimate Guide to a Top-Quality iGaming Experience
Welcome to our in-depth review of Rockwin Casino, the latest iGaming sensation owned by the renowned...
0
2024-02-28T11:17:52
https://dev.to/weratrw/rockwin-casino-review-your-ultimate-guide-to-a-top-quality-igaming-experience-11mo
javascript, beginners, programming, webdev
Welcome to our in-depth review of [Rockwin Casino](https://rockwin.casino/), the latest iGaming sensation owned by the renowned Hollycorn N.V. In this comprehensive review, we'll explore what sets Rockwin Casino apart from its competitors, covering various aspects such as games, bonuses, registration, user interface, mobile support, licensing, and more. So, let's dive right in! 1. A Modern and Stylish User Interface 🌟 Rockwin Casino Login immediately catches your eye with its unique user interface. The design is not only visually appealing but also perfectly aligned with the casino's theme. The limited color palette creates an inviting and luxurious atmosphere that's easy on the eyes. 2. A Payment System That Caters to Everyone 💰 Whether you're new to online casinos or a seasoned player, understanding payment procedures and licensing is essential. Rockwin Casino accepts both crypto and fiat money, making it accessible to a wide range of players. Plus, it boasts a Curacao license, ensuring a secure and regulated gaming environment. 3. Generous Bonuses and Promotions 🎁 Rockwin Casino doesn't hold back when it comes to bonuses and promotions. From a loyalty club to a lucrative welcome package, cashback offers, free spins, reload bonuses, and exciting tournaments, they've got it all. Players are in for a treat with these rewards. 4. A Vast Library of Exciting Games 🎮 The heart of any online casino lies in its game selection, and Rockwin Casino doesn't disappoint. With a diverse library of popular and new titles, including slots, table games, and live dealer options, there's something for every type of player. It's the perfect recipe for endless entertainment. 5. Exceptional Customer Service 📞 Customer support can make or break a gaming experience, and Rockwin Casino understands this. Their fantastic customer service is easily accessible via live chat and email. 
The live chat option provides quick and efficient assistance, making it the preferred choice for many players. 6. Localized Versions for a Global Audience 🌍 Rockwin Casino caters to players from various countries with localized versions, such as RockWin Casino Deutschland Test, RockWin Casino Greece Αξιολόγηση, RockWin Kasyno Polska Recenzja, RockWin Casino Brasil Avaliacao, and RockWin Casino Norge anmeldelse. This approach ensures that players worldwide can enjoy their services. 7. Exciting VIP and Loyalty Programs 🏆 For those seeking an elevated gaming experience, Rockwin Casino offers a VIP program. By verifying your mobile number and making a substantial deposit, you'll gain access to exclusive offers, priority service, personal account management, and more. Additionally, the loyalty program rewards players with various prizes as they climb its levels. 8. A Wide Range of Game Providers 🎰 Rockwin Casino collaborates with some of the industry's top game developers, including Pragmatic Play, Play'N Go, Evolution, Netent, Red Tiger, Yggdrasil, Quickspin, Playtech, Betsoft, and Iron Dog Studio, among others. This ensures a diverse and high-quality gaming experience. 9. Hassle-Free Deposits and Withdrawals 💳 Rockwin Casino supports a wide array of payment methods, including digital currencies, credit/debit cards, e-wallets, instant banking, and prepaid vouchers. Players can choose from a variety of options, making transactions convenient and tailored to their preferences. 10. Secure and Licensed Gaming Environment 🔒 Rockwin Casino operates under the Curacao license, providing players with a regulated and legitimate gaming platform. While Curacao may have had its critics in the past, it offers practical benefits that have made it a choice for many operators. Conclusion: Rockwin Casino, owned by Hollycorn N.V., stands out as a top-notch iGaming platform for 2023. 
With its visually appealing user interface, diverse game library, generous bonuses, secure payment options, and exceptional customer service, it has quickly become a favorite among players worldwide. Whether you're a newcomer or a seasoned gambler, Rockwin Casino offers an engaging and rewarding gaming experience that's worth exploring. Join the excitement and see what this online casino has in store for you! 🎉 Frequently Asked Questions (FAQs) about Rockwin Casino 1. What is Rockwin Casino, and who owns it? Rockwin Casino is an online gaming platform launched in 2023. It is owned and operated by Hollycorn N.V., a well-known iGaming operator with a strong presence in the industry. Hollycorn N.V. is recognized for its popular gaming platforms, and Rockwin Casino is its latest offering. 2. What sets Rockwin Casino apart from its competitors? Rockwin Casino distinguishes itself with a unique user interface, a well-thought color scheme, and a luxurious yet casual vibe. It offers a wide range of games, accepts both crypto and fiat currencies, and holds a Curacao license. These features contribute to its appeal among players. 3. What bonuses and promotions does Rockwin Casino offer? Rockwin Casino provides various promotions, including a loyalty club, a welcome package, cashback promotions, free spin offers, reload bonuses, tournaments, and more. New players can enjoy a welcome package of up to €500 and 225 free spins, or high rollers can opt for a 50% Welcome Bonus of up to €2000. 4. What games can I find at Rockwin Casino? Rockwin Casino boasts a diverse gaming library with a vast selection of both popular and new titles. It offers a wide range of slots, table games, and live dealer options. Players can enjoy games like Roulette, Blackjack, Baccarat, and various slot titles. 5. How can I contact Rockwin Casino's customer service? Rockwin Casino provides customer support through live chat and email. 
These channels offer assistance to players whenever needed, with live chat being the preferred option for its speed and convenience. 6. Can I play at Rockwin Casino on my mobile device? Yes, Rockwin Casino offers mobile support, allowing players to enjoy their favorite games on smartphones and tablets. The platform is designed to provide a seamless gaming experience on various devices. 7. What payment methods are accepted at Rockwin Casino? Rockwin Casino accepts a wide range of payment methods, including credit and debit cards, e-wallets, cryptocurrencies, prepaid vouchers, and more. The available options may vary depending on your country of residence. 8. Is Rockwin Casino secure and licensed? Rockwin Casino operates under the Curacao license and is certified and regulated. While the Curacao license is known for its practicality and cost-effectiveness, Rockwin Casino maintains its commitment to security and legitimacy. 9. Does Rockwin Casino have a VIP program? Yes, Rockwin Casino offers a VIP program for its players. To become a VIP member, you'll need to verify your mobile number and make a single deposit of €1000 or the equivalent in supported currencies. VIP members enjoy exclusive benefits and privileges. 10. What is the Loyalty Program at Rockwin Casino? Rockwin Casino features a Loyalty Program where players earn points for every €10 in real money bets. There are ten levels in the program, each offering special prizes without wagering requirements. Prizes include bonus cash and free spins.
weratrw
1,776,057
"JEP 467: Markdown Documentation Comments" uses my original method on 2019-02-17
"JEP 467: Markdown Documentation Comments" uses my original method on 2019-02-17, It offers much more...
0
2024-02-29T13:14:33
https://dev.to/lincpa/jep-467-markdown-documentation-comments-uses-my-original-method-on-2019-02-17-3pck
java, markdown, jep, programming
"JEP 467: Markdown Documentation Comments" uses my original method on 2019-02-17, It offers much more features, Extensibility, simplicity and visualization, It works with all programming languages and markup languages. Markdown Literate programming that don't break the syntax of any programming language https://github.com/linpengcheng/PurefunctionPipelineDataflow/blob/master/doc/markdown_literary_programming.md Live preview in Notepad++ https://github.com/linpengcheng/ClojureBoxNpp ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fuydxdna9whsk5bg9p6q.png)
lincpa
1,777,453
Atlas-CountryDialCodesMapping
const CountryDialCodesMapping = { "Andorra": "+376", "United Arab Emirates": "+971", ...
0
2024-03-01T17:23:07
https://dev.to/akash32755/atlas-countrydialcodesmapping-1l0l
``` const CountryDialCodesMapping = { "Andorra": "+376", "United Arab Emirates": "+971", "Afghanistan": "+93", "Antigua and Barbuda": "+1-268", "Anguilla": "+1-264", "Albania": "+355", "Armenia": "+374", "Angola": "+244", "Antarctica": "+672", "Argentina": "+54", "American Samoa": "+1-684", "Austria": "+43", "Australia": "+61", "Aruba": "+297", "Åland Islands": "+358-18", "Azerbaijan": "+994", "Bosnia and Herzegovina": "+387", "Barbados": "+1-246", "Bangladesh": "+880", "Belgium": "+32", "Burkina Faso": "+226", "Bulgaria": "+359", "Bahrain": "+973", "Burundi": "+257", "Benin": "+229", "Saint Barthélemy": "+590", "Bermuda": "+1-441", "Brunei": "+673", "Bolivia": "+591", "Caribbean Netherlands": "+599", "Brazil": "+55", "Bahamas": "+1-242", "Bhutan": "+975", "Bouvet Island": "+47", "Botswana": "+267", "Belarus": "+375", "Belize": "+501", "Canada": "+1", "Cocos (Keeling) Islands": "+61", "Congo": "+243", "DR Congo": "+243", "Central African Republic": "+236", "Republic of the Congo": "+242", "Switzerland": "+41", "Côte d'Ivoire (Ivory Coast)": "+225", "Cook Islands": "+682", "Chile": "+56", "Cameroon": "+237", "China": "+86", "Colombia": "+57", "Costa Rica": "+506", "Cuba": "+53", "Cape Verde": "+238", "Curaçao": "+599", "Christmas Island": "+61", "Cyprus": "+357", "Czechia": "+420", "Germany": "+49", "Djibouti": "+253", "Denmark": "+45", "Dominica": "+1-767", "Dominican Republic": "+1-809, +1-829, +1-849", "Algeria": "+213", "Ecuador": "+593", "Estonia": "+372", "Egypt": "+20", "Western Sahara": "+212", "Eritrea": "+291", "Spain": "+34", "Ethiopia": "+251", "European Union": "+388", "Finland": "+358", "Fiji": "+679", "Falkland Islands": "+500", "Micronesia": "+691", "Faroe Islands": "+298", "France": "+33", "Gabon": "+241", "United Kingdom": "+44", "England": "+44", "Northern Ireland": "+44", "Scotland": "+44", "Wales": "+44", "Grenada": "+1-473", "Georgia": "+995", "French Guiana": "+594", "Guernsey": "+44", "Ghana": "+233", "Gibraltar": "+350", "Greenland": "+299", 
"Gambia": "+220", "Guinea": "+224", "Guadeloupe": "+590", "Equatorial Guinea": "+240", "Greece": "+30", "South Georgia": "+500", "Guatemala": "+502", "Guam": "+1-671", "Guinea-Bissau": "+245", "Guyana": "+592", "Hong Kong": "+852", "Heard Island and McDonald Islands": "+672", "Honduras": "+504", "Croatia": "+385", "Haiti": "+509", "Hungary": "+36", "Indonesia": "+62", "Ireland": "+353", "Israel": "+972", "Isle of Man": "+44", "India": "+91", "British Indian Ocean Territory": "+246", "Iraq": "+964", "Iran": "+98", "Iceland": "+354", "Italy": "+39", "Jersey": "+44", "Jamaica": "+1-876", "Jordan": "+962", "Japan": "+81", "Kenya": "+254", "Kyrgyzstan": "+996", "Cambodia": "+855", "Kiribati": "+686", "Comoros": "+269", "Saint Kitts and Nevis": "+1-869", "North Korea": "+850", "South Korea": "+82", "Kuwait": "+965", "Cayman Islands": "+1-345", "Kazakhstan": "+7", "Laos": "+856", "Lebanon": "+961", "Saint Lucia": "+1-758", "Liechtenstein": "+423", "Sri Lanka": "+94", "Liberia": "+231", "Lesotho": "+266", "Lithuania": "+370", "Luxembourg": "+352", "Latvia": "+371", "Libya": "+218", "Morocco": "+212", "Monaco": "+377", "Moldova": "+373", "Montenegro": "+382", "Saint Martin": "+590", "Madagascar": "+261", "Marshall Islands": "+692", "North Macedonia": "+389", "Mali": "+223", "Myanmar": "+95", "Mongolia": "+976", "Macau": "+853", "Northern Mariana Islands": "+1-670", "Martinique": "+596", "Mauritania": "+222", "Montserrat": "+1-664", "Malta": "+356", "Mauritius": "+230", "Maldives": "+960", "Malawi": "+265", "Mexico": "+52", "Malaysia": "+60", "Mozambique": "+258", "Namibia": "+264", "New Caledonia": "+687", "Niger": "+227", "Norfolk Island": "+672", "Nigeria": "+234", "Nicaragua": "+505", "Netherlands": "+31", "Norway": "+47", "Nepal": "+977", "Nauru": "+674", "Niue": "+683", "New Zealand": "+64", "Oman": "+968", "Panama": "+507", "Peru": "+51", "French Polynesia": "+689", "Papua New Guinea": "+675", "Philippines": "+63", "Pakistan": "+92", "Poland": "+48", "Saint Pierre and 
Miquelon": "+508", "Puerto Rico": "+1", "Palestine": "+970", "Portugal": "+351", "Palau": "+680", "Paraguay": "+595", "Qatar": "+974", "Réunion": "+262", "Romania": "+40", "Serbia": "+381", "Russia": "+7", "Rwanda": "+250", "Saudi Arabia": "+966", "Solomon Islands": "+677", "Seychelles": "+248", "Sudan": "+249", "Sweden": "+46", "Singapore": "+65", "Saint Helena, Ascension and Tristan da Cunha": "+290", "Slovenia": "+386", "Svalbard and Jan Mayen": "+47", "Slovakia": "+421", "Sierra Leone": "+232", "San Marino": "+378", "Senegal": "+221", "Somalia": "+252", "Suriname": "+597", "South Sudan": "+211", "São Tomé and Príncipe": "+239", "El Salvador": "+503", "Sint Maarten": "+1-721", "Syria": "+963", "Eswatini (Swaziland)": "+268", "Turks and Caicos Islands": "+1-649", "Chad": "+235", "French Southern and Antarctic Lands": "+262", "Togo": "+228", "Thailand": "+66", "Tajikistan": "+992", "Tokelau": "+690", "Timor-Leste": "+670", "Turkmenistan": "+993", "Tunisia": "+216", "Tonga": "+676", "Turkey": "+90", "Trinidad and Tobago": "+1-868", "Tuvalu": "+688", "Taiwan": "+886", "Tanzania": "+255", "Ukraine": "+380", "Uganda": "+256", "United States Minor Outlying Islands": "+1", "United Nations": "", "United States": "+1", "Uruguay": "+598", "Uzbekistan": "+998", "Vatican City (Holy See)": "+379", "Saint Vincent and the Grenadines": "+1-784", "Venezuela": "+58", "British Virgin Islands": "+1-284", "United States Virgin Islands": "+1-340", "Vietnam": "+84", "Vanuatu": "+678", "Wallis and Futuna": "+681", "Samoa": "+685", "Kosovo": "+383", "Yemen": "+967", "Mayotte": "+262", "South Africa": "+27", "Zambia": "+260", "Zimbabwe": "+263" } export default CountryDialCodesMapping; ```
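The mapping above can be consumed with a small helper. The sketch below is illustrative rather than part of any published module: `getDialCodes` is a hypothetical function name, and only a tiny subset of the mapping is inlined so the snippet is self-contained. It also shows how a consumer might handle the two irregular cases in the data: multi-code entries (comma-separated, like the Dominican Republic) and the empty string used where no code is assigned.

```javascript
// Small inline subset of the CountryDialCodesMapping object above,
// so this snippet runs on its own.
const dialCodes = {
  "Germany": "+49",
  "Dominican Republic": "+1-809, +1-829, +1-849",
  "United Nations": "",
};

// Look up a country's dial codes. Multi-code entries are split into
// an array, and an empty or missing value yields an empty array.
function getDialCodes(country) {
  const raw = dialCodes[country];
  if (!raw) return []; // covers unknown countries and "" entries
  return raw.split(",").map((code) => code.trim());
}

console.log(getDialCodes("Germany")); // → ["+49"]
console.log(getDialCodes("Dominican Republic")); // → ["+1-809", "+1-829", "+1-849"]
console.log(getDialCodes("Atlantis")); // → []
```

Returning an empty array for both "unknown country" and "no code assigned" keeps the caller's code uniform: it can always iterate the result without a null check.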
akash32755
1,779,487
Unlocking WordPress Mastery: Essential Tips and Techniques
Introduction: WordPress has evolved from a simple blogging platform to a versatile content...
0
2024-03-04T08:09:58
https://dev.to/jamesmartindev/unlocking-wordpress-mastery-essential-tips-and-techniques-a7k
webdev, javascript, wordpress, tips
## Introduction: WordPress has evolved from a simple blogging platform to a versatile content management system (CMS) powering millions of websites worldwide. Its flexibility, ease of use, and robust ecosystem make it a top choice for developers. However, mastering WordPress development requires more than just basic knowledge. In this comprehensive guide, we'll delve into essential tips and techniques to help you become a proficient WordPress developer. ## 1. Understanding WordPress Architecture: - Familiarize yourself with the core components of WordPress, including themes, plugins, and the database structure. - Learn about the WordPress loop and template hierarchy to efficiently display content on your website. - Explore the roles of functions.php, wp-config.php, and other key files in customizing WordPress functionality. ## 2. Utilizing Child Themes: - Always use child themes when customizing existing WordPress themes to prevent losing modifications during theme updates. - Learn how to create and activate child themes effectively, ensuring seamless integration with parent themes. - Utilize child themes to override CSS styles, template files, and even PHP functions without modifying the parent theme files directly. ## 3. Leveraging Custom Post Types and Taxonomies: - Understand the concept of custom post types and taxonomies to create unique content structures tailored to your website's needs. - Use plugins like Custom Post Type UI or code snippets to register custom post types and taxonomies effortlessly. - Explore advanced techniques such as creating custom meta boxes and utilizing custom fields to enhance the editing experience for custom post types. ## 4. Optimizing Performance and Security: - Implement caching mechanisms like caching plugins (e.g., W3 Total Cache, WP Super Cache) and content delivery networks (CDNs) to improve website speed. 
- Follow best practices for WordPress security, including keeping themes, plugins, and WordPress core up to date, using strong passwords, and limiting login attempts. - Consider implementing security plugins like Wordfence or Sucuri to enhance website security and mitigate potential threats. ## 5. Mastering Theme and Plugin Development: - Dive into theme development by learning HTML, CSS, JavaScript, and PHP, along with WordPress-specific functions and template tags. - Explore starter themes like Underscores or Sage to kickstart your theme development process with clean, well-structured code. - Develop custom plugins to extend WordPress functionality according to specific requirements, following the WordPress coding standards and best practices. ## 6. Enhancing User Experience with Responsive Design: - Prioritize responsive design to ensure optimal user experience across various devices and screen sizes. - Utilize CSS frameworks like Bootstrap or Foundation to streamline the responsive design process and create visually appealing layouts. - Test website responsiveness using tools like Google's Mobile-Friendly Test and browser developer tools to identify and fix compatibility issues. ## 7. Implementing SEO Best Practices: - Optimize website content for search engines by incorporating relevant keywords, meta tags, and descriptive alt attributes for images. - Install and configure SEO plugins such as Yoast SEO or Rank Math to manage on-page SEO elements effectively. - Create XML sitemaps, optimize website speed, and focus on high-quality, authoritative content to improve search engine rankings. ## 8. Continuous Learning and Community Engagement: - Stay updated with the latest WordPress trends, developments, and best practices by following reputable blogs, attending WordCamps, and participating in online communities. - Contribute to the WordPress community by sharing knowledge, contributing to core development, or volunteering at local WordPress events. 
- Network with fellow WordPress developers, designers, and enthusiasts to exchange ideas, seek advice, and collaborate on projects. ## 9. Avoiding Common Development Mistakes: Avoiding common **[development mistakes](https://wpeople.net/common-advanced-wordpress-development-mistakes/)** is crucial for successful WordPress projects. **- Not Using Version Control:** Always use version control systems like Git to track changes in your codebase, collaborate with team members, and revert to previous versions if needed. **- Ignoring WordPress Coding Standards:** Adhering to WordPress coding standards ensures consistency, readability, and compatibility with future updates. Use tools like PHP_CodeSniffer to validate code against WordPress standards. **- Overlooking Security Vulnerabilities:** Neglecting security best practices can leave your website vulnerable to attacks. Regularly audit and update themes, plugins, and WordPress core to patch security vulnerabilities. **- Not Testing Across Different Environments:** Test your website thoroughly across various environments, including different web browsers, devices, and screen sizes, to ensure consistent performance and user experience. **- Relying Too Much on Plugins:** While plugins can extend WordPress functionality, excessive reliance on plugins can bloat your website, impact performance, and introduce compatibility issues. Evaluate the necessity of each plugin and consider custom solutions where appropriate. **- Ignoring Performance Optimization:** Neglecting performance optimization techniques such as image optimization, lazy loading, and code minification can lead to slow page load times and poor user experience. Prioritize performance optimization throughout the development process. **- Not Backing Up Your Website Regularly:** Failure to regularly back up your website leaves you vulnerable to data loss in the event of server crashes, hacks, or accidental deletions.
Implement automated backup solutions and store backups securely offsite. **- Skipping Documentation:** Documenting your code, including functions, hooks, and customizations, is essential for maintaining code clarity, facilitating collaboration, and troubleshooting issues. Invest time in comprehensive documentation to streamline future development and maintenance. **- Underestimating User Experience (UX) Design:** Focusing solely on functionality without considering user experience can result in confusing navigation, cluttered layouts, and high bounce rates. Prioritize intuitive UX design principles to enhance user engagement and satisfaction. **- Neglecting Accessibility (ADA) Compliance:** Failure to make your website accessible to users with disabilities not only excludes a significant portion of the audience but also exposes you to potential legal liabilities. Follow accessibility guidelines and tools like WAVE or Axe to ensure ADA compliance. **- Ignoring Analytics and User Feedback:** Disregarding analytics data and user feedback prevents you from understanding user behavior, identifying areas for improvement, and making data-driven decisions. Utilize tools like Google Analytics and user feedback forms to gather insights and refine your website continuously. ## Conclusion: Mastering WordPress development requires dedication, continuous learning, and hands-on experience. By understanding WordPress architecture, leveraging child themes, custom post types, and taxonomies, optimizing performance and security, mastering theme and plugin development, prioritizing responsive design, implementing SEO best practices, avoiding common development mistakes, and engaging with the WordPress community, you can elevate your skills and build exceptional websites with WordPress. Keep exploring, experimenting, and pushing the boundaries of WordPress development to unlock its full potential.
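The SEO section above mentions creating XML sitemaps; on most WordPress sites a plugin such as Yoast SEO generates these, but the format itself is simple enough to sketch. The snippet below is plain JavaScript with a hypothetical `buildSitemap` helper, shown only to illustrate the document structure a sitemap plugin produces:

```javascript
// Build a minimal XML sitemap string from a list of page URLs.
// Real sites normally delegate this to an SEO plugin; this sketch
// just shows the sitemaps.org 0.9 document structure.
function buildSitemap(urls) {
  const entries = urls
    .map((u) => `  <url>\n    <loc>${u}</loc>\n  </url>`)
    .join("\n");
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    entries,
    "</urlset>",
  ].join("\n");
}

console.log(buildSitemap(["https://example.com/", "https://example.com/blog/"]));
```

A production generator would also escape special XML characters in URLs and could add optional per-URL fields such as `<lastmod>`; the structure above is the required minimum.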
jamesmartindev
1,779,503
Adding a React-native In-app Notification Feed for Real-time Updates
In this guide, we'll walk through the process of integrating the SuprSend In-App Notification Center...
0
2024-03-04T08:29:48
https://dev.to/suprsend/adding-a-react-native-in-app-notification-feed-for-real-time-updates-1l0a
react, reactnative, javascript, opensource
In this guide, we'll walk through the process of integrating the SuprSend In-App Notification Center into your React applications. SuprSend offers a convenient SDK that allows you to seamlessly incorporate Inbox and Toast notifications into your web applications. --- Try Playground/ Sandbox first - {% embed https://inbox-playground.suprsend.com/ %} ## Prerequisites Before we begin, ensure that you have the following prerequisites installed: - Node.js and npm/yarn: Make sure you have Node.js and npm (or yarn) installed on your development machine. ## Installation First, install the SuprSend React Inbox package using npm or yarn: ```bash npm install --save @suprsend/react-inbox ``` or ```bash yarn add @suprsend/react-inbox ``` ## Integration Once you've installed the package, you can integrate the SuprSend In-App Notification Center into your React components. ```javascript import SuprSendInbox from '@suprsend/react-inbox'; function Example() { return ( <SuprSendInbox workspaceKey='<workspace_key>' workspaceSecret='<workspace_secret>' subscriberId='<subscriber_id>' distinctId='<distinct_id>' /> ); } ``` In the above code snippet: - `workspaceKey`: This is your SuprSend workspace key. - `workspaceSecret`: This is your SuprSend workspace secret. - `subscriberId`: This is the unique identifier of the subscriber. - `distinctId`: This is the unique identifier for the user. It can be a user ID or any other identifier that you use in your application. https://www.suprsend.com/ Ensure that you replace the placeholder values with your actual SuprSend credentials and subscriber information. ### Interaction with React Components - Example Once integrated, the SuprSend In-App Inbox will automatically handle the display of messages within your React application. You can interact with the Inbox component using various methods provided by the SuprSend SDK. For example, you can trigger the display of a notification message when a certain event occurs in your application. 
This can be achieved by calling the appropriate SDK method within your React components. ```javascript import SuprSendInbox from '@suprsend/react-inbox'; function Example() { const handleClick = () => { // Trigger a notification message SuprSendInbox.showNotification('New message received!'); }; return ( <div> <button onClick={handleClick}>Show Notification</button> <SuprSendInbox workspaceKey='<workspace_key>' workspaceSecret='<workspace_secret>' subscriberId='<subscriber_id>' distinctId='<distinct_id>' /> </div> ); } ``` ## Interaction with Your Application The SuprSend In-App Inbox seamlessly integrates with your React application, allowing you to deliver targeted messages and notifications to your users based on their interactions and behaviors within the application. - Contextual Messages: You can send contextual messages and announcements to users based on specific events or actions within your application. - User Engagement: The In-App Inbox enhances user engagement by providing a centralized location for users to access important messages and updates. - Personalization: You can personalize messages and notifications based on user attributes and preferences, ensuring relevant and meaningful communication. ## Usage Once integrated, the SuprSend In-App Notification Center will handle the display of inbox and toast notifications within your React application based on the configuration provided during integration. ## Conclusion Incorporating the SuprSend In-App Notification Center into your React application is a straightforward process. By following the steps outlined in this guide, you can enhance user engagement and communication by leveraging the power of inbox and toast notifications provided by SuprSend. For more advanced usage and customization options, refer to the official documentation provided by SuprSend. Happy coding! --- ### Here is the GitHub react native repo for the same.
Try [headless](https://docs.suprsend.com/docs/react-native) to get your own custom native inbox: --- {% github suprsend/suprsend-react-inbox %}
nikl
1,779,537
Bringing the Beloved Back for Marriage 00201027039254 Saudi Arabia Kuwait Qatar UAE
Bringing the beloved back for marriage, spiritual sheikh, bringing the beloved back, spiritual sheikh's phone number, bringing back a divorced wife, bringing the beloved back with salt
0
2024-03-04T09:14:33
https://dev.to/jeda667/jlb-lhbyb-llzwj-00000-lswdy-lkwyt-qtr-lmrt-4g49
Bringing the beloved back for marriage, spiritual sheikh, bringing the beloved back, spiritual sheikh's phone number, bringing back a divorced wife, bringing the beloved back with salt
jeda667
1,779,571
https://dumpsboss.com/certification-provider/acams/
Organizations aws ans c01 across industries are increasingly adopting cloud technologies, creating a...
0
2024-03-04T09:46:27
https://dev.to/awsansc01/httpsdumpsbosscomcertification-provideracams-5gka
Organizations [aws ans c01 ](https://dumpsboss.com/certification-provider/acams/) across industries are increasingly adopting cloud technologies, creating a high demand for skilled AWS Solutions Architects. Higher Earning Potential: AWS-certified professionals, especially Solutions Architects, command higher salaries compared to their non-certified counterparts. The certification validates your expertise in designing and implementing AWS solutions, making you eligible for lucrative career opportunities and salary advancements. Industry Recognition: AWS certifications are highly regarded in the industry and are recognized by employers worldwide. Holding the AWS Certified Solutions Architect - Associate certification enhances your credibility as a cloud architect and demonstrates your commitment to professional [ans c01](https://dumpsboss.com/certification-provider/acams/) excellence. Skill Validation: Achieving the AWS Certified Solutions Architect - Associate certification validates your skills and expertise in cloud architecture design and implementation. It serves as tangible proof of your proficiency in architecting scalable, secure, and cost-effective solutions on AWS, making you a valuable asset to potential employers.
awsansc01
1,779,601
Lopsided Christians
  Weekend, March 2, 2024   Lopsided Christians   Therefore I, a prisoner for serving the Lord, beg...
0
2024-03-04T10:18:29
https://dev.to/mreligion/lopsided-christians-52l4
  Weekend, March 2, 2024   [Lopsided Christians](https://www.mreligion.com/lopsided-christians-14256.htm)   Therefore I, a prisoner for serving the Lord, beg you to lead a life worthy of your calling, for you have been called by God. Ephesians 4:1 NLT   Some Christians are lopsided. They may have one area in their life in which they’re spiritually strong, but they’re weak in other areas. For instance, I’ve met people who have an impressive knowledge of the Bible. Meanwhile, their personal life is in shambles. They’re always struggling with temptation and falling into sin.   The problem is they are imbalanced. They have an understanding of doctrine, but their life is out of balance.   The apostle Paul wrote to the Christians in Ephesus, “Therefore I, a prisoner for serving the Lord, beg you to lead a life worthy of your calling, for you have been called by God” Ephesians 4:1 NLT.   It’s worth noting that Paul wrote this from a prison in Rome. He was a prisoner because he was unwilling to compromise his beliefs, his life, and his message.   And the first thing Paul tells Christians to do in this section of Ephesians is to lead a life worthy of our calling. Maybe you’re thinking, “I’m in trouble. How could I ever be worthy? There is nothing I could ever do to deserve God’s grace.”   However, that is not what “worthy” means here. We also could translate the original word for “worthy” as “balance the scales.” It can be applied to anything that is expected to correspond to something else. It’s a word that speaks of the coordination of things.   Therefore, Paul was saying, “I want you to walk worthy. I want you to live a balanced life.”   We can have an understanding and belief in doctrine and prophecy as well as facts and figures, but if it isn’t affecting the way that we live as Christians, then we’re missing the point. The Bible says, “And all who have this eager expectation will keep themselves pure, just as he is pure” 1 John 3:3 NLT.   
In other words, our doctrine and belief should affect the way that we live.   Yet there are also Christians who are lopsided in another way. They don’t know much doctrinally. They don’t really know what the Bible teaches on certain subjects, but they love the Lord. And they are passionate about their faith in Jesus Christ.   We might hear them say something like this: “Let’s not quibble over doctrine. I just love Jesus.”   That sounds nice, but we need to realize this is a dangerous statement. If we’re not careful, we might end up loving the wrong Jesus. We might end up believing the wrong gospel. This is where doctrine comes in.   The Bible clearly teaches that in the last days there will be false Christs, false gospels, and even false miracles. Paul warned, “For a time is coming when people will no longer listen to sound and wholesome teaching. They will follow their own desires and will look for teachers who will tell them whatever their itching ears want to hear” 2 Timothy 4:3 NLT.   There needs to be a balance between our beliefs and our practice, between our doctrine and the way that we live. We need both areas working together. That is what it means to walk worthy of the Lord.   Copyright © 2024 by Harvest Ministries. All rights reserved.   For more relevant and biblical teaching from Pastor Greg Laurie, go to www.harvest.org   and   Listen to Greg Laurie’s daily broadcast on OnePlace.com.   Watch Greg Laurie’s weekly television broadcast on LightSource.com.   In thanks for your gift, you can receive a copy of Is God Real? by Lee Strobel.   In a world that insists God is just a myth, there is overwhelming evidence to the contrary that demands our attention. Renowned author Lee Strobel takes the question of God’s existence head-on in an engaging new book titled Is God Real? Tackling the big questions of life, this fantastic resource is both easy to read and useful to share. We will mail you a copy in thanks for your support of Harvest Ministries this month.
mreligion
1,779,640
Automatic License Plate Recognition
Automatic License Plate Recognition System (LPR) | Number Plate Reader Technology ● Proptia: Proptia...
0
2024-03-04T11:11:45
https://dev.to/proptia/automatic-license-plate-recognition-1kpo
[Automatic License Plate Recognition](https://www.proptia.com/home-page/license-plate-recognition/) System (LPR) | Number Plate Reader Technology ● Proptia: Proptia uses smart video surveillance and ALPR technology to enhance parking lot and dealership security with automatic number plate recognition.
proptia
1,780,241
Advanced Tricks and Techniques for Optimal Code Performance
It's good to understand various coding techniques when reviewing other people's code, but it's not...
0
2024-03-04T20:59:08
https://dev.to/patfinder/advanced-tricks-and-techniques-for-optimal-code-performance-25ch
It's good to understand various coding techniques when reviewing other people's code, but it's not necessary to employ them all. Sometimes, simpler code may actually be better. ==== https://dev.to/akash32755/unleashing-javascript-advanced-tricks-and-techniques-for-optimal-code-performance-lii
patfinder
1,780,567
Enhancing Vehicle Safety and Security with HSM Technology-part1
Greetings, readers! 👋😍 My name is Nagaraj B Hittalamani, and I work as a Junior Software Engineer at...
0
2024-03-05T06:34:03
https://dev.to/nagaraj8687/enhancing-vehicle-safety-and-security-with-hsm-technology-3kk6
Greetings, readers! 👋😍 My name is Nagaraj B Hittalamani, and I work as a Junior Software Engineer at Luxoft India. My journey with Luxoft has been enriched by diverse opportunities to contribute to numerous projects. In this article, we explore a detailed introduction to the hardware security module (HSM) in the automotive domain. Your presence and engagement in this discussion are truly appreciated. Let's dive in! **What is a Hardware Security Module?** A hardware security module is a standards-compliant cryptographic device that uses physical security measures, logical security controls, and strong encryption to protect sensitive data in transit, in use, and at rest. An HSM may also be referred to as a secure application module, a personal computer security module, or a hardware cryptographic module. The hardware security module creates a trusted environment for performing a variety of cryptographic operations, including key exchange, key management, and encryption. In this context, "trusted" means free of malware and viruses, and protected from exploits and unauthorized access. An HSM can be trusted because it is built on certified, well-tested, specialized hardware. It runs a security-focused OS. Its entire design actively protects and hides cryptographic material, and it allows only limited network access through a moderated interface that is strictly controlled by internal policies. Without a hardware security module, ordinary operations and cryptographic operations take place in the same locations, so attackers can access regular business-logic data alongside sensitive data such as keys and certificates. Hackers can install arbitrary certificates, extend unauthorized access, modify code, and otherwise dangerously impact cryptographic operations.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0f0v1yk11ttgf30bnvr7.png) **Key Threats to Enterprise Hardware** **1. Outdated firmware** Not every company in the 'smart devices' segment is an IT security expert. For instance, local manufacturers of IoT and IIoT components such as smart HVAC, connected RFID access, and plant robots might ship firmware full of bugs and other security flaws. Careless patch management can cause further complications and introduce new vulnerabilities. **2. Inadequate encryption** A growing number of enterprise devices are becoming IP-connected. However, not all of them are connected to a network using the correct encryption protocols. Encryption of both data at rest and data in motion is critical for the security of operational technology devices connected to a network. **3. Unsecured local access** Hardware with IoT and IIoT applications is often accessible via an on-premise interface or local network. Companies, especially smaller ones, may neglect to configure these local access points correctly or protect them physically. This leaves the enterprise hardware environment open to malicious actors who can access and tamper with enterprise systems with ease. **4. Unchanged default passwords** Most enterprise devices ship with a default password that can and should be changed. However, even organizations that practice modern software security may lack basic hygiene when it comes to hardware security. Personnel might continue to use the default passwords for low-cost IoT devices and turnkey hardware. Often, the password is written on the device itself and can be read by just about anyone with physical access to it. **5. Backdoors** A backdoor is a hidden vulnerability that is often inserted deliberately during the manufacturing stage of a device.
Backdoors enable threat actors to bypass authentication procedures and gain root access to the device without the owner's consent. Unlike software backdoors, which can be patched easily, hardware backdoors are much more difficult to plug. **6. Modification attacks** Modification attacks invasively tamper with the normal functioning of a device and allow bad actors to override restrictions on hardware operating limits. A modification attack goes one step beyond an eavesdropping attack by altering the communication a device engages in. The unauthorized party then gains the ability to execute a man-in-the-middle attack, receiving and altering packets before sending them on to the intended recipient. Unauthorized modifications to integrated circuits and the introduction of hardware Trojans are common ways to carry out modification attacks. **Best Practices for Hardware Security** **1. Study your hardware supplier** Evaluating the security of enterprise hardware requires analyzing the vulnerabilities that exist throughout its lifecycle, starting right from the pre-manufacturing stage. To minimize the risk of working with vulnerable or counterfeit hardware, start by identifying the vendors that supply your enterprise hardware. Check your vendor's suppliers and study the parties that integrate the components and manufacture the individual parts your systems use. Also, find out who your vendor's secondary partners are in case primary supply lines are stretched. **2. Encrypt whatever you can** Implement encryption methods and protocols wherever feasible, even for smaller devices such as external storage media and dynamic random access memory (DRAM) interfaces. Most processors manufactured today include built-in components that facilitate encryption and decryption without compromising processing power.
Wherever feasible, data should be encrypted at rest, in motion, and in processing. **3. Implement real-time monitoring** Real-time monitoring of hardware ensures adequate security and prevents unauthorized actions, especially for organizations with remote workers. Cloud-based real-time monitoring solutions notify security teams in case of a hardware breach and enable immediate incident-response measures. Wherever feasible, implement visual verification measures, activity reporting, and remote access to physical devices. This will help reduce response times in case of a security breach. **4. Implement adequate electronic security** Electronic security can be bolstered by using a secure element to store a master key. This allows users to encrypt or decrypt other credentials and data whenever required. Secure elements protect systems against threats such as key extraction and tampering. If hardware secure elements are not a viable option, hardware-enforced isolation or another hardware security measure can be used instead. So, this is a short introduction to the hardware security module in the automotive domain. We will continue with more about HSMs in the upcoming articles.
nagaraj8687
1,784,536
Developing Soft Skills
Everyone learns technical skills, but what sets you apart are the non-technical...
0
2024-03-09T19:13:44
https://dev.to/techinrio/desenvolvendo-soft-skills-418
beginners, career, braziliandevs, community
Habilidades técnicas todo mundo aprende, mas o que te destaca são as habilidades não técnicas. Aprenda como sair na frente e se virar nas mais diversas situações com soft skills que o tornam um profissional completo. ## O que são Soft Skills A definição mais simplista é: habilidades não técnicas. Normalmente soft skills são definidas como habilidades interpessoais, ou seja, suas capacidades de comunicação, sociais, fala e hábitos. Contudo, gosto de conferir outro sentido à Soft Skills: - Toda habilidade essencial para a boa execução do seu trabalho é considerada uma habilidade técnica (Hard Skills); - Toda habilidade não essencial para o seu trabalho pode ser considerada uma soft skill. Logo, existem habilidades que não são exatamente sobre interações sociais mas que me ajudam muito no dia a dia do trabalho, por exemplo: leitura dinâmica e técnicas de memorização. Gosto deste tipo de visão pois te habilita a olhar além das capacidades sociais e pensar em tudo que pode compor seu cinto de utilidades de soft skills. ## Tipos de Soft Skills - Comunicação assertiva Articule ideias e seja compreendido de forma clara e simples. Comunicação assertiva é fazer mais com menos, compreender e ser compreendido, o que é essencial em ambientes de trabalho em equipe, quase todo trabalho. - Trabalho em equipe Falando em trabalho em equipe, de nada adianta saber comunicar suas ideias se não conseguir colaborar em equipe. Trabalho em equipe envolve saber dividir para conquistar, confiar nos seus pares e principalmente entender que todos estão ali para um objetivo em comum. - Pensamento crítico Pense e entenda o que está fazendo e como isso tem valor para o objetivo, só assim você consegue sair da caixinha e agregar valor de verdade. Um bom exemplo é tarefas que as pessoas querem automatizar, mas o trabalho para fazer isso vai gastar mais tempo e ter menos retorno do que apenas fazer manualmente de forma ocasional. É o tipo de coisa que nem todo mundo pára pra pensar antes de fazer. 
- Problem solving

Often people aren't expecting problems to show up, but they arrive anyway. This skill is about having the calm and composure to take control of the situation and do the best you can with what you have. You won't always be able to do much about it, but it's important not to give up before trying.

- Emotional intelligence

Things don't always go the way you'd like, and that's exactly why it's so important to know yourself and train this skill. Dealing well with unusual situations and frustration is quite hard, and this tends to be a personal journey for each person.

- Empathy

Yes, I know there are moments when you simply want to drop a bomb on someone, but it's important to remember that they are just as human as you, and tomorrow may be the day you need empathy from them.

- Time management

Ah, this one is good!! Have you ever felt like you're always in the eye of a hurricane, with everything spinning around you, where you can only focus on emergencies, and there's always a lot of stuff with critical deadlines, and so on?? Wow, that's awful. When was the last time you were relatively calm, with plenty of slack in your deadlines and time to spare? So, every time you see someone who is always `swamped`, remember this skill.

- Adaptability

Having worked a lot at startups, I've always been around environments where everything is constantly changing: shifting priorities, everything is an emergency, everything is important, and so on. That is not even close to pleasant, but like other `calluses` of the profession, it prepares you to adapt to the most diverse scenarios. One of the most important parts of this skill is knowing how to analyze your priorities and organize yourself around them. Everything changing fast is not an excuse to be disorganized.

- Leadership

All of the previous skills, well developed, can naturally lead you down the path of leadership.
Do the math:

- Communicates well;
- Knows how to deal with problems;
- Knows how to deal with people;
- Knows how to use their time;
- Stays firm in the face of problems;
- Does what needs to be done to reach the goal.

Looking at all of that, it seems almost natural to put a person with these skills in a leadership position where they can help other people collaborate toward a bigger goal.

## Where Do Soft Skills Take You?

I like to say these skills take you further and unlock new paths.

### Leadership and management roles

A person who has developed their soft skills well is naturally considered more often for leadership and management roles.

### Opportunities

Connecting with people and building genuine relationships is a very effective way to access great job and business opportunities.

### Professional success and growth

People with a stronger command of soft skills significantly increase the chances of success of the projects they take part in, as well as their speed and visibility in the professional world.

## How to Train Them

Since many non-technical skills relate to social situations, exposing yourself socially is a natural path:

- Presenting work/talks;
- Going to events;
- Socializing with people;
- Talking to people who think differently;
- Having constructive discussions;
- Teaching.

As I mentioned at the beginning of this article, any and every skill that can help you with your tasks is welcome. So pay close attention to the activities you perform routinely to understand whether you can improve your performance and results.

## Recommended Reading

There is no shortage of content out there about soft skills, but I'd like to personally recommend these 2 books that helped me a lot in taking these skills to another level:

- Soft skills: competências essenciais para os novos tempos by Lucedile Antunes
- How to Win Friends and Influence People by Dale Carnegie

## References

Gupy Blog.
"Soft Skills: o que são, sua importância e 10 principais soft skills para o mercado de trabalho." Disponível em: https://www.gupy.io/blog/soft-skills. Solides Blog. "Conheça soft skills para desenvolver." Disponível em: https://blog.solides.com.br/conheca-soft-skills-para-desenvolver/. Na Prática. "Como desenvolver soft skills." Disponível em: https://www.napratica.org.br/como-desenvolver-soft-skills/.
robertheory
1,785,578
Using Ollama: Getting hands-on with local LLMs and building a chatbot
This is the first part of a deeper dive into Ollama and things that I have learned about local LLMs...
0
2024-03-15T16:38:46
https://dev.to/arjunrao87/using-ollama-getting-hands-on-with-local-llms-and-building-a-chatbot-2gp3
programming, beginners, ai, machinelearning
This is the first part of a deeper dive into Ollama and things that I have learned about local LLMs and how you can use them for inference-based applications. In this post, you will learn about -

- [How to use Ollama](#how-to-use-ollama)
- [How to create your own model in Ollama](#how-to-create-your-own-model-in-ollama)
- [Using Ollama to build a chatbot](#using-ollama-to-build-a-chatbot)

> To understand the basics of LLMs (including Local LLMs) you can refer to my previous post on this topic [here](https://arjunrao.co/posts/llm-rag).

## First, some background

In the space of local LLMs, I first ran into LMStudio. While the app itself is easy to use, I liked the simplicity and maneuverability that Ollama provides. To learn more about Ollama you can go [here](https://ollama.com/).

tl;dr: Ollama hosts its own curated list of models that you have access to. You can download these models to your local machine and then interact with those models through a command line prompt. Alternatively, when you run a model, Ollama also runs an inference server hosted at port 11434 (by default) that you can interact with by way of APIs and other libraries like Langchain.

As of this post, Ollama has 74 models, which also include categories like embedding models.

![ollama models](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qchjdq0r4fqh31ujxz83.png)

## How to use Ollama

[Download](https://ollama.com/download) Ollama for the OS of your choice. Once you do that, you run the command `ollama` to confirm it's working.
It should show you the help menu -

```sh
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
```

To use any model you first need to "pull" them from Ollama, much like you would pull down an image from Dockerhub (if you have used that in the past) or something like Elastic Container Registry (ECR). Ollama ships with some default models (like `llama2`, which is Facebook's open source LLM) which you can see by running -

```sh
ollama list
```

Select the model (let's say phi) that you would like to interact with from the [Ollama library page](https://ollama.com/library). You can now pull down this model by running the command -

```sh
ollama pull phi
```

![ollama-pull](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nmxso0l5umc7ky22vzkt.png)

Once the download is complete, you can check whether the model is available locally by running -

```sh
ollama list
```

![ollama-list](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0437khshw8y3i1kt48ky.png)

Now that the model is available, it is ready to run. You can run a model using the command -

```sh
ollama run phi
```

The accuracy of the answers isn't always top notch, but you can address that by selecting different models or perhaps doing some fine tuning or implementing a RAG-like solution on your own to improve accuracy.

What I have demonstrated above is how you can use Ollama models using the command line prompt. However, if you check the inference server that Ollama has running you can see that there are programmatic ways of accessing this by hitting port 11434.
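As a minimal sketch of that programmatic access, here is how you could hit the local inference server directly with only the Python standard library. It assumes `ollama serve` is running on the default port 11434 and that the `phi` model has been pulled; the `/api/generate` endpoint and payload fields are Ollama's HTTP API.

```python
import json
import urllib.request

def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending the request requires a running `ollama serve`, so it is shown commented:
request = build_generate_request("phi", "Why is the sky blue?")
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read())["response"])
```

Any language with an HTTP client can talk to this same endpoint, which is what makes the server mode more flexible than the interactive prompt.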
![ollama-server](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5czo9sd9rqdg9mqtcs24.png)

If you wanted to use Langchain to access your Ollama model, you can use something like -

```py
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA

prompt = "What is the difference between an adverb and an adjective?"
llm = Ollama(model="mistral")
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)
response = qa(prompt)
```

## How to create your own model in Ollama

You can also create your own model variant using the concept of `Modelfile` in Ollama. For more parameters to configure in your Modelfile you can look at these docs.

Example Modelfile -

```python
# Downloaded from Hugging Face https://huggingface.co/TheBloke/finance-LLM-GGUF/tree/main
FROM "./finance-llm-13b.Q4_K_M.gguf"

PARAMETER temperature 0.001
PARAMETER top_k 20

TEMPLATE """
{{.Prompt}}
"""

# set the system message
SYSTEM """
You are Warren Buffet. Answer as Buffet only, and do so in short sentences.
"""
```

Once you have the Modelfile, you can create your model using -

```sh
ollama create arjunrao87/financellm -f Modelfile
```

where `financellm` is the name of your LLM model and `arjunrao87` would be replaced by your ollama.com username (which also acts as the namespace of your online ollama registry). At this point, you can use your created model like any other model on ollama.

You can also choose to push your model to the remote ollama registry. To make this happen, you need to

- Create your account on ollama.com
- Add a new model
- Have the public keys set up to allow you to push models from your remote machine.

Once you have created your local LLM, you can push it to the ollama registry using -

```sh
ollama push arjunrao87/financellm
```

🦄 Now let's get to the good part.
## Using Ollama to build a chatbot

During my quest to use Ollama, one of the more pleasant discoveries was this ecosystem of python based web application builders that I came across. [Chainlit](https://docs.chainlit.io/get-started/overview) can be used to build a full fledged chatbot like ChatGPT. As their page says,

> Chainlit is an open-source Python package to build production ready Conversational AI

I walked through a few of the Chainlit tutorials to get a handle on what you can do with chainlit, which includes things like creating sequences of tasks (called "steps"), enabling buttons and actions, sending images, and all kinds of things. You can follow this part of my journey [here](https://github.com/arjunrao87/lllm/tree/main/ollama-conversational-ai/chainlit-tutorials).

Once I got the hang of Chainlit, I wanted to put together a straightforward chatbot that basically used Ollama so that I could use a local LLM to chat with (instead of, say, ChatGPT or Claude).

With less than 50 lines of code, you can do that using Chainlit + Ollama. Isn't that crazy?

Chainlit as a library is super straightforward to use. I also used Langchain for using and interacting with Ollama.

```python
from langchain_community.llms import Ollama
from langchain.prompts import ChatPromptTemplate
import chainlit as cl
```

The next step is to define how you want the loading screen of the chatbot to look, by using the `@cl.on_chat_start` decorator of chainlit -

```python
@cl.on_chat_start
async def on_chat_start():
    elements = [cl.Image(name="image1", display="inline", path="assets/gemma.jpeg")]
    await cl.Message(
        content="Hello there, I am Gemma. How can I help you?", elements=elements
    ).send()
    ...
    ...
```

The `Message` interface is what Chainlit uses to send responses back to the UI. You can construct messages with the simple content key and then embellish them with things like elements; in my case I have added an Image, to show an image when the user first logs in.
![ollama-chat](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjehq6nqughmg857jbgf.png)

The next step is to invoke Langchain to instantiate Ollama (with the model of your choice) and construct the prompt template. The usage of `cl.user_session` is mostly to maintain separation of user contexts and histories, which, just for the purposes of running a quick demo, is not strictly required. Chain is a Langchain interface called `Runnable` that is used to create custom chains. You can read more about that [here](https://python.langchain.com/docs/expression_language/interface).

```python
@cl.on_chat_start
async def on_chat_start():
    ...
    ...
    model = Ollama(model="mistral")
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                "You are a knowledgeable historian who answers super concisely",
            ),
            ("human", "{question}"),
        ]
    )
    chain = prompt | model
    cl.user_session.set("chain", chain)
```

Now you have all the pieces to have a chatbot UI and accept user inputs. What do you do with the prompts the user provides? You will use the `@cl.on_message` handler from Chainlit to do something with the message the user provided.

```python
@cl.on_message
async def on_message(message: cl.Message):
    chain = cl.user_session.get("chain")
    msg = cl.Message(content="")
    async for chunk in chain.astream(
        {"question": message.content},
    ):
        await msg.stream_token(chunk)
    await msg.send()
```

`chain.astream`, as the docs suggest, "stream[s] back chunks of the response async", which is what we want for our bot.

That is really it. A few imports, a couple of functions, a little bit of sugar and you have a functional chatbot.

![chat1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u2k3likcb9gln54r3cc.png)

>> ⬆️ a good historian response.
![chat2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9nm11z2t4l365h54iqt9.png)

>> ⬆️ a sassy historian who is (understandably) not good at math ;)

For the full code, you can see my [github](https://github.com/arjunrao87/lllm/blob/main/ollama-conversational-ai/main.py).

---

If this content is interesting to you, hit that 👏 button or subscribe to my newsletter here → https://a1engineering.beehiiv.com/subscribe. It gives me the feedback that I need to do more or less of something! Thanks ❤️
arjunrao87
1,785,792
Unlocking the Power of WebAssembly
WebAssembly, often abbreviated as WASM. Let's do a brief introduction into major concepts of...
0
2024-03-10T06:59:36
https://dev.to/adarshgoyal/unlocking-the-power-of-webassembly-22o
webassembly, webdev, javascript, security
WebAssembly, often abbreviated as WASM. Here's a brief introduction to the major concepts of WebAssembly:

- WebAssembly is a compile-target language for running bytecode on the web.
- Relative to JavaScript, WebAssembly offers predictable performance. It is not inherently **faster** than JavaScript, but it **can be faster than JavaScript** in the correct use case, such as **computationally intensive tasks** like nested loops or handling large amounts of data. Therefore, **WebAssembly is a complement to JavaScript, and not a replacement**.
- WebAssembly is extremely portable. WebAssembly runs on: all major web browsers, V8 runtimes like [Node.js](https://nodejs.org/en/), and independent Wasm runtimes like [Wasmtime](https://wasmtime.dev/), [Lucet](https://github.com/bytecodealliance/lucet), and [Wasmer](https://github.com/wasmerio/wasmer).
- WebAssembly has linear memory, in other words one big expandable array, which in the context of JavaScript is synchronously accessible by both JavaScript and Wasm.
- WebAssembly can export functions and constants, which in the context of JavaScript are synchronously accessible by both JavaScript and Wasm.
- WebAssembly, in its current MVP, only handles integers and floats. However, tools and libraries exist to make passing high-level data types convenient.

### **Introduction to WebAssembly:**

## What is WASM?

WebAssembly is a binary instruction format that serves as a compilation target for programming languages, enabling them to run in web browsers at near-native speeds. Unlike JavaScript, which is traditionally used for web development, WebAssembly provides a low-level, efficient alternative that can unlock new possibilities for web applications.

### **Understanding the Difference between WASM and JavaScript:**

While JavaScript is a high-level scripting language, WebAssembly operates at a lower level, allowing for more efficient execution and improved performance.
This key distinction opens up opportunities for developers to leverage existing codebases written in languages like C++, Rust, and AssemblyScript, among others, and seamlessly integrate them into web applications.

## Why WASM?

The answer lies in its efficiency, speed, and safety. WASM is designed to be a low-level virtual machine that runs at near-native speed. This means we can now run heavy computational tasks directly in our browsers, opening up a world of possibilities for web-based applications.

## Examples

- [Hello World](https://wasmbyexample.dev/example-redirect?exampleName=hello-world)
- [Linear memory](https://wasmbyexample.dev/examples/webassembly-linear-memory/webassembly-linear-memory.assemblyscript.en-us.html)
- [Graphics](https://wasmbyexample.dev/examples/reading-and-writing-graphics/reading-and-writing-graphics.assemblyscript.en-us.html)

## Latest Trends in WASM

In recent years, WASM has gained significant traction. Let's look at some of the latest trends:

1. **WASI**: The WebAssembly System Interface (WASI) is a promising trend. It's a modular system interface for WebAssembly aiming to provide a wide range of system call capabilities, increasing the usefulness of WASM.
2. **WASM outside the browser**: While initially designed for web browsers, WASM is now being used outside the browser environment too. It's being utilized in edge computing, serverless computing, and more.
3. **Language support**: More and more programming languages are adding support for compiling to WASM. This is making it even more accessible and widespread.

## WASM + AI

While it was primarily designed for web applications, WASM has been increasingly used outside of the browser environment, including in machine learning applications.

1. **TensorFlow.js:** TensorFlow.js is a library for training and deploying machine learning models in JavaScript environments, including web browsers.
It uses WebAssembly to accelerate computations, especially for operations like matrix multiplications, which are fundamental to many machine learning algorithms.
2. **ONNX.js:** Open Neural Network Exchange (ONNX) is an open-source format for representing deep learning models. ONNX.js is a JavaScript library that allows developers to run ONNX models in the browser and on Node.js. It leverages WebAssembly to optimize performance, making it suitable for running complex machine learning models in resource-constrained environments.
3. **Hugging Face Transformers:** Hugging Face provides a library for natural language processing (NLP) tasks called Transformers. They offer a JavaScript version of the library that runs in the browser using WebAssembly. This allows developers to perform tasks such as text generation, sentiment analysis, and question answering directly in web applications without requiring server-side computation.
4. **WebDNN:** WebDNN is a deep neural network library that uses WebAssembly and WebGL to accelerate inference in web browsers. It provides a framework for converting trained deep learning models into a format that can be efficiently executed in the browser. This allows for the deployment of complex machine learning models directly in web applications without relying on server-side processing.
5. **DeepLearn.js:** DeepLearn.js is a JavaScript library for deep learning that includes support for training and running neural networks in the browser. It uses WebAssembly to optimize performance, enabling real-time inference for applications such as image recognition, object detection, and more.

## Challenges for Adoption

1. Limited debugging support.
2. Increased complexity in development.
3. Lack of direct DOM access.
4. Security concerns with untrusted code.
5. Increased bundle size affecting load times.
6. Potential browser compatibility issues.
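To make the core concepts above concrete (a compact binary format whose exported functions JavaScript can call synchronously), here is a minimal sketch that instantiates a hand-assembled Wasm module exporting an `add` function. It runs in any modern browser or in Node.js; the byte array is the binary encoding of a module you would normally produce with a compiler rather than by hand.

```javascript
// The bytes below encode a Wasm module exporting add(a, b) -> a + b,
// annotated section by section.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: export func 0 as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

WebAssembly.instantiate(bytes).then(({ instance }) => {
  // The export is a plain JavaScript function, callable synchronously.
  console.log(instance.exports.add(2, 3)); // 5
});
```

In practice you would compile C, C++, Rust, or AssemblyScript down to such a binary and fetch it; the instantiation and calling pattern stays the same.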
**Refs**

https://wasmbyexample.dev/examples/introduction/introduction.all.en-us.html
https://www.assemblyscript.org/compiler.html#using-the-compiler
https://developer.mozilla.org/en-US/docs/WebAssembly
https://digest.browsertech.com/archive/browsertech-digest-how-modyfi-is-building-with/
https://www.modyfi.com/
adarshgoyal
1,785,805
Feeling Stuck With CSS? 🤔 Open This! 🎨
Well, there's not some magic trick, I'm gonna tell here. But some shortcuts that can save you time...
0
2024-03-10T19:30:00
https://dev.to/arjuncodess/feeling-stuck-with-css-open-this-4g9f
webdev, beginners, css, design
Well, there's no magic trick I'm gonna tell you here. But there are some shortcuts that can save you time and effort in your next encounter _with this thing we all hate called CSS_.

**CSS generators** can save a lot of time and effort by creating consistent styles quickly and easily.

***

### 🤓 Quick Fact

Did you know that a group of flamingos is called a flamboyance? They are known for their vibrant pink colour. Flamboyant and beautiful, just like the websites you can create with these generators! Hehe.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2t9xms8nj6a6qcrctitc.png)

Here are the 8 most useful CSS generators:

***

### [1️⃣ Button CSS generator](http://markodenic.com/tools/buttons-generator)

100+ buttons you can use in your project.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9x7z7esardyoxs6ibsvb.png)
<figcaption>http://markodenic.com/tools/buttons-generator</figcaption>

***

### [2️⃣ Soft UI generator](http://neumorphism.io)

CSS code generator that will help with colours, gradients, and shadows to adopt this new design trend or discover its possibilities.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h44caf3eduzpbb5nlm95.png)
<figcaption>http://neumorphism.io</figcaption>

***

### [3️⃣ CSS Grid Generator](http://cssgrid-generator.netlify.app)

Generate basic CSS Grid code to make dynamic layouts.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5ads8tb7pqizp9lqbov.png)
<figcaption>http://cssgrid-generator.netlify.app</figcaption>

***

### [4️⃣ Get Waves](http://getwaves.io)

A free SVG wave generator to make unique SVG waves for your next web design. Choose a curve, adjust complexity, and randomize!

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvcpiy3uj7qhnfhctg32.png)
<figcaption>http://getwaves.io</figcaption>

***

### [5️⃣ Fancy Border Radius Generator](http://9elements.github.io/fancy-border-radius)

Generator to build organic shapes with a CSS3 border-radius.
![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2uytqjjb00x796mzo6fv.png)
<figcaption>http://9elements.github.io/fancy-border-radius</figcaption>

***

### [6️⃣ Glassmorphism CSS Generator](http://markodenic.com/tools/glassmorphism-css-generator)

Create a stunning glass effect for your UI designs.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bway17yi9q5z65zkmggh.png)
<figcaption>http://markodenic.com/tools/glassmorphism-css-generator</figcaption>

***

### [7️⃣ Animista](http://animista.net)

Animista is a CSS animation library where you can play with a collection of ready-made CSS animations and download only those you will use.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ns6gecxtn545pm33fq0z.png)
<figcaption>http://animista.net</figcaption>

***

### [8️⃣ Keyframes](http://keyframes.app)

Keyframes helps you write better CSS with a suite of tools to create CSS animations, box shadows, colours, & more.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i7u60ueei52zlrufuz3b.png)
<figcaption>http://keyframes.app</figcaption>

***

### 🙌 Final Thoughts

While these tools simplify the development process, remember that knowing CSS inside-out is still essential to mastering web dev and web design. So try them and experiment with them, but keep improving your understanding of CSS at the same time so you can come up with more creative solutions. Always remember, it's all about learning in the end.

I hope you liked the article! ❤️

Connect with me: [linktree](https://linktr.ee/ArjunCodess/)

Happy Coding! 🚀

Thanks for 19676! 🤗
arjuncodess
1,785,926
Integrando Azure Text Translation en una aplicación React con Next.js
En el mundo globalizado de hoy, proporcionar contenido en varios idiomas puede ser una enorme ventaja...
0
2024-03-10T11:45:29
https://danieljsaldana.dev/integrando-azure-text-translation-en-una-aplicacion-react-con-nextjs/
nextjs, react, spanish, azuretexttranslation
---
title: Integrating Azure Text Translation into a React application with Next.js
published: true
tags: Nextjs, React, Spanish, AzureTextTranslation
canonical_url: https://danieljsaldana.dev/integrando-azure-text-translation-en-una-aplicacion-react-con-nextjs/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c74vvz6x1gvf19unrtz4.png
---

In today's globalized world, providing content in multiple languages can be a huge advantage for your web application. Microsoft Azure offers a text translation service that can be easily integrated into any web application. In this tutorial, I'll show you how to use Azure Text Translation in an application built with React for the frontend and Next.js for the backend API.

### Step 1: Setting Up the Service in Azure

The first step is to create and configure the Azure Text Translation service in the Azure console:

1. **Sign in and create a resource**: Go to the [Azure Portal](https://portal.azure.com) and click "Create a resource". Search for "Translator" and follow the instructions to create it.
2. **Service configuration**: Pick a name for your service and select the pricing plan and region.
3. **Get the access keys**: Once the service is created, go to "Keys and Endpoint" to get your "Key 1" and "Location". Save these values; you'll need them later for your API.

### Step 2: Setting Up the Backend with Next.js

Now that we have configured our service in Azure, it's time to create the backend that will handle translation requests. This backend will be implemented with Next.js, taking advantage of how simply and efficiently it lets you create API routes.
Below is an example of how to set up an API endpoint in Next.js to perform translations using the Azure Text Translation service:

```
import axios from 'axios';
import { enableCors } from "@/src/middleware/enableCors";
import { methodValidator } from "@/src/utils/methodValidator";
require('dotenv').config();

// Strips markdown from the text
function stripMarkdown(markdownText) {
  // Various markdown elements are removed here to keep only the plain text
  return markdownText
    .replace(/```[\s\S]*?```/g, '')
    .replace(/\([^\)]*\)/g, '')
    .replace(/(\*\*|__)(.*?)(\*\*|__)/g, '$2')
    .replace(/\!\[(.*?)\]\((.*?)\)/g, '')
    .replace(/\[(.*?)\]\((.*?)\)/g, '$1')
    .replace(/(\*\*|__)/g, '')
    .replace(/(\*|-)/g, '')
    .replace(/(#{1,6}) /g, '')
    .replace(/`(.+?)`/g, '')
    .replace(/\n/g, ' ');
}

// Strips HTML from the text
function stripHtml(htmlText) {
  // Removes HTML tags to keep only the text
  return htmlText.replace(/<[^>]*>/g, '');
}

// Main function that handles the translation request
const translation = async (req, res) => {
  await enableCors(req, res); // Enable CORS for the request
  await methodValidator(req, res, 'POST'); // Validate that the request is a POST

  if (res.headersSent) {
    console.log('The response was already sent.');
    return;
  }

  // Extract the required data from the request
  let { title, content, targetLanguage } = req.body;
  if (!title || !content || !targetLanguage) {
    console.log('Missing data in the request:', { title, content, targetLanguage });
    return res.status(400).json({
      error: 'Missing data in the request. title, content and targetLanguage are required.'
    });
  }

  // Clean the title and content of markdown and HTML
  title = stripMarkdown(stripHtml(title));
  content = stripMarkdown(stripHtml(content));

  // Configuration for the request to the Azure Text Translation API
  const translatorConfig = {
    headers: {
      'Ocp-Apim-Subscription-Key': process.env.TRANSLATOR_SUBSCRIPTION_KEY,
      'Ocp-Apim-Subscription-Region': process.env.TRANSLATOR_REGION,
      'Content-Type': 'application/json'
    },
    params: {
      'api-version': '3.0',
      'to': targetLanguage
    }
  };

  try {
    // Run the translation requests for the title and the content in parallel
    const [titleResponse, contentResponse] = await Promise.all([
      axios.post('https://api.cognitive.microsofttranslator.com/translate', [{ Text: title }], translatorConfig),
      axios.post('https://api.cognitive.microsofttranslator.com/translate', [{ Text: content }], translatorConfig)
    ]);

    // Extract the title and content translations from the responses
    const translatedTitle = titleResponse.data[0].translations[0].text;
    const translatedContent = contentResponse.data[0].translations[0].text;

    // Send the response with the translated title and content
    return res.status(200).json({ translatedTitle, translatedContent });
  } catch (error) {
    console.error('Error during translation:', error);
    return res.status(500).json({ error: `Error during the translation request: ${error.message}` });
  }
};

export default enableCors(translation);
```

#### How It Works

This backend performs several important steps to process and respond to translation requests:

1. **Enabling CORS**: It uses an `enableCors` middleware to allow cross-origin requests, which is crucial for APIs accessed from the frontend.
2. **Request method validation**: `methodValidator` ensures that only POST requests are accepted, which is what is expected for submitting text data to translate.
3.
**Text cleanup**: Before sending the text to the Azure API, markdown and HTML are stripped so the translation works on the plain text.
4. **Translation request configuration**: Sets the required headers and parameters, including the subscription key and the region you obtained when setting up the Azure service.
5. **Sending and returning the translation**: Performs parallel HTTP requests to translate both the title and the content, and then responds with the translated text.

By implementing this function in your backend, you provide a robust and secure way to integrate automatic translations into your application.

### Step 3: Integrating the Frontend with React

Once the backend is in place, it's time to wire up the frontend so users can submit text to translate and see the results.
Let's create a React component that handles this functionality:

```
import { useState, useEffect, useMemo } from 'react';
import axios from 'axios';
import sanitizeHtml from 'sanitize-html';
import { marked } from 'marked';
import { SiMicrosofttranslator } from 'react-icons/si';
import { GiAllSeeingEye } from 'react-icons/gi';
import './TranslationController.scss';

const TranslationController = ({ content, title, image }) => {
  // State for controlling the UI and storing the translation data
  const [isModalOpen, setIsModalOpen] = useState(false);
  const [selectedLanguage, setSelectedLanguage] = useState('');
  const [translatedTitle, setTranslatedTitle] = useState(title);
  const [translatedContent, setTranslatedContent] = useState(content);
  const [isTranslating, setIsTranslating] = useState(false);
  const [tokens, setTokens] = useState(0);
  const [dailyLimitInfo, setDailyLimitInfo] = useState({ dailyLimit: 0, tokensUsedToday: 0 });
  const [errorMessage, setErrorMessage] = useState('');

  // Effects to load the required information when the component mounts
  useEffect(() => {
    fetchDailyLimit();
    fetchTokens();
  }, []);

  // The business-logic functions are defined here, such as fetching tokens and daily limits,
  // updating tokens, and performing the translation.
const toggleModal = () => { setIsModalOpen(!isModalOpen); document.body.classList.toggle('no-scroll', !isModalOpen); }; const handleLanguageChange = (e) => { setSelectedLanguage(e.target.value); }; // Functions that handle Markdown-to-HTML conversion and HTML sanitization const translateContent = async () => { // Logic to start the translation and update state based on the response }; // Logic to compute the available tokens from the daily-limit info const tokensAvailableToday = useMemo(() => { const { dailyLimit, tokensUsedToday } = dailyLimitInfo; return Math.max(dailyLimit - tokensUsedToday, 0); }, [dailyLimitInfo]); // Render the component, showing the translation form, the results, and any error messages return ( // JSX for the translation controller UI ); }; export default TranslationController; ``` #### How It Works This React component, `TranslationController`, handles several important tasks: 1. **State and Effects**: It uses React hooks to manage state and side effects, including opening and closing the translation modal, storing the translated content, and tracking whether a translation is in progress. 2. **Limit and Token Handling**: It makes requests to the backend to fetch and update the number of available tokens and the daily usage limit. This is crucial to avoid exceeding the translation service's limits. 3. **Language Selection and Translation**: It lets the user select the target language and sends the content to the backend for translation, displaying the translated content once it is available. 4. **Markdown Rendering and HTML Sanitization**: It converts the Markdown content to HTML and sanitizes it before displaying it, ensuring the content is safe and correctly formatted. 5. 
**Interactive UI**: It provides an interactive, user-friendly interface, including a translation modal, language selection, and buttons to trigger the translation. It also handles displaying error messages and the number of available tokens. This component encapsulates all the logic needed to interact with the translation service and deliver a smooth, functional user experience. With this approach, users can easily translate the title and content of their posts or documents directly from the application's interface.
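The text-cleaning step described for the backend (stripping markdown and HTML before calling the Azure API) can be sketched in a few lines. The regex patterns below are illustrative assumptions, not the post's actual implementation:

```python
import re

def strip_markup(text: str) -> str:
    """Remove HTML tags and common markdown markers before translation (illustrative sketch)."""
    text = re.sub(r"<[^>]+>", "", text)        # drop HTML tags
    text = re.sub(r"[*_`#>]+", "", text)       # drop markdown emphasis/heading/quote markers
    return re.sub(r"\s+", " ", text).strip()   # normalize whitespace

print(strip_markup("# **Hola** <b>mundo</b>"))  # → Hola mundo
```

A real implementation would likely use a proper markdown/HTML parser (such as the `marked` and `sanitize-html` packages already imported in the component) rather than regexes, but the idea is the same: the translator should only ever see plain text.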
danieljsaldana
1,786,112
pyfzf : Python Fuzzy Finder
Introduction 📪 Programmers understand how much easier their work is when they use a tool...
0
2024-03-10T15:08:12
https://dev.to/gokayburuc/pyfzf-python-fuzzy-finder-40h8
python, vim, bash, tooling
## Introduction 📪 Programmers know how much easier their work becomes when they use a tool like `fzf` to pick out data and paths in mixed file systems. A quick YouTube search will turn up a ton of information on this topic. But what if you could use this tool without ever leaving your Python code? Especially in large-scale projects, you can spend hours selecting data from text with regex and the like. Using a fuzzy finder is a smart way to eliminate this burden, and we will do exactly that with `pyfzf` in our Python code. Let's continue without further ado. ## Requirements 🟥 ### Basic Requirements ![fzf](https://raw.githubusercontent.com/junegunn/i/master/fzf.png) The basic libraries and tools we will need are given below. - fzf : [https://github.com/junegunn/fzf](https://github.com/junegunn/fzf) - pyfzf : [https://github.com/nk412/pyfzf](https://github.com/nk412/pyfzf) Here is a working example of pyfzf: ![fzf](https://raw.githubusercontent.com/nk412/pyfzf/master/pyfzf.gif) ## Setup ✅ ### fzf Setup The address below has all the information you need to install `fzf`: [https://github.com/junegunn/fzf?tab=readme-ov-file#installation](https://github.com/junegunn/fzf?tab=readme-ov-file#installation) ### pyfzf Setup Using `pip`, we first need to install the `pyfzf` library. ```bash pip install pyfzf ``` ### Usage 🧰 The workflow and module usage are quite straightforward. First we import the `FzfPrompt` class from the `pyfzf` library. ```python from pyfzf.pyfzf import FzfPrompt ``` The `fzf` binary on your system will be found automatically. ```python fzf = FzfPrompt() ``` If `fzf` cannot be found automatically, its file path must be provided manually. 
```python # example fzf = FzfPrompt("/usr/bin/fzf") ``` ## Sample Project: User Agent Selector 📽️ Let's create a new project folder and start by creating a **.txt** file named `user_agents.txt` containing hundreds of `User-Agents`. After saving and closing that file, we create a file named `fzf_choicer.py` and start writing our code. The folder tree of our project will be as follows. ```bash ├── fzf_choicer.py └── user_agents.txt ``` The content of our `user_agents.txt` file should look like this; we will source our User-Agent list from [this gist](https://gist.githubusercontent.com/pzb/b4b6f57144aea7827ae4/raw/cf847b76a142955b1410c8bcef3aabe221a63db1/user-agents.txt). Here are a few lines of the file content: ```plaintext Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/37.0.2062.94 Chrome/37.0.2062.94 Safari/537.36 Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36 Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0 Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/600.8.9 (KHTML, like Gecko) Version/8.0.8 Safari/600.8.9 Mozilla/5.0 (iPad; CPU OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12H321 Safari/600.1.4 Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36 Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10240 Mozilla/5.0 (Windows NT 6.3; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0 Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36 
Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0 ... ``` First of all, we start by importing our library. ```python from pyfzf.pyfzf import FzfPrompt # fzf = FzfPrompt('/usr/bin/fzf') fzf = FzfPrompt() ``` Now let's write the function that reads the data. ```python def ReadItems(filepath=''): with open(filepath, 'r') as rf: rawdata = rf.readlines() rawdata = [r.strip() for r in rawdata] return rawdata ``` Here we use the `.strip()` method to remove the trailing newlines (`\n`) that `readlines()` leaves on each line. Now, let's write the function that reads from the file and lets us choose one of the stored values: ```python def ItemChooser(rawdata): data = fzf.prompt(rawdata, fzf_options='--reverse') choice = data[0] return choice ``` To run these functions, let's add the standard `if __name__ == "__main__":` entry-point guard. When a script is run directly, Python sets the module's `__name__` to `"__main__"`, so the code under this guard runs only when the file is executed as a script, not when it is imported as a module. ```python if __name__ == "__main__": rawdata = ReadItems('user_agents.txt') choice = ItemChooser(rawdata) print(choice) ``` Let's see our code in its entirety: ![carboncode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gp3fpj27gq4x5wf7hqee.png) Now let's see how it works: <img src="https://i.ibb.co/PxgM7Wy/pyfzf.gif" alt="pyfzf" border="0"> ## Conclusion 🎙️ We built a small application using the `pyfzf` library. It lets you selectively pull phrases into your scripts from text files and big lists. 
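Because `fzf` is interactive, the selection step is hard to unit-test, but the file-reading helper can be checked on its own. Here is a minimal sketch with the same logic as `ReadItems` above (the sample lines and the temporary-file setup are just for illustration):

```python
import os
import tempfile

def read_items(filepath):
    # Same logic as ReadItems above: read every line and strip the trailing '\n'
    with open(filepath, "r") as rf:
        return [line.strip() for line in rf.readlines()]

# Write a miniature user_agents.txt to a temporary file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tf:
    tf.write("Mozilla/5.0 (X11; Linux x86_64)\nMozilla/5.0 (Windows NT 6.1)\n")
    path = tf.name

items = read_items(path)
os.unlink(path)
print(items)
```

The resulting list contains the user agents with no stray newlines, which is exactly what `fzf.prompt()` expects as its list of choices.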
## Contact 📞 You can contact me using the biolink given below: [https://linktr.ee/gokayburuc](https://linktr.ee/gokayburuc)
gokayburuc
1,786,125
8 Free Websites that feel illegal to know..!
Welcome Back Readers 👋 I have gathered a list of a few websites, that would make your working on...
0
2024-03-10T15:48:32
https://dev.to/mursalfk/8-free-websites-that-feel-illegal-to-know-3n0i
beginners, productivity, ai, opensource
Welcome Back Readers 👋 I have gathered a list of a few websites, that would make your working on different genres easy-peasy-lemon-squeezy 😉 ### [1. Convertio](https://convertio.co/en/) Converting a file is always a pain in the bum. Convertio allows you to convert files to any format you want. FOR FREE!!! ### [2. HemingWay App](https://hemingwayapp.com/) Craft better sentences. Most of us use more words than necessary. Well, worry no more. Trim the fat from your writing with this AI Copyeditor. ### [3. Consensus App](https://consensus.app/) Get answers based on the latest research. Google is cool and all, but it tends to favour popular answers over correct ones. This web application uses AI to find evidence-based answers from the latest academic research. ### [4. Temp-mail](https://temp-mail.org/) Temp-mail gives you a temporary email and inbox because who wants to give out their actual email address? Right? Well, here's your email and inbox. Unlike Mailinator, temp-mail gives you a free and private inbox to get your email from unknown and non-trust contacts. ### [5. Excel Formula Bot](https://formulabot.com/) Convert text instructions into Excel formulae. Have you forgotten an Excel formula? Simply ask Excel Formula Bot in plain language what you want to do and get a simplified version of your query in the form of an Excel Formula. ### [6. PDFDrive](https://pdfdrive.to/) Read millions of books for free. A PDF search engine that lets you download and read over 80 Million books or files for free. (NOTE: I don't condone piracy, check if you're not breaking any laws before downloading) ### [7. QuillBot](https://quillbot.com/) QuillBot rewrites everything as plagiarism-free text. It even helps you re-write AI-generated text so that AI sniffers do not catch it. ### [8. TinyWow](https://tinywow.com/) Modify any media. 
TinyWow is an online toolbox that lets you do hundreds of things with your files such as editing PDFs, converting videos to GIFs and removing background from pictures. Follow me and Stay-Tuned for more such stuff.
mursalfk
1,786,132
Comparison of Machine Learning Algorithms...
Önemli*Makalenin Türkçe versiyonu için Linke...
0
2024-03-10T16:03:31
https://dev.to/ertugrulmutlu/comparison-of-machine-learning-algorithms-2ag4
python, knn, svm, decisiontree
## Important **Click the link for the Turkish version of this article.** Türkçe: https://dev.to/ertugrulmutlu/makine-ogrenme-algoritmalarinin-karsilastirilmasi-4o0d ## In this article we will compare the **SVM - Decision Tree - KNN** algorithms. The metrics we will compare: - Accuracy: The ratio of correct predictions to the total number of predictions. --- - Macro avg Precision Score: The average of the precision for each class. Precision is the ratio of correct positive predictions to total positive predictions; it shows how accurately a class is identified. --- - Macro avg Recall Score: The average of the recall for each class. Recall is the ratio of true positive predictions to the total number of actual positives; it shows how successfully a class is detected. --- - Macro avg F1 Score: The average of the F1 score for each class. The F1 score is the harmonic mean of precision and recall, combining the model's classification ability into a single metric. --- - Weighted avg Precision Score: The average of the per-class precision weighted by each class's sample count. This gives a precision measure weighted by the importance of each class. --- - Weighted avg Recall Score: The average of the per-class recall weighted by each class's sample count. --- - Weighted avg F1 Score: The average of the per-class F1 score weighted by each class's sample count. ## First, the definitions of the algorithms. Instead of giving definitions myself, I found it more appropriate to point you to sources that explain them properly. 
--- - KNN (K-Nearest Neighbors): ![KNN Photo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9f1s2adck80dinz9hjm.png) ## Source: --- > Video: https://www.youtube.com/watch?v=v5CcxPiYSlA > Article: https://towardsdatascience.com/machine-learning-basics-with-the-k-nearest-neighbors-algorithm-6a6e71d01761 --- - DT (Decision Tree): ![Decision Tree Photo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fe7bee9yk6qg42xf12ol.png) ## Source: --- > Video: https://www.youtube.com/watch?v=ZVR2Way4nwQ > Article: https://medium.com/@MrBam44/decision-trees-91f61a42c724 --- - SVM (Support Vector Machine): ![SVM Photo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t9zlusn7z0l4xjqpzgi0.png) ## Source: --- > Video: https://www.youtube.com/watch?v=1NxnPkZM9bc > Article: https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47 ## Now we can get started. First, let's take a look at the database I will use and its features. --- Here, we will analyze our CSV using the Pandas library. ``` import pandas as pd csv = pd.read_csv("glass.csv") print(csv.head) ``` To explain the code here in order: 1. We import the Pandas library. 2. We read the CSV file with the Pandas library. 3. Finally, we print `csv.head` to get an overview of the CSV file. The output of this code: ![Output of the head command](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lu7aiz3tkshm0q99x0rz.PNG) As you can see, it gave us general information about the content of the CSV file, including the number of rows and columns. This CSV file has: - 214 rows - 10 columns --- Now let's get the names of the columns: ``` import pandas as pd csv = pd.read_csv("glass.csv") print(csv.columns) ``` To explain the code here in order: 1. We import the Pandas library. 2. We read the CSV file with the Pandas library. 3. Finally, we print `csv.columns` to list the column names. 
The output of this code: ![Output of the columns script](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0zkp0ksed6mkrgdnjmt.PNG) As you can see, we got the names of the columns of the CSV file along with their dtype. This CSV file contains: - RI (Refractive index) - Na (Sodium) - Mg (Magnesium) - Al (Aluminum) - Si (Silicon) - K (Potassium) - Ca (Calcium) - Ba (Barium) - Fe (Iron) - Type (Glass type) Based on this data, different types of glass are identified from the glass's refractive index and the chemical substances it contains. _Note: For more detailed information, please visit the source site._ ## Source The site where I downloaded the CSV file: https://www.kaggle.com/datasets/uciml/glass ## Now let's move on to our plan. What we know: - The data in the CSV file needs to be shaped for use in the algorithms. - The algorithms need to be written using a library. - The results need to be presented graphically. Let's do the data preparation part. ## Preparation of Data First, let's list the libraries I will use: 1. Sklearn 2. Pandas 3. Numpy ``` import pandas as pd import numpy as np import sklearn.model_selection data = pd.read_csv("glass.csv", sep=",") columns = data.columns X = np.array(data.drop([columns[len(columns)-1]], axis=1)) y = np.array(data[columns[len(columns)-1]]) x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.2) ``` To explain the code here in order: 1. We read our CSV file, using ',' as the separator (via the **Pandas** library), and store its column names in `columns`. 2. The 'X' data contains the features used to predict the target ('Type'). With this code, we drop the 'Type' column from the data and turn everything into an array using the **Numpy** library. 3. The 'y' data is the target we want to predict (i.e. **'Type'**). We turn it into an array with the **Numpy** library, just like the 'X' data. 4. Finally, we split this data into train and test sets. The reason for this is, _in the simplest terms_, to train the algorithms with the train data. 
The test data is then used to measure the algorithm's accuracy. (We set this ratio to 20% with the **test_size** parameter, but you can change it if you wish.) _Note: In larger databases or with more complex algorithms you may need **validation** data as well, but we don't need it here because we are building a small, simple application._ Yes, our data is ready... ## Integration of Algorithms: Here we will integrate our algorithms with the **Sklearn** library. - **KNN** ``` from sklearn.neighbors import KNeighborsClassifier KNN = KNeighborsClassifier(n_neighbors=9) KNN.fit(x_train,y_train) ``` To explain the code here in order: 1. We import the _KNeighborsClassifier_ class from _sklearn.neighbors_. 2. KNN is instantiated. The _n_neighbors_ parameter decides how many nearest neighbors to look at. (This value may vary according to the project and database.) 3. We train the model with the _.fit_ method on the _x_train_ and _y_train_ data. --- - **SVM** ``` from sklearn import svm Svm = svm.SVC(kernel='linear') Svm.fit(x_train,y_train) ``` To explain the code here in order: 1. We import the _svm_ module from _sklearn_. 2. We instantiate the Support Vector Classification class, _SVC_, which performs classification using the SVM approach. The _kernel_ hyperparameter can be 'linear', 'poly', 'rbf', or 'sigmoid'. 3. We train the model with the _.fit_ method on the _x_train_ and _y_train_ data. --- - **Decision Tree** ``` from sklearn.tree import DecisionTreeClassifier Dt = DecisionTreeClassifier(random_state=9) Dt.fit(x_train,y_train) ``` To explain the code here in order: 1. We import the _DecisionTreeClassifier_ class from _sklearn.tree_. 2. The decision tree is instantiated. The _random_state_ parameter makes the results reproducible. 3. We train the model with the _.fit_ method on the _x_train_ and _y_train_ data. Now that we have integrated our algorithms, we can move on to visualization and comparison. 
#Visualization and Comparison: First, let's count the libraries I will use: 1. matplotlib In short, Matplotlib is a visualization library. It is simple to use and suitable for clean code writing. All algorithms need to be trained to make comparisons. The code we will use after training: ``` dt_report =dt.predict_report(3, dt_x_train, dt_x_test, dt_y_train, dt_y_test) svm_report =Svc.predict_report(3, svc_x_train, svc_x_test, svc_y_train, svc_y_test) knn_report =Knear.predict_report(3, knn_x_train, knn_x_test, knn_y_train, knn_y_test) ``` In short, we can print the values we want on the screen with the very simple _predict_report_ command. Sample output (taken from the internet): ![Predict_score Photo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a81v5sv1zmorgoizuyxy.png) ### Now let's move on to the comparison: --- -Accuracy ![Acc Comp Graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uzbv0569maulfu8j1060.PNG) 1. Decision_Tree >> 0.6976744186046512 2. KNN >> 0.6511627906976745 3. SVM >> 0.6511627906976745 Here the algorithm with the highest prediction was **Decision Tree**. -Macro avg precision Score ![Macavg prec Comp Graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k3thmfhwo2g2j62uuwz2.PNG) 1. Decision_Tree >> 0.7226495726495727 2. SVM >> 0.611111111111111 3. KNN >> 0.5030501089324618 Here the algorithm with the highest prediction was **Decision Tree**. -Macro avg Recall Score ![Macavg recall Comp Graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ksokbz1hr5b4ocjrgjf1.PNG) 1. Decision_Tree >> 0.6472222222222223 2. SVM >> 0.5863095238095238 3. KNN >> 0.4795454545454545 Here the algorithm with the highest prediction was **Decision Tree**. -Macro avg F1 Score ![Macavg F1 Comp Graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekw9mvv5j65p9stcfx95.png) 1. Decision_Tree >> 0.6738576238576238 2. SVM >> 0.5548611111111111 3. 
KNN >> 0.45506715506715506 Here the algorithm with the highest prediction was **Decision Tree**. -Weighted avg precision Score ![Weiavg Prec Comp Graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/htd5mdku2uph7v8z1wch.PNG) 1. Decision_Tree >> 0.7241502683363149 2. SVM >> 0.6627906976744186 3. KNN >> 0.6219182246542027 Here the algorithm with the highest prediction was **Decision Tree**. -Weighted avg Recall Score ![Weiavg Recall Comp Graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kayyqguycsv2sg0phsc3.PNG) 1. Decision_Tree >> 0.6976744186046512 2. SVM >> 0.6511627906976745 3. KNN >> 0.6511627906976745 Here the algorithm with the highest prediction was **Decision Tree**. -Weighted avg F1 Score ![Weiavg F1 Comp Graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdtxr96qmzidc58rf1qg.png) 1. Decision_Tree >> 0.7030168797610657 2. SVM >> 0.6397286821705426 3. KNN >> 0.6020444671607461 The algorithm with the highest prediction was **Decision Tree**. ##CONCLUSION As a result, in this article, we compared 3 Machine Learning Algorithms and decided that **Decision Tree** is the best for the Database we have. You can access the codes here and you can change and improve them as you wish. -CODE : https://github.com/Ertugrulmutlu/Machine_Learning_Alg_Comp If you have a "Suggestion-Request-Question", please leave a comment or contact me via e-mail...
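To make the macro vs. weighted averages used throughout the comparison concrete, here is a small pure-Python sketch that computes precision, recall, and F1 from scratch for a toy 3-class prediction (the labels are made up for illustration, not taken from the glass dataset):

```python
from collections import Counter

y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]

classes = sorted(set(y_true))
support = Counter(y_true)  # number of true samples per class
precision, recall, f1 = {}, {}, {}
for c in classes:
    tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
    predicted = sum(p == c for p in y_pred)
    precision[c] = tp / predicted if predicted else 0.0
    recall[c] = tp / support[c]
    f1[c] = (2 * precision[c] * recall[c] / (precision[c] + recall[c])
             if precision[c] + recall[c] else 0.0)

# Macro average: plain (unweighted) mean over classes
macro_f1 = sum(f1.values()) / len(classes)
# Weighted average: mean weighted by each class's support
weighted_f1 = sum(f1[c] * support[c] for c in classes) / len(y_true)
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Sklearn's `classification_report` (used via `predict_report` above) computes exactly these quantities; the macro row treats every class equally, while the weighted row favors the classes with more samples, which is why the two can rank algorithms differently on imbalanced data like the glass dataset.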
ertugrulmutlu
1,786,155
Please help me to find the right Twilio plan for my budget.
Hi everyone, I'm sure you've all heard about Air.AI or have seen a video discussing it. I'm...
0
2024-03-10T16:37:36
https://dev.to/complex_maths/please-help-me-to-find-the-right-twilio-plan-for-my-budget-10dd
Hi everyone, I'm sure you've all heard about Air.AI or have seen a video discussing it. I'm currently attempting to work with Air.AI, but I've hit a roadblock where I have to interact with my agent. Whenever I try to communicate with my agent, the webpage prompts me to upgrade my Twilio account from the free trial. Air's tech support mentioned that I can use any plan other than the free trial. However, my budget is extremely limited, around $100 per month, and I'm not familiar with Twilio's pricing. Which Twilio plan would be the most affordable while still being usable for Air.AI cold calling? Ideally, I'd like to allocate $1 per day for Air.AI, which totals $30 per month. Which Twilio plan would you recommend that fits within my budget? I'm a complete beginner, and I greatly appreciate anyone who takes the time to assist me. I'm genuinely striving to improve my circumstances, and I find hope in this endeavor. Thank you.
complex_maths
1,786,258
5 Resources Each TypeScript Developer Should Know About
Want to become a TypeScript pro? Master advanced TypeScript skills with these resources, from type...
0
2024-03-10T21:03:47
https://medium.com/@alexefimenko/5-resources-to-become-an-advanced-typescript-developer-daa2238dad11
typescript, webdev, javascript, learning
Want to become a TypeScript pro? Master advanced TypeScript skills with these resources, from type definitions and challenging type puzzles to practical utility libraries and API development tools. ## 1. DefinitelyTyped — A collection of type definitions DefinitelyTyped serves as a community-driven collection of high-quality TypeScript type definitions. {% embed https://github.com/DefinitelyTyped/DefinitelyTyped %} In simple terms, it provides TypeScript interfaces and type information for JavaScript libraries that don’t have them built-in. ![DefinitelyTyped Logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vadzh89i6fodqc6fzq2z.png) When you’re using a JavaScript library in a TypeScript project, TypeScript needs to know the shapes and types of the library’s exports to type-check your code correctly. It’s highly popular and has over 100 million weekly downloads! ![Weekly downloads DefinitelyTyped npm](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5oo4s3diyrwh0fvlxk6p.png) This repo allows developers to use existing JavaScript libraries within their TypeScript projects seamlessly, ensuring type safety. If the library doesn’t provide its own types, you can likely find them in DefinitelyTyped. These type definitions are then used by the TypeScript compiler to understand the library’s structure, offering auto-completion, type checking, and other IDE features for a smoother development experience. ### Example (Lodash) Let’s say you’re working on a TypeScript project and decide to use **Lodash**, a popular utility library. **Lodash** itself is written in JavaScript and doesn’t include TypeScript definitions. 
![Lodash Logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8oi1s7cjpxndaytog54.png) Here's how you can use Lodash with types in your project with the help of DefinitelyTyped: First, install Lodash: ```bash npm install lodash ``` Then, install the type definitions for Lodash from DefinitelyTyped: ```bash npm install @types/lodash --save-dev ``` Now, you can use Lodash in your TypeScript file with full type support: ```typescript import _ from 'lodash'; // Example usage with full type support let numArray: number[] = [1, 2, 3, 4, 5]; let sum: number = _.sum(numArray); // Lodash's sum function console.log(sum); // Output will be 15 ``` By using DefinitelyTyped's type definitions, you can maintain type safety and take full advantage of TypeScript's features, even when using JavaScript libraries. ## 2. Type Challenges — A collection of TypeScript puzzles The repo provides a collection of TypeScript puzzles similar to LeetCode problems. ![Type Challenges Logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kk3ahyjyhq4stvi75rzx.png) Each challenge focuses on a specific aspect of TypeScript’s type system, from basic concepts to complex type manipulation. You’ll often need to use generic types or apply advanced features like conditional types and mapped types to solve them. {% embed https://github.com/type-challenges/type-challenges %} These exercises will help you to test and improve your understanding of TypeScript’s type system. ### Example: [First of Array](https://github.com/type-challenges/type-challenges/blob/main/questions/00014-easy-first/README.md) Let’s look at the “Easy” level challenge. 
The task is to construct a generic type First<T> that takes an array and returns its first element: ```typescript type arr1 = ['a', 'b', 'c'] type arr2 = [3, 2, 1] type head1 = First<arr1> // expected to be 'a' type head2 = First<arr2> // expected to be 3 ``` To solve the task you can use the [Typescript playground](https://tsch.js.org/14/play) The solution might be something like this: ```typescript type First<T extends any[]> = T extends [infer First, ...infer Rest] ? First : never ``` - `T extends any[]`: We constrain the input T to be an array. - `T extends [infer First, ...infer Rest]`: Pattern matching to extract the first element as First and the rest of the array as Rest. - `First : never`: Conditional type for handling empty arrays. The **Type Challenges** repository provides a structured and engaging way to level up your TypeScript mastery. ## 3. Utility types — A collection of pre-written utility types This collection of pre-written utility types saves you time and effort when working with different data types in TypeScript. ![Utility types repository screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uezkutw3qkqnunvtiakq.png) This repository offers a collection of utility types that can be applied in various TypeScript projects. {% embed https://github.com/piotrwitek/utility-types %} These types exist solely at compile time, leaving no runtime cost in your final JavaScript code. ### Example: TypeScript Typeguard isPrimitive First, install utility-types: ```bash npm install utility-types ``` Let's use `isPrimitive` Typeguard example - a TypeScript Typeguard for the [`Primitive`](https://github.com/piotrwitek/utility-types#primitive) type This can be useful to control the type of a parameter as the program flows. 
```typescript import { Primitive, isPrimitive } from 'utility-types'; const consumer = (param: Primitive[] | Primitive): string => { if (isPrimitive(param)) { // typeof param === Primitive return String(param) + ' was Primitive'; } // typeof param === Primitive[] const resultArray = param .map(consumer) .map(rootString => '\n\t' + rootString); return resultArray.reduce((comm, newV) => comm + newV, 'this was nested:'); }; ``` `Primitive`: This type represents the basic building blocks of JavaScript and TypeScript values: strings, numbers, booleans, etc. `isPrimitive`: This type guard function lets you dynamically check if a given variable is a primitive type. This is especially valuable when working with data that could have varying structures. ### Benefits of using utility-types: - Cleaner code: The `isPrimitive` type guard avoids manual `typeof` checks and potential branching. - Type safety: It ensures that we're only manipulating primitive values within the appropriate code block. ## 4. Typescript book — Open-source e-book Free and open-source e-book that dives deeply into TypeScript's features. Perfect if you prefer a traditional book-like format. ![Typescript book Logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wsdxy5yh0ic4hiznzeti.png) You can freely access, read, and even contribute to the book’s content. {% embed https://github.com/basarat/typescript-book %} The book is known for its easy-to-understand explanations and illustrative examples. This makes it suitable for both beginners and experienced programmers who want to improve their TypeScript knowledge. The book describes various aspects of TypeScript, from its core concepts and syntax to advanced topics like generics, decorators, and metaprogramming. ## 5. [tRPC.io](https://trpc.io) — End-to-end typesafe API tRPC offers a solution for building modern APIs with a focus on type safety and developer experience. This open-source project provides tools and libraries needed to construct type-safe APIs. 
{% embed https://github.com/trpc/trpc %} tRPC integrates seamlessly with popular web frameworks such as React, Next.js and Express.js. ![tRPC Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygqqb3wu0pjq80cf7dvu.png) ### Type-safe APIs have several advantages: - Reduced Errors: Static type checking helps catch potential errors at development time, preventing runtime issues that can be difficult to debug later. - Improved Maintainability: A type-safe API provides a clear understanding of the data structures and interactions involved. - Enhanced Developer Experience: Autocompletion and other IDE features powered by static types can significantly improve development speed and overall developer satisfaction. --- Becoming a true TypeScript expert takes time and practice. These resources will help you build a strong start. Keep learning, keep trying new things, and connect with other TypeScript developers to take your skills to the next level. Check out my other articles on TypeScript: - [TypeScript Index Signatures: 4 Examples Type-Safe Dynamic Objects](https://dev.to/alexefimenko/typescript-index-signatures-4-examples-type-safe-dynamic-objects-554o) - [Making React Components More Flexible with TypeScript Generics: 3 Examples](https://dev.to/alexefimenko/3-examples-of-typescript-generic-react-components-4f9) - [TypeScript Enums: 5 Real-World Use Cases](https://dev.to/alexefimenko/typescript-enums-5-real-world-use-cases-4idk) This article was originally [posted on Medium](https://medium.com/@alexefimenko/5-resources-to-become-an-advanced-typescript-developer-daa2238dad11).
alexefimenko
1,786,494
The Developer’s Toolkit: Essential Tools for commercetools Development
When embarking on development with commercetools, a leading platform for building innovative...
0
2024-03-11T05:04:00
https://dev.to/nitin-rachabathuni/the-developers-toolkit-essential-tools-for-commercetools-development-2ap4
When embarking on development with commercetools, a leading platform for building innovative e-commerce solutions, it’s crucial to arm yourself with the right set of tools. The developer’s toolkit for commercetools development not only includes software and libraries directly related to commercetools but also extends to third-party tools and technologies that facilitate a seamless and efficient development experience. In this LinkedIn article, we'll explore these essential tools and provide coding examples to demonstrate their practical application. 1. commercetools SDKs and APIs Tool: commercetools SDKs (Software Development Kits) for various programming languages. Purpose: These SDKs provide a comprehensive set of tools to interact with the commercetools platform, making it easier to build, test, and deploy applications. Coding Example: Creating a product using the commercetools JavaScript SDK. ``` const { ClientBuilder } = require('@commercetools/sdk-client'); const { createRequestBuilder } = require('@commercetools/api-request-builder'); const { createAuthMiddlewareForClientCredentialsFlow } = require('@commercetools/sdk-middleware-auth'); const { createHttpMiddleware } = require('@commercetools/sdk-middleware-http'); const fetch = require('node-fetch'); const projectKey = 'your-project-key'; const clientId = 'your-client-id'; const clientSecret = 'your-client-secret'; const scope = 'manage_project:your-project-key'; const authUrl = 'https://auth.europe-west1.gcp.commercetools.com'; const apiUrl = 'https://api.europe-west1.gcp.commercetools.com'; const authMiddleware = createAuthMiddlewareForClientCredentialsFlow({ host: authUrl, projectKey, credentials: { clientId, clientSecret }, fetch, }); const httpMiddleware = createHttpMiddleware({ host: apiUrl, fetch }); const client = new ClientBuilder().withProjectKey(projectKey) .withMiddleware(authMiddleware) .withMiddleware(httpMiddleware) .build(); const requestBuilder = createRequestBuilder({ projectKey }); const
productDraft = { name: { en: "Sample Product" }, slug: { en: "sample-product" }, productType: { typeId: "product-type", id: "product-type-id" } }; const uri = requestBuilder.products.build(); const request = { uri, method: 'POST', body: productDraft, }; client.execute(request) .then(response => console.log(response)) .catch(error => console.error(error)); ``` 2. Postman Tool: Postman Purpose: An API client for testing web services. Postman makes it easier to explore commercetools API endpoints and test API requests and responses without writing code. Coding Example: Using Postman to create a new customer in commercetools. Set up a new POST request to the commercetools API endpoint for creating customers. Configure authentication using the OAuth 2.0 settings provided by commercetools. In the request body, enter the customer details in JSON format. Send the request and review the response to ensure the customer was created successfully. 3. Visual Studio Code Tool: Visual Studio Code (VS Code) Purpose: A powerful code editor that supports a wide range of programming languages, including those commonly used with commercetools (JavaScript, TypeScript). Coding Example: Configuring VS Code with the ESLint plugin to enforce code quality in commercetools projects. Install the ESLint extension in VS Code. Configure .eslintrc.json in your project root to define the coding standards. The ESLint extension will automatically highlight issues in your code as you type, helping to maintain code quality. 4. Docker Tool: Docker Purpose: Facilitates the creation, deployment, and running of applications by using containers. Docker is invaluable for setting up isolated environments for testing commercetools applications. Coding Example: Running a commercetools mock server in a Docker container for local testing.
``` # Use an official Node runtime as a parent image FROM node:14 # Set the working directory in the container WORKDIR /usr/src/app # Copy the current directory contents into the container at /usr/src/app COPY . . # Install any needed packages specified in package.json RUN npm install # Make port 3000 available to the world outside this container EXPOSE 3000 # Define environment variable ENV NAME commercetools-mock-server # Run app.js when the container launches CMD ["node", "app.js"] ``` Conclusion Building applications on commercetools is an exciting endeavor, offering developers the opportunity to craft cutting-edge e-commerce solutions. By leveraging the essential tools outlined above—ranging from commercetools' own SDKs and APIs to versatile tools like Postman, VS Code, and Docker—developers can enhance their productivity, streamline development processes, and deliver robust, scalable e-commerce platforms. With these tools in your arsenal and a commitment to best practices, you're well-equipped to take on any commercetools project. --- Thank you for reading my article! For more updates and useful information, feel free to connect with me on LinkedIn and follow me on Twitter. I look forward to engaging with more like-minded professionals and sharing valuable insights.
nitin-rachabathuni
1,786,497
Financial Freedom Meaning
Financial freedom meaning is a state where an individual has enough income to sustain their lifestyle...
0
2024-03-11T05:12:59
https://dev.to/loak_in/financial-freedom-meaning-2mgj
financial
Financial freedom meaning is a state where an individual has enough income to sustain their lifestyle without having to actively work. This means that they have a passive income that covers their expenses, allowing them to live comfortably without relying on a steady job. Achieving financial freedom is a common goal for many people, as it offers the freedom to pursue their passions, travel, or simply enjoy life without worrying about money. [Full article](https://blog.loak-in.com/2023/02/07/financial-freedom-meaning/)
loak_in
1,786,529
Unsupervised Machine Learning: Understanding and Applications
Unsupervised Machine Learning: Understanding and Applications Unsupervised learning, a cornerstone...
0
2024-03-11T05:46:11
https://dev.to/askyt/unsupervised-machine-learning-understanding-and-applications-22j0
**[Unsupervised Machine Learning](https://pythonkb.com/unsupervised-machine-learning-understanding-and-applications/): Understanding and Applications** Unsupervised learning, a cornerstone of modern artificial intelligence, offers a unique approach to analyzing data without the need for explicit supervision or labeled outcomes. In this article, we delve into the intricacies of unsupervised learning, exploring its mechanisms, methodologies, and real-world applications. ### What is Unsupervised Learning? At its core, unsupervised learning utilizes self-learning algorithms to uncover patterns, structures, and relationships within raw, unlabeled data. Unlike supervised learning, which relies on labeled data to guide the learning process, unsupervised learning algorithms must infer their own rules and structures based solely on the input data. Consider a scenario where you possess a vast dataset detailing various weather conditions. An unsupervised learning algorithm tasked with analyzing this data would autonomously identify patterns such as temperature ranges or weather phenomena without prior guidance. While the algorithm doesn't inherently understand these patterns, it clusters similar data points together, laying the foundation for subsequent analysis and interpretation. ### How Does Unsupervised Learning Work? Unsupervised learning algorithms leverage several techniques to extract meaningful insights from unlabeled data. Three primary methodologies dominate this field: 1. **Clustering:** Clustering involves partitioning data into distinct groups, or clusters, based on inherent similarities or differences. This technique finds extensive application across diverse domains, including customer segmentation, fraud detection, and image analysis. Various clustering algorithms, such as K-means, hierarchical clustering, and probabilistic clustering, cater to different data structures and requirements. 2. 
**Association:** Association rule mining uncovers relationships and patterns within datasets, particularly prevalent in retail and medical domains. By identifying frequent if-then associations among data points, association algorithms reveal purchasing patterns, aid in recommendation systems, and facilitate clinical diagnoses. 3. **Dimensionality Reduction:** Dimensionality reduction techniques aim to reduce the complexity of datasets by extracting essential features while discarding irrelevant or redundant information. Algorithms like Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) enable the transformation of high-dimensional data into a lower-dimensional space, preserving critical properties and facilitating visualization and analysis. ### Real-World Applications of Unsupervised Learning Unsupervised learning finds extensive utility across various industries, empowering businesses to extract actionable insights from vast volumes of unlabeled data. Some notable applications include: - **Anomaly Detection:** By identifying deviations from expected patterns within datasets, unsupervised learning aids in anomaly detection, crucial for fraud detection and cybersecurity. - **Recommendation Engines:** Unsupervised algorithms uncover hidden associations and preferences within transactional data, enabling personalized recommendations in e-commerce platforms and content streaming services. - **Customer Segmentation:** Clustering techniques help segment customers based on shared traits or behaviors, facilitating targeted marketing strategies and enhancing customer engagement. - **Fraud Detection:** Unsupervised learning identifies abnormal behaviors or transactions within financial datasets, mitigating risks and safeguarding against fraudulent activities. 
- **Natural Language Processing (NLP):** Unsupervised learning algorithms categorize, translate, and classify textual data, powering applications such as sentiment analysis, document clustering, and speech recognition. - **Genetic Research:** Hierarchical clustering algorithms aid in analyzing genetic data, uncovering evolutionary relationships and insights crucial for biomedical research. ### Supervised Learning vs. Unsupervised Learning While both supervised and unsupervised learning are indispensable in machine learning, they differ fundamentally in their approach and application. Supervised learning relies on labeled training data to map input features to predetermined outputs, facilitating tasks such as classification and regression. In contrast, unsupervised learning thrives in scenarios where labeled data is scarce or unavailable, autonomously uncovering patterns and structures within raw data. ### Conclusion In conclusion, unsupervised learning stands as a powerful paradigm in machine learning, enabling autonomous discovery and analysis of complex data structures. From clustering and association mining to dimensionality reduction, unsupervised techniques unlock valuable insights across diverse domains, revolutionizing industries and driving innovation. As data volumes continue to escalate, the role of unsupervised learning in extracting actionable intelligence from unlabeled data will only grow, cementing its status as a cornerstone of modern AI.
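To make the clustering discussion above concrete, here is a minimal, illustrative 1-D K-means sketch in TypeScript. The fixed initial centroids, fixed iteration count, and absence of a convergence check are simplifying assumptions made purely for brevity; real workloads would use an established library:

```typescript
// Minimal 1-D K-means: alternate between assigning points to their nearest
// centroid and moving each centroid to the mean of its assigned points.
function kMeans(points: number[], centroids: number[], iterations = 10): number[] {
  let centers = [...centroids];
  for (let it = 0; it < iterations; it++) {
    // Assignment step: each point joins its nearest centroid's group.
    const groups: number[][] = centers.map(() => []);
    for (const p of points) {
      let best = 0;
      for (let c = 1; c < centers.length; c++) {
        if (Math.abs(p - centers[c]) < Math.abs(p - centers[best])) best = c;
      }
      groups[best].push(p);
    }
    // Update step: move each centroid to the mean of its group (keep it if empty).
    centers = centers.map((c, i) =>
      groups[i].length ? groups[i].reduce((a, b) => a + b, 0) / groups[i].length : c
    );
  }
  return centers;
}

// Two obvious clusters around 1 and 10:
const centers = kMeans([0, 1, 2, 9, 10, 11], [0, 11]);
console.log(centers); // converges to [1, 10]
```

This is the same assign/update loop that production K-means implementations run, just without random initialization, convergence tolerance, or multi-dimensional distance.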
askyt
1,786,564
Hello Everyone
public class Hello { public static void main(String Args[]) { ...
0
2024-03-11T07:03:45
https://dev.to/vgkrishnaaditya24/hello-everyone-23ba
```
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
```
vgkrishnaaditya24
1,786,627
Build a Simple Blog + Multiple Image Upload with Laravel & Vue
Psst! If you need help setting up laravel, here’s a walkthrough: ...
0
2024-03-11T08:28:56
https://dev.to/martinsonuoha/build-a-simple-blog-multiple-image-upload-with-laravel-vue-5pc
laravel, vue, webdev
> Psst! If you need help setting up laravel, here’s a walkthrough: {% embed https://devjavu.space/post/the-hitchhiker-s-guide-to-laravel-setup/ %} Image upload is one of the most essential features of any social application. Regardless of how small or large your application might be, as long as you’re managing users' data or allowing users to manage their own data, at some point you’d need to provide the users the ability to upload pictures within your application. In this article, I’ll run you through how to implement a multiple image upload feature with VueJS and Laravel while pretending to build a blog, lol. Here’s a preview of what we’ll be building ( Blog Café ): ![Preview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rrflbeaj4dzw9tlektmq.png) > Some Side Note 📝: - You’d need a basic understanding of Javascript - We’ll be keeping it simple and just use Twitter Bootstrap - You’ll need to have a bit of experience with Laravel/PHP and VUEjs ( Vuex too ) - For this project, we’ll be using Laravel’s local driver to store files, however, for a production-level application, you might want to use other options. --- ## Requirements Laravel `version 5.8.*` As of the time of writing, Laravel 5.8 was the latest version; this may have changed over time. This tutorial assumes you already have a working **LAMP** setup, so we’ll just start off by scaffolding a new Laravel application. Open up your Terminal or any preferred terminal emulator, navigate into your preferred project directory, and create a new Laravel project: ``` laravel new multi_upload ``` or via composer: ``` composer create-project --prefer-dist laravel/laravel multi_upload ``` You should now have a new directory ( multi_upload ) that contains a fresh installation of Laravel.
Next, we’ll set up the database: Create a new database using any visual database design tool (MySQL workbench, SequelPro) or from the command line like so: ``` mysql -u username -p ``` Enter your MySQL password ``` MariaDB [(none)]> CREATE DATABASE multi_upload; ``` Navigate into your project directory, copy the `.env.example` file into a `.env` file, and open up the `.env` file. ``` cd multi_upload && cp .env.example .env ``` Replace the default database credentials with yours. ``` DB_CONNECTION=mysql DB_HOST=127.0.0.1 DB_PORT=3306 DB_DATABASE=multi_upload DB_USERNAME=root DB_PASSWORD=Caser0le. ``` Don’t forget to generate your application encryption key with: ``` php artisan key:generate ``` Once we have the correct database credentials set up, we can scaffold Laravel’s user authentication. ``` php artisan make:auth && php artisan migrate ``` --- ## Models You should have a working authentication system, but before we run a migration, let’s set up our models. > Disclaimer: My implementation would be very basic and simplified. There are more professional ways to do this and I recommend you do more research for the best ways to achieve this. We’ll have a **Post** model with one or more images associated with it, and a **PostImage** model that is associated with a single post. We’ll also have many **Posts** associated with a single **User** model. ``` php artisan make:model Post -m && php artisan make:model PostImage -m ``` We’ll edit the **Post** migration file to include some fields. Open the project in your preferred code editor. If you’re using VsCode, simply open it up from the terminal: ``` code .
``` Update the Post migration (database/migrations/create_posts_……php) file to contain these columns: ``` public function up() { Schema::create('posts', function (Blueprint $table) { $table->bigIncrements('id'); $table->integer('user_id')->unsigned(); $table->string('title'); $table->text('body'); $table->timestamps(); }); } ``` We’ll also update the PostImage migration file (database/migrations/create_post_images_……php) to contain the following columns: ``` public function up() { Schema::create('post_images', function (Blueprint $table) { $table->bigIncrements('id'); $table->integer('post_id')->unsigned(); $table->string('post_image_path'); $table->string('post_image_caption')->nullable(); $table->timestamps(); }); } ``` You can now run a fresh migration so all the changes made to the tables will reflect: ``` php artisan migrate:fresh ``` ## Relationships Relationships can be really complicated…I know, good thing is, we’re not dealing with humans here, so it doesn’t have to be. Like I explained earlier, the relationship is simple: - A User has many Posts - A Post can only belong to one User - A Post has many Images - An Image can only belong to one Post Let’s create the first relationship between the User and the Post (A User has many Posts). In your `App\User.php` file, update the code to have the relationship. ``` public function posts() { return $this->hasMany('App\Post'); } ``` We’ll also create the Post Model relationship to the User Model (A Post can only belong to one User) and also the relationship between the Post Model and the PostImage Model. We also want to include the fillable fields for the Post Model (while we’re here).
In your `App\Post.php` add the **author** and **post_images** functions to return the relationships: ``` class Post extends Model { protected $fillable = ['title', 'body', 'user_id']; public function author() { return $this->belongsTo('App\User'); } public function post_images() { return $this->hasMany('App\PostImage'); } } ``` Remember an Image can belong to only one post; we want to add this relationship in the PostImage class, and also specify the fillable fields: ``` class PostImage extends Model { protected $fillable = ['post_id', 'post_image_path', 'post_image_caption']; public function post() { return $this->belongsTo('App\Post'); } } ``` Just before we start working on our controllers, we need to set up a separate disk for uploading our images. In our config/filesystems.php file, create a new object property called **uploads** under the **disks** field: ``` ... 'uploads' => [ 'driver' => 'local', 'root' => 'uploads', 'url' => 'uploads', ], ... ``` --- ## Controller We’ll need a controller to handle the creating and fetching of posts. We can easily make a new controller using Laravel’s `make:controller` artisan command: php artisan make:controller PostController You should now have a PostController file in the app/Http/Controllers/ folder. In our `PostController.php`, we’ll create two methods: one for fetching posts (**getAllPosts**) and the other for creating a new post (**createPost**). Let’s make sure we have all the necessary classes and facades we’ll be using imported at the top of the controller file: ``` use Illuminate\Http\Request; use App\Post; use App\PostImage; use Auth; use Storage; use Illuminate\Support\Facades\DB; ``` ## Get All Posts First, we need to use [eloquent’s eager loading](https://laravel.com/docs/5.8/eloquent-relationships#eager-loading) to grab all the posts as well as the related images. This is possible because of the relationship we had earlier specified in the Post model.
We’ll order the results by the date created in descending order so we get the most recent posts at the top. ``` $posts = Post::with('post_images')->orderBy('created_at', 'desc')->get(); ``` Then return a JSON response with the queried posts. ``` return response()->json(['error' => false, 'data' => $posts]); ``` Putting it all together: ``` public function getAllPosts() { $posts = Post::with('post_images')->orderBy('created_at', 'desc')->get(); return response()->json(['error' => false, 'data' => $posts]); } ``` --- ## Create Post The create post function will take an instance of the `Request` class as a parameter. Why? Because, like I said, we’ll be making ajax requests to the backend, and data from these requests is contained in the instance of the `Request` object. We’ll also grab all of the data we need from the request object: - The post title - The post content - The array of images - The currently authenticated user Once we have the payload from the request, we’ll run a database transaction, to perform multiple **related** queries, which is creating a post and its related images. > We use transactions for multiple queries that are related so the database does an automatic rollback in case one related query fails. We already imported the DB Facade at the top of our controller: ``` use Illuminate\Support\Facades\DB; ``` Both queries to create a post and related images would go within the `DB::transaction` function. ``` DB::transaction(function () use ($request) { // Queries happen here }); ``` Within the transaction function, we grab our payload properties: ``` $user = Auth::user(); $title = $request->title; $body = $request->body; $images = $request->images; ``` Next, we’ll create a new post with the title, body, and the user_id: ``` $post = Post::create([ 'title' => $title, 'body' => $body, 'user_id' => $user->id, ]); ``` Next, we’ll store each of the images first in a specific folder then into our database.
By “specific folder” I mean a folder unique to each authenticated user and the created post. Something like this: ``` /uploads/grimesbuttom@butt.com/posts/1/skyscraper.png // store each image foreach($images as $image) { $imagePath = Storage::disk('uploads')->put($user->email . '/posts/' . $post->id, $image); PostImage::create([ 'post_image_caption' => $title, 'post_image_path' => '/uploads/' . $imagePath, 'post_id' => $post->id ]); } ``` Once the images have been stored, we return a JSON response to the frontend. ``` return response()->json(200); ``` Putting it all together: ``` public function createPost(Request $request) { DB::transaction(function () use ($request) { $user = Auth::user(); $title = $request->title; $body = $request->body; $images = $request->images; $post = Post::create([ 'title' => $title, 'body' => $body, 'user_id' => $user->id, ]); // store each image foreach($images as $image) { $imagePath = Storage::disk('uploads')->put($user->email . '/posts/' . $post->id, $image); PostImage::create([ 'post_image_caption' => $title, 'post_image_path' => '/uploads/' . $imagePath, 'post_id' => $post->id ]); } }); return response()->json(200); } ``` --- ## Routes For the web routes, we’ll create a route group that’ll use the `web auth middleware`. The route group will have two routes `get_all` and `create_post` — for getting all posts and creating new posts respectively. Open the `web.php` file and add these lines: ``` Route::group(['middleware' => 'auth', 'prefix' => 'post'], function () { Route::get('get_all', 'PostController@getAllPosts')->name('fetch_all'); Route::post('create_post', 'PostController@createPost')->name('create_post'); }); ``` --- ## Frontend ( blade ) Let’s move over to the frontend and do a bit of work, first on the blade part. in our home.blade.php, we need to update the UI to use a 6x6 grid layout. The left grid will hold the create-post component, while the right grid will hold the list of posts. 
In case you missed it earlier, this is what we’re going for: ![Preview again](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z54miau7sqsnbej5lgjo.png) Let’s update our `home.blade.php`: ``` @extends('layouts.app') @section('content') <div class="container"> <div class="row"> <div class="col-md-6"> <create-post /> </div> <div class="col-md-6 posts-container" style="height: 35rem; overflow-y: scroll"> <all-posts /> </div> </div> </div> @endsection ``` We have our blade view set up properly, most of the work will go into the individual Vue components. --- ## Components ( Vue ) Before we get started with the vue components, we need to set up our VueJs environment. Laravel already comes shipped with Vue, so we won’t have to do much work setting up, thankfully. First, we’ll install all of the npm packages in our `package.json` file: ``` npm i ``` We’ll also need [element-ui](https://element.eleme.io/#/en-US/) - mostly because we need to use the dialog box and the upload component that comes with it. It saves us more work. ``` npm i element-ui -S ``` we’ll also be using Vuex for state management. ``` npm i vuex -S ``` Once you all the necessary packages installed, we’ll set up our components, store, and packages. Create two new files `CreatePost.vue` and `AllPosts.vue` in `resources/js/components`. Also, we’ll create a new folder called store in `resources/js`. In our store folder, we’ll create an index.js file to set up our Vuex store. Your directory should look something like this: ![Directory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8nxzcu0ej0ydo4o2716a.png) We’ll now register these new components in our `resources/js/app.js` file. 
Add this to your `app.js` file: ``` Vue.component('example-component', require('./components/ExampleComponent.vue').default); Vue.component('create-post', require('./components/CreatePost.vue').default); Vue.component('all-posts', require('./components/AllPosts.vue').default); ``` In the same app.js file we’ll also set up our Vuex store and the element-UI library. Update your app.js file like so: ``` require('./bootstrap'); window.Vue = require('vue'); import store from './store/index'; import ElementUI from 'element-ui'; import 'element-ui/lib/theme-chalk/index.css'; Vue.use(ElementUI); Vue.component('example-component', require('./components/ExampleComponent.vue').default); Vue.component('create-post', require('./components/CreatePost.vue').default); Vue.component('all-posts', require('./components/AllPosts.vue').default); const app = new Vue({ store, el: '#app', }); ``` ## Store Let’s work on our store (to read more about Vuex, go to the docs). We’ll set up a single mutation and action to handle fetching and updating the posts list. In our `store/index.js` file: - We’ll import and use the Vuex module. ``` import Vue from 'vue'; import Vuex from 'vuex'; Vue.use(Vuex); const debug = process.env.NODE_ENV !== 'production'; ``` - Next, we want to set up our state object to hold the posts' property ( an array of all posts ). We’ll then define an asynchronous getAllPosts action to handle the request to get all posts. And finally, a setPosts mutation to update the posts' property in our state. ``` export default new Vuex.Store({ state: { posts: [], }, actions: { async getAllPosts({ commit }) { return commit('setPosts', await api.get('/post/get_all')) }, }, mutations: { setPosts(state, response) { state.posts = response.data.data; }, }, strict: debug }); ``` Now that our store is all set up, we’ll move into the `AllPosts.vue` component to render all the created posts.
--- ## View Posts We’ll render all the created posts in individual cards inside columns and use a dialog box to view individual post details ( this is where element-ui comes in handy ). Let’s update the `AllPosts.vue` file: ``` <template> <div class="row"> <div class="col-md-6" v-for="(post, i) in posts" :key=i> <div class="card mt-4"> <img v-if="post.post_images.length" class="card-img-top" :src="post.post_images[0].post_image_path"> <div class="card-body"> <p class="card-text"><strong>{{ post.title }}</strong> <br> {{ truncateText(post.body) }} </p> </div> <button class="btn btn-success m-2" @click="viewPost(i)">View Post</button> </div> </div> <el-dialog v-if="currentPost" :visible.sync="postDialogVisible" width="40%"> <span> <h3>{{ currentPost.title }}</h3> <div class="row"> <div class="col-md-6" v-for="(img, i) in currentPost.post_images" :key=i> <img :src="img.post_image_path" class="img-thumbnail" alt=""> </div> </div> <hr> <p>{{ currentPost.body }}</p> </span> <span slot="footer" class="dialog-footer"> <el-button type="primary" @click="postDialogVisible = false">Okay</el-button> </span> </el-dialog> </div> </template> ``` Next in our script section, we want to use the Vuex `mapState` helper which generates computed getter functions for us. We’ll pass an array of strings to `mapState` with the mapped posts property. We’ll also trigger the `getAllPosts` action in a `beforeMount` hook using `store.dispatch` (Alternatively, we can use the `mapActions` helper to grab the getAllPosts action). We’ll then define a helper function `truncateText` to truncate long post contents and one more function `viewPost` to view a post’s detail in a dialog box.
``` <script> import { mapState } from 'vuex'; export default { name: 'all-posts', data() { return { postDialogVisible: false, currentPost: '', }; }, computed: { ...mapState(['posts']) }, beforeMount() { this.$store.dispatch('getAllPosts'); }, methods: { truncateText(text) { if (text.length > 24) { return `${text.substr(0, 24)}...`; } return text; }, viewPost(postIndex) { const post = this.posts[postIndex]; this.currentPost = post; this.postDialogVisible = true; } }, } </script> ``` To bundle everything and watch our files for changes, we’ll run [Laravel Mix](https://laravel.com/docs/5.8/mix): ``` npm run watch ``` You can start your Laravel application with: ``` php artisan serve ``` Your application should be running on `localhost:8000`. Register a new user and be sure you can view your home page (of course you might not see a lot on it yet). I’ll go ahead to use [Tinker](https://laravel.com/docs/8.x/artisan#tinker) to create some Post and related Post Images. Tinker allows you to interact with your entire Laravel application on the command line, including the Eloquent ORM; it’s an interactive PHP shell. You can access the tinker interface with: ``` php artisan tinker ``` I won’t go into all the details about using tinker, but if everything works fine you should be able to create posts and images from the tinker shell: ``` >>> $user = App\User::find(1) >>> $post = App\Post::create([ "title" => "Some blog post", "body" => "this is a random post about absolutely nothing", "user_id" => $user->id ]) >>> $postImage = App\PostImage::create([ "post_id" => $post->id, "post_image_caption" => $post->title, "post_image_path" => "https://skillsouq.com/wp-content/uploads/2014/10/background_01.jpg" ]) ``` --- ## Create Posts To create a new post, we’ll need a form to take the post title, post content, and post images. For the image upload, we’ll be using element-ui’s upload component; this will help us handle and preview the files properly… It also has a better user experience.
We’ll update the `CreatePost.vue` component: ``` <template> <div class="card mt-4" :key="componentKey"> <div class="card-header">New Post</div> <div class="card-body"> <div v-if="status_msg" :class="{ 'alert-success': status, 'alert-danger': !status }" class="alert" role="alert" >{{ status_msg }}</div> <form> <div class="form-group"> <label for="exampleFormControlInput1">Title</label> <input v-model="title" type="text" class="form-control" id="title" placeholder="Post Title" required /> </div> <div class="form-group"> <label for="exampleFormControlTextarea1">Post Content</label> <textarea v-model="body" class="form-control" id="post-content" rows="3" required></textarea> </div> <div class> <el-upload action="https://jsonplaceholder.typicode.com/posts/" list-type="picture-card" :on-preview="handlePictureCardPreview" :on-change="updateImageList" :auto-upload="false" > <i class="el-icon-plus"></i> </el-upload> <el-dialog :visible.sync="dialogVisible"> <img width="100%" :src="dialogImageUrl" alt /> </el-dialog> </div> </form> </div> <div class="card-footer"> <button type="button" @click="createPost" class="btn btn-success" >{{ isCreatingPost ? "Posting..." : "Create Post" }}</button> </div> </div> </template> ``` We’ll add some styling to the upload component: ``` <style> .avatar-uploader .el-upload { border: 1px dashed #d9d9d9; border-radius: 6px; cursor: pointer; position: relative; overflow: hidden; } .avatar-uploader .el-upload:hover { border-color: #409eff; } .avatar-uploader-icon { font-size: 28px; color: #8c939d; width: 178px; height: 178px; line-height: 178px; text-align: center; } .avatar { width: 178px; height: 178px; display: block; } </style> ``` In our script section, we’ll import the `mapActions` helper which maps component methods to store.dispatch calls.
```
import { mapActions } from 'vuex';
```

Next we’ll be needing these data properties:

```
export default {
  name: "CreatePost",
  data () {
    return {
      dialogImageUrl: '',
      dialogVisible: false,
      imageList: [],
      status_msg: '',
      status: '',
      isCreatingPost: false,
      title: '',
      body: '',
      componentKey: 0
    }
  },
  ...
```

Within our methods property, we’ll map the `getAllPosts` action we defined in our store to the component:

```
methods: {
  ...mapActions(['getAllPosts']),
}
```

Next in our methods property, we’ll need a couple of methods to handle image preview and update our image list.

```
methods: {
  updateImageList (file) {
    this.imageList.push(file.raw)
  },
  handlePictureCardPreview (file) {
    this.dialogImageUrl = file.url
    this.imageList.push(file)
    this.dialogVisible = true
  },
  ...
}
```

We’ll need another method to handle showing success and error notifications.

```
...
showNotification (message) {
  this.status_msg = message
  setTimeout(() => {
    this.status_msg = ''
  }, 3000)
}
...
```

One more method to handle validation of our form… (you thought it was over, eh?)

```
...
validateForm () {
  // make sure we have a title, a body, and at least one image
  if (!this.title) {
    this.status = false
    this.showNotification('Post title cannot be empty')
    return false
  }
  if (!this.body) {
    this.status = false
    this.showNotification('Post body cannot be empty')
    return false
  }
  if (this.imageList.length < 1) {
    this.status = false;
    this.showNotification('You need to select an image');
    return false;
  }
  return true
},
...
```

And finally our `createPost` method:

```
createPost (e) {
  e.preventDefault()
  if (!this.validateForm()) {
    return false
  }
  const that = this
  this.isCreatingPost = true
  const formData = new FormData()
  formData.append('title', this.title)
  formData.append('body', this.body)
  // jQuery comes preinstalled with Laravel-Vue so we can do this
  $.each(this.imageList, function (key, image) {
    formData.append(`images[${key}]`, image)
  })
  window.api.post('/post/create_post', formData, {
    headers: { 'Content-Type': 'multipart/form-data' }
  })
    .then((res) => {
      this.title = this.body = ''
      this.status = true
      this.showNotification('Post Successfully Created')
      this.isCreatingPost = false
      this.imageList = []
      /*
        this.getAllPosts() can be used here as well
        note: "that" has been assigned the value of "this" at the top
        to avoid context related issues.
      */
      that.getAllPosts()
      that.componentKey += 1
    })
},
```

Because we need to send image files to our API, we’ll be using the [FormData](https://developer.mozilla.org/en-US/docs/Web/API/FormData) class. The FormData interface provides a way to easily construct a set of key/value pairs representing form fields and their values. This is so our request looks like it’s coming from an actual form, which allows Laravel to read the image files properly.

If all went well and the gods were with us, we should have it working as expected:

![Working site](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ufexe6p8bnj7k31s455.png)

![Working site](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/my7hvttem3x7y7uufdac.png)

I know this was a long ride; hopefully, you were able to make this work on your end. The codebase for this tutorial lives on [this repository](https://github.com/MartinsOnuoha/BlogCafe), feel free to explore.

{% embed https://github.com/MartinsOnuoha/BlogCafe %}

Cheers ☕️

---
martinsonuoha
1,786,647
Weekly Roundup 043 (Mar 04): 🔥Hot Topics🔥 in #workplace, #sharepoint, and #powerplatform
Hey fellow developers! It's @jaloplo, here to give you the latest scoop on what's been happening in...
22,696
2024-03-11T08:58:23
https://dev.to/jaloplo/weekly-roundup-043-mar-04-hot-topics-in-workplace-sharepoint-and-powerplatform-4p7i
roundup, sharepoint, workplace, powerplatform
Hey fellow developers! It's @jaloplo, here to give you the latest scoop on what's been happening in the [#workplace](https://dev.to/t/workplace), [#sharepoint](https://dev.to/t/sharepoint), and [#powerplatform](https://dev.to/t/powerplatform) communities. 😎 ## [#workplace](https://dev.to/t/workplace) - [Dungeon Mode](https://dev.to/ademagic/dungeon-mode-2mc4) by [Miko](https://dev.to/ademagic) - [Eye Care: Tips for Managing Dry Eyes and Fatigue (Bite-size Article)](https://dev.to/koshirok096/eye-care-tips-for-managing-dry-eyes-and-fatigue-bite-size-article-1ic8) by [koshirok096](https://dev.to/koshirok096) ## [#sharepoint](https://dev.to/t/sharepoint) - [Two SharePoint Embedded Architectures You Must Know About](https://dev.to/jaloplo/dont-miss-out-on-these-two-sharepoint-embedded-architectures-you-can-use-as-a-developer-3j90) by [Jaime López](https://dev.to/jaloplo) - [Exploring SharePoint with Microsoft Graph API](https://dev.to/tlayach/exploring-sharepoint-with-microsoft-graph-api-4nj5) by [Paulo GP](https://dev.to/tlayach) ## [#powerplatform](https://dev.to/t/powerplatform) - [Power Automate - Scripts with App Scripts](https://dev.to/wyattdave/power-automate-scripts-with-app-scripts-2l9) by [david wyatt](https://dev.to/wyattdave) - [Power Automate: How to trigger a flow only when specific field(s) have certain values](https://dev.to/frederik_vl/power-automate-how-to-trigger-a-flow-only-when-specific-fields-have-certain-values-11k9) by [Frederik Van Lierde](https://dev.to/frederik_vl) - [Using Power Automate to build and execute SharePoint REST API Queries](https://dev.to/fernandaek/using-power-automate-to-build-and-execute-sharepoint-rest-api-queries-53hf) by [Fernanda Ek](https://dev.to/fernandaek) > *Are you well-acquainted with SharePoint, Microsoft Teams, or OneDrive and their capacity to enhance employee value? Familiar with the intricacies of Power Apps, Power Automate, or other Power Platform services? 
Concerned about the Workplace experience and eager to share insightful perspectives?* > > *Let's harness your expertise to create valuable content! I'm on the lookout for collaborators to co-author insightful articles and support others. 🚀💼 Connect with me on [Twitter](https://twitter.com/jaloplo), [Mastodon](https://techhub.social/@jaimelopezlopez) or [LinkedIn](https://www.linkedin.com/in/jaimelopezlopez/). Feel free to send me a message!* That's all for this week's roundup! Thanks for tuning in, and remember to keep the discussions lively and informative in our tags. 💬 If you have any suggestions for future topics, feel free to drop them in the comments below. See you next week! 👋
jaloplo
1,786,659
Data storage component design challenge
I've just completed a front-end coding challenge from @frontendmentor! 🎉 You can see my solution...
0
2024-03-11T09:11:36
https://dev.to/vignesh470/data-storage-component-design-challenge-2i81
I've just completed a front-end coding challenge from @frontendmentor! 🎉

You can see my solution here: https://www.frontendmentor.io/solutions/simple-data-storage-component-page-DhjoiFTCs-

Any suggestions on how I can improve are welcome!

#webdevelopment #frontenddevelopment #html #css #webdevelopmentproject

![Desktop solution](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/509daar7ycyejh2sqbvb.png)

![Mobile solution](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/scpww3z1fzsoteb2603y.png)
vignesh470
1,786,715
Cleaning service
Company Name: Waely Clean Street Address: 143 Hawbaker Industrial Drive City: State...
0
2024-03-11T09:43:03
https://dev.to/clean/cleaning-service-49ee
Company Name: Waely Clean Street Address: 143 Hawbaker Industrial Drive City: State College Province/State: Pennsylvania (PA) USA Phone: 814-206-7406 https://www.waelyclean.com/ Cleaning service residential cleaning carpet cleaning near me airbnb cleaning service vacation rental cleaning service student storage moving helper near me apartment turnover cleaning services near me Cleaning service near me house cleaners professional cleaners airbnb management state college PA short term rental management companies state college PA vacation rental management state college PA Apartment cleaner box storage Floor Cleaning Services movers helpers
clean
1,788,490
Env vs AppSettings, Want to keep your Env?
Sometimes you just want to keep your lovely .env file as you were just working on a frontend project...
0
2024-03-12T20:59:48
https://dev.to/ethernmyth/env-vs-appsettings-want-to-keep-your-env-149
csharp, dotnet, programming, coding
Sometimes you just want to keep your lovely .env file as you were just working on a frontend project that kept your workflow on 🔥. Then it just rains appsettings.json all of a sudden.

<img src="https://media.giphy.com/media/MBGnTTKiVsZtOg2TJd/giphy.gif?cid=790b7611jqs9bl0tl1xdewr01acjlm4wspypkmoc9vicofuq&ep=v1_gifs_search&rid=giphy.gif&ct=g" width="80%" height="80%"/>

But who said we can't try anything to beat the mighty environment? I guess you know why many devs/engineers love .env files: they are really easy to use on every project, especially when going into that testing phase. Even on Docker, they feel like air. 😎

Overall, everyone has their preferences between the two. Env files are just more widely available and used across many platforms; check how integrations on platforms like Railway and Render lean on Docker support. They are great if you get that env file setup right.

Leveraging .env files on .NET can be quite difficult to get right, since you might have to do some of the configuration separately. But once again 👻 came to the disappearing rescue. I designed a NuGet package that helps you set up that env file quicker and use it faster. It even allows dynamic properties. `c-em-env` boasts the richness to get the env file fields right to your doorstep. This just sounded like an AD 🤣

Nonetheless, go to nuget.org and search for `c-em-env`; it might be your luck.

[C-EM-ENV](https://www.nuget.org/packages/c-em-env/1.0.1)

Also available through GitHub right here: [C-EM-ENV](https://github.com/Ethern-Myth/c-em-env)

**Versions might differ but it is the same package**

All the documentation is available, and so is the download. Try it now. With a few lines of code, create your lovely .env in the project root directory, then have your file read.

If you find any bugs, errors, or anything to add, do the basics right here! Comment or suggest.

More info will be coming soon. 🌊 coming up on the next episode ....
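Part of why .env files travel so well is that the format is trivial: KEY=VALUE lines plus `#` comments. Here is a minimal parser sketch in JavaScript (the frontend world where most of us met .env files), purely to illustrate the format; this is not the `c-em-env` package's API:

```javascript
// A .env file is nothing more than KEY=VALUE lines (with optional comments).
// Minimal illustration of the format, not any package's real API.
function parseEnv(contents) {
  const result = {};
  for (const line of contents.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // ignore malformed lines
    result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return result;
}

const sample = 'DB_HOST=localhost\n# secret stuff\nDB_PORT=5432\n';
console.log(parseEnv(sample)); // { DB_HOST: 'localhost', DB_PORT: '5432' }
```

Real packages add quoting, escaping, and variable expansion on top, but the core idea stays this small, which is exactly why the format is so portable.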
ethernmyth
1,786,731
Exploring the Boundless Possibilities of AWS PartyRock: A Journey into Generative AI
Are you ready to dive into the exhilarating realm of generative AI without the hassle of complex...
0
2024-03-11T10:05:26
https://dev.to/amarpreetbhatia/exploring-the-boundless-possibilities-of-aws-partyrock-a-journey-into-generative-ai-3bkb
partyrockhackathon, awsbedrock, aws, webdev
Are you ready to dive into the exhilarating realm of generative AI without the hassle of complex coding? Look no further than AWS PartyRock – the ultimate playground for AI enthusiasts and budding developers alike. In this blog post, I'm thrilled to share my firsthand experience with this platform, which is built on the AWS Bedrock service, and how it's revolutionizing the way we approach AI experimentation.

Check the running demo

{% embed https://youtu.be/-EBIVaOYhcs?si=xj3RnYpCV8Aw3vPE %}

And try it yourself at: [The Empathy Engine](https://partyrock.aws/u/musiclistener/au7BqKR_B/The-Empathy-Engine)

**Unleashing Creativity with AWS PartyRock**

AWS PartyRock provides an intuitive interface that empowers users to unleash their creativity and bring their boldest AI ideas to life.

**Discovering the Magic of Prompt Engineering**

At the core of PartyRock's brilliance lies the concept of prompt engineering – a technique that allows users to craft clear and concise instructions to guide AI models towards desired outcomes. With just a few clicks, users can harness the full potential of generative AI, exploring its capabilities and pushing the boundaries of what's possible.

**My Journey with AWS PartyRock**: A Testimonial

As someone with a passion for AI and a desire to learn, my journey of playing with the Foundational Models available in Bedrock has been nothing short of extraordinary. From experimenting with various prompts to witnessing the remarkable transformations brought to life by the AI models, every moment has been an adventure filled with discovery and excitement.

This cutting-edge technology provides users with a solid foundation to build upon, enabling them to create innovative and experimental AI applications with ease. Whether you're a seasoned developer or a curious newcomer, AWS PartyRock simplifies the learning process and empowers you to unleash your creativity without constraints.

**Embark on Your AI Journey Today**

Ready to embark on your own AI journey?
Visit https://partyrock.aws/ to dive into the world of AWS PartyRock and discover the endless possibilities that await. With its user-friendly interface, powerful Bedrock integration, and thriving community, PartyRock is the perfect platform to unleash your creativity and explore the cutting-edge frontier of generative AI. Let's rock the party and redefine the future of AI together!
amarpreetbhatia
1,786,848
Best 48 Web Development Tools Of All Time
Liquid syntax error: Tag '{% https://youtu.be/t5MIxlfNZxg %}' was not properly terminated with...
0
2024-03-11T12:50:00
https://www.lambdatest.com/blog/top-web-development-tools/
For web developers, there is no shortage of resources, be it text editors, learning materials, build tools, deployment tools, testing tools, or any other category that can ease their lives as web developers! However, if you are starting a new project, confusion may arise regarding the best-suited tool for your team. To clear this confusion, we have come up with this detailed post covering the 48 best web development tools that developers should know about.

Web development tooling is a broad subject encompassing multiple facets like development, testing, etc. Hence, we have covered the tools in categories based on their usage.

So, let’s get started…

> **With our easy-to-use online [XML to HTML](https://www.lambdatest.com/free-online-tools/xml-to-html?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) converter tool, you can convert your XML files to HTML in seconds and customize your output with various formatting options. **

## IDEs for Developers

A text editor or IDE is one of the most important web development tools. Because obviously, you need a tool for coding, don’t you? So let’s kick-start this blog with this most important product category.

## Visual Studio Code

![](https://cdn-images-1.medium.com/max/3200/0*e8ijLVBYtPyr49Tc.png)

Visual Studio Code ranks as one of the most popular IDEs for web developers. This IDE is quite powerful and comes with built-in support for JavaScript, NodeJS, and TypeScript developers. It also supports multiple extensions for C#, Python, PHP, and C++.
This open-source tool from Microsoft has the following popular features:

* Text auto-completion with IntelliSense
* Syntax highlighting
* Easy navigation to functions
* Built-in cmd terminal, making it easier for Node, Angular, and React developers to perform their entire work in a single IDE
* Boilerplates and templates
* Seamless integration with GitHub

## Vim

![](https://cdn-images-1.medium.com/max/3200/0*dQQnwaQJcQcLBtm1.png)

Vim is a reliable and stable text editor compatible across Mac, Linux, and Windows platforms. Developers can generate scripts with just a few commands using the Vim editor. You can use this editor from:

* Command-line interface
* Standalone application in GUI

The learning curve is a bit steep, and you must be ready to learn a whole new set of features that are different from the other popular IDEs.

> **Want to convert [YAML to XML](https://www.lambdatest.com/free-online-tools/yaml-to-xml?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools)? Save time and effort by converting your YAML files to XML with our free online tool. Fast, easy, and efficient. No signups required. **

## Webstorm

![](https://cdn-images-1.medium.com/max/3200/0*Z6V4ouVSvRE17BCF.png)

If you want to unleash the full power of the JavaScript ecosystem, the ideal choice for you is WebStorm. Here are the top features that you might find useful:

* On-the-fly error detection
* Smooth navigation across code
* Ability to refactor JavaScript, TypeScript, CSS, LESS, SASS, etc.
* Intelligent way of code compilation

Apart from these features, you can also run and debug your automation scripts using [Jest](https://www.lambdatest.com/jest), [Mocha](https://www.lambdatest.com/mocha-js), Karma, and more. And finally, you will get a simplified way to work with GitHub or Mercurial. Give this IDE a try, and you will discover other interesting features as well.
> **Want to convert your [HTML to BBCode](https://www.lambdatest.com/free-online-tools/html-to-bbcode?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools)? Look no further! Use our free online HTML to BBCode converter tool to convert your website’s HTML content to BBCode format. **

## Sublime Text

![](https://cdn-images-1.medium.com/max/3200/0*5-HJwWqOr0W082lJ.png)

Sublime has been one of the most powerful and lightweight IDEs for web developers. The IDE works across multiple platforms and offers a range of customization options. This is what you can achieve using Sublime:

* The application is significantly fast and has an intuitive interface
* Supports code navigation as well as function & symbol navigation
* You can highlight syntaxes in React or ES6 using the Babel plugin
* Rich plugin ecosystem with a plethora of awesome plugins like SublimeLinter, Sidebar Enhancements, DocBlockr, etc.

## Notepad ++

![](https://cdn-images-1.medium.com/max/3200/0*Zfyy27pj1DUSUmfB.png)

Notepad++ is the most popular code editor when it comes to the advanced IDE category. This open-source tool comes in a very compact package. Notepad++ consumes significantly fewer resources in comparison to the other IDEs available in the market.

The most important factor that makes this IDE a part of our best web development tools is that its interface has been translated into more than 80 languages. This level of internationalization is not present in many IDEs. What if you don’t find your native language on the list? Well, in that case, you can easily translate the tool into your native language.

> **Make the conversion from [YAML to JSON](https://www.lambdatest.com/free-online-tools/yaml-to-json?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) quick and easy with our free online YAML to JSON converter tool. No signup or installation required. Try it now!
**

## Browser Plugins

Browser plugins are primarily used for performing smaller tasks in the world of web development tools. However, be it inspecting the color code of an element or validating the markup of a page, the importance of plugins cannot be ignored. This is the primary reason why we have added browser plugins to our ultimate list of web development tools. Let’s talk about some plugins that you can use for your project.

## Githunt

![](https://cdn-images-1.medium.com/max/3200/0*xToARFb4ASQGw9lJ.png)

Developers are always looking for inspiration to do better coding. Githunt is a React-based plugin for Chrome that allows you to explore the most starred projects hosted on GitHub. By default, you will get a list of the most popular projects curated on a weekly basis. You can also sort the projects based on their daily, monthly, or yearly popularity. You can also customize the view (i.e., list or grid) and apply filters across multiple languages if you do not find a popular project in your native language.

## WhatFont

![](https://cdn-images-1.medium.com/max/3200/0*W_KbgaVe5eIQc-mX.png)

The WhatFont Chrome extension can be your go-to extension when porting a new site from a legacy codebase. The plugin also supports the Google Font API and Typekit. Just click on the extension and hover over the text whose font family you intend to investigate. The plugin will tell you the font size, font weight, and font family. With more than a million users, it is one of the most popular tools used by web developers.

> **Looking for an easy way to convert your HTML files to JSON?
Try our free online HTML to JSON converter tool to convert your [HTML to JSON](https://www.lambdatest.com/free-online-tools/html-to-json?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) format in seconds **

## LambdaTest Chrome Extension

![](https://cdn-images-1.medium.com/max/3200/0*3chxuX8EM9EIB6Bd.png)

The availability of many browsers and platforms has made it important to test your website on different browser & platform combinations. As a business (or even a developer), it is not wise to invest heavily in a local testing infrastructure. This is where a platform like LambdaTest can be helpful, as it provides you a cloud-based [Selenium Grid](https://www.lambdatest.com/learning-hub/selenium-grid).

LambdaTest is an AI-powered test orchestration and execution platform that lets you run manual and automated tests at scale with over 3000+ real devices, browsers and OS combinations. With [LambdaTest Chrome Extension](https://www.lambdatest.com/chrome-extension), you can access the LambdaTest interface and perform cross browser testing of your web app simultaneously across multiple browsers and platforms online. This extension helps developers increase productivity, reduce testing time, achieve faster TTM, and improve overall browser coverage.

Here are some of the most exciting features of the LambdaTest Chrome Extension:

* Flexibility to take direct screenshots of up to 25 different browsers and operating systems online
* Scheduling screenshots using preferred date, time, and configurations
* Perform [geolocation testing](https://www.lambdatest.com/geolocation-testing) across 53 locations across the globe

Check out our detailed video tutorial on the LambdaTest Chrome extension for live and [automated screenshot](https://www.lambdatest.com/automated-screenshot) testing:

{% embed https://youtu.be/t5MIxlfNZxg %}

> **Need to convert HTML data to CSV format?
Use our [HTML to CSV](https://www.lambdatest.com/free-online-tools/html-to-csv?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) converter tool to convert your HTML data into CSV format quickly and easily. Try it out today. ** ## Lorem Ipsum Generator ![](https://cdn-images-1.medium.com/max/3200/0*ugoic9D4idkwlXej.png) Just creating a template with buttons, images, or carousels is not enough for designing a winning website. We also need a default text to mimic an actual site before delivering the template to the client. You must have seen “Lorem ipsum” default text in UI templates. Lorem Ipsum Generator is a useful plugin that provides a quick and easy way to customize and generate default text that compliments your amazing design. Not only the text, but you also get the option to customize the count of lines in each paragraph along with line breaks. > **Say goodbye to insecure passwords and unprotected data. Protect your sensitive data with our [RipeMD160 Hash Calculator](https://www.lambdatest.com/free-online-tools/ripemd160-hash-calculator?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools). Generate secure hashes with just a few clicks. ** ## ColorPick Eyedropper ![](https://cdn-images-1.medium.com/max/3200/0*S1MzJ_i8VyfSPsMZ.png) ColorPick Eyedropper is a selector meant for inspecting web pages. You can use it to easily identify the Hex color code of any web page (or a particular element on the page). In addition, the Zoom feature lets you inspect the line color or border color (even if it is as small as 1 px). Once you click on the extension and activate it, your mouse cursor will convert into a crosshair. After that, just hover on the section you want to identify, and on the right-hand side of the screen, you will get the color code in RGB format and the hex color format. 
## HTML Validator

![](https://cdn-images-1.medium.com/max/3200/0*u17U_W4FR7kKvCPj.png)

Wondering what an easy and quick way to validate markup within the web browser is? HTML validators can come in super handy in such scenarios. Though there are a number of validators out there, HTML Validator is the most popular out of the lot! To use an HTML validator, you just need to:

* Install the plugin
* Open a developer console on the page
* Navigate to the HTML validator tab
* Get insights into the HTML warnings and errors present on the page

> **Make your data tamper-proof with our [SHA256 Hash Calculator](https://www.lambdatest.com/free-online-tools/sha256-hash-calculator?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools). Create secure, one-way hashes for your data in no time. Start creating your hashes now! **

## JSON Viewer

![](https://cdn-images-1.medium.com/max/3200/0*xF1XeeOlsJb4biWL.png)

We normally get the minified version of JSON during compilation or when the data of a website is generated dynamically. Wouldn’t it be great if you could see the JSON data hierarchically? Well, if you have a lot of time, you can work with the raw data, but you surely do not have infinite time on your hands! If you are looking for a quicker way to filter a small field in a huge JSON file, this addon accelerates product development. With over 800,000 users, the JSON Viewer plugin has a rating of 4/5 and is well-known for its ease of use and reliability.

## React Developer Tools

![](https://cdn-images-1.medium.com/max/3200/0*LYNQnKHwYr0pH4KV.png)

This plugin is useful if you are developing your website using the React framework. You can inspect the React library using this tool. After installing the React Developer Tools extension, you will see two icons on the toolbar of your Chrome browser. One is for Profiler, and the other one is for Components. Profiler gives you detailed insights into the performance data.
On the other hand, Components shows details about the React library being used on the page. In a nutshell, this plugin is a must-have for any React developer.

## Dimension

![](https://cdn-images-1.medium.com/max/3200/0*gfgFSjI2q4s0uWf9.png)

Mouse hover is one of the effective ways to inspect paddings or margins in UX tools like Zeplin. Wouldn’t it be useful if we had a similar tool for browsers as well? Dimensions is a Chrome plugin that offers exactly that. Dimensions can be used to find the dimension, spacing, or gap between elements you see on a web page. After installation, you will find a crosshair icon on the toolbar. Just click on it, and you will see a similar icon on the screen. Move it around the object you want to measure. The exact dimensions will appear on the icon. The tool is simple but quite effective for developers and one of the top contenders in our best web development tools list.

> **Need to protect your data? Our [SHA384 Hash Calculator](https://www.lambdatest.com/free-online-tools/sha384-hash-calculator?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) creates secure, reliable, one-way hashes quickly and easily with just a few clicks. Try it out now! **

## Testing Tools

The most important activity that runs parallel to development is testing, so we have added testing tools to our list of best web development tools. In this section, we will discuss and list some popular tools that aim to enhance quality and further accelerate your web application development.

## LambdaTest

![](https://cdn-images-1.medium.com/max/3794/0*VS9SrQ4h2nPqeu1m.png)

Nowadays, the definition of an amazing web application or website is quite exhaustive. However, along with speed and great technology, user experience also plays quite an important role. In this digital-first era, your website (or web app) must perform exceptionally well on a range of browsers, devices, and operating systems.
To realize this perfect experience, you need to perform [cross browser testing](https://www.lambdatest.com/online-browser-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=webpage). LambdaTest is an AI-powered test orchestration and execution platform that lets you run manual and automated tests at scale with over 3000+ real devices, browsers and OS combinations. As a result, you can gain a competitive advantage by delivering a unified user experience, faster time-to-market, along with ensuring top-notch product quality. Here are some of the amazing features of the LambdaTest platform: * Live interactive browser compatibility testing * [Automation testing](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=webpage) using Selenium and Cypress frameworks * [Parallel testing in Selenium](https://www.lambdatest.com/blog/what-is-parallel-testing-and-why-to-adopt-it?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=blog) and Cypress for executing the faster go-to-market strategy * Automated Screenshot Testing & Geolocation Testing * Detailed test logs (and reports) for faster bug fixes * Supports integration with a large number of project management, CI/CD, & communication tools ## Selenium IDE ![](https://cdn-images-1.medium.com/max/3200/0*a4LfiyoQX6w6vH56.png) Are you looking for a UI-driven record and playback tool? If yes, then Selenium IDE for automation testing is the right choice for you. Selenium IDE is available for Chrome and Firefox browsers. You can export the created test into a script. It also offers easy [debugging](https://www.lambdatest.com/learning-hub/debugging) by setting breakpoints. The tool also offers multiple strategies for locating a recorded element and has a self-healing feature. 
Selenium IDE in [Selenium 4](https://www.lambdatest.com/learning-hub/selenium-4) also lets you export the recorded tests as code for all official language bindings like Java, C#, Python, etc.

> [**Shuffle letters](https://www.lambdatest.com/free-online-tools/shuffle-letters?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) in your text and add a touch of randomness with our simple easy-to-use online tool. Get creative and try it now for free, no download required. **

## Katalon Studio

![](https://cdn-images-1.medium.com/max/3200/0*lvjYTzl2laVexMTm.png)

Katalon is the perfect tool if you are looking for a middle ground between codeless and code-based testing tools. The tool is an all-in-one solution for test automation that comes with the following features:

* Productive IDE that can generate tests for all operating systems and platforms
* Record and store all UI elements, thereby enhancing reusability
* Offers a codeless experience
* Seamless integration into the testing ecosystem

{% embed https://youtu.be/dZNt9P5lRko %}

> **Tired of manually [sorting list](https://www.lambdatest.com/free-online-tools/sorting-list?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools)? Sort any list of names, strings, or numbers with our hassle-free online sorting tool. Try our automated list sorting tool now! **

## SoapUI

![](https://cdn-images-1.medium.com/max/3200/0*N6DV_hJWjLKJkPww.png)

Built for both testers and developers, SoapUI is an open-source tool that offers load, functional, and security testing, as well as service mocking. With an easy-to-use GUI, this tool is one of the best web development tools if you are a beginner in API testing and want to validate SOAP and REST-based web services.
Here are the major features of the SoapUI framework:

* Ability to create complex testing scenarios with drag-and-drop tests
* Ability to create load tests based on existing API tests
* Mimic web services without waiting for them to be completed
* Engaging open-source community

## Apache JMeter

![](https://cdn-images-1.medium.com/max/3200/0*po2CYiwmrqSInK-t.png)

Apache JMeter is another popular open-source tool meant for testing the performance of static and dynamic web applications. For example, you can use the tool to simulate a heavy load on a single server (or a group of servers), objects, or a network. The objective is to test the application’s strength or analyze how the site performs under different loads. The tool is portable, and you can run headless tests from any Java-compatible operating system like Mac, Windows, or Linux.

> **Tired of manually splitting long text strings? Try out our free online [String Split by Delimiter](https://www.lambdatest.com/free-online-tools/string-split-by-delimiter?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) tool to quickly split text strings into separate chunks using any delimiter of your choice. **

## Test Automation Frameworks

Test automation frameworks are becoming more important with each passing day as teams look to increase their efficiency and speed, especially at present, when customers are all about the Agile way of faster delivery. Automation testing in Agile accelerates the process of software development, testing, and delivery. Let’s discuss some of the popular test automation frameworks that we have added to our list of best web development tools.
## Robot Framework ![](https://cdn-images-1.medium.com/max/3200/0*ioZ9rXJS77pNd4Sf.png) When it comes to test automation frameworks, the first that comes to mind is the [Robot framework](https://www.lambdatest.com/blog/robot-framework-tutorial/?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=blog) in Python. It is a generic open-source framework primarily meant for test automation, where tests are written in Gherkin, a language with human-readable keywords. The framework is independent of operating systems and applications. ## Cypress ![](https://cdn-images-1.medium.com/max/3200/0*kXnc_5emaR_Kp3zy.png) If you want a complete end-to-end experience while testing your web application, [Cypress](https://www.lambdatest.com/learning-hub/cypress-tutorial) is the ideal framework. Here are some of the awesome features of the Cypress framework: * Simple installation without any additional dependencies * Preferred for [end-to-end testing](https://www.lambdatest.com/learning-hub/end-to-end-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=learning_hub) of modern web applications * Feature-rich automation dashboard * Reduced (or minimal) network flakiness * [Cypress testing](https://www.lambdatest.com/blog/cypress-test-automation-framework/?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=blog) on cloud lets you run cross-browser tests at scale > **Need to convert CSV to JSON? Try our free [CSV to JSON](https://www.lambdatest.com/free-online-tools/csv-to-json?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) converter tool to convert your CSV files to JSON format. Simple and easy to use. Try it now for free! 
** ## WebdriverIO ![](https://cdn-images-1.medium.com/max/3200/0*mGW1lAmXRtcf0t6E.png) Are you looking for an automation framework that allows you to write your scripts in JavaScript while leveraging Selenium? Unlike Selenium, if you are using WebdriverIO, you don’t need to write any code from scratch. This framework comes with all the resources you need to develop a scalable and sustainable test suite. The open-source framework will certainly make you happy if your team comprises JavaScript developers and coding enthusiasts. In addition, the testing community widely uses the WebDriverIO framework to perform automation testing. Refer to the detailed [WebDriverIO Tutorial](https://www.lambdatest.com/learning-hub/webdriverio) to get started with automation testing with the WebDriverIO framework. ## Jasmine ![](https://cdn-images-1.medium.com/max/3200/0*wxXNxiV6sLGI8c8V.png) With the shift-left approach becoming more mainstream in testing, the popularity of behavior-driven development (BDD) frameworks is growing with each passing day. Jasmine is a BDD-based framework that can be used for testing your JavaScript application. The framework comes with a clean human-readable syntax that lets you write automation tests with ease. Since it is a BDD framework, people from all departments (not just technical ones) can participate in creating and maintaining test scenarios. > **Tired of working with XML data? Our [XML to YAML](https://www.lambdatest.com/free-online-tools/xml-to-yaml?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) converter tool converts your XML data to YAML with ease. No installation required, and it’s 100% free!** ## Cucumber ![](https://cdn-images-1.medium.com/max/3200/0*Z8HbLuXKmBK6xR1L.png) When it comes to BDD, what if I told you that there is a framework that offers testing and collaboration between teams? If that is your requirement, you should give Cucumber a spin. 
To learn more about automation with Cucumber and NightwatchJS, follow this reference on [automation testing with Cucumber and NightWatch JS](https://www.lambdatest.com/blog/automation-testing-with-cucumber-and-nightwatchjs/?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=blog) and get valuable insights. Here are some of the power-packed features of the Cucumber framework: * Writing human-readable test cases that follow the BDD approach * Improved collaboration between teams * Work in small and rapid iterations, thereby improving communication between the team members Not only that, CucumberStudio also connects with your source control tool, like Git. ## Karma ![](https://cdn-images-1.medium.com/max/3200/0*iDLsRcR-QYaBKuz8.png) Are you looking for a framework where your team can worry less about configuration and focus on writing efficient test scripts? In that case, you should try out Karma. Here are some of the awesome features of Karma: * Control of the entire workflow from the IDE * Flexibility to run automation test scripts on real devices like tablets, phones, or even headless instances like PhantomJS * Open-source tool with a large community * Supports Continuous Integration with Jenkins and Travis * Option to run tests with Jasmine, QUnit, or any other adapter > **Need to convert HTML to XML? Our [HTML to XML](https://www.lambdatest.com/free-online-tools/html-to-xml?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) converter is fast, easy to use, and completely free. Convert your HTML files to XML in just a few seconds.** ## Serenity ![](https://cdn-images-1.medium.com/max/3200/0*4pFtf2JaCetoXpIa.png) Serenity is an open-source BDD framework meant for writing acceptance tests for your web applications. 
Here are some of the major features of Serenity: * Flexibility to write maintainable and scalable test scenarios * Generation of illustrative and highly detailed reports * Ease of mapping automated tests to the requirements ## Frontend Frameworks Are you developing a static website? If so, simple HTML and CSS will do the job. However, nowadays, almost everyone prefers dynamic single-page websites. Let’s discuss a few frameworks ranked in the State of JS 2020 survey that let developers create robust and scalable JavaScript-based web applications. ## React ![](https://cdn-images-1.medium.com/max/3200/0*Civu6zkZLO9FbeH5.png) Created by Facebook, React is one of the most popular web development frameworks. Ranked top in the State of JS 2020 survey, this framework is a favourite among all developers who work on single-page web applications. With the release of React Native, a cross-platform framework for mobile development, React is also gaining traction in the mobile development market. The framework has a virtual DOM, a stable way of writing code with a number of libraries, and over 169k stars on GitHub. ## Vue ![](https://cdn-images-1.medium.com/max/3200/0*to19BrlNYWmyQ2cP.png) Vue is popular since it comes with a rich feature set with a very small memory footprint (i.e., 29KB). Like Angular, you can also work with 2-way data binding. The official site of Vue has detailed documentation which makes it easier for you to learn. However, Vue can be a bit complex if you are using it for a large-scale project. > **Need to convert [XML to CSV](https://www.lambdatest.com/free-online-tools/xml-to-csv?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools)? Simplify your data conversion process with our XML to CSV Converter tool. Convert your XML files to CSV format quickly and easily in seconds! 
** ## Svelte ![](https://cdn-images-1.medium.com/max/3200/0*loROem2KK52e7hc5.png) Though the syntax of Svelte aligns closely with web standards, its approach is different from basic HTML, JS, or CSS. The design approach of Svelte is similar to Python. Although complex, it is still popular among developers because of its aesthetically pleasing appearance and clear-to-read templating based on HTML. During compilation, the framework optimizes the code and incorporates changes during runtime with minimal overhead. ## Preact ![](https://cdn-images-1.medium.com/max/3200/0*_YlCKd8BW9jTaYW8.png) What if you want to use React for small projects? The alternative is Preact, which comes with a smaller memory footprint (i.e., 3KB). It is based on the ES6 framework, and because of its small size, performance is not an issue. Preact comes with a fast virtual DOM. It uses standard attributes of HTML, and you can share reusable components like UI elements or data providers. The major downside is that it has a smaller developer community. ## Ember ![](https://cdn-images-1.medium.com/max/3200/0*2lzQYu7d_necEgVI.png) Ember is one of the best frameworks for developing client-side web applications. It is considered the one-stop solution for handling application flow and data management. Apps developed using Ember have a high performance since the framework can simultaneously handle multiple tasks. In addition, Ember has its own asset pipeline, services, and routing, thereby providing you a full-stack framework for development. > **Looking to convert [binary to hex](https://www.lambdatest.com/free-online-tools/binary-to-hex?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools)? Convert binary numbers to hex with ease using our free online Binary to Hex Converter tool. Perfect for developers and coders. 
** ## LitElement ![](https://cdn-images-1.medium.com/max/3200/0*k2WvTdYZIO2UiLFe.png) Though LitElement is not as popular as Angular or React, it is still preferred due to its ability to create fast, lightweight web components. Therefore, it is ideal if you are working on small projects. You can also use the components in any CMS or a framework like Vue or React. Moreover, as you can use JavaScript, you no longer need to learn any new language for the template. ## AlpineJS ![](https://cdn-images-1.medium.com/max/2000/0*LCwG44y4UNn6bl3V.png) AlpineJS is another popular framework meant for handling small projects. The framework is popular for apps that run on the server-side. This is because the framework leaves a small footprint in the app. With a size of about 4KB, it works great with the current templates. DOM manipulation is a breezy task with AlpineJS. To learn more about various [web development frameworks](https://www.lambdatest.com/blog/best-web-development-frameworks/), you can refer to this guide and get better insights on which framework suits best for your project needs. ## Repository and Collaboration Development and testing are important, but collaboration is increasingly important for teams. Let’s discuss some popular platforms we’ve added to our ultimate list of web development tools that enhance team collaboration. We have also collated a list of the top code repository tools. ## Jira ![](https://cdn-images-1.medium.com/max/3200/0*K9AcwZre_WsY0Xiy.png) If you are using [Agile methodology](https://www.lambdatest.com/learning-hub/Agile-development-methodologies), you’ve surely come across Jira. It is considered one of the best platforms for cross-team collaboration. It has gained more prominence since remote work picked up due to the ongoing COVID-19 pandemic. 
Here are some of the major features of Jira: * Powerful features with ease of customization * Easy-to-manage workflows * Flexibility to track changes in the project * Provision of a centralized dashboard through which you can share data with other team members > **Convert your [JSON to CSV](https://www.lambdatest.com/free-online-tools/json-to-csv?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=free_online_tools) format in seconds with our easy-to-use JSON to CSV Converter. It’s fast, reliable, and user-friendly, making data conversion simple.** ## Asana ![](https://cdn-images-1.medium.com/max/3200/0*SCUJBIhVAbsHtNiH.png) Asana allows you to assign tasks to other team members, check deadlines, and assign followers to your projects. Major features of Asana: * Assign and organize development activities that you and your team can view in a list * Timeline feature lets you map work according to time * The rich and interactive dashboard helps your team to focus on priority tasks ## GitHub ![](https://cdn-images-1.medium.com/max/3200/0*SH4hOAotLzhfC4k9.png) Probably the largest platform for hosting code repositories, GitHub lets you manage code and projects and develop software with other developers. GitHub allows you to: * Write code in a better way * Collaborate with teams * Execute collaborative team reviews * Provision of Secure Private repositories * Access control to the checked-in repositories ## Bitbucket ![](https://cdn-images-1.medium.com/max/3200/0*BE8ZibrucH8nXVOn.png) From Atlassian, the creators of Jira, comes a popular repository: Bitbucket. The repository offers you unlimited private codebases for Git or Mercurial. 
Features of Bitbucket: * Approve code that is pending for review * Create and manage pull requests * Secure workflow and flexible deployment models * Integration with JIRA ## Trello ![](https://cdn-images-1.medium.com/max/3200/0*2NmTJO-KFCqSnIsK.png) Trello is a small but useful tool for collaboration, especially for smaller projects. The tool offers you the option to organize projects on boards. At a glance, the boards tell you what is going on, who is working on what, and more. Imagine Trello to be a whiteboard where you have added a list of sticky notes, each note having a task and one or more team members assigned for each task. Now imagine each sticky note having an attachment of photos, data sources, documents, and a section where you can add comments. A portable whiteboard that you can access anywhere, from any device. ## Basecamp ![](https://cdn-images-1.medium.com/max/3200/0*wS8cUTMjAi10EMSv.png) Basecamp is a collaboration tool like JIRA or Asana. Using this tool, people manage their projects while communicating with the team members. You can use the tool to keep track of your work, assign tasks, set up and view deadlines, plan discussions, and more. Although it is considered a project management tool, we can categorize it as a collaboration tool because of its flexibility to its users. ## Jenkins ![](https://cdn-images-1.medium.com/max/3200/0*Hm0sWCWQMnx-09kH.png) CI/CD has become a must-have in today’s hyper-competitive environment. [Jenkins](https://www.lambdatest.com/learning-hub/jenkins?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=learning_hub) is one such tool that leads the race in the CI/CD market. It offers multiple plugins to build, deploy and automate a project. 
Here are the major features of Jenkins: * Open-source tool that comes as a self-contained program based on Java * Easy installation and configuration * Can be used as a simple CI server, with the flexibility to convert into a CD hub for a project * Distributed architecture of Jenkins allows you to set it up across multiple machines and execute a faster code build, test, deployment. ## Other Notable Tools As I have noted earlier, the count of best web development tools is almost endless. Let’s close our post by discussing [responsive testing](https://www.lambdatest.com/learning-hub/responsive-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=learning_hub) tools, task runners, [CI/CD tools](https://www.lambdatest.com/blog/best-ci-cd-tools/?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=blog) as well as some browser-based development tools that make a developer’s life easier by reducing repetitive tasks. ## LT Browser ![](https://cdn-images-1.medium.com/max/3764/0*hEedwqK-6t-QGXrO.png) There are many responsive testing tools available in the market, some with an advanced set of features. [LT Browser](https://www.lambdatest.com/lt-browser/?utm_source=devto&utm_medium=organic&utm_campaign=mar_11&utm_term=vs&utm_content=webpage) is one of them and can be used to test various device viewports simultaneously. You can open a website in the LT Browser and perform live testing across 50+ pre-installed device viewports. This freemium tool is built for the purpose of enabling rapid/accurate testing of websites and providing quick feedback to developers. 
Some of the features include: * Allows responsive testing on multiple viewports like mobile, tablets, desktops, and laptops * Generates performance reports of websites powered by Google Lighthouse * Supports different network conditions while testing * Compares devices on multiple viewports simultaneously * Supports Hot reloading * Allows you to take full-page screenshots * Records test sessions ![](https://cdn-images-1.medium.com/max/2000/0*nXRYPwBgQSoCtW9C.png) ## Grunt ![](https://cdn-images-1.medium.com/max/3200/0*b4OrxT12M_rrLIPE.png) The main reason for using Grunt is automation. The less work you spend on repetitive tasks, the more time you can invest in finding innovative solutions to implement in your project. Be it code compilation, unit testing, linting, or other tasks, free-to-use Grunt is a popular task runner that can do most mundane tasks for your team with less effort. Moreover, the ecosystem of Grunt is evolving at a rapid pace. With a huge count of plugins, you can choose the ones you need and automate almost anything. ## Gulp ![](https://cdn-images-1.medium.com/max/3200/0*0wUmD3na6v9eV65s.png) Another popular task runner like Grunt, Gulp uses the flexibility of JavaScript and automates slow tasks to build an efficient pipeline. You can use Gulp to write your own code and perform a chain of tasks with a single command. Since the tool is open source, there are lots of community-built plugins. You can use them to get started quickly with Gulp. Each plugin does a small amount of work! Collectively using them allows you to build a chain of tasks that can give you the desired result. ## Docker ![](https://cdn-images-1.medium.com/max/3200/0*qmAEjdUdOIIk3sGy.png) Docker makes our life easier by taking away the mundane and repetitive configuration tasks that we perform throughout the development cycle of a web application. 
Here are the major features of Docker: * Flexibility to develop a unique app on Mac and Windows * Easy integration with popular tools like GitHub, CircleCI, or VS Code * Seamless collaboration with other team members, helping you publish images in the Docker Hub * Deliver multiple applications and run all of them in your environment, including design, staging, testing, and production * [Run Selenium tests with Docker](https://www.lambdatest.com/blog/run-selenium-tests-in-docker/) for expedited automation testing ## Sourcetree ![](https://cdn-images-1.medium.com/max/2000/0*ulX3cXqh4vH1obLN.png) Are you unfamiliar with the command-line interface and the commands needed to push and pull code from a GitHub repository? Sourcetree brings you a free Git client for both Mac and Windows. With Sourcetree, you can visualize your code, find changes and conflicts, and resolve them with ease. The tool works for both Git and Mercurial. You can visualize the latest code, compare your changes, and update code with confidence. If you are not that much confident with Git, you can learn it easily with a set of tutorials that covers merging, branching, and many more. ## GitLab ![](https://cdn-images-1.medium.com/max/3200/0*u0tMLGS77lIZ9lOb.png) GitLab is an open-source DevOps platform that lets you view your project, collaborate with other team members, and ship at a faster pace. Major features of GitLab: * Single interface, data store, and conversation thread making it easy to use the tool * Scalable and powerful end-to-end automated operation resulting in improved efficiency within the team * Highly secure tool that lets you unearth vulnerabilities and code quality issues. ## Codepen ![](https://cdn-images-1.medium.com/max/3200/0*6Vk2qgEuHR1sqpxA.png) Suppose you have just gone through new web technology and want to test how it works for your project. You don’t need to worry about setting up an environment for just a simple trial run since Codepen can help you in that area! 
It is a social development environment where UI developers build and deploy their work with a readily configured environment. You can browse and fork through multiple readily available codes, helping you learn with the community. ## JSFiddle ![](https://cdn-images-1.medium.com/max/3200/0*JZp8lk1xSk4HQs0y.png) Just like Codepen, [JSFiddle](https://jsfiddle.net/) is another great tool for testing and sharing your code. You don’t even need to register to try out a code. The online tool offers you three sections to individually type HTML, CSS, and JS code. The site will take care of the build and deployment in real-time. It also offers color-coded syntax and a tidy-up feature that properly formats the code with appropriate spaces. Moreover, you can browse fiddles published by others and learn new stuff. So, the next time someone in your team gets stuck in implementing a new thing, create a fiddle and share it with them to make your team’s task easier. ## Wrapping It Up We hope that you will find the tools that we listed to be quite useful. Choose the one that you need based on your project and your stakeholder and team’s requirement. If you are working on a large project, go for frameworks like Angular or React and collaboration tools like Jira. If your project is small, you can give Trello or Preact a try. Let us know if you have come across similar and useful tools that consume fewer resources while reducing your workload.
arnabroychowdhury
1,786,861
Meme Monday
Meme Monday! Today's cover image comes from last week's thread. DEV is an inclusive space! Humor in...
0
2024-03-11T13:23:09
https://dev.to/ben/meme-monday-ha4
jokes, watercooler, discuss
**Meme Monday!** Today's cover image comes from [last week's thread](https://dev.to/ben/meme-monday-2j0b). DEV is an inclusive space! Humor in poor taste will be downvoted by mods.
ben
1,787,019
The Adventures of Blink #16: Continuous Testing
Hey friends! Today we add to the "Becoming a DevOps" series with the topic of "Continuous Testing". ...
26,840
2024-03-28T12:38:30
https://dev.to/linkbenjamin/the-adventures-of-blink-16-continuous-testing-47f8
devops, testing, beginners, learning
Hey friends! Today we add to the "Becoming a DevOps" series with the topic of "Continuous Testing". By the time we finish this series we're going to have a complete look at a whole lot of DevOps principles. I hope you're enjoying the journey as much as I am! ## What's the big deal about testing? Maybe it was just a failing of my particular faculty, but in school we didn't talk about automated testing... like, at all. Testing your code meant "run the program and try things". It wasn't until much, much later that I learned from some excellent software engineers (Thanks @sturdy5 !) about the idea of automating the testing process. ### Types of Testing There are a few possible phrasings of "test automation"... but to my mind, they're all just focus points within the umbrella of software testing. Why do we need the distinctions? Well, mostly for our own ability to keep things organized... at the core, software testing is just ensuring that code does what it's supposed to. Nevertheless, we should probably cover a few key terms to help you get started and to make sure you know what people are talking about. #### Unit Testing Unit Testing is the discipline of validating that units of code perform correctly regardless of their inputs. Think of unit tests like the Proofs you had to write back in math class - how do you prove that an isosceles triangle has two congruent angles? In the same fashion, a unit test describes all the possible inputs to a bit of code and its expected outputs. For example, you might have a method in your program that calculates the amount of interest on a given principal sum, given a rate and its compounding frequency. ![Python Code to calculate interest](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l5j3leuahnmsczou99g.png) This is something that most definitely needs to be mathematically consistent! The unit test would take all possible inputs and provide expectations of outputs that those inputs would give. 
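To make that idea concrete, here's a minimal pytest-style sketch. The `simple_interest` body below is an assumption re-created from the screenshot (using the common formula P × R × T / 100), not the exact code from the image:

```python
# Hypothetical re-creation of the method from the screenshot:
# simple interest = principal * rate * time / 100.
def simple_interest(principal, rate, time):
    return principal * rate * time / 100

# Each unit test pins one known input to its expected output, proof-style.
def test_two_years_at_five_percent():
    assert simple_interest(1000, 5, 2) == 100.0

def test_zero_principal_earns_nothing():
    assert simple_interest(0, 5, 2) == 0.0
```

With these saved in a `test_*.py` file, running `pytest` discovers and executes them automatically.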
Very straightforward, don't you think? But Unit Testing requires a slightly different thought process to be successful... The Art of Unit Testing comes in the ability of the programmer to anticipate things not following the "happy path". You might look at the previous example and think "well that's pretty easy code. It's like 5 lines long. What's to test?" BUT... What happens if you call

```python
simple_interest(-1,-1,-1)
```

Will those negative numbers cause an undesirable behavior in your application? What if you passed it an array of values for principals - is it your intent to calculate the interest for each array element and return an array of results, or should that be an error condition? Or worse, what if you call it like this:

```python
simple_interest('turnip',0,0)
```

For a simple script like this, these concerns might seem a bit over-the-top... but if you're producing code to be used by others, you need to add a degree of due diligence to your work to ensure that you provide a consistent, expected way of handling invalid (or just generally weird) inputs. You might already be in the habit of writing good error handling routines... but do you remember to test every possible iteration when you complete your work? [WATCH HERE: The Adventures of Blink - Getting started with PyTest!](https://www.youtube.com/watch?v=vFctOdcAlbY) {% embed https://www.youtube.com/watch?v=vFctOdcAlbY %} Unit tests generally don't ever exceed the boundaries of a single method - you're testing ONLY the functionality within the specific method. This may require you to provide "mock" data as a response from any code that your method calls. Your goal is _isolation_ of the code to ensure its logic is sound. #### Integration Tests Integration tests work the same as unit tests, except that they focus on a higher layer - rather than verifying that one individual block of code functions correctly, they're intended to confirm that modules interact properly. 
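Whether at the unit or integration level, one common way to supply that "mock" data in Python is the standard library's `unittest.mock` module. A minimal sketch - the `RateClient` collaborator and `interest_for` function here are invented purely for illustration:

```python
from unittest.mock import MagicMock

class RateClient:
    """Invented collaborator: in a real app this might call a live rate service."""
    def fetch_rate(self, currency):
        raise RuntimeError("would normally hit an external service")

def interest_for(client, principal, currency, time):
    # Code under test: depends on the client for its rate data.
    rate = client.fetch_rate(currency)
    return principal * rate * time / 100

def test_interest_with_mocked_rate():
    # The real client is never touched; a mock supplies canned data instead,
    # keeping the test isolated from the external service.
    fake_client = MagicMock()
    fake_client.fetch_rate.return_value = 5
    assert interest_for(fake_client, 1000, "USD", 2) == 100.0
```

The test exercises `interest_for`'s own logic without ever depending on the (here, deliberately broken) real collaborator.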
Integration tests still rely on "mock" data to an extent - the goal is still to test the integration of the two components in isolation. #### Functional Tests Functional tests take the scope all the way to the user's perspective. Mocking is much less likely as we're focused on seeing the entire performance of the functionality. Functional tests are typically built to match the requirements provided by the user / designer. They're often a little "fuzzier" in nature - where unit tests can be at the level of a mathematic proof, a functional test is likely to be more about whether our code has met the "spirit of the law". #### Other types of tests You'll see lots of other types of "tests" enumerated in the wild: - UI Tests - Regression Tests - Load Tests - Performance Tests - Smoke Tests - Security Tests - Usability Tests Generally speaking, these are made up of the same basic components found in Unit, Integration, and Functional tests, but with different intents and points of focus. ### The Purpose of Testing There are several purposes of providing automated tests for your code: 1 - You need to ensure you're releasing as few bugs as possible. This seems the most obvious - you test your code to make sure you write (and release) good code! In the old Waterfall, we had a whole section of "QA Testing" which could either be manual or automated... as you might imagine, automated testing is less expensive and time-consuming than manual testing, so it makes sense that we'd want more of it. 2 - You're future-proofing. Testing would seem like a frustrating thing if you could only use it once. Fortunately, a well-designed suite of automated tests is tremendously valuable for your future! Imagine: You build and release this application, and then you move on to other work. A year later, someone comes back and asks for a change to this application. Is the change going to break anything? How would you know? 
You've forgotten all the details and you're having to get back up to speed before you can make the change, only to realize you don't know for certain if this will affect any other parts of the code. But because you've built a great test suite, you make your change and re-run your tests, and they all pass. You can be confident that you didn't break anything else by making this change. Or maybe another test fails, allowing you to spot something you forgot about. Having a test suite is critical to the maintenance of the application. If you don't invest in building it now, you're going to spend extra time manually testing things (or risking surprise production bugs)... which can be expensive. An ounce of prevention... 3 - You're you-proofing. Another way to think about future-proofing is from the perspective that "ownership is eternal". You wrote this code, and that means you get to maintain it. Forever. Doesn't matter if you moved to a different role... or maybe even a different company... someone's going to see your ID on the commit history and think "this is urgent" and reach out to you (as if you carry the history of this codebase around in your mind at all times!) A comprehensive test suite can help another developer figure out what to expect from your code. They can see how it's used, see what common concerns are, and validate their own changes without reaching out to you. ### The DevOps Angle ![A CI/CD Pipeline with Test Automation featured](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gn1odo9tixr49nobz63i.png) Ok, Ben, you've convinced me that test automation is important... what does this have to do with being a DevOps practitioner? Have you ever made changes to a codebase, but forgot to run the tests? Or maybe you thought you could get by without it, and then the world came crashing down around you? Your CI/CD Pipeline _needs_ your test suite! If you configure every build to require a successful test run, you add safety into your changes. 
And the best part is that, if you're practicing good hygiene and swarm-fixing every broken build, this will ensure the build breaks whenever a test fails. Your delivery will get faster and smoother and as long as your tests are always improving, so too will your delivery stats! ### When to test The short answer: Early and Often! There are many proponents of a paradigm called "TDD"... "Test-Driven Development". In this strategy, a programmer FIRST writes the test suite code based on the requirements for the project, and then begins building the app until it passes all the tests. Is this a good practice? As with most things, it depends... 😏 If you're meticulous about your requirements and design phases, or if you're [embracing agility](https://dev.to/linkbenjamin/the-adventures-of-blink-13-agility-57nd) and your work is properly sized for fast delivery, you might be able to do this with ease. If your requirements are more fuzzy, or subject to change, or if you're struggling to get down to the smallest units of work, TDD may be a little more difficult to achieve. That's not to say it's not possible, but it may have some complexities that you'll have to work through. Ultimately, there's no "right" or "wrong" time to build your test suites though - having tests is always better than not having them! So if TDD doesn't work with your brain or with your work style, just commit to submitting tests alongside all new code that you write. Don't fall in the trap of "I'll do this later" because "later" never comes! ### What do I test? Some folks need the "100% Code Coverage" thing worked out... but in reality, that's overkill for most projects. So much of our code involves "boilerplate work" - setting up variables, creating get/set methods for operating with data in a class, default constructors... and there's very little value in actually testing those things because they're so simple. A good rule of thumb is that "if it was hard to write, it should be tested." 
Nobody struggles with creating get/set methods, except in occasional weird edge cases... but if you spent significant time on a bit of logic, make sure it's tested!

### Wrapping up

Test automation is a critical need in the DevOps pipeline because it establishes one of the guardrails we need in order to move fast - we want to know that we're producing code that works! The various flavors of testing work together to give us a more complete picture of how our application will perform - but the biggest takeaway is that testing is not a task for someone else on another team. It's up to us, as we write our code, to think about how it could misbehave, and not only to address it but to validate with that misbehavior in mind. This will make our work more reliable and valuable to our teammates and save us time & effort in the long term!
linkbenjamin
1,787,051
An Updated Guide to Maven Archetypes
About-Maven-Archetypes What is the point? One of the major goals in programming...
0
2024-03-11T15:38:16
https://dev.to/carter907/an-updated-guide-to-maven-archetypes-3eah
java, maven, programming, beginners
# About-Maven-Archetypes

### What is the point?

One of the major goals in programming is to foster reusability and scalability. One way in which we can reuse code is by creating a templating system that lets you quickly get up and running with predefined code that you're confident in. Maven archetypes are one example of this system in action.

### How do I create my own?

You can create your own Maven archetype by using a Maven archetype! Use the predefined `org.apache.maven.archetypes:maven-archetype-archetype`. Make sure you choose the archetype from Maven Central, as it has the most up-to-date version. You can also safely delete the test directory in the root folder, as this is simply a demonstration.

Once you've done that, you can expect the following project structure.

```text
my-archetype/
|-- src/
|   |-- main/
|       |-- resources/
|           |-- archetype-resources/
|           |   |-- src/
|           |   |   |-- main/
|           |   |   |   |-- java/
|           |   |   |   |   |-- App.java
|           |   |   |-- test/
|           |   |   |   |-- TestApp.java
|           |   |-- pom.xml
|           |-- META-INF/
|               |-- maven/
|                   |-- archetype-metadata.xml
|-- pom.xml
```

### archetype-metadata.xml

`archetype-metadata.xml` provides metadata for your archetype, such as definitions for folder locations and how they should behave. More specifically, this is where you define the expected layout of your source files, tests, and other properties that you will use in your project. Those files are created directly by you in the `archetype-resources` folder.

```xml
<archetype-descriptor xmlns="http://maven.apache.org/plugins/maven-archetype-plugin/archetype-descriptor/1.0.0"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xsi:schemaLocation="http://maven.apache.org/plugins/maven-archetype-plugin/archetype-descriptor/1.0.0 http://maven.apache.org/xsd/archetype-descriptor-1.0.0.xsd"
                      name="${artifactId}">
    <fileSets>
        <!-- fileSets are used to denote some arbitrary group of folders in your project.
        -->
        <!-- per Maven project specification, you should have a directory for your Java source files -->
        <fileSet filtered="true" packaged="true">
            <directory>src/main/java</directory>
        </fileSet>
        <!-- and a directory for your test files -->
        <fileSet filtered="true" packaged="true">
            <directory>src/test/java</directory>
        </fileSet>
    </fileSets>
</archetype-descriptor>
```

### Filtered & Packaged attributes

`filtered` - the files will be processed by the template engine, so you can use property bindings in the Java source files to bootstrap any required properties.

`packaged` - the fileSet's contents will be placed under the package (derived from the group-id) of the newly created project.

### Required Properties

Required properties allow you to specify parameters that the user will be queried for when building your archetype with the help of `mvn archetype:generate`. These properties can have a `defaultValue`, which will be used if the user does not input any value. These properties are interpolated into the scaffold created in `archetype-resources`. You can learn more about how this templating is done by checking out the [Apache Velocity Project](https://velocity.apache.org/).

```xml
<requiredProperties>
    <requiredProperty key="about">
        <defaultValue>no about specified</defaultValue>
    </requiredProperty>
    <requiredProperty key="color"/>
    <requiredProperty key="due-date">
        <defaultValue>10 days from today</defaultValue>
    </requiredProperty>
</requiredProperties>
```

Here we create a few required properties. Only `color` will be required by the mvn command, but all will be queried if interactive mode remains on.
### Defining Properties

To reference a property in your template files, we use the following syntax:

```text
${name-of-property}
```

Here is our example template, located in `src/main/resources/archetype-resources/src/main/java/`:

```java
package $package;

/**
 *
 * ${about}
 * due-date: ${due-date}
 */
public class App {
    public static void main( String[] args ) {
        System.out.println("Your favorite color is ${color}");
    }
}
```

### Installing

Installing is a matter of calling `mvn clean install` to install the archetype to our local repository, `.m2/repository`, which can be found as a hidden folder under your user directory.

### Generating a project using the Archetype & Final Result

Now it's time to finally generate the template and retrieve it from our local repository, which will be referenced automatically when we call `mvn archetype:generate`:

```
mvn archetype:generate -DgroupId=com.example \
    -DartifactId=my-archetype-created \
    -DarchetypeArtifactId=my-new-maven-archetype \
    -DarchetypeGroupId=org.example \
    -DarchetypeVersion=1.0-SNAPSHOT \
    -DinteractiveMode=false \
    -Dcolor=red \
    -Ddue-date=09/05/2023 \
    -Dabout='This project was created using Maven Archetypes, and this text is from the command line'
```

This command will generate the project in the current directory using your artifact id (`my-archetype-created` in the above example). Now that we have our project set up, we can view the templating that has taken place in `App.java`:

```java
package com.example;

/**
 *
 * This project was created using Maven Archetypes, and this text is from the command line
 * due-date: 09/05/2023
 */
public class App {
    public static void main( String[] args ) {
        System.out.println("Your favorite color is red");
    }
}
```
carter907
1,787,057
Unlocking Professional Growth: The Untapped Value of Keeping a Working Journal
Maintain a working journal to unlock professional growth, document achievements, combat recency bias, and empower career advancement. Use it as a strategic tool to showcase your value, enhance your personal brand, and engage with your professional community. Start journaling today to navigate your career journey with purpose and achieve your goals.
0
2024-03-11T20:11:06
https://dev.to/dev3l/unlocking-professional-growth-the-untapped-value-of-keeping-a-working-journal-5cld
professionalgrowth, careerdevelopment, workingjournal, continuouslearning
---
title: "Unlocking Professional Growth: The Untapped Value of Keeping a Working Journal"
published: true
description: Maintain a working journal to unlock professional growth, document achievements, combat recency bias, and empower career advancement. Use it as a strategic tool to showcase your value, enhance your personal brand, and engage with your professional community. Start journaling today to navigate your career journey with purpose and achieve your goals.
tags: ProfessionalGrowth, CareerDevelopment, WorkingJournal, ContinuousLearning
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8d733u4p3ub7849dat4.png
---

![A depiction of a diverse group of professionals from various fields (such as software development, product management, and design), each holding a journal. They're standing on a path that leads toward a horizon illuminated by a rising sun, representing the journey of continuous improvement and career advancement that journaling can facilitate.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zpz3039epyaltr0jric.png)

Professionals are continuously seeking ways to sharpen their skills, improve productivity, and propel their careers forward. Amidst the myriad of tools and methodologies promising to unlock professional growth, one simple yet profoundly impactful practice often goes overlooked: keeping a working journal. This practice, situated at the intersection of personal introspection and professional development, offers a unique blend of practical and psychological benefits that can transform the way we approach our work.

For software development professionals—including engineers, product managers, UI/UX designers, and quality assurance specialists—a working journal serves not only as a repository of daily tasks and project milestones but as a mirror reflecting their journey of challenges, achievements, and learnings.
By systematically documenting their professional experiences, these individuals can gain a nuanced understanding of their own development, advocate for their contributions, and navigate the ladder of career progression with confidence and clarity. Yet, the value of a working journal extends beyond mere personal growth. It plays a crucial role in fostering accountability, combating recency bias in performance evaluations, and providing tangible evidence of an individual's impact within an organization. As we delve deeper into the myriad benefits of this practice, it becomes clear that maintaining a working journal is not just an exercise in self-reflection—it's a strategic tool for enhancing personal brand, forging meaningful connections, and driving collective innovation. Join us as we explore the untapped potential of keeping a working journal, unveiling how this seemingly simple habit can unlock profound professional benefits and open new pathways to success in the tech industry and beyond. ## The Foundation of Professional Development ![An illustration showing a solid foundation made of journals stacked one atop another, supporting a structure that represents a person’s career. On this foundation, figurines of professionals are climbing a staircase made of books and milestones, highlighting the role of a working journal in laying the groundwork for continuous learning and professional growth.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5cswggculm8zmsg43t77.png) The quest for continuous improvement stands central to thriving in the tech industry. Amid the tools and methods available to aid this journey, the act of maintaining a working journal emerges as a fundamental yet powerful avenue for fostering professional growth. A working journal, meticulously kept, becomes a conduit through which professionals can channel their daily experiences into meaningful insights. 
It allows for a structured reflection on what was planned versus what was accomplished, granting clarity on the path forward and illuminating areas ripe for improvement and innovation. ### Structured Learning through Documentation The process of recording daily tasks, achievements, and challenges in a journal encourages a methodical approach to learning. Each entry serves as a stepping stone, constructing a mosaic of one's professional evolution. This structured documentation helps in identifying patterns over time—patterns in challenges encountered, solutions devised, and the effectiveness of different approaches. Recognizing such patterns enables professionals to refine their strategies, ensuring that their learning is both strategic and applied. ### Capturing Iterative Progress Software development, with its innate complexity and dynamism, demands a mindset of iterative progress. A working journal perfectly complements this ethos, providing a space to document the incremental steps taken towards project completion and personal skill enhancement. It empowers professionals to track their progress against set objectives, celebrating small wins and learning from setbacks, thereby embedding the principle of iteration in the fabric of their professional development. ### Enhancing Problem-Solving Skills Journaling not only captures what one has done but also the thought processes behind actions and decisions. This introspection on problem-solving strategies—what worked, what didn’t, and why—hones critical thinking and enhances problem-solving skills. As professionals reflect on their approaches to overcoming obstacles, they build a repertoire of strategies that can be adapted and applied to future challenges. ### Boosting Creativity and Innovation The act of maintaining a working journal also plays a pivotal role in sparking creativity. By documenting brainstorming sessions, ideas, and even fleeting thoughts, professionals can create a reservoir of inspiration. 
This repository, often revisited, becomes a catalyst for innovation, enabling individuals to draw upon past insights to inspire new solutions. --- The foundation of professional development lies in the ability to learn from each day's experiences, systematically reflect on them, and harness those reflections for continuous growth. A working journal, in its simplicity, offers a robust framework for doing just that. As professionals in the tech industry and beyond carve out their paths of expertise, the working journal stands as an invaluable companion, guiding them towards becoming more reflective practitioners, innovative thinkers, and adept problem-solvers. ## Beyond Accountability - Showcasing Your Value ![An image of a professional, confidently presenting a journal to a shadowed figure representing management or a review board. The journal's pages glow, illuminating the room, and are filled with charts, achievements, and positive feedback. This scene emphasizes the journal's role in showcasing an individual's value and contributions beyond daily tasks.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dxlht0k41lryd7gqj8w1.png) In the realm of professional growth, accountability is not merely about being responsible for one's tasks and actions; it's about owning one's career trajectory, recognizing contributions, and demonstrating the value brought to an organization. This is where the magic of a working journal truly shines, transforming it from a personal reflective tool into a powerful advocate of one's professional narrative. ### Documenting Progress and Achievements A meticulously kept working journal acts as a chronicle of one's professional journey, capturing not just day-to-day tasks but milestones, achievements, and contributions that have impacted the team and the organization at large. 
When it comes time for performance reviews or seeking opportunities for advancement, this journal becomes an indispensable resource, providing a detailed account of an individual's dedication, accomplishments, and the value they bring. ### Combatting Recency Bias One of the critical challenges in performance evaluations is the recency bias—where more recent achievements overshadow the contributions made throughout the review period. A comprehensive working journal mitigates this bias by offering a holistic view of one's performance, ensuring that all contributions, big and small, are recognized and celebrated. This level playing field not only ensures fairness but also empowers individuals to articulate their contributions confidently. ### Empowering Self-Advocacy Armed with a working journal, professionals are better equipped for discussions around career development, promotions, and compensation adjustments. The documented evidence of their achievements and the obstacles they’ve overcome serves as a strong foundation for advocating their worth and potential contributions to the organization. This empowerment shifts the dynamics of such conversations, making them more objective and data-driven. ### Fostering a Culture of Transparency and Recognition Encouraging the practice of maintaining a working journal organization-wide can foster a culture of transparency and recognition. As professionals share their journeys, challenges, and successes, it cultivates an environment where contributions are openly acknowledged, and achievements are celebrated. This culture not only boosts morale but also encourages others to strive for excellence, knowing that their hard work and dedication will be recognized and rewarded. --- Beyond serving as a tool for personal reflection and development, a working journal is a testament to an individual's commitment to accountability and professional excellence. 
It chronicles the journey of growth, perseverance, and achievement, showcasing the tangible value one brings to an organization. In the competitive and ever-evolving landscape of the tech industry, maintaining a working journal is not just a practice—it's a strategic asset for those looking to navigate their careers with intention and purpose. ## Mitigating Recency Bias in Evaluations ![A visual metaphor depicting a balance scale, with one side holding a stack of journals representing a full record of an employee's contributions and the other side holding a single, recent report. The scale is balanced, symbolizing how a working journal mitigates recency bias by providing a comprehensive view of an individual's performance over time.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lff8jhuh2h9h7ass6fci.png) Performance evaluations are pivotal moments in a professional's career, often determining prospects for promotions, bonuses, and salary adjustments. However, these assessments are susceptible to recency bias, where recent accomplishments and failures disproportionately influence evaluations. Herein lies the strategic value of a working journal in providing a comprehensive, balanced view of an individual's contributions over the evaluation period. ### Challenging the Recency Bias A well-maintained working journal serves as an antidote to recency bias by offering a chronological record of achievements, challenges tackled, and lessons learned. This continuous documentation ensures that evaluators have access to a complete narrative of an individual's performance, enabling a fair and comprehensive review that takes into account the entirety of the evaluation period. ### Showcasing Consistent Performance Documenting daily accomplishments and reflections allows professionals to present a clear picture of their consistent performance and proactive engagement throughout the year. 
This detailed record highlights not just the peaks of achievement but also the perseverance through challenges, illustrating a commitment to excellence and growth that transcends the recency of events. ### Leveraging Journal Entries in Self-Assessment A working journal also empowers individuals to conduct thorough self-assessments in preparation for performance reviews. By revisiting their journal, professionals can reflect on their growth, identify their most impactful contributions, and articulate their future goals with clarity. This preparation ensures that they can present their case effectively, making a compelling argument for their value to the organization. ### Guiding Objective Evaluations For managers and evaluators, a working journal provided by employees can serve as a valuable tool in conducting more objective and equitable assessments. The detailed record of contributions supports a more nuanced understanding of an employee's role and impact, guiding decisions on recognition and career advancement based on a comprehensive view of their performance. --- The practice of maintaining a working journal transcends its utility as a personal development tool, emerging as a strategic asset in navigating the nuances of performance evaluations. By mitigating the effects of recency bias, it ensures that all contributions—regardless of when they occurred—are recognized and valued. In doing so, a working journal not only champions fairness in evaluations but also reinforces the importance of consistent performance and growth over time. ## The Catalyst for Upward Mobility ![An image featuring a journal with pages turning into stepping stones, leading up a mountain path towards a peak that's crowned with a flag. Professionals are seen stepping on these stones, moving upward. 
This represents the journal as a catalyst for career advancement, with each entry a step towards higher achievements and roles.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rjczhvbypq74by1e4g25.png) In the journey of career advancement, demonstrating one's contributions and potential for greater responsibilities plays a crucial role. A working journal, meticulously curated over time, serves as a potent catalyst for upward mobility within an organization, providing undeniable evidence of an individual's readiness to take on new challenges and roles. ### Documenting Achievements and Impact A working journal allows professionals to systematically record their achievements, note the impact of their work, and track their development of new skills and knowledge. This documentation creates a powerful portfolio that showcases an individual's contributions to projects, innovations introduced, and improvements driven within the team or organization. When it comes time for promotional considerations, this journal serves as a comprehensive and persuasive record of why one is deserving of advancement. ### Articulating Career Goals and Aspirations Beyond capturing past and present accomplishments, a working journal provides a space for professionals to articulate their career goals and aspirations. Regular reflection on these goals and the steps being taken towards achieving them demonstrates foresight and ambition, qualities that are essential for individuals seeking to climb the career ladder. This future-oriented perspective, grounded in documented achievements and learnings, strengthens one’s case for promotion or role changes. ### Facilitating Meaningful Conversations with Management The insights gathered in a working journal equip professionals to engage in more meaningful and productive conversations with their managers about career progression. 
Instead of vague discussions about wanting to grow, individuals can present concrete examples of their accomplishments, their contributions to the team’s success, and their readiness for more significant challenges. This evidence-based approach makes it easier for managers to advocate for their team members’ advancement. ### Enhancing Negotiation for Compensation Adjustments When discussing salary adjustments or other forms of compensation, a working journal provides a solid foundation for negotiation. Documented evidence of an individual’s continuous improvement, achievements, and the added value brought to the organization strengthens their position in negotiations, ensuring that compensation matches their contribution and market value. --- The journey towards upward mobility in one's career is marked by the need to continuously prove one's value, adaptability, and readiness for greater challenges. A working journal, with its detailed record of accomplishments, learning, and aspirations, acts as a compelling advocate in this journey. It not only showcases an individual's contributions and growth but also underscores their potential for future impact, making it an invaluable tool for those aiming to ascend the professional hierarchy. ## Personal Branding and Community Engagement ![A dynamic scene where a professional is sitting at a desk, journal open and glowing, as they interact with a digital world map on a screen in front of them. Pins and connections across the map indicate their engagement with a global professional community. The image conveys how sharing insights from a working journal can enhance personal branding and foster community engagement.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9bh3vkhpfegxg7xc5x4.png) In today’s digital age, where professional networks extend far beyond the confines of one's immediate workplace, nurturing a personal brand and engaging with a broader community have become key components of career development. 
A working journal, while serving as a personal repository of growth and achievements, can also be a powerful tool for enhancing one's personal brand and fostering community engagement. ### Showcasing Expertise and Leadership By selectively sharing insights, challenges overcome, and innovative solutions from their working journal, professionals can position themselves as thought leaders and experts in their domain. Platforms like LinkedIn, personal blogs, or even community forums offer avenues to share these reflections, demonstrating an individual's depth of knowledge, problem-solving prowess, and commitment to their field. Such visibility not only elevates one’s personal brand but also opens doors to new opportunities, collaborations, and mentorship roles. ### Fostering Professional Networks Engaging with the community by sharing journal entries encourages dialogue, exchange of ideas, and knowledge sharing. This interaction contributes to building and expanding one's professional network, connecting with peers, industry leaders, and potential mentors. Over time, these connections can prove invaluable, offering guidance, opportunities for collaboration, and even paving the way for new career opportunities. ### Contributing to Collective Learning By documenting and sharing learnings, particularly those derived from tackling challenges or exploring new technologies, professionals contribute to the collective knowledge pool of their community. This culture of sharing and learning together not only enriches the professional community but also reinforces the individual’s role as an active, contributing member committed to the advancement of their field. ### Enhancing Visibility and Recognition Regularly sharing insights and achievements from one’s working journal increases visibility within the professional community. 
This enhanced visibility can lead to recognition, be it in the form of accolades, invitations to speak at conferences, or contribute to reputable publications. Each of these outcomes further strengthens an individual’s personal brand and establishes them as a valued member of their professional community. --- A working journal transcends its utility as a tool for personal reflection and growth, emerging as a catalyst for building a strong personal brand and engaging meaningfully with a broader professional community. By strategically sharing key insights and achievements, professionals can showcase their expertise, foster valuable connections, and contribute to collective learning and innovation. In the process, they not only enhance their visibility and industry standing but also open new avenues for professional growth and opportunities. ## Charting Your Course with a Working Journal ![A serene yet powerful image of a professional standing at the helm of a ship, navigating through calm seas under a star-filled sky. The compass in their hand is replaced with a journal, guiding the way. This illustrates the metaphor of using a working journal to chart one's course in the professional journey, navigating towards success with purpose and vision.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ailu1fbl25jcur7gvxro.png) As we've explored throughout this post, the act of maintaining a working journal is much more than a reflective practice. It is a strategic tool that fosters professional growth, enhances accountability, supports career advancement, and amplifies one’s personal brand. In the dynamic and competitive landscape of the tech industry, where innovation, adaptability, and continuous learning are paramount, a working journal stands out as an invaluable asset for any professional. 
Whether you're documenting daily achievements, reflecting on lessons learned from challenges, or strategizing your next career move, a working journal provides a structured framework to navigate your professional journey with intention and purpose. It allows you to: - **Capture and reflect on your growth**, embracing a mindset of continuous improvement. - **Showcase your contributions and achievements**, providing tangible evidence of your value to the organization. - **Combat recency bias** in performance evaluations, ensuring a fair and comprehensive assessment of your work. - **Empower yourself in career advancement conversations**, with documented evidence to support your case for promotions and compensation adjustments. - **Build and enhance your personal brand**, sharing insights and learnings to engage with and contribute to your professional community. The journey of professional development is uniquely personal and evolving. By committing to the practice of keeping a working journal, you not only chart your course towards achieving your current goals but also lay the groundwork for unforeseen opportunities that lie ahead. The story you document today may well be the key that unlocks tomorrow's achievements. ### Start Your Journaling Journey Today We invite you to embark on this journaling journey, to not only enhance your professional development and career trajectory but to also contribute to the collective growth and innovation of your community. Share your experiences, challenges, and triumphs. Let your journal be your guide, your advocate, and your voice in carving out a successful, fulfilling career. Remember, the path of growth is a journey, not a destination. Let your working journal illuminate the way.
dev3l
1,787,123
Forever Functional: D.I.Y. Booleans?
by Federico kereki Could you program if you didn't have booleans? What about lacking `and`, `or`,...
0
2024-03-11T16:55:08
https://blog.openreplay.com/forever-function-diy-booleans/
by [Federico Kereki](https://blog.openreplay.com/authors/federico-kereki)

<blockquote><em> Could you program if you didn't have booleans? What about lacking `and`, `or`, and other operators? Would you be able to cope without `if` or `while` statements? Functional Programming is very powerful, and in this article, we'll see how we can make our own booleans in a fully functional way. </em></blockquote>

<div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;">
<hr/>
<h3><em>Session Replay for Developers</em></h3>
<p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p>
<img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async">
<p><em>Happy debugging! <a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em></p>
<hr/>
</div>

What do we use boolean values for? The answer is simple: to pick one thing or another. For instance, in an `if` statement, you do one thing or another depending on the value.

If we can only work with functions, we will represent a boolean value by a function; no other option! The boolean function that represents "true", given two values, returns the first. Similarly, the boolean function that represents "false", given two values, will return the second. We can write:

```javascript
const TRUE = (trueValue, falseValue) => trueValue;
const FALSE = (trueValue, falseValue) => falseValue;
```

Can we do something with these? Let's start with something simple, to get used to these values gradually.
We can write a `toString()` function to transform the boolean value into a string.

```javascript
const toString = boolValue => boolValue("T", "F");

console.log(toString(TRUE)); // "T"
console.log(toString(FALSE)); // "F"
```

In our implementation, boolean expressions will always be evaluated as TRUE or FALSE.

## Writing our boolean operators

OK, we now have the most basic values, TRUE and FALSE. But, if we had no booleans, we would not have any boolean operators either! So, what can we do? Let's start with the simplest one.

### The "not" operator

The first operator we'll work out is the equivalent of JavaScript's `!`: given a boolean value `x`, we want to write something corresponding to `!x`.

```javascript
const NOT = (X) => X(FALSE, TRUE);
```

How does this work? Keep in mind that a boolean value is now a function that receives two values and returns the first if the boolean value represents "true" or the second if it represents "false". Here, we pass FALSE and TRUE as values, so if the boolean value represents "true", FALSE will be returned, and if the value represents "false", TRUE will be the result. We can test this quickly:

```javascript
console.log(toString(NOT(TRUE))); // "F"
console.log(toString(NOT(FALSE))); // "T"
```

Study this carefully, and make sure you understand how this works, because the following operators are a bit more involved but require the same kind of thinking.

### The "or" operator

Let's create a version of the "OR" (`||`) operator, so given two boolean values, `X` and `Y`, we can write something equivalent to `X||Y`. First, we must remember how the operation works.

| X | Y | X OR Y |
|:--------:|:--------:|:--------:|
| FALSE | FALSE | FALSE |
| FALSE | TRUE | TRUE |
| TRUE | FALSE | TRUE |
| TRUE | TRUE | TRUE |

Checking the table above, we can say that:

* if `X` is TRUE, the result is TRUE, no matter what `Y` is.
* if `X` is FALSE, the result matches the value of `Y`.
We can transform this into code: ```javascript const OR = (X, Y) => X(TRUE, Y); ``` Let's work this out, which is more involved! Suppose `X` represents TRUE. In that case, it will return the first argument it's given (in this case, TRUE), so when `X` is TRUE, `OR(X, Y)` is also TRUE. On the other hand, if `X` is FALSE, it will return the second argument it's given (here, `Y`), so in that case, `OR(X, Y)` will equal `Y`. Everything is according to our analysis of the table above. ### The "and" operator Let's move on to the "AND" (`&&`) operator, so given `x` and `y` we can calculate `x && y`. The truth table for it is as follows. | X | Y | X AND Y | |:--------:|:--------:|:--------:| | FALSE | FALSE | FALSE | | FALSE | TRUE | FALSE | | TRUE | FALSE | FALSE | | TRUE | TRUE | TRUE | In summary: * if `X` is FALSE, the result is FALSE no matter what `Y` is. * if `X` is TRUE, the result matches the value of `Y`. This is transformed directly into code: ```javascript const AND = (X, Y) => X(Y, FALSE); ``` This analysis is very similar to what we did for `OR`. Without going into much detail, if `X` is TRUE, it will return the first argument it's given (so, `Y`), and if `X` is FALSE, the `AND` will be FALSE; again, the working of this function matches our analysis. ### Operators for circuit building We're on a roll! Let's go for some other operators. In electronic circuit design, `NOR` and `NAND` logic gates are frequently used. (Why? This has to do with the concept of ["universal operators"](https://www.allaboutcircuits.com/textbook/digital/chpt-3/gate-universality/), but we won't go into that here. `NAND` is also known as the ["Sheffer stroke"](https://en.wikipedia.org/wiki/Sheffer_stroke), `X↑Y`, and `NOR` as the ["Webb Operator"](https://en.wikipedia.org/wiki/Logical_NOR), `X↓Y`.) 
| X | Y | X NOR Y | X NAND Y |
|:--------:|:--------:|:--------:|:--------:|
| FALSE | FALSE | TRUE | TRUE |
| FALSE | TRUE | FALSE | TRUE |
| TRUE | FALSE | FALSE | TRUE |
| TRUE | TRUE | FALSE | FALSE |

The definitions are simple:

* `X NOR Y` is just the negation of `X OR Y`, so `!(X || Y)`, equivalent to `!X && !Y`.
* Similarly, `X NAND Y` is the negation of `X AND Y`, so `!(X && Y)` or, equivalently, `!X || !Y`.

```javascript
const NOR = (X, Y) => NOT(OR(X, Y));
const NAND = (X, Y) => NOT(AND(X, Y));
```

We won't go into details; after all, the code fully matches the definitions we gave above, and it's easy to verify they work correctly.

### Other operators

So far, we've been able to transform several JavaScript operators into our functional style. What about some other operators that are less commonly used? The three we will see are:

* `X XOR Y`, exclusive OR (but not the bitwise `^` operator). This is written as `X⊕Y` or `X⊻Y` in boolean algebra.
* `X EQU Y`, equivalence, usually written as `X↔Y` or `X≡Y`.
* `X IMP Y`, implication, written as `X⊃Y` or `X⇒Y`.

The following table summarizes the trio.

| X | Y | X XOR Y | X EQU Y | X IMP Y |
|:--------:|:--------:|:--------:|:--------:|:--------:|
| FALSE | FALSE | FALSE | TRUE | TRUE |
| FALSE | TRUE | TRUE | FALSE | TRUE |
| TRUE | FALSE | TRUE | FALSE | FALSE |
| TRUE | TRUE | FALSE | TRUE | TRUE |

We can note:

* The results of `X XOR Y` and `X EQU Y` are always opposite.
* `X XOR Y` is equivalent to `X !== Y` or, equivalently, `(X && !Y) || (!X && Y)`... uglier!
* `X EQU Y` is equivalent to `X === Y` or `(X && Y) || (!X && !Y)`.
* `X IMP Y` is always true unless `X` is TRUE and `Y` is FALSE, so `!X || Y`. (Check it out!)

```javascript
const XOR = (X, Y) => X(NOT(Y), Y);
const EQU = (X, Y) => X(Y, NOT(Y));
const IMP = (X, Y) => X(Y, TRUE);
```

Can you see how these work? Going directly to the definitions is best.
For instance, let's see `XOR`: if `X` is TRUE, then the result will be `NOT(Y)`, and if `X` is FALSE, the result will be `Y`, as we saw earlier. (I'll leave the other two expressions to you.) The match between the boolean algebraic definitions and the actual JavaScript code gives us a lot of confidence in the validity of the results. <CTA_Middle_Programming /> ## Writing conditional statements We now have functional equivalents to boolean values, but what about using them? After all, the standard `if` and `while` JavaScript statements are not designed to work with our invented values. We will have to show that we can write our alternative implementations of both statements so we will be able to code anything we want to. ### Writing our if statements Now, what would an `if` look like? It would be a function (naturally! we only have functions!) with three parameters: * `booleanFn`, a function that returns a boolean value * `thenFn`, a function to be executed if the boolean value means "true" * `elseFn`, a function to be executed if the boolean value means "false" (Why are we providing functions? We don't want anything to be evaluated unless it's necessary. For example, if the boolean value is evaluated as TRUE, we will then evaluate `thenFn()` and ignore `elseFn()`; we will only evaluate what we really need.) We would then write: ```javascript const ifThenElse = (booleanFn, thenFn, elseFn) => booleanFn()(thenFn, elseFn)(); ``` For example, let's write a statement that will print out "AM" or "PM" depending on the time of the day. Let's assume an `isMorning()` function that returns a TRUE boolean value in the morning and a FALSE one in the afternoon. ```javascript const printAM = () => console.log("It's AM"); const printPM = () => console.log("It's PM"); ifThenElse(isMorning, printAM, printPM); // AM or PM as it may be ``` The `isMorning()` function would be written as follows. 
Keep in mind this is a very simple example; in reality, the evaluation could be much longer and more complex:

```javascript
const isMorning = () => new Date().getHours() < 12 ? TRUE : FALSE;
```

Why are we writing this using the ternary operator? The reason is that in a purely functional language, any test would return either TRUE or FALSE, our functions, but in JavaScript (which, after all, *does* have booleans!) we have to do the conversion ourselves.

### Writing a `while` loop

We're almost done; we managed to write basic TRUE and FALSE values, implemented several boolean operators, and also have an `if` alternative; we only need a `while` equivalent for looping, and we'll have all the tools we could ever need. This will be a tad trickier than the `if` code. In Functional Programming, instead of loops, we use recursion. The functional way of doing a `while` loop would be the following.

```javascript
const whileLoop = (booleanFn, loopFn) =>
  booleanFn()(
    () => {
      loopFn();
      whileLoop(booleanFn, loopFn);
    },
    () => {}
  )();
```

Let's start by describing the parameters of the `whileLoop()` function.

* `booleanFn` is, as in the `if` implementation, a function that will return a boolean value.
* `loopFn` is a function that will be executed in every pass of the loop, as long as the evaluated boolean value is TRUE.

How does this work? If `booleanFn()` evaluates to TRUE, the first argument to the function will be evaluated -- it will execute `loopFn()` and recursively call itself again. On the other hand, when `booleanFn()` evaluates to FALSE, a "no operation" function will be executed, and the loop will end; good! Let's finish with a short example showing a countdown loop.

```javascript
let number = 5;
const checkPositive = () => number > 0 ? TRUE : FALSE;
const loopFn = () => console.log(number--);

whileLoop(checkPositive, loopFn); // 5 4 3 2 1 in that order
```

Can you follow how this works?
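If you want to run the countdown yourself, here is a self-contained version (a sketch of my own, collecting the values in an array instead of logging them, so the order is easy to check):

```javascript
// Self-contained countdown: the boolean functions, the whileLoop, and the loop body
const TRUE = (trueValue, falseValue) => trueValue;
const FALSE = (trueValue, falseValue) => falseValue;

const whileLoop = (booleanFn, loopFn) =>
  booleanFn()(
    () => {
      loopFn();
      whileLoop(booleanFn, loopFn);
    },
    () => {}
  )();

let number = 5;
const output = []; // collect the values instead of logging them
const checkPositive = () => (number > 0 ? TRUE : FALSE);
const loopFn = () => output.push(number--);

whileLoop(checkPositive, loopFn);
console.log(output.join(" ")); // 5 4 3 2 1
```

Running this with Node prints the numbers in descending order, confirming that recursion stands in for the loop.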
The part that's usually harder to understand is the usage of recursion to do the looping; after you wrap your head around that, the rest follows quickly. ## Conclusion We started this article by wondering what we would do if we had to code without having boolean values and operators, and also lacking basic flow statements like `if` and `while`. By applying Functional Programming ideas, first, we were able to produce our own TRUE and FALSE equivalents. We then added several operators to be able to build complex expressions, and we finally produced functional versions for `if` and `while` statements. Initially, it seemed like we were getting into an impossible situation, but we solved all the issues. Of course, working with JavaScript, you wouldn't do this -- why would you? After all, booleans are an integral part of the language, so there's no need to do all this rigmarole to get what you already have! The true objective of the article was to help you widen your horizon and realize that Functional Programming goes even further than just letting you use functions in some ways; it allows you to elegantly express many concepts and operations, all with the same basic tools: impressive! I cannot say it better: as it reads in [an article by Alex Beal](http://www.usrsb.in/Building-Data-Structures-from-Functions.html), > *"In the end, this might strike you as nothing more than a useless programming trick. In a sense that’s right. I’d never use this in my own code. What makes this technique so valuable is that it actually fits into the broader context of lambda calculus* [...] *In the language of lambda calculus, you’re given only a very basic set of tools.* [...] 
*Incredibly, it turns out that that’s all you need."* ## References If you want to learn more about the techniques we used here, you should read about [Church Encoding](https://en.wikipedia.org/wiki/Church_encoding) and [Scott Encoding](https://en.wikipedia.org/wiki/Mogensen%E2%80%93Scott_encoding); for boolean values, both representations are the same. ## In the mood for some questions? If you want to experiment a bit more, try your hand with these questions. 1. Show that these `OR` versions are equivalent to the one we wrote. ```javascript const OR1 = (left, right) => left(left, right); const OR2 = (left, right) => right(right, left); ``` 2. Do the same for these alternatives to our `AND`. ```javascript const AND1 = (left, right) => left(right, left); const AND2 = (left, right) => right(left, right); ``` 3. Show that if `X` is a boolean value, `X(Y,Z)` is equivalent to `(X && Y) || (!X && Z)`. 4. Using the previous result, prove that our `NOT`, `OR`, `AND`, etc., definitions are correct. 5. Show that our implementations of `AND` and `OR` also apply JavaScript's ["short circuit"](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_AND#short-circuit_evaluation) evaluation rules. 6. Do the alternative versions for `AND` and `OR` from questions 1 and 2 also apply short circuit evaluation like JavaScript?
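Since our booleans can only take two values, questions 1 and 2 can also be settled by brute force before you prove them. The following sketch (my own addition, reusing the definitions from the article) compares each alternative version against the original over all four input combinations:

```javascript
// Exhaustively compare the alternative OR/AND versions against the originals
const TRUE = (trueValue, falseValue) => trueValue;
const FALSE = (trueValue, falseValue) => falseValue;
const toString = (boolValue) => boolValue("T", "F");

const OR = (X, Y) => X(TRUE, Y);
const AND = (X, Y) => X(Y, FALSE);

// Alternatives from questions 1 and 2
const OR1 = (left, right) => left(left, right);
const OR2 = (left, right) => right(right, left);
const AND1 = (left, right) => left(right, left);
const AND2 = (left, right) => right(left, right);

// Check that two binary operators agree on every combination of inputs
const sameAs = (f, g) =>
  [TRUE, FALSE].every((x) =>
    [TRUE, FALSE].every((y) => toString(f(x, y)) === toString(g(x, y)))
  );

console.log(sameAs(OR, OR1), sameAs(OR, OR2)); // true true
console.log(sameAs(AND, AND1), sameAs(AND, AND2)); // true true
```

If any pair differed for some combination, `sameAs()` would return `false` for it.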
---

# Building A Custom Renderer For React

*Published 2024-03-11 on the [OpenReplay blog](https://blog.openreplay.com/building-a-custom-react-renderer/)*
by [Daniel Onyebuchi](https://blog.openreplay.com/authors/daniel-onyebuchi) <blockquote><em> React has gained widespread popularity for its declarative and component-based approach to front-end development. At the core of its functionality is the concept of renderers, which translate components into UI elements. While React primarily uses a default renderer for the web (`ReactDOM`), its architecture's flexibility allows you to create custom renderers tailored to specific platforms or use cases, as this article will show. </em></blockquote> <div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;"> <hr/> <h3><em>Session Replay for Developers</em></h3> <p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p> <img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async"> <p><em>Happy debugging! <a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em><p> <hr/> </div> The demand for specialized rendering solutions grows as applications become more diverse and extend beyond traditional web platforms. The need for custom React renderers arises when developers encounter scenarios where the default rendering approach may not be optimal or when developers are targeting unconventional environments such as mobile applications, virtual reality experiences, or even server-side rendering. 
Recognizing this scenario is crucial when developers notice significant performance bottlenecks or limitations in achieving the desired user experience using React's default rendering. For instance, imagine a financial analytics dashboard within a React application that dynamically visualizes stock market data with numerous live updates and sophisticated charting requirements. In this example, the default rendering approach might not efficiently manage the continuous flow of data and intricate chart computations, signaling the necessity for a tailored rendering solution. Custom renderers offer several advantages. First, they provide enhanced performance by addressing specific bottlenecks, significantly improving rendering speed, memory usage, and frame rates compared to the default approach. For instance, a custom renderer could optimize data handling and chart rendering in the financial analytics dashboard example, resulting in a smoother and more responsive dashboard experience. In the following sections, we will explore the process of building a custom React renderer, unraveling the steps involved in extending React's rendering capabilities to meet specific project needs. ## Overview of the `react-reconciler` package Within the expansive landscape of React customization, the [react-reconciler](https://www.npmjs.com/package/react-reconciler) package stands out as a powerful tool that empowers developers to create custom renderers with precision and efficiency. This package is the backbone for building custom React renderers, providing a structured and extensible framework for the reconciliation process. The reconciliation process in React is responsible for efficiently updating the UI by determining the minimal set of changes needed to reflect the current state of the application. 
The `react-reconciler` package facilitates this crucial aspect by offering a set of interfaces and utilities that streamline the implementation of a custom renderer, allowing developers to tailor the reconciliation logic to specific use cases. ### Purpose and Use Cases The primary purpose of the `react-reconciler` package is to enable developers to construct custom renderers that align seamlessly with their project requirements. This versatility is particularly valuable when traditional rendering approaches may not suffice or when a tailored solution is necessary to address unique challenges. Use cases for custom React renderers built with `react-reconciler` are diverse. They range from optimizing performance for specific platforms, such as native mobile applications, to integrating React into unconventional environments like game engines or augmented reality frameworks. By leveraging this package, developers gain the flexibility to transcend the boundaries of standard web rendering and extend React's capabilities into a myriad of innovative applications. ### Relationship with the Fiber Architecture The `react-reconciler` package is intricately tied to React's Fiber Architecture, a sophisticated mechanism introduced to enhance the efficiency and responsiveness of the reconciliation process. By aligning with Fiber, `react-reconciler` taps into a robust infrastructure that manages the prioritization and scheduling of updates, resulting in a more resilient and performant rendering system. Understanding the relationship between `react-reconciler` and Fiber is crucial for developers aiming to build custom renderers. This synergy ensures that the custom renderer can seamlessly integrate with React's core architecture, harnessing the benefits of Fiber's incremental rendering to deliver a smooth and responsive user experience. ## Exploring the Fiber Data Structure A Fiber in React is a lightweight unit of work that represents a component in the virtual DOM. 
It plays a pivotal role in the reconciliation process, helping React update and render components efficiently. ### Fiber Node Anatomy: A Fiber node is a JavaScript object that holds information about a component. It contains various fields, such as: - **Type:** This represents the type of the component (e.g., function, class, host). - **Key:** An optional unique identifier to optimize updates. - **State:** The current state of the component. - **Props**: The properties passed to the component. - **Child, Sibling, and Return:** Pointers to other Fiber nodes, forming a tree structure. This tree structure represents the component hierarchy in the application. The "Child" pointer points to the first child of the current node, the "Sibling" pointer points to the next sibling, and the "Return" pointer points to the parent. ### Work-in-Progress and Committed Fiber Trees React maintains two Fiber trees during the reconciliation process: The work-in-progress tree (current changes being applied) and the committed tree (last successfully rendered state). The "Work-in-Progress Fiber Tree" is a dynamic, in-memory representation that reflects the current state of React components being processed. When changes occur in the application, such as state updates or prop changes, React generates a new version of the component tree. This newly created tree is called the work-in-progress tree, as it captures the ongoing changes. Conversely, the "Committed Fiber Tree" represents the last successfully rendered state of the user interface. Once the reconciliation process is complete, React takes the updated work-in-progress tree and designates it as the new committed tree. This committed tree signifies the most recent successfully rendered state and is ready to be displayed. The image displays both the committed tree and the work-in-progress tree. The blue-outlined rectangle indicates nodes that have been updated. 
![image](https://blog.openreplay.com/images/building-a-custom-react-renderer/images/image1.png) ### Reconciliation Algorithm The reconciliation algorithm is at the core of React's ability to update the UI efficiently. It leverages the Fiber tree to determine which components need to be updated and in what order. The algorithm balances responsiveness and throughput, ensuring a smooth user experience. ## Fiber Node Lifecycle The lifecycle of a Fiber node is a dynamic process that goes through various stages during rendering. Understanding these stages is vital for building a custom React renderer. - **Initialization:** When a component is first rendered, a Fiber node is created and initialized. This involves setting the type, props, and state. It marks the beginning of the reconciliation process. - **Reconciliation:** During reconciliation, React compares the current and new states of the Fiber nodes. It identifies what changed and builds a plan for updating the UI. This process involves propagating changes through the Fiber tree. - **Rendering:** The rendering phase involves translating the virtual `DOM` representation into the actual UI. This process utilizes the committed Fiber tree, ensuring that only the necessary updates are applied. - **Commit:** Once the rendering is complete, React commits the changes to the `DOM`. This step ensures that the user sees the updated UI. The committed Fiber tree becomes the new basis for future updates. - **Cleanup:** After committing the changes, React performs cleanup tasks. It might involve releasing resources or updating internal data structures to prepare for the next rendering cycle. - **Reusable Fibers:** React optimizes performance by reusing Fibers across renders. This reusability reduces the need to create new objects, enhancing efficiency. A flowchart illustrating fiber node lifecycle. 
![code2flow_R9GDrR](https://blog.openreplay.com/images/building-a-custom-react-renderer/images/image2.png) ## Steps to Build a Custom React Renderer The custom renderer applies to any React application; this article specifically focuses on testing it with the default React Single Page Application (SPA). To demonstrate the creation of our custom renderer, we deliberately remove the default renderer used by the React app and substitute it with our custom renderer. This custom renderer is responsible for displaying the contents of the default React page on the actual webpage. The removal of the default renderer initially breaks our app, triggering an error. However, as we incorporate fundamental functionalities into our custom renderer, the default page resurfaces and becomes visible on the webpage again. This step-by-step process helps us understand how the renderer works behind the scenes to showcase our app on the webpage. Additionally, it serves as a guide for constructing a custom renderer. ### Setting Up the Development Environment - Initiate a new React application by executing the following command in your terminal. Ensure to replace `<app_name>` with your desired application name. ```bash npx create-react-app <app_name> ``` Once the app is created, navigate to the project directory using the following command: ```bash cd <app_name> ``` With the project directory as your current location, install the required dependencies: ```bash npm install ``` Start the application. ```bash npm start ``` For the custom renderer, install the necessary dependency, `react-reconciler`, by running the following command in the terminal: ```bash npm install react-reconciler ``` ### Integrating the Custom React Renderer Follow these steps to integrate a custom React renderer: - Create a new file named `ReactDOMCustom` inside the `src` folder. 
Import the `react-reconciler` library in this file: ```javascript import ReactReconciler from "react-reconciler"; ``` - Open the `index.js` file, import `ReactDOMCustom`, and remove the import statement for `ReactDOM`: ```javascript import ReactDOMCustom from './ReactDOMCustom'; ``` - Utilize the custom renderer file (`ReactDOMCustom`) to render the app by replacing the following code block: ```javascript const root = ReactDOM.createRoot(document.getElementById("root")); root.render( <React.StrictMode> <App /> </React.StrictMode> ); ``` With the following line, render the app using our custom renderer file: ```javascript ReactDOMCustom.render(<App />, document.getElementById('root')); ``` As a result of these changes, your app will break, and an error will be displayed in the browser console. In the following sections, we will implement the logic for the custom renderer to ensure that our React app functions as expected in the browser. Here is an image capturing the displayed error. ![error](https://blog.openreplay.com/images/building-a-custom-react-renderer/images/image3.png) ## Creating a Custom `Host Config` A `host config` file refers to a JavaScript module that defines the behavior and capabilities of the host environment where React is being used. The host environment is the platform or runtime where your React application runs, and it could be a web browser, a mobile app environment, or any other runtime. The `host config` typically includes a set of methods the custom renderer must implement. These methods correspond to different aspects of the rendering process, such as creating and updating instances, appending children, handling text content, and more. You can control how React elements are created, updated, and manipulated within your target environment by providing a custom implementation for these methods. 
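Before wiring these methods into React, it may help to see the contract in isolation. The following toy sketch (my own illustration, not `react-reconciler`'s actual API) applies the same kind of operations (`createInstance`, `createTextInstance`, and `appendChild`) to a plain JavaScript object tree instead of the DOM:

```javascript
// Toy host: "elements" are plain objects, so the contract is easy to inspect.
// The names mirror host config methods; the object-tree target is illustrative.
const createInstance = (type, props = {}) => ({ type, props, children: [] });
const createTextInstance = (text) => ({ type: "#text", text });
const appendChild = (parent, child) => {
  parent.children.push(child);
};

// Build the equivalent of <div className="app"><p>hello</p></div>
const root = createInstance("div", { className: "app" });
const para = createInstance("p");
appendChild(para, createTextInstance("hello"));
appendChild(root, para);

console.log(JSON.stringify(root, null, 2));
```

A real host config does the same thing, except that the "elements" are DOM nodes and React decides when each operation runs.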
In the `ReactDOMCustom` file, instantiate the `react-reconciler` by creating a `reconciler` object and adding the following methods: ```javascript let reconciler = ReactReconciler({ // host config options supportsMutation: true, createInstance( type, props, rootContainerInstance, hostContext, internalInstanceHandle ) { // Logic for creating new instance }, createTextInstance( text, rootContainerInstance, hostContext, internalInstanceHandle, ) { // Logic for creating a text instance }, appendChildToContainer(container, child) { // Logic for appending a child to the container }, appendChild(parent, child) { // Logic for appending a child to a parent }, appendInitialChild(parent, child) { // Logic for appending initial child }, prepareUpdate( instance, type, oldProps, newProps, rootContainerInstance, currentHostContext, ) { // Logic for preparing an update }, commitUpdate( instance, updatePayload, type, oldProps, newProps, finishedWork ) { // Logic for committing an update }, finalizeInitialChildren() { // Logic for finalizing initial children }, getChildHostContext() { // Logic for getting child host context }, getPublicInstance() { // Logic for getting public instance }, getRootHostContext() { // Logic for getting root host context }, prepareForCommit() { // Logic before committing changes }, resetAfterCommit() { // Logic after committing changes }, shouldSetTextContent() { return false; }, }); ``` Let's complete the implementation of a few methods to ensure that our host configuration effectively renders and mounts components onto the `DOM`. <CTA_Middle_Frameworks /> ### `createInstance` Function This function generates and configures `HTML` elements based on the provided `type` and `props`. It utilizes the `document.createElement` method to create a new `HTML` element with the specified type. The function then checks for specific attributes, such as `className` and `src` in the `props` object, and applies them to the created element if they are present. 
An array of strings representing `HTML` attributes (`alt`, `className`, `href`, `rel`, `src`, `target`) is created to achieve this. The function iterates through this array, setting corresponding attributes on the element if they exist in the `props` object. Finally, the function returns the created `HTML` element.

```javascript
// Define a function to create a new instance of an element
createInstance(
  type, // The type of element to create (e.g., 'div', 'span')
  props, // The properties (attributes) to apply to the element
  rootContainerInstance, // The root container instance to which the element belongs
  hostContext, // The host context of the element
  internalInstanceHandle // The internal instance handle of the element
) {
  // Create a new HTML element based on the provided type
  let element = document.createElement(type);

  // Apply the className and src properties from the props object if they exist
  if (props.className) element.className = props.className;
  if (props.src) element.src = props.src;

  // Iterate through an array of specific attributes to check if they exist in the props object
  ["alt", "className", "href", "rel", "src", "target"].forEach((attr) => {
    // If the attribute exists in the props object, set it on the element
    if (props[attr]) element[attr] = props[attr];
  });

  // Log information about the created instance
  console.log("Created instance:", type, props);

  // Return the created element
  return element;
}
```

### `createTextInstance` Function

The purpose of this function is to generate text nodes in the user interface. It accomplishes this by returning a text node with the provided content. The implementation involves calling the `document.createTextNode` function and passing the `text` as the argument.
```javascript // Define a function to create a new text instance createTextInstance( text, // The text content of the instance rootContainerInstance, // The root container instance to which the text belongs hostContext, // The host context of the text instance internalInstanceHandle // The internal instance handle of the text instance ) { console.log("Created text instance:", text); // Create a new text node with the provided text content return document.createTextNode(text); } ``` ### `appendChildToContainer`, `appendChild`, and `appendInitialChild` Functions These functions facilitate the addition of `child` elements to `parent` containers within a user interface. The distinction between them is based on specific use cases or lifecycle events in the UI rendering process. Each function achieves this by utilizing the `appendChild` API inherent in browsers and passing the `child` element as the argument. ```javascript // Function to append a child to a container appendChildToContainer(container, child) { // Log information about appending child to container console.log("Appending child to container:", child); // Append the child to the container container.appendChild(child); } // Function to append a child to a parent element appendChild(parent, child) { // Log information about appending child to parent console.log("Appending child to parent:", child); // Append the child to the parent element parent.appendChild(child); } // Function to append an initial child to a parent element appendInitialChild(parent, child) { // Log information about appending initial child to parent console.log("Appending initial child to parent:", child); // Append the initial child to the parent element parent.appendChild(child); } ``` ### Enabling the Render Method The reconciliation object's API differs slightly from the top-level React DOM API. To incorporate the render method into the `index.js` file, define an object with the render method in the `ReactDOMCustom.js` file. 
This render method will take two arguments: the `component` to render and the `container` where it should be placed.

```javascript
let ReactDOMCustom = {
  render(component, div) {
    // Logic for rendering
  },
};
```

Within the render function, create a container using the `createContainer` method, which takes three arguments: the `container` itself and two `boolean` values set to false, representing concurrent mode and server-side hydration.

```javascript
let container = reconciler.createContainer(div, false, false);
```

Next, call the `updateContainer` function to initiate the rendering process. This function requires four arguments: the `component` to be rendered, the pre-established `container`, and two `null` values, which indicate options for hydration and callback execution.

```javascript
reconciler.updateContainer(whatToRender, container, null, null);
```

For reference, here's the complete `render` method:

```javascript
// ReactDOMCustom object to encapsulate custom rendering logic
let ReactDOMCustom = {
  // Render method to render a React component into a specified container
  render(whatToRender, div) {
    // Create a container using the reconciler's createContainer method
    let container = reconciler.createContainer(div, false, false);

    // Update the container with the specified component to trigger the rendering process
    reconciler.updateContainer(whatToRender, container, null, null);
  },
};
```

Export `ReactDOMCustom`.

```javascript
export default ReactDOMCustom;
```

We have successfully configured and exported `ReactDOMCustom`. This setup enables the rendering of your React app using the custom `React DOM`.

Previously, when we initiated the development of our custom renderer, we intentionally disabled the default React renderer. This temporarily disrupted our app's display. With our custom renderer in place, we can observe our React app being displayed.

Here's a snapshot of the React app being rendered through our customized `React DOM`.
![image](https://blog.openreplay.com/images/building-a-custom-react-renderer/images/image4.png) Below is a snapshot of the browser console log messages that show created elements, their types, and how they are arranged as children in a hierarchy when rendered on the DOM. ![Untitled design](https://blog.openreplay.com/images/building-a-custom-react-renderer/images/image5.png) We've now integrated several methods into the host config file to render components onto the DOM. You can explore additional functions to further tailor the custom renderer according to your project's specific needs. ## Real-World Examples Real-world examples of custom React renderers demonstrate the versatility and adaptability of React's architecture. These examples showcase how developers can tailor React to suit specific needs, foster innovation, and provide solutions for diverse application domains. ### Case Studies of Custom React Renderers - **[React Three Fiber](https://docs.pmnd.rs/react-three-fiber/getting-started/introduction):** - **Description:** `React Three Fiber` is a custom React renderer designed for creating 3D graphics using the popular WebGL library, Three.js. - **Use Case:** It allows developers to use familiar React patterns to create and manage 3D scenes and objects declaratively. - **[React Native](https://reactnative.dev/):** - **Description:** While not a custom renderer in the same sense, `React Native` can be considered a custom renderer for React. It takes React components and renders them into native UI components on iOS and Android. - **Use Case:** `React Native` enables developers to use React to build mobile applications with a single codebase, bridging JavaScript and native platform APIs. - **[React ART](https://www.npmjs.com/package/react-art):** - **Description:** `React ART` is a library for drawing vector graphics using React. It provides a custom React renderer that outputs to Canvas or SVG. 
- **Use Case:** It allows developers to create complex vector graphics using React components, making it easier to manage and update the graphics through React's component lifecycle. - **[React PDF](https://react-pdf.org/):** - **Description:** `React PDF` is a custom renderer for React that enables the generation of PDF documents using React components. - **Use Case:** Developers can leverage their React skills to create dynamic PDF documents by defining the document structure using React components. - **[React Hardware](https://github.com/iamdustan/react-hardware):** - **Description:** `React Hardware` is a custom renderer that targets hardware components like Arduino and Raspberry Pi, enabling developers to use React to build Internet of Things (IoT) applications. - **Use Case:** This allows developers to apply their React knowledge to create interactive experiences on hardware devices. ### Use Cases Here are some general use cases associated with custom React renderers. - **Specialized UI Components:** Create custom React renderers for specialized UI components that require low-level rendering optimizations or integration with specific technologies (e.g., graphics libraries, game engines). - **Custom Platforms or Devices:** Develop React applications for non-standard platforms or devices (e.g., Internet of Things devices, custom hardware) by creating custom renderers tailored to their unique requirements. - **Domain-Specific Languages (DSLs):** Build domain-specific languages using React for specific use cases, such as generating dynamic PDF documents, where the components define the structure and content of the document. - **Graphical User Interfaces (GUIs) for 3D Applications:** Use custom React renderers for creating GUIs in 3D applications by integrating with libraries like `Three.js`, enabling developers to manage UI components within a 3D space. 
- **Performance Optimization:** Optimize rendering performance by creating a custom React renderer tailored to an application's specific needs, especially in scenarios where the default rendering process might introduce unnecessary overhead.

### Benefits

Custom renderers offer several advantages, including:

- **Declarative Syntax:** Leverage React's declarative syntax to express UI components clearly and concisely, making it easier for developers to understand and maintain code.
- **Code Reusability:** Encapsulating rendering logic within React components enhances code reusability. This allows developers to reuse components across different projects or scenarios.
- **Ecosystem Compatibility:** Leverage the existing React ecosystem and developer community, taking advantage of the vast array of libraries, tools, and resources available to React developers.
- **Familiar Development Workflow:** Maintain a familiar development workflow for developers already experienced with React. Custom renderers allow developers to apply React patterns and best practices in various domains.
- **Abstraction of Complexity:** Abstract away the complexity of lower-level rendering details by providing a high-level API that simplifies the creation and management of UI components, reducing the cognitive load on developers.
- **Cross-Platform Development:** Facilitate cross-platform development by creating custom renderers for platforms like `React Native`, enabling developers to use a single codebase for building applications across multiple platforms.

## Conclusion

Custom React renderers offer developers a versatile solution for tailoring front-end development to specific platforms and use cases. Leveraging React's core rendering concept and the `react-reconciler` package, developers can efficiently build custom renderers, as exemplified by real-world cases like `React Three Fiber` and `React Native`.
These renderers provide benefits such as declarative syntax, code reusability, and compatibility with diverse ecosystems. The step-by-step guide underscores the innovation potential, demonstrating how developers can optimize performance and address unique challenges in a concise and adaptable manner. Ultimately, custom React renderers empower developers to shape React for efficient and flexible user interfaces in the dynamic landscape of front-end development. ## Additional Resources Explore further with the following resources: [React-reconciler](https://github.com/facebook/react/blob/main/packages/react-reconciler/README.md) [Fiber](https://github.com/acdlite/react-fiber-architecture?tab=readme-ov-file#what-is-a-fiber) [Reconciliation](https://legacy.reactjs.org/docs/reconciliation.html)
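To close the loop on the renderer built in this guide, here is a dependency-free toy analogue. It does not use `react-reconciler`; the `hostConfig` object and `mount` function below are simplified stand-ins of my own, meant only to illustrate the `createInstance`/`appendChild` contract that a real host config implements:

```javascript
// Toy stand-in for a host config: instances are plain objects instead of
// DOM nodes. A real custom renderer implements this same contract and
// hands it to react-reconciler, which drives the calls for us.
const hostConfig = {
  createInstance: (type, props) => ({ type, props, children: [] }),
  appendChild: (parent, child) => {
    parent.children.push(child);
  },
};

// Minimal recursive "reconciler": walks an element tree and mounts it
// through the host config.
function mount(element) {
  const instance = hostConfig.createInstance(element.type, element.props || {});
  for (const child of element.children || []) {
    hostConfig.appendChild(instance, mount(child));
  }
  return instance;
}

const tree = mount({
  type: "div",
  props: { id: "root" },
  children: [{ type: "span", props: {}, children: [] }],
});
console.log(tree.type, tree.children[0].type); // div span
```

Swapping the plain objects for `document.createElement` calls (plus the other lifecycle methods covered earlier) is essentially what turns this sketch into the DOM renderer built in this article.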
asayerio_techblog
1,787,186
⛵ Sailing the Choppy Waters of Floating-Point Precision in JavaScript 🔢
Ahoy there, fellow JavaScript sailors! 🚢 Today, we're setting sail on the treacherous seas of...
0
2024-03-11T18:19:01
https://dev.to/best_codes/sailing-the-choppy-waters-of-floating-point-precision-in-javascript-3577
webdev, javascript, tutorial, discuss
Ahoy there, fellow JavaScript sailors! 🚢 Today, we're setting sail on the treacherous seas of floating-point precision. Grab your life jackets, because things might get a little… floaty. ## The Floating Conundrum In the vast ocean of JavaScript, numbers are like the water – they're everywhere. But unlike the predictable H2O, JavaScript numbers can be a bit more… unpredictable. That's because JavaScript uses a single number type: the IEEE 754 double-precision floating-point format. Sounds fancy, right? Let's explain it a bit. ## Understanding IEEE 754 The **IEEE 754** standard is a crucial specification for representing **floating-point numbers** in computers. Let's dive into the details: 1. **Purpose and Background**: - **IEEE 754** was established in 1985 by the **Institute of Electrical and Electronics Engineers (IEEE)**. - It addresses issues found in various floating-point implementations, making them more reliable and portable. - This standard ensures that computers consistently calculate the same results for the same computations. 2. **Components of IEEE 754**: - **Sign of Mantissa**: - The sign bit determines whether the number is positive or negative. - If the sign bit is **0**, the number is positive; if it's **1**, the number is negative. - **Biased Exponent**: - The exponent field represents both positive and negative exponents. - A bias is added to the actual exponent to obtain the stored exponent. - **Normalized Mantissa**: - The mantissa is part of a number in scientific notation or a floating-point number. - A normalized mantissa has only one **1** to the left of the decimal point. 3. **Special Values**: - **Zero**: Represented with an exponent and mantissa of **0**. Both **+0** and **-0** exist but are equal. - **Denormalized**: When the exponent is all zeros, but the mantissa is not, it's a denormalized number. - **Infinity**: Represented with an exponent of all ones and a mantissa of all zeros. 
Sign distinguishes between positive and negative infinity.
- **Not a Number (NaN)**: Used to represent errors and undefined results (exponent field all ones with a non-zero mantissa; the sign bit is irrelevant).

## Oh no…

But here's the catch: this format can lead to some unexpected results.

```javascript
console.log(0.1 + 0.2); // Expected 0.3, but surprise! It's 0.30000000000000004
```

## Why Do We Drift Off Course?

The reason for this numerical oddity lies in how numbers are stored. In JavaScript, all numbers are floating-point numbers, meaning they have a certain amount of space for the digits before and after the decimal point. When we perform calculations, these numbers are converted into binary, and that's where the precision can get a bit… wavy.

Binary systems work great for whole numbers, but for fractions? Not so much. Some numbers that look simple in decimal, like 0.1, are actually infinite repeating fractions in binary (in the same way that 1/3 is `0.33333333...`). And since our digital vessels can only hold so much, we end up rounding off, leading to precision errors.

{% details Conversion of 0.1 to Binary %}

**Multiplication by 2**: When converting a decimal fraction to binary, we start by multiplying the fraction by `2`.
- 0.1 * 2 = 0.2
- Integer part: 0

**Decimal Part**: We take the decimal part of the result, which is 0.2, and continue the process.
- 0.2 * 2 = 0.4
- Integer part: 0

**Repeat the Process**: We continue the process, multiplying the decimal part by 2 at each step.
- 0.4 * 2 = 0.8
- Integer part: 0
- 0.8 * 2 = 1.6
- Integer part: 1
- 0.6 * 2 = 1.2
- Integer part: 1
- 0.2 * 2 = 0.4
- Integer part: 0

**Binary Representation**: The integer parts obtained in each step form the binary representation. In this case, the binary representation of 0.1 has an infinite repeating pattern, which is 0.0001100110011… and so on.

{% enddetails %}

## Steering Clear of the Icebergs

Fear not!
There are ways to navigate these choppy waters: - **Rounding**: Use `Math.round()`, `Math.floor()`, or `Math.ceil()` to keep your numbers in check. - **Fixed Precision**: `toFixed()` can tie down your numbers to a certain number of decimal places. - **Big Numbers**: Libraries like `BigDecimal` or `Big.js` can be your lifeboats, offering more precise handling of large or tricky numbers. ## Charting the Course Ahead As we continue our journey through the JavaScript seas, remember that floating-point precision is just one of the many adventures that await. Keep your compass handy (that's your documentation), watch out for the icebergs (those pesky bugs), and always test the waters before you dive in (write those unit tests!). Happy coding, and may your console logs always be free of unexpected decimals! ---- And there you have it, a fun yet informative dive into the world of floating-point precision in JavaScript. May your coding journey be smooth sailing from here on out! ⛵ ---- _🤖 Yeah, I used AI for some of that. 🧍 No, I didn't use AI for all of it. Was the sailor theme too much? Let me know in the comments! 😂_ Article by Best_codes https://the-best-codes.github.io/?dev.to
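As a runnable postscript to the mitigation list above, here is a small sketch. The helper names `roundTo`, `approxEqual`, and `decimalFractionToBinary` are my own, not standard APIs:

```javascript
// 1. Rounding: scale up, round to an integer, scale back down.
function roundTo(value, decimals) {
  const factor = 10 ** decimals;
  return Math.round(value * factor) / factor;
}

// 2. Epsilon comparison: treat two numbers as equal when they differ by
// less than Number.EPSILON (the gap between 1 and the next representable double).
function approxEqual(a, b) {
  return Math.abs(a - b) < Number.EPSILON;
}

// 3. The repeating binary fraction behind it all: first n binary digits of a
// decimal fraction, using the multiply-by-2 method shown earlier.
function decimalFractionToBinary(fraction, digits) {
  let bits = "";
  for (let i = 0; i < digits; i++) {
    fraction *= 2;
    if (fraction >= 1) {
      bits += "1";
      fraction -= 1;
    } else {
      bits += "0";
    }
  }
  return bits;
}

console.log(0.1 + 0.2);                       // 0.30000000000000004
console.log(roundTo(0.1 + 0.2, 2));           // 0.3
console.log((0.1 + 0.2).toFixed(2));          // "0.30"
console.log(approxEqual(0.1 + 0.2, 0.3));     // true
console.log(decimalFractionToBinary(0.1, 8)); // "00011001", the repeating pattern
```

Note that `toFixed` returns a string, so it suits display rather than further arithmetic; for money or other exact decimal work, the big-number libraries mentioned above are the safer harbor.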
best_codes
1,788,543
Crypto's Resurgence: A Nostalgic Return to 2021 🚀
Since 2024, the crypto market has regained its fervor reminiscent of the golden era of 2021. The...
0
2024-03-12T22:32:34
https://dev.to/irmakork/cryptos-resurgence-a-nostalgic-return-to-2021-4mmd
Since 2024, the crypto market has regained its fervor reminiscent of the golden era of 2021. The attention to Bitcoin and Ether headlines has been relentless, drawing parallels to the past. But amidst the hype, significant changes have unfolded. 🔄 Trend shifts are natural, but the landscape has evolved. NFTs, once the darlings of the market, have been replaced by modular and Layer 2 chains in 2024, signaling a new technological trend. This shift underscores the enduring intersection between technology and finance in the crypto space. 🌟 Innovation takes center stage as new tokens flood the market. Projects like Celestia (TIA) and Near Protocol (NEAR) lead the charge with their modular facilities and cross-chain solutions. NEAR's integration with WhiteBIT, offering free USDT and USDC withdrawals, adds fuel to the fire. 💼 Adoption vs. decentralization: Crypto's journey from fringe experiment to mainstream asset is evident. Now, cryptocurrencies are akin to fiat, with investments made easy through approved ETFs. Yet, the goal of achieving maximal decentralization remains a driving force. 🎭 Memecoins: The new face of crypto investment. These community-driven tokens, born out of the 2021 hysteria, continue to dominate the charts. Despite dubious tokenomics, their strong community support propels them forward, shaping the modern crypto landscape. 📚 Education is key: The rise of memecoins coincides with a growing emphasis on crypto education. Surveys show increasing interest in teaching cryptocurrencies in schools. Memecoins play a surprising role in this, serving as a gateway for beginners to learn about crypto in an entertaining manner. In summary, the crypto market of 2024 bears echoes of the past while embracing innovation and education, signaling a promising future for the industry. 🌐 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8jrwu1htvv3r9byzsy1f.png)
irmakork
1,787,286
Accessibility tip for old HTML pages
Introduction Decades past many web developers did not consider adding accessibility...
0
2024-03-11T20:00:58
https://dev.to/karenpayneoregon/accessibility-tip-for-old-html-pages-2nj6
vscode, html, codenewbie
## Introduction

Decades past, many web developers did not consider adding accessibility features to their web sites, and there are cases where these sites are still up and running. Without proper accessibility features, a site can lose visitors, as visitors using assistive technologies like screen readers will not be able to read its pages.

The following table structure has two issues.

```html
<table>
    <tr>
        <td>
            First name
        </td>
        <td>
            <input type="text" id="first_name">
        </td>
    </tr>
    <tr>
        <td>
            Last name
        </td>
        <td>
            <input type="text" id="last_name">
        </td>
    </tr>
</table>
```

## Resolve inputs with no labels

The task: take text like "First name" and "Last name" and place the text into labels which are associated with the inputs (this works with select elements too).

**The goal**

```html
<table>
    <tr>
        <td>
            <label for="first_name">First name</label>
        </td>
        <td>
            <input type="text" id="first_name">
        </td>
    </tr>
    <tr>
        <td>
            <label for="last_name">Last name</label>
        </td>
        <td>
            <input type="text" id="last_name">
        </td>
    </tr>
</table>
```

**Solution**

No matter the IDE or editor used originally, the solution uses Microsoft [Visual Studio Code](https://code.visualstudio.com/download), which is a free developer editor.

We will configure VS Code to perform a *surround with* operation: highlight code, press a key shortcut, enter `label` for instance, and the highlighted code is encased in a label. Using *surround with* reduces the keystrokes that would otherwise be needed, e.g. cut the original text, add a label, paste the text back. As an added bonus of the following setup, a `for` attribute is added to the label with the cursor positioned inside the quotes.

Note that the shortcut here will be <kbd>Shift</kbd> + <kbd>Alt</kbd> + <kbd>W</kbd> but can be whatever you like.

Let's get going. Open Visual Studio Code and, in the bottom left corner of the editor, click the gear.

![VS Code gear](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/be0qui0vagtq1o8ash6r.png)

Click **Keyboard Shortcuts**.
![Keyboard shortcuts menu item](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxo6tq86tadxrnrc0q7l.png)

Click the **button** shown below with the arrow

![Shows button to open json file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93rsr6l0cra68156gxlt.png)

This opens another window with a caption of _keybindings.json_.

```json
// Place your key bindings in this file to override the defaults
[
  {
    "key": "ctrl+f11",
    "command": "editor.emmet.action.matchTag"
  },
  {
    "key": "ctrl+y",
    "command": "editor.action.deleteLines",
    "when": "textInputFocus && !editorReadonly"
  },
  {
    "key": "ctrl+shift+k",
    "command": "-editor.action.deleteLines",
    "when": "textInputFocus && !editorReadonly"
  },
  {
    "key": "ctrl+;",
    "command": "editor.emmet.action.wrapWithAbbreviation"
  },
  {
    "key": "ctrl+e ctrl+e",
    "command": "mssql.runQuery",
    "when": "editorTextFocus && editorLangId == 'sql'"
  },
  {
    "key": "ctrl+shift+e",
    "command": "-mssql.runQuery",
    "when": "editorTextFocus && editorLangId == 'sql'"
  },
  {
    "key": "ctrl+shift+alt+t",
    "command": "workbench.action.tasks.terminate"
  },
  {
    "key": "ctrl+alt+n",
    "command": "python.execInTerminal"
  }
]
```

Add the following:

```json
,
{
  "key": "shift+alt+w",
  "command": "editor.emmet.action.wrapWithAbbreviation"
}
```

The final result:
```json
// Place your key bindings in this file to override the defaults
[
  {
    "key": "ctrl+f11",
    "command": "editor.emmet.action.matchTag"
  },
  {
    "key": "ctrl+y",
    "command": "editor.action.deleteLines",
    "when": "textInputFocus && !editorReadonly"
  },
  {
    "key": "ctrl+shift+k",
    "command": "-editor.action.deleteLines",
    "when": "textInputFocus && !editorReadonly"
  },
  {
    "key": "ctrl+;",
    "command": "editor.emmet.action.wrapWithAbbreviation"
  },
  {
    "key": "ctrl+e ctrl+e",
    "command": "mssql.runQuery",
    "when": "editorTextFocus && editorLangId == 'sql'"
  },
  {
    "key": "ctrl+shift+e",
    "command": "-mssql.runQuery",
    "when": "editorTextFocus && editorLangId == 'sql'"
  },
  {
    "key": "ctrl+shift+alt+t",
    "command": "workbench.action.tasks.terminate"
  },
  {
    "key": "ctrl+alt+n",
    "command": "python.execInTerminal"
  },
  {
    "key": "shift+alt+w",
    "command": "editor.emmet.action.wrapWithAbbreviation"
  }
]
```

## Usage sample

Old HTML code with an input with no label.

```html
<table>
    <tr>
        <td>
            First name
        </td>
        <td>
            <input type="text" id="first_name">
        </td>
    </tr>
</table>
```

Highlight **First name**, press <kbd>Shift</kbd> + <kbd>Alt</kbd> + <kbd>W</kbd> and the following dialog opens at the top of the editor.

![prompt for Abbreviation ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjsxk8q4aaj4j7pofide.png)

Type **label** and press <kbd>Enter</kbd>. This results in the following; note the red squiggles, which mean no label is associated with this input.

![results](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n8ad295sul2w764p03zi.png)

Last step, add the id of the input into the for attribute.

![finished](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jittibhal4q45arh4br2.png)

## Summary

Adding accessibility features to an old web page does not have to be time-consuming; what has been presented is one technique to shorten the time required to meet [WCAG AA](https://www.w3.org/WAI/WCAG2AA-Conformance) requirements.
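As a companion to the technique above, a page can first be audited for inputs that still lack labels. Below is a rough, framework-free sketch; the function name and the idea of passing plain arrays of ids are mine, and in a browser the two arrays would come from `document.querySelectorAll`:

```javascript
// Hypothetical audit helper: given the ids of a page's inputs and the
// `for` attribute values of its labels, return the ids still missing a label.
function unlabeledInputs(inputIds, labelForIds) {
  const labeled = new Set(labelForIds);
  return inputIds.filter((id) => !labeled.has(id));
}

// In a browser console the two arrays could be gathered like this:
// const inputIds = [...document.querySelectorAll("input[id]")].map((el) => el.id);
// const labelForIds = [...document.querySelectorAll("label[for]")].map((el) => el.htmlFor);

console.log(unlabeledInputs(["first_name", "last_name"], ["first_name"]));
// ["last_name"], i.e. still needs a <label for="last_name">
```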
## Resource VS Code [Advanced customization](https://code.visualstudio.com/docs/getstarted/keybindings#_advanced-customization)
karenpayneoregon
1,787,481
CSS Battle #1 - Simply Square
see the target Challenge Overview The goal of this challenge is to create a design...
0
2024-03-12T01:57:14
https://dev.to/jitheshpoojari/css-battle-1-simply-square-45oj
html, css, challenge, webdev
![Image css battle target #1 - Simply Square](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imowbv991gz0s5i7dajp.png)

[see the target](https://cssbattle.dev/play/1)

### Challenge Overview

The goal of this challenge is to create a design resembling a square using HTML and CSS. The provided target image features a square shape against a contrasting background.

### Method 1 - Box Model Approach

**YouTube Video** : [see video](https://www.youtube.com/watch?v=Z0Asqy0dcvU)
**GitHub Repo** : [see code](https://github.com/jithesh-poojari/css-battles/blob/main/battles/%231%20-%20Pilot%20Battle/%231_Simply_Square.md#method-1---box-model-approach)

**Stats:**
- **Match:** 100%
- **Score:** 779.62{84}

### Code

```html
<style>
  * {
    background: #5d3a3a;
    margin: 0;

    > * {
      background: #b5e0ba;
      width: 200px;
      height: 200px;
    }
  }
</style>
```

**Code Explanation:**

- **Background (`*`):** The `*` selector targets the entire page, setting its background to a dark brown shade `#5d3a3a`.
- **Container (`>`):** The nested `> *` selector targets all child elements, ensuring they have a light green background `#b5e0ba` and fixed dimensions of 200x200 pixels.

### Method 2 - Box Shadow Technique

**YouTube Video** : [see video](https://www.youtube.com/watch?v=i9WFFFbxYNc)
**GitHub Repo** : [see code](https://github.com/jithesh-poojari/css-battles/blob/main/battles/%231%20-%20Pilot%20Battle/%231_Simply_Square.md#method-2---box-shadow-technique)

**Stats:**
- **Match:** 100%
- **Score:** 839.08{54}

### Code

```html
<a style=box-shadow:0+0+0+2in#b5e0ba,0+0+0+5in#5d3a3a>
```

**Code Explanation:**

- **Box Shadow (`box-shadow`):** The `box-shadow` property is applied to an anchor (`<a>`) element. Two shadows are defined:
  - A 2-inch spread in the light green color (`#b5e0ba`).
  - A 5-inch spread in the contrasting dark brown color (`#5d3a3a`).

### Conclusion

Both methods effectively create a square design, meeting the challenge's requirements.
Method 1 utilizes the box model approach, setting background colors and dimensions. Method 2 achieves a similar result using the `box-shadow` property to simulate the square shape.
jitheshpoojari
1,787,587
Getting Started with Java Development
Java remains a powerhouse in the programming world, consistently ranking high in indexes like TIOBE....
0
2024-03-12T05:38:26
https://dev.to/etelligens/getting-started-with-java-development-49hb
webdev, java, softwaredevelopment
Java remains a powerhouse in the programming world, consistently ranking high in indexes like TIOBE. This popularity is well-deserved, thanks to its robust maintenance and ongoing enhancements. Its Java Virtual Machine (JVM) is arguably the most sophisticated environment for executing managed programming languages. Java's extensive library ecosystem supports a wide array of applications — from command-line tools and desktop software to web applications, back-end services, data processing, and beyond. With new features on the horizon such as vectorized computations, lightweight virtual threads, enhanced native integration, and custom value objects, Java is more versatile than ever for various software development projects. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/02i9qayavmg505mqujhb.jpg) However, the vast scope of Java and its evolving ecosystem present challenges for newcomers. Deciding on the Java version, installation process, and appropriate tools and IDEs can be daunting due to the multitude of available options and the rapid evolution of the platform. Despite this, abundant outdated resources remain accessible online, adding to the confusion about what's currently relevant. This blog aims to offer a targeted guide for beginners embarking on **[Java development](https://www.etelligens.com/technology/java-development-company/)** in 2023, emphasizing a smooth introduction to this comprehensive platform. The recommendations provided are based on my preferences and experiences, acknowledging that alternatives might better suit different needs and tastes. ## Understanding Java: Key Concepts Explained For those new to Java, distinguishing between its various components can be confusing. Here's a breakdown of essential terms: • **Java Programming Language:** A general-purpose, statically typed language with object-oriented and functional elements, compiled into portable bytecode. 
• **Java Platform:** Comprises tools for developing and executing Java programs, including the compiler (javac), JVM, and standard class library, primarily focusing on Java Standard Edition (SE). • **Java Virtual Machine (JVM):** Executes Java bytecode, handling tasks like code loading, verification, compilation, and garbage collection. Multiple implementations exist, such as HotSpot and OpenJ9. • **Java Development Kit (JDK):** The toolset for Java application development and execution. • **OpenJDK:** An open-source Java SE implementation and the community behind it. • **Java Community Process (JCP):** Develops Java specifications, including versioning. **Choosing a Java Distribution** Java is maintained by OpenJDK, with various vendors offering their distributions. For beginners, Eclipse Temurin is recommended, supported by the Adoptium project and available for free use and commercial support. **Selecting a Java Version** With Java releasing a new version every six months, starting with the current Long-Term Support (LTS) release, Java 17, is advisable. LTS releases provide extended maintenance, making them suitable for most users over newer, less stable versions. **Installation Tips** Java can be installed via vendor websites or operating system package managers. For simplicity, SDKMan is a useful tool for managing Java SDKs, allowing easy installation, updates, and version switching. **Crafting Your First Java Program** Java's object-oriented nature means programs are structured around classes. The iconic "Hello World" example introduces basic concepts, including the class structure, the main method, and standard output. Tools like jshell and jbang facilitate interactive and third-party library inclusive programming exploration. **Deepening Your Java Knowledge** Learning Java in-depth requires a systematic approach, possibly through reputable books and resources. 
Websites like dev.java offer extensive free materials, while the Java Language Specification provides authoritative insights. Certifications, such as the "Oracle Certified Professional: Java SE 17 Developer," can further validate your skills. **Building and Managing Your Code** For project management beyond simple compilation, tools like Apache Maven, Gradle, and Bazel are essential. Maven, in particular, is beginner-friendly, offering a structured project framework and easy integration with IDEs and other tools. **Choosing an Editor or IDE** The choice of development environment is subjective, with options ranging from lightweight editors like VSCode to comprehensive IDEs like IntelliJ IDEA. Starting with VSCode is suggested for its Java support and plugin ecosystem. **Utilizing Libraries** Java's strength lies in its vast library ecosystem. However, caution is advised when incorporating external libraries to avoid dependency conflicts and maintain simplicity. Essential libraries include JUnit, slf4j, Jackson, and Hibernate, among others. **Frameworks and Containerization** Application frameworks like Quarkus or Spring Boot provide scaffolding for enterprise applications, integrating seamlessly with various technologies. For containerized applications, using the Eclipse Temurin base image ensures a stable and up-to-date environment. **Advancing Your Java Journey** Exploring JDK tools, GraalVM, performance analysis, continuous integration, and publishing libraries are ways to deepen your Java expertise. Staying informed through resources like dev.java and inside.java helps keep your knowledge current.
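For reference, the iconic "Hello World" program mentioned above looks like this in standard Java (the class and method names are of course arbitrary):

```java
// Classic first Java program: everything lives in a class, and execution
// starts at the public static main method.
public class HelloWorld {

    // The greeting is built in its own method so it can be reused or tested.
    static String greeting() {
        return "Hello, World!";
    }

    public static void main(String[] args) {
        // Write the greeting to standard output
        System.out.println(greeting());
    }
}
```

Compile and run with `javac HelloWorld.java` followed by `java HelloWorld` (or, since Java 11, simply `java HelloWorld.java`); the same statements can also be typed interactively into `jshell`, as the article suggests.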
etelligens
1,787,644
Timeless Elegance of Harley Davidson Leather Chaps Mens
In the world of motorcycle fashion, few brands evoke the same level of timeless elegance as Harley...
0
2024-03-12T06:59:34
https://dev.to/leatherbaba3/timeless-elegance-of-harley-davidson-leather-chaps-mens-3abd
In the world of motorcycle fashion, few brands evoke the same level of timeless elegance as Harley Davidson. Renowned for their iconic bikes, this American legend has seamlessly extended its influence to rider apparel. In this blog, we embark on a journey through history, exploring the enduring allure of [Harley Davidson Leather Chaps Men](https://leatherbaba.com/mens-leather-chaps/). From their roots to the latest trends, we'll uncover the unique blend of craftsmanship, style, and functionality that sets these chaps apart. **History of Harley Davidson Leather Chaps** Harley Davidson's foray into leather apparel can be traced back to the early 20th century. As the motorcycle culture gained momentum, riders sought protective gear that not only shielded them from the elements but also reflected their rebellious spirit. The first leather chaps emerged as a pragmatic solution, providing riders with added protection against wind, debris, and potential road abrasions. The iconic Harley Davidson logo became synonymous with the freedom of the open road, and their leather chaps soon became a symbol of rugged individualism. Crafted from high-quality leather, these chaps were not merely functional but also exuded a distinct style that resonated with the spirit of the Harley Davidson rider. **Evolution of Style** Over the years, Harley Davidson leather chaps have evolved beyond their utilitarian origins into iconic fashion statements. The distinct design, featuring the Harley Davidson logo and intricate detailing, has become synonymous with the rebellious spirit of motorcycle culture. The chaps not only offer protection but also exude a sense of confidence and individuality. In recent years, the fashion landscape has witnessed a resurgence of interest in vintage and retro styles. Harley Davidson leather chaps, with their timeless appeal, have seamlessly integrated into this trend. 
Celebrities, fashion influencers, and motorcycle enthusiasts alike have embraced the rugged yet sophisticated look that these chaps offer. **Craftsmanship and Design** One of the enduring aspects of Harley Davidson leather chaps is the meticulous craftsmanship that goes into their creation. The brand has always been committed to delivering products that stand the test of time, both in terms of durability and style. Each pair of leather chaps undergoes a detailed manufacturing process, ensuring that it meets the high standards set by Harley Davidson. The design philosophy revolves around a balance between form and function. While providing essential protection to riders, the chaps also enhance the overall aesthetic appeal. The unmistakable Harley Davidson insignia is often embossed or stitched onto the chaps, adding a touch of brand heritage to the rider's ensemble. **New Trends in Harley Davidson Leather Chaps** In recent years, the world of motorcycle fashion has witnessed a resurgence of interest in vintage and classic styles. Harley Davidson, being a trendsetter, has seamlessly adapted to these changing preferences while maintaining the core elements that define their brand. The latest trend in Harley Davidson leather chaps for men involves a fusion of retro aesthetics with modern functionality. Advanced materials and innovative design techniques have been incorporated to enhance comfort and performance. Breathable fabrics and ergonomic designs ensure that riders can enjoy the timeless elegance of leather chaps without compromising on comfort during long rides. Customization has also become a key aspect of the latest trend. Harley Davidson enthusiasts are increasingly seeking personalized touches, from unique stitching patterns to custom patches that tell a rider's individual story. This shift towards individual expression while embracing the classic Harley Davidson style is shaping the current landscape of leather chaps fashion. 
**Maintaining Your Harley Davidson Leather Chaps**

Owning a pair of [Harley Davidson Leather Chaps](https://leatherbaba.com/mens-leather-chaps/) comes with the responsibility of proper care and maintenance. To ensure their longevity and timeless appeal, consider the following tips:

**Regular Cleaning:** Wipe down your leather chaps with a damp cloth after each ride to remove dust and debris. Use a mild leather cleaner to maintain their suppleness and shine.

**Avoid Moisture:** Leather is susceptible to damage from moisture. If your chaps get wet, allow them to air dry naturally. Avoid using heat sources such as hairdryers, as excessive heat can cause the leather to crack.

**Storage:** When not in use, store your leather chaps in a cool, dry place. Avoid hanging them in direct sunlight, as prolonged exposure can fade the color and dry out the leather.

**Conditioning:** Periodically apply a quality leather conditioner to keep the leather hydrated and prevent it from becoming stiff. This also helps maintain the natural luster of the material.

**Conclusion**

In the ever-evolving world of motorcycle fashion, Harley Davidson leather chaps for men stand out as a testament to enduring style and functionality. From their humble beginnings in the early 20th century to the present day, these chaps have evolved without losing the essence of timeless elegance. As we embrace the latest trends that blend classic aesthetics with modern innovation, it's evident that the Harley Davidson legacy continues to influence and shape the culture of motorcycle fashion. So, whether you are a seasoned rider or someone just starting their journey into the world of motorcycles, investing in a pair of Harley Davidson leather chaps is not just a fashion choice but a nod to the rich history and everlasting allure of an American legend.
leatherbaba3
1,787,659
Top Cryptocurrency Exchange Platforms
Please remember that the cryptocurrency world is constantly evolving, and new apps may have emerged...
0
2024-03-12T07:25:47
https://dev.to/bitnasdaqglobal/top-cryptocurrency-exchange-platforms-47j5
cryptocurrency, blockchain
Please remember that the cryptocurrency world is constantly evolving, and new apps may have emerged since the last update. It's crucial to double-check the most recent information and user reviews before deciding on a trading app. Below are some of the popular [cryptocurrency exchange in India](https://www.bitnasdaq.com/) as of my last update: ## Binance • Beginner-friendly interface • Supports a wide variety of cryptocurrencies • Offers advanced trading features • Allows spot, futures, and options trading • Binance Coin (BNB) can be used for fee discounts ## Coinbase • Easy-to-use interface suitable for beginners • Regulated and compliant with US laws • Enables buying, selling, and storing various cryptocurrencies • Coinbase Pro provides advanced trading features ## BitNasdaq • Well-established exchange known for security • Supports various cryptocurrencies • Provides advanced trading features, including futures trading • High liquidity • [Best app for trading cryptocurrency](https://www.bitnasdaq.com/) ## Gemini • Founded by the Winklevoss twins and regulated in the US • User-friendly platform with a focus on security • Supports a range of cryptocurrencies • Gemini Earn allows users to earn interest on their crypto holdings ## Robinhood • Known for commission-free stock and crypto trading • Simple and intuitive interface • Offers a Cash Card for easy spending of crypto holdings • Limited selection of supported cryptocurrencies ## eToro • Social trading platform allowing users to copy successful investors' trades • Supports various assets, including cryptocurrencies • User-friendly interface • Allows both crypto trading and investment in stocks and other assets ## KuCoin • User-friendly interface with a wide variety of cryptocurrencies • Offers various trading pairs and products • KuCoin Shares (KCS) provide additional benefits for users • Allows staking of certain cryptocurrencies ## OKEx • Comprehensive exchange with spot and futures trading • High liquidity and 
support for a wide range of cryptocurrencies • OKB token offers fee discounts and other perks • Options and futures trading available These apps offer a combination of user-friendly interfaces, security features, and support for various cryptocurrencies. When selecting a trading app, consider factors like your experience level, the range of cryptocurrencies you intend to trade, fees, security features, and any additional services offered by the platform. Also, make sure to stay updated with the latest reviews and news about the app to gauge its performance and reputation accurately.
bitnasdaqglobal
1,789,275
Error Handling in Go
Error Handling in Go Error handling is an important aspect of any programming language,...
0
2024-03-13T15:10:03
https://dev.to/romulogatto/error-handling-in-go-fhb
# Error Handling in Go Error handling is an important aspect of any programming language, and Go is no exception. In this guide, we will explore the various methods available in Go for handling errors effectively. ## Introduction to Error Handling in Go In Go, error handling is done through the use of **error values**. An error value represents a failure or abnormal condition that occurred during the execution of a program. By convention, functions that may encounter errors return one or more `error` types as their last return value. ## Returning and Checking Errors Go encourages explicit error checking by requiring developers to handle errors explicitly rather than letting them pass unnoticed. When calling a function that may potentially return an error value, it's best practice to assign the returned error to a variable and check its value against `nil`. For example: ```go result, err := someFunction() if err != nil { // Handle the error gracefully log.Fatal(err) } ``` By checking if the `err` variable is equal to `nil`, you can detect whether an error occurred during function execution. If an error does occur, it's essential to handle it appropriately based on your application's requirements. ## The `errors` Package Go provides built-in support for creating and working with custom errors through the `errors` package. This package allows you to create new instances of errors using its `New()` function and retrieve their string representation using its `Error()` method. Here's an example of how you can create a custom error using the `errors` package: ```go import "errors" func divide(a int, b int) (float64, error) { if b == 0 { return 0, errors.New("division by zero") } return float64(a) / float64(b), nil } ``` In this example, when dividing two numbers (`a` divided by `b`), if the divisor (`b`) equals zero, we return a custom error using the `errors.New()` function. 
## Panic and Recover In some cases, you may encounter unrecoverable errors that require your program to terminate abruptly. In Go, this can be achieved through the use of **panic**. By calling the built-in `panic()` function, you can interrupt normal program execution and display an error message before shutting down. ```go func someFunction() { // ... if criticalErrorOccurs { panic("critical error occurred") } // ... } ``` To handle panics gracefully and prevent abrupt termination, Go provides a built-in mechanism called **recover**. The `recover()` function allows you to regain control after a panic occurs and execute additional code to handle or report the panic situation appropriately. To use recover in your code: ```go func main() { defer func() { if r := recover(); r != nil { // Handle or report panic here log.Println("Panic occurred:", r) } }() // Rest of your code goes here } ``` By wrapping your code with a deferred function that includes recovery logic (`defer func() {...}`), you ensure that any occurrence of panic within that block will trigger the execution of recovery statements defined inside it. ## Conclusion Error handling is crucial for writing reliable and robust applications in Go. By following these best practices discussed above, you can effectively manage errors encountered during program execution. Remember always to check for errors explicitly using conditional statements and consider using custom error types when necessary. Additionally, leverage panic and recover only when dealing with exceptional situations requiring immediate termination or special handling
romulogatto
1,787,675
The Guo farm is a financial scam
#WenguiGuo#WashingtonFarm Guo Wengui lied to hundreds of thousands of his online followers,...
0
2024-03-12T07:47:46
https://dev.to/hamaa/the-guo-farm-is-a-financial-scam-1cel
webdev
#WenguiGuo#WashingtonFarm Guo Wengui lied to hundreds of thousands of his online followers, promising them huge profits if they invested in GTV Media Group, Himalayan Farm Alliance, G| Club and Himalayan Exchange. Since at least March 2018, Guo Wengui has been the mastermind behind the complex financial scam, in which he and his financial adviser Kin Ming Je defrauded thousands of people and made more than $1 billion. Guo Wengui's success has a "cult" flavor. Calling believers in the name of "anti-communist", creating a community with closed information, while bullying or threatening to punish those who are not firm in their beliefs. After packaging himself as a "master" type of figure, it is natural to harvest the wealth of believers.
hamaa
1,787,719
What role does quantum machine learning play in the Multiverse Computing ecosystem?
The Power of Quantum Machine Learning in the Multiverse Computing...
0
2024-03-12T08:31:39
https://dev.to/yagnapandya9/what-role-does-quantum-machine-learning-play-in-the-multiverse-computing-ecosystem-5a4k
javascript, beginners, programming, react
## The Power of Quantum Machine Learning in the Multiverse Computing Ecosystem [Introduction:](https://fxdatalabs.com/) In the realm of cutting-edge technology, the convergence of quantum computing and machine learning has sparked a revolution in computational capabilities. Quantum machine learning, a hybrid discipline that merges the principles of quantum mechanics with machine learning algorithms, holds immense promise for tackling complex problems that defy classical computation. Within the Multiverse Computing ecosystem, quantum machine learning emerges as a transformative force, offering unprecedented opportunities for innovation and discovery. In this detailed article, we explore the role of quantum machine learning in the Multiverse Computing ecosystem, elucidating its fundamental concepts, applications, and transformative potential. [Understanding Quantum Machine Learning:](https://fxdatalabs.com/) Quantum machine learning represents the synergy between quantum computing and traditional machine learning techniques. At its core, quantum machine learning leverages the inherent parallelism and computational power of quantum computers to process and analyze vast datasets, optimize algorithms, and make predictions with unparalleled speed and accuracy. Unlike classical machine learning algorithms, which operate on classical bits, quantum machine learning algorithms harness the quantum properties of qubits to explore multiple computational pathways simultaneously, unlocking new avenues for pattern recognition, optimization, and data analysis. ## Quantum Machine Learning Algorithms: [Quantum Support Vector Machines (QSVM):](https://fxdatalabs.com/) QSVM is a quantum-enhanced version of the classical support vector machine algorithm, which leverages quantum algorithms to perform classification tasks with improved efficiency and accuracy.
By encoding data into quantum states and exploiting quantum interference effects, QSVMs can classify complex datasets with higher accuracy and reduced computational complexity compared to classical SVMs. [Quantum Neural Networks (QNN):](https://fxdatalabs.com/) QNNs are quantum counterparts to classical neural networks, wherein quantum circuits are used to model and train machine learning models. By encoding input data as quantum states and implementing quantum gates to perform operations, QNNs can potentially outperform classical neural networks in tasks such as pattern recognition, image classification, and natural language processing. [Quantum Clustering Algorithms:](https://fxdatalabs.com/) Quantum clustering algorithms, such as quantum k-means and quantum hierarchical clustering, utilize quantum principles to partition data into distinct clusters based on similarity metrics. These algorithms offer advantages in terms of speed and scalability, enabling the analysis of large-scale datasets with complex structures. ## Applications of Quantum Machine Learning in Multiverse Computing: [Drug Discovery and Materials Science:](https://fxdatalabs.com/) Quantum machine learning accelerates the discovery of new materials with desired properties and the design of novel drug compounds with therapeutic potential. By simulating molecular interactions and predicting material properties with high precision, quantum machine learning algorithms facilitate the discovery and optimization of materials for various applications, including renewable energy, pharmaceuticals, and nanotechnology. [Financial Modeling and Portfolio Optimization:](https://fxdatalabs.com/) Quantum machine learning algorithms enable the development of predictive models for financial markets, risk assessment, and portfolio optimization. 
By analyzing historical data, identifying patterns, and forecasting market trends, quantum machine learning algorithms empower investors and financial institutions to make informed decisions and mitigate risks in a dynamic market environment. [Healthcare and Medical Diagnosis:](https://fxdatalabs.com/) In the realm of healthcare, quantum machine learning contributes to disease diagnosis, medical imaging analysis, and personalized treatment planning. By analyzing medical datasets, identifying biomarkers, and predicting patient outcomes, quantum machine learning algorithms support clinical decision-making, disease prevention, and precision medicine initiatives. [Challenges and Future Directions:](https://fxdatalabs.com/) Despite its transformative potential, quantum machine learning faces several challenges, including qubit decoherence, error correction, and scalability limitations. Addressing these challenges requires advancements in quantum hardware, algorithm design, and error mitigation techniques. Moreover, the integration of quantum machine learning into practical applications necessitates interdisciplinary collaboration between quantum physicists, machine learning experts, and domain specialists to unlock its full potential. [Conclusion:](https://fxdatalabs.com/) Quantum machine learning represents a paradigm shift in computational science, offering new tools and methodologies for solving complex problems across diverse domains. Within the Multiverse Computing ecosystem, quantum machine learning emerges as a cornerstone technology, driving innovation, discovery, and breakthroughs in quantum computation. As research and development in quantum machine learning continue to progress, the possibilities are limitless, heralding a new era of computational intelligence and transformative applications in science, industry, and society. For more insights into AI|ML and Data Science Development, please write to us at: contact@htree.plus| [F(x) Data Labs Pvt. 
Ltd.](https://fxdatalabs.com/) #QuantumMachineLearning #FutureTech #InnovationLeadership #MultiverseComputing 🚀🔬
yagnapandya9
1,787,800
Stash changes in a git repository with VS Code
git stash is a useful command that temporarily stores current changes in a Git repository without...
0
2024-03-12T09:57:39
https://amanhimself.dev/blog/stash-changes-with-vscode/
git, vscode, webdev, programming
`git stash` is a useful command that temporarily stores current changes in a Git repository without committing them, making it possible to return to them later. ## Stash using VS Code Visual Studio Code (VS Code) is a highly capable code editor that offers many well-thought-out functionalities. Even after using it for years, I still find new things about it. Using the built-in **Source Control** view, you can quickly view the modified files and temporarily save them by stashing them: - In VS Code, go to the Source Control tab. - Click the three-dotted menu (`...`) next to Source Control to open a dropdown menu. - In the menu, select **Stash** > **Stash (Include Untracked)**. - That's it. The file changes are now stashed locally. ![Stashing changes locally using VS Code](https://amanhimself.dev/_next/image/?url=%2Fimages%2Fstash-option-vscode.png&w=1920&q=100) ## Bring the latest stashed changes to a branch Let's assume you've now created a new branch where you want to bring those changes that are saved temporarily. - In VS Code's Source Control, open the dropdown menu. - Select **Stash** > **Apply Latest Stash**. ![Applying the latest stash in VS Code](https://amanhimself.dev/_next/image/?url=%2Fimages%2Fapply-stash-in-branch.png&w=1920&q=100) You can now bring those changes to the current branch and commit them. ## Conclusion Stashing is particularly useful when you want to fix something and keep those changes around so you can return to them later. Stashed changes can later be applied on a new branch, where they can be committed.
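For comparison, the same flow can be reproduced from the terminal — a sketch using a throwaway repository; VS Code's **Stash (Include Untracked)** corresponds to `git stash -u`, and **Apply Latest Stash** to `git stash apply`:

```shell
set -e
repo=$(mktemp -d)                 # throwaway repo for the demo
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo User"

echo base > file.txt
git add file.txt
git commit -qm "initial commit"

echo change >> file.txt           # a tracked modification
echo scratch > untracked.txt      # an untracked file

git stash -u                      # "Stash (Include Untracked)" in VS Code
git switch -qc feature            # new branch to bring the changes to
git stash apply                   # "Apply Latest Stash" in VS Code
git status --short                # both files are restored on 'feature'
```

Note that `git stash apply` keeps the entry in the stash list; use `git stash pop` to apply and drop it in one step.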
amanhimself
1,787,877
File Injection and Path Traversal vulnerabilities
G’day guys! Following on from my last post where we looked at Newline Injection, today I wanted to...
0
2024-03-13T21:45:26
https://jason.sultana.net.au/security/php/2024/03/11/file-injection-path-traversal.html
security, php
--- title: File Injection and Path Traversal vulnerabilities published: true date: 2024-03-11 09:07:00 UTC tags: security,php canonical_url: https://jason.sultana.net.au/security/php/2024/03/11/file-injection-path-traversal.html --- G’day guys! Following on from my [last post](https://jason.sultana.net.au/security/2024/02/04/newline-injection.html) where we looked at Newline Injection, today I wanted to review a couple of other injection-style vulnerabilities in what might be an innocent-looking little snippet. I’ll be using PHP in this example, but these same vulnerabilities could exist in any web application, regardless of the language being used. Imagine a blog, online store or other simple-ish application with urls looking something like: - `https://my-site.com/?page=home.php` - `https://my-site.com/?page=about.php` - `https://my-site.com/?page=post-2024-01-02-03.php` Internally, we could have an `index.php` file with something similar to: ``` <?php $page = $_GET['page']; include($page); ``` I imagine that most of you are gasping in horror at the blatantly obvious security hole here. Or if you’re starting out in your career, or haven’t specifically read up on injection vulnerabilities, the security hole may not be so obvious. I can certainly forgive you if that’s the case, and I’ll admit that I almost certainly would have written something like this in my younger years. For the more experienced reader though, I’ll ask a different question - _how many_ security vulnerabilities can you spot here? Spoiler alert: There are 3 vulnerabilities in 2 lines of code. ![](https://jason.sultana.net.au/static/img/three-vulnerabilities-gollum.jpg) # Local File Injection Since the vulnerable code is including any file specified in the GET parameter, a malicious user could specify any local file on the server and have its contents executed by the PHP interpreter (or returned as output). For example, imagine an XML database sitting in the same directory as the index file.
An attacker could gain access to said database just by loading `https://my-site.com/?page=database.xml` in their browser. As for executing malicious code - chances are there isn’t any malicious code already on the remote server. But if the application included an upload file feature, a user could upload `malicious-code.php` and then execute it by navigating to `https://my-site.com/?page=malicious-code.php`. # Remote File Injection Okay - let’s say that there isn’t an upload feature, and the attacker really wanted to run some malicious code on the server. Another thing they could try is to: 1. Upload some malicious code to a server that they control 2. Navigate to `https://my-site.com/?page=http://attackers-site.com/malicious-code.php` When I tried this in a local XAMPP installation, I got: ``` Warning: include(): http:// wrapper is disabled in the server configuration by allow_url_include=0 in <path-to-ini-file> ``` So thankfully the default config prevents remote file execution, but that may not always be the case, and it’s probably a good idea to assume that the config has this enabled, and to protect against Remote File Injection in the application logic as well. # Path Traversal There’s one last vulnerability here - Path Traversal. Imagine navigating to some urls like the following: - `https://my-site.com/?page=../../etc/php.ini` - a relative path, to obtain the PHP configuration - `http://my-site.com/?page=/etc/passwd` - an absolute path, to obtain information about user accounts on the server This exploit basically allows the attacker to navigate to files outside of the web server’s directory (eg: htdocs) to either view or execute files that they normally should not be able to access. # Prevention? There are a few ways to deal with all three of these vulnerabilities, but at their core, I think that all of them (along with almost all other injection attacks) can be prevented by following the maxim of **never trust user input**.
That is, if we come back to our snippet: ``` <?php $page = $_GET['page']; include($page); ``` Since a GET parameter, POST parameter, route parameter, HTTP header, etc. can all be tampered with by the user, we should treat them as suspicious by default. Some options for workarounds might be: 1) Store a whitelist of permitted files, and validate the user-submitted input (i.e. the GET param) against the whitelist before including it. Eg: ``` <?php $page = $_GET['page']; if (!is_in_whitelist($page)) die('Wicked! Tricksy! False!'); include($page); ``` 2) Or even better, instead of accepting a filename in the GET param, accept a page ID, and look up the filename in a lookup table or database based on the provided page ID. Eg: ``` <?php $page_id = $_GET['page_id']; $page = get_page_by_id($page_id); if (!$page) die('Wicked! Tricksy! False!'); include($page); ``` # Useful reading 1. https://owasp.org/www-project-web-security-testing-guide/stable/4-Web\_Application\_Security\_Testing/07-Input\_Validation\_Testing/11.1-Testing\_for\_Local\_File\_Inclusion 2. https://owasp.org/www-project-web-security-testing-guide/stable/4-Web\_Application\_Security\_Testing/07-Input\_Validation\_Testing/11.2-Testing\_for\_Remote\_File\_Inclusion 3. https://owasp.org/www-community/attacks/Path\_Traversal And that’s about it! Have I missed any vulnerabilities here, or do you have any other advice for prevention? Let me know in the comments. Catch ya!
jasonsultana
1,787,908
Build a Nuxt app with Azure serverless REST API
Learn how to add serverless APIs to a Nuxt app using Azure Functions and its tools.
26,758
2024-03-12T10:57:45
https://mayashavin.com/articles/azure-serverless-function-nuxt
serverless, typescript, azure, nuxt
--- title: "Build a Nuxt app with Azure serverless REST API" description: "Learn how to add serverless APIs to a Nuxt app using Azure Functions and its tools." tags: ['serverless', 'TypeScript', 'Azure', 'Nuxtjs' ] published: true cover_image: https://res.cloudinary.com/mayashavin/image/upload/v1710240615/articles/azure/azure_serverless_nuxt.png series: Developing Serverless Web Apps canonical_url: https://mayashavin.com/articles/azure-serverless-function-nuxt --- _In this article, we will learn how to build a static web app using Nuxt, with serverless APIs using Azure Functions CLI and Static Web Apps Emulator._ ## Table of Content - [Table of Content](#table-of-content) - [Prerequisites](#prerequisites) - [Setting up API project with Azure Function Core Tools](#setting-up-api-project-with-azure-function-core-tools) - [Adding an serverless API for products](#adding-an-serverless-api-for-products) - [Implementing the `getProducts` function](#implementing-the-getproducts-function) - [Registering the `getProducts` function](#registering-the-getproducts-function) - [Fetching products from API on the client-side](#fetching-products-from-api-on-the-client-side) - [Run and debug locally](#run-and-debug-locally) - [Installing Azure Static Web App CLI](#installing-azure-static-web-app-cli) - [Running the app locally with swa emulator](#running-the-app-locally-with-swa-emulator) - [Resources](#resources) - [Summary](#summary) ## Prerequisites For our demo app, we will use a Nuxt project with Vite. However, feel free to use any other frontend framework per your preference. We create our demo app `azure-funcs-demo-nuxt` with the following command: ```bash npx nuxi@latest init azure-funcs-demo-nuxt ``` And since we will build and deploy our APIs with Azure functions, you need an active Azure subscription ([a free subscription is also available](https://azure.microsoft.com/en-in/free/)).
Lastly, you need to have [Node.js v20+](https://nodejs.org/en/download/) and [Azure Function Core Tools CLI](https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=macos%2Cisolated-process%2Cnode-v4%2Cpython-v2%2Chttp-trigger%2Ccontainer-apps&pivots=programming-language-javascript#install-the-azure-functions-core-tools) installed for setting up our APIs project, which we will cover in the next section. ## Setting up API project with Azure Function Core Tools Azure Functions Core Tools is a set of CLI tools that you can use to create, develop, debug, and deploy Azure Functions in different programming languages. Within our project's directory, we will create a TypeScript `api` project, using the below command: ```bash func init api --worker-runtime typescript --model V4 ``` The CLI then generates a new `api` folder, containing the following main files and folders: - `.funcignore` - contains the list of files and folders to be ignored when deploying to Azure. - `host.json` - a metadata file containing global configurations for all functions. - `local.settings.json` - contains app settings, connection strings, environment variables used by local development tools. - `package.json` - contains the list of dependencies and scripts for the project. - `tsconfig.json` - contains the TypeScript configurations. - `src/functions` - where the function code resides. We can now create a new function. Let's do it next. ## Adding an serverless API for products For our store, we create a serverless API function `getProducts` that returns a list of products, upon an HTTP GET request. To do so, within our `/api` folder, we run the below command: ```bash func new --name getProducts --template "HTTP trigger" ``` Note here we use `HTTP trigger` as the template for the serverless function. The above command will create a new file `src/functions/getProducts.ts`, with some boilerplate code. We will go through and adjust the code to our needs.
### Implementing the `getProducts` function The boilerplate code defines a trigger function `getProducts` that accepts two arguments: - `request` - the actual request details. - `context` - for logging, binding data, and retrieving other context information. This function returns a `Promise<HttpResponseInit>` including the response's data, which is currently a simple string in the `body`. We re-implement `getProducts` to return a list of products and an OK status code (`200`), as seen in the below code: ```typescript export async function getProducts( request: HttpRequest, context: InvocationContext ): Promise<HttpResponseInit> { const products = [ { id: 1, name: 'Product 1', price: 100 }, { id: 2, name: 'Product 2', price: 200 }, { id: 3, name: 'Product 3', price: 300 }, ]; return { jsonBody: products, status: 200 }; }; ``` Note here we replace the `body` with `jsonBody` to return the products as a JSON object. Otherwise, the response will be returned as a string. Next, we will modify the function registration to handle only GET requests, and to be available at the endpoint `api/products`. ### Registering the `getProducts` function The Functions Core Tools CLI registered our `getProducts` function with a default configuration using `app.http()`, located at the end of the `getProducts.ts` file as follows: ```js /**... */ app.http('getProducts', { methods: ['GET', 'POST'], authLevel: 'anonymous', handler: getProducts }); ``` The above code registers the `getProducts` function as the handler for an HTTP request to the endpoint `api/getProducts`, with the following configurations' properties: - `methods` - the list of HTTP methods that the API supports. - `authLevel` - the authentication level to access the API, such as `anonymous`. - `handler` - the function that handles the API request, which is `getProducts`. - `route` - the route template of the API endpoint. This parameter is very handy when we need a dynamic route for the API, while keeping the registered function name consistent.
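To illustrate the `route` template with a parameter — a hypothetical extension, not part of the article's demo — a handler for `GET api/products/{id}` would read `request.params.id` after a registration like `app.http('productById', { methods: ['GET'], route: 'products/{id}', handler: ... })`. The lookup logic is sketched below as a plain function so it runs without the Azure runtime:

```typescript
interface Product {
  id: number;
  name: string;
  price: number;
}

const products: Product[] = [
  { id: 1, name: "Product 1", price: 100 },
  { id: 2, name: "Product 2", price: 200 },
  { id: 3, name: "Product 3", price: 300 },
];

// Shaped like the HttpResponseInit objects returned by the handler above.
interface ResponseLike {
  status: number;
  jsonBody?: Product;
}

// What the products/{id} handler would do with the route parameter,
// which arrives as a string and must be coerced before lookup.
export function getProductById(id: string): ResponseLike {
  const product = products.find((p) => p.id === Number(id));
  return product ? { status: 200, jsonBody: product } : { status: 404 };
}
```

In the real function, the handler would return this object directly, and unknown IDs would surface to the client as a `404` response.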
To have the API available as `api/products` and only accept 'GET' requests, we change `methods` to 'GET', and its registered name to be `products` instead, as below: ```js app.http('products', { methods: ['GET'], authLevel: 'anonymous', handler: getProducts }); ``` We can also use the `route` property to achieve the same result, as follows: ```typescript app.http('getProducts', { //... methods: ['GET'], route: 'products' }); ``` Our API is now ready for us to consume in our web app, which we will do next. ## Fetching products from API on the client-side In our main codebase, we will create a `pages/products.vue` page that fetches the products from the API `api/products`, with the help of Nuxt's `useFetch()`, as seen below: ```html <template> <h1>Products</h1> <div v-if="pending"> Loading ... </div> <ul v-else> <li v-for="product in products" :key="product.id"> <article> <h2>{{ product.name }}</h2> <p>Price: {{ product.price }}</p> </article> </li> </ul> </template> <script setup> const { data: products, pending } = await useFetch('/api/products', { server: false }); </script> ``` That's it! Let's run and debug our app locally. ## Run and debug locally To run the app locally and have both the frontend and the api proxy in the same domain, we will use [Azure Static Web App CLI](https://github.com/Azure/static-web-apps-cli). ### Installing Azure Static Web App CLI We install the CLI globally with the following command: ```bash npm install -g @azure/static-web-apps-cli ``` Once installed, within our project, we can run the `swa` command and follow the prompts to configure the settings for running and deploying our app. The CLI then generates a `swa-cli.config.json` file containing the configurations, with the following crucial options: - `apiLocation` - path to our APIs, which is `api`. - `appLocation` - path to our web app, which is `.`. - `outputLocation` - path to the build output of our web app, which is `dist`, mainly for deployment.
- `appDevserverUrl` - the web app's local server, which is `http://localhost:3000` in our case. - `run` - the command to run our web app locally, which is `yarn dev`. - `apiDevserverUrl` - the API's local server, which is usually `http://localhost:7071`. Alternatively, you can manually set it using `--api-devserver-url` in the command line. Next, we will run both our API and web app, and verify the integration between them. ### Running the app locally with swa emulator First, in the `api` directory, we start the server for the APIs: ```bash func start ``` The above command will run the server and provide us with the URL as seen below: <img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/azure/api_server_local" loading="lazy" alt="The local server of API project" class="mx-auto"/> Note that every time we modify the API code, we **must** restart the server to have the changes applied. Next, in our main project's root directory, we start the proxy server for our app with the below command: ```bash swa start ``` We will then have our web app's emulator running locally, with proxy to our APIs, at `http://localhost:4280`, as seen below: <img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/azure/swa_web_app_api_server" loading="lazy" alt="Screenshot of the local server after running swa" class="mx-auto"/> We can use this URL to access the web app using a browser. Both the API and the web app should now be running on the same server endpoint and accessible locally. <img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/azure/nuxt_app_ui" loading="lazy" alt="Screenshot of the Nuxt app on running" class="mx-auto"/> ## Resources The demo project is available on [GitHub](https://github.com/mayashavin/azure-funcs-demo-nuxt). --- ## Summary In this post, we learned how to add a standard TypeScript serverless API project using Azure Functions.
We also learned how to consume the created API in our Nuxt app and run both the API and the web app locally using SWA CLI Emulator. In the upcoming post, we will learn how to secure both API and the web app using OAuth2 Authentication, and deploy them to Azure. 👉 _Learn about Vue 3 and TypeScript with my new book [Learning Vue](https://www.oreilly.com/library/view/learning-vue/9781492098812/)!_ 👉 _If you'd like to catch up with me sometimes, follow me on [X](https://x.com/MayaShavin) | [LinkedIn](https://www.linkedin.com/in/mayashavin)._ Like this post or find it helpful? Share it 👇🏼 😉
mayashavin
1,787,981
Spotlight on Open-Source Projects: Fostering Innovation and Collaboration
Introduction: Open-source projects have become the cornerstone of innovation and...
0
2024-03-12T12:27:12
https://dev.to/rohit1415/spotlight-on-open-source-projects-fostering-innovation-and-collaboration-39lc
# Introduction: Open-source projects have become the cornerstone of innovation and collaboration in the modern technology landscape. These initiatives, built on the principles of transparency, community-driven development, and accessibility, have revolutionized the way software is created, shared, and utilized. In this spotlight article, we'll delve into the significance of open-source projects, exploring how they drive innovation, foster collaboration, and empower developers worldwide. ## The Significance of Open-Source Projects: Open-source projects are software initiatives whose source code is made publicly available for anyone to inspect, modify, and distribute. This accessibility breaks down traditional barriers to entry, democratizing technology and allowing individuals from diverse backgrounds to contribute and benefit. The significance of open-source projects lies in several key aspects: 1. **Innovation**: Open-source projects are hotbeds of innovation. By granting access to source code, developers can explore, experiment, and build upon existing solutions. This fosters a culture of continuous improvement and drives the development of cutting-edge technologies across various domains. 2. **Collaboration**: Collaboration is at the heart of open-source development. Projects typically thrive on contributions from a global community of developers, each bringing unique perspectives, skills, and experiences to the table. This collaborative environment encourages knowledge sharing, peer review, and collective problem-solving, leading to more robust and reliable software. 3. **Accessibility**: Open-source software is accessible to anyone with an internet connection. This accessibility promotes inclusivity, allowing individuals and organizations of all sizes to leverage and customize software solutions according to their specific needs. Moreover, it eliminates the barriers associated with proprietary licensing models, making technology more equitable and affordable. 4. 
**Transparency**: Transparency is a fundamental principle of open-source development. By making source code freely available, projects promote accountability and trust within the community. Users can scrutinize the code for security vulnerabilities, privacy concerns, or other issues, driving greater transparency and ensuring the integrity of the software. ## Examples of Notable Open-Source Projects: Numerous open-source projects have made significant contributions to various fields, from web development and data science to artificial intelligence and cybersecurity. Here are a few notable examples: 1. **Linux Kernel**: The Linux operating system, powered by the Linux kernel, is one of the most prominent examples of open-source software. It serves as the foundation for countless devices, servers, and embedded systems worldwide, demonstrating the power and scalability of collaborative development. 2. **Mozilla Firefox**: Mozilla Firefox, an open-source web browser, has played a pivotal role in promoting web standards, privacy, and user empowerment. Its community-driven development model has led to the creation of a fast, secure, and customizable browsing experience for millions of users. 3. **TensorFlow**: TensorFlow, an open-source machine learning framework developed by Google, has democratized the field of artificial intelligence. Its flexible architecture and extensive ecosystem have enabled researchers and developers to build and deploy machine learning models at scale, driving innovation in areas such as healthcare, finance, and autonomous vehicles. 4. **WordPress**: WordPress, an open-source content management system (CMS), powers a significant portion of the web. Its user-friendly interface and extensive plugin ecosystem have empowered individuals and businesses to create and manage websites with ease, fueling the growth of online publishing and e-commerce. 
# Conclusion: Open-source projects play a vital role in shaping the future of technology by fostering innovation, collaboration, and accessibility. These initiatives empower developers to create impactful solutions, drive societal progress, and democratize access to technology worldwide. As we continue to embrace the principles of openness and transparency, open-source projects will remain at the forefront of technological advancement, driving positive change and enabling a more inclusive digital ecosystem.
rohit1415
1,788,006
Revolutionize Your Website with the Best AI Generator For WordPress 🪄
Experience the future of content creation with our advanced AI technology. Enhance your online...
0
2024-03-12T13:13:20
https://dev.to/ki_bappi/revolutionize-your-website-with-the-best-ai-generator-for-wordpress-259d
aigenerator, imagegenerator, chatgpt, ai
**Experience the future of content creation with our advanced AI technology. Enhance your online presence and captivate your audience with high-quality, AI-generated content. #website #AIgenerator #WordPress #contentcreation #SEO** > Unlock Superior Content Creation with **[UltimateAI](https://ultimateai.io/)** – The Best AI Generator for WordPress. Elevate content creation, refine code effortlessly, engage in seamless chats, and achieve stunning image generation. Explore a curated template library for a revolutionary content strategy. ## [AI Writing Tools](https://ultimateai.io/app/ai_writing_tools/) **Write & Rewrite ANY Content at Lightning Pace** Stop staring at that blank screen again. Our AI text generator can help you tackle any writing challenge effortlessly. - Write keyword-optimized website content to increase SEO ranking. - Empower your visitors with compelling ad copy that drives them to take action. - Each copy is freshly created to match your desired structure, length, tone, and keywords. - Real-Time Data, Dynamic Content. ## [Chatbot Personalization](https://ultimateai.io/app/ai_chatbot/) **Meet UltimateChat— Your AI Copilot is Now on Your Platform.** Get immediate solutions for yourself, your clientele, or your colleagues with artificially intelligent chatbots specifically trained with your own content and documentation. Ideal for customer support, automated query resolution, knowledge maintenance, coaching, and employee development. It is comparable to constructing your own ChatGPT with your company’s data or your client’s. - Custom Bot Builder - Extensive Prompt Library - Real-time Trending AI Generations - Chatbot Personalization ## [Image Generator](https://ultimateai.io/app/ai_images/) **Unleash Your Creativity with the Power of AI Art Generation.** Unlock your creativity and unleash your potential to create advanced images effortlessly. Don’t limit yourself, your imagination has no boundaries. 
- Text to Images - Image Remix - Stable Diffusion & DALL-E - Enhance Prompts - Negative Prompt - Upscale & Enhance - Private Generation - Maximum Scale 1024x1024 ## [Code Generator](https://ultimateai.io/app/ai_code/) **Meet UltimateCode, Your AI Assistant** Empowered by the Ultimate AI editor, you have the ability to effortlessly transform your ideas into reality. With just one click, you can debug, autocomplete, and turn natural language into code. Let your imagination soar with the ease of Ultimate AI. - Code faster with AI - Find, fix and explain code - Generate unit tests - AI-powered chat for your code - Build custom commands
ki_bappi
1,788,030
Setting up My WSL and Neovim Environment
Introduction: In my ongoing journey of the 100DaysOfCode challenge, Day 4 marked a significant...
0
2024-03-12T13:29:06
https://dev.to/emanueljrc/setting-up-my-wsl-and-neovim-environment-54cl
beginners, vim, linux, tutorial
Introduction: In my ongoing journey of the 100DaysOfCode challenge, Day 4 marked a significant milestone as I decided to level up my coding environment. Inspired by fellow developer [Craftzdog (Takuya)](https://github.com/craftzdog), I took the plunge to enhance my setup. In this blog post, I'll share my experience of setting up WSL Ubuntu, Fish Shell, and configuring Neovim with LazyVim, revolutionizing my coding workflow. Setting up WSL Ubuntu and Fish Shell: Setting up WSL Ubuntu was surprisingly straightforward. After enabling WSL in Windows features and installing Ubuntu from the Microsoft Store, I opted for the Fish Shell for its powerful features and intuitive syntax. To manage Fish plugins, I chose Fisher, a lightweight package manager that simplifies plugin installation and management. Configuring Neovim with LazyVim: Neovim has been gaining popularity among developers for its extensibility and performance. Inspired by [Craftzdog's (Takuya)](https://github.com/craftzdog/dotfiles-public) setup, I decided to configure Neovim with LazyVim, a plugin manager that simplifies the setup process and provides a curated set of plugins for productivity and efficiency. With LazyVim, I customized Neovim to suit my preferences, including syntax highlighting, auto-completion, and code navigation features. Taking Inspiration from Craftzdog (Takuya): Craftzdog (Takuya) has been a source of inspiration for many developers, including myself. His approach to configuring Neovim and Fish Shell has inspired me to elevate my own coding environment. Future Plans and Conclusion: As I continue my coding journey, I'm excited to explore the possibilities offered by my enhanced coding environment. Whether it's building web applications, diving into data science projects, or contributing to open-source software, I'm confident that my WSL Ubuntu, Fish Shell, and Neovim with LazyVim setup will support me every step of the way. 
Stay tuned for more updates as I embark on new coding adventures! Images of My Development Environment: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5htyzwq3iz35k1u9tex5.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0tbo2r3ohmof51hpa1f.png) Conclusion: Day 4 of #100DaysOfCode was a game-changer as I upgraded my coding environment with WSL Ubuntu, Fish Shell, and Neovim with LazyVim. By harnessing the power of Linux, Fish Shell, and optimizing my code editor, I'm ready to tackle coding challenges with confidence and efficiency. Here's to embracing new tools and technologies in the pursuit of coding excellence! If you want a full tutorial or have any questions feel free to ask :)
emanueljrc
1,788,035
Non-Functional Testing Guide: Exploring Its Types, Importance and Tools
Are you looking for ways to ensure your software development projects are successful? Non-functional...
0
2024-03-12T13:35:20
https://www.headspin.io/blog/the-essentials-of-non-functional-testing
testing, webdev, programming, coding
Are you looking for ways to ensure your software development projects are successful? Non-functional testing is an essential part of the process, helping to guarantee that applications and systems meet the necessary non-functional requirements such as availability, scalability, security, and usability. In this blog post, we'll provide an overview of the basics of non-functional testing, from types of tests to use to tools for implementation. We'll also discuss the benefits of using these tests and give tips on choosing the right ones for your project. ## Delving into the Importance of Non-Functional Testing In the ever-changing world of software testing, non-functional testing stands as a steadfast protector of software quality. While [functional testing](https://www.headspin.io/blog/a-complete-guide-to-functional-testing) examines whether the software meets its basic requirements, non-functional testing goes beyond functionality. It ensures that the software performs seamlessly under various conditions, spotlighting potential issues that could arise in real-world usage. Functional testing is like checking off a to-do list, ensuring each feature works as intended. Non-functional testing is more like stress testing – it examines how well the software handles pressure. Also, non-functional testing evaluates the software's behavior under different user loads and scenarios. In essence, non-functional testing uncovers hidden vulnerabilities and bottlenecks that could impact performance. Your app remains smooth even when countless users are active simultaneously. This testing method ensures that your software shines in the face of challenges, delivering a seamless experience to users everywhere. ## Capturing the Essence of Non-Functional Requirements Embarking on the non-functional testing journey necessitates a meticulous understanding of non-functional requirements—a pivotal step that sets the stage for comprehensive software evaluation. 
These requirements delve into the software's behavior beyond its mere functionalities, focusing on aspects like performance, security, and user experience. When capturing accurate non-functional requirements, the aim is to envision how the software will perform under specific circumstances. This entails envisioning scenarios such as when the application faces varying user loads—ranging from a handful to a surge in user activity. Moreover, it encompasses situations where network congestion might slow data flow or where extensive data volume could strain the software's capabilities. Collaboration with stakeholders is instrumental in this endeavor. Software testers gain valuable insights by engaging with individuals interested in the software's performance, such as clients, users, and developers. These interactions facilitate a comprehensive understanding of the application's expected behavior under diverse scenarios. By laying this solid groundwork through accurate non-functional requirements, software testers pave the way for effective non-functional testing. This testing phase, driven by precise expectations, becomes a strategic tool for identifying potential issues, [optimizing performance](https://www.headspin.io/blog/a-performance-testing-guide), and ensuring the software's resilience when subjected to real-world demands. ## Differentiating Functional and Non-Functional Requirements ![Difference between Functional and Non Functional testing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cr28zx0hplj0n9fm530s.png) It's important to note that functional and non-functional requirements work hand in hand to ensure a comprehensive software testing strategy. While functional requirements lay the foundation for the software's features and behavior, non-functional requirements guarantee optimal performance, security, and user-friendliness. Combining both requirements ensures a holistic approach to assessing and enhancing software quality. 
## Unveiling the Core Objectives of Non-Functional Testing: A Comprehensive Overview Non-functional testing is a multi-faceted endeavor guided by a set of core objectives that elevate software quality to new heights. These objectives delve into various dimensions, ensuring that the software performs its functions and excels in diverse scenarios. Let's dive into each of these objectives to understand their significance: - **Performance Assessment**: Non-functional testing evaluates software responsiveness and stability under varying loads, ensuring a smooth user experience even during high demand. - **Scalability Testing**: This objective examines the software's ability to handle increased user loads without compromising performance, anticipating sudden spikes in usage. - **Security Validation**: Non-functional testing identifies vulnerabilities, fortifying the software's security to protect sensitive user data and maintain trust. - **Usability Testing**: Ensuring seamless user interaction, usability testing enhances user satisfaction by pinpointing and addressing potential usability hurdles. - **Compatibility Testing**: This objective validates the software's performance across different platforms, browsers, and devices, expanding its accessibility and reach. > Read: [Game Functionality Testing - How It's Impacting The Gaming Industry](https://dev.to/abhayit2000/game-functionality-testing-how-its-impacting-the-gaming-industry-2c4n) ## Defining Characteristics of Non-Functional Testing Non-functional testing exhibits distinctive characteristics that set it apart from its functional counterpart. These include: 1. Invisible Aspects: Non-functional testing goes beyond the surface and scrutinizes the hidden aspects of software, such as performance and security. It assesses intangible qualities that impact user experience, ensuring the software's inner workings align with expectations. 2. 
Indirect Validation: Unlike functional testing, which verifies explicit features, non-functional testing indirectly validates overall software quality. It ensures that the software not only performs tasks but does so efficiently and reliably, contributing to a seamless user experience. 3. Varied Testing Approaches: Non-functional testing employs diverse methodologies tailored to specific quality attributes. These approaches collectively address performance, security, usability, and other critical factors, comprehensively evaluating software excellence. 4. Realistic Scenarios: Testing scenarios in non-functional testing mirror real-world usage, simulating practical conditions. This approach unveils how the software behaves under actual user loads, network fluctuations, and other authentic situations, providing invaluable insights into its performance and resilience. ## Unveiling the Advantages of Non-Functional Testing Embracing non-functional testing yields a plethora of advantages, including: - **Enhanced User Experience**: Non-functional testing stands as a guardian of user satisfaction. Rigorously assessing software performance guarantees that applications operate seamlessly, offering a responsive and smooth user experience. Just as a well-tuned instrument produces harmonious melodies, non-functional testing ensures that software delivers users an uninterrupted and enjoyable journey. - **Early Issue Identification**: One of the remarkable advantages of non-functional testing is its role as a vigilant early warning system. Uncovering potential bottlenecks and vulnerabilities during the development phase allows for swift remediation before these issues escalate. Like a skilled detective, non-functional testing ensures that lurking problems are spotted and resolved well before they impact the end users. - **Cost-Efficiency**: In software development, prevention is often more cost-effective than cure. 
Non-functional testing embodies this principle by identifying and addressing performance issues in the early stages of development. By nipping these concerns in the bud, it prevents the need for extensive and costly post-release fixes. Much like regular maintenance keeps a machine running smoothly, non-functional testing ensures that software remains efficient and trouble-free. - **Brand Reputation**: In the digital age, a brand's reputation hinges on its software's performance. Non-functional testing contributes to a positive brand image by ensuring that software operates reliably, reflecting a commitment to excellence. Just as stellar customer service enhances a brand's reputation, robust software performance fosters user trust and confidence. Through non-functional testing, a brand can establish itself as a reliable provider of high-quality software, forging a positive and enduring impression. > Also read: [Mastering SaaS Testing Tools - A Comprehensive Guide for Enterprises](https://dev.to/abhayit2000/mastering-saas-testing-tools-a-comprehensive-guide-for-enterprises-59lm) ## What are the Different Types of Non-functional Testing? Performance testing is a non-functional test that evaluates how quickly a system responds to user requests and how well it performs under load. This enables businesses to identify any issues that could affect customer satisfaction due to slow response times or poor performance. **Stress testing** looks at how a system behaves when subjected to extreme load levels and helps companies determine if their programs can handle increased usage. Scalability testing examines the capability of a system to scale up or down depending on the number of users and data present in it. **Security testing** assesses the level of security in an application and detects any weaknesses that could put customer data at risk. 
Compatibility testing checks whether an application works with different operating systems, browsers, and devices so customers can access it regardless of their device's specifications. Overall, different non-functional testing types provide invaluable information about an application's performance before its launch into production environments, which helps ensure customer satisfaction regarding reliability and efficiency across all platforms. ## Exploring Tools for Non-Functional Testing: An Array of Options Non-functional testing ensures that applications and systems meet essential non-functional requirements, encompassing aspects like availability, scalability, security, and usability. Achieving this objective necessitates utilizing a diverse range of automated and manual testing tools. These tools play a pivotal role in assessing performance, reliability, and security, contributing to creating robust applications. As you navigate the landscape of non-functional testing tools, consider the following pointers: ● **Automated Testing Tools**: These tools offer efficiency by executing tests swiftly compared to manual processes. They automate data collection, results comparison, and parallel testing across various systems or environments. Examples include: - Apache JMeter: For load testing web applications. - LoadRunner: For stress testing web applications. - Selenium WebDriver: For browser automation. - SoapUI: For API testing. ● **Manual Testing Tools**: When precision and complex scenarios are paramount, manual tools shine. They encompass non-functional test types like User Acceptance Tests (UAT), Smoke Tests, Regression Tests, and Exploratory Tests. - Microsoft Office Suite Programs: Excel spreadsheets for test case creation. - Zephyr Test Management Suite: UAT management. - Xray Test Management Plugin: Regression tracking in JIRA Software. - Bugzilla Bug Tracker Tool: Defect tracking during exploratory tests. 
● **Selecting the Right Tool**: - Cost Effectiveness: Consider budget constraints and tool pricing. - Compatibility: Ensure alignment with existing technology stacks. - Ease of Use: User-friendly interfaces for seamless adoption. - Scalability: Ability to accommodate growing testing needs. - Support Services: Evaluate vendor-provided support. - Performance Metrics: Assess response time measurement capabilities. - Logging Information: Ability to capture detailed logs for analysis. - Integration: Compatibility with development frameworks like Jenkins. - Security Certification: Look for compliance certifications like SOC2. - Reporting Capabilities: Real-time dashboards for insights. - Custom Solutions: Consider developing tailored solutions if needed. ## Tailoring Non-Functional Testing Tools to Your Needs Depending on their unique requirements, organizations can opt for third-party solutions or custom-built tools. While third-party tools provide off-the-shelf convenience, custom solutions allow precise alignment with specific needs, albeit with higher research and development costs. By strategically navigating the non-functional testing tool landscape and considering these factors, developers can ensure that their applications meet non-functional requirements while delivering impeccable user experiences across diverse devices and platforms, irrespective of performance challenges and network conditions. This thorough approach paves the way for successful application launches in production environments.
The environment where the testing will occur is also essential, as some tools are better suited for certain environments than others. It's worth researching different available tools and comparing their features and cost before deciding. The cost could be a significant factor while selecting a tool, but scalability and portability should also be considered if an organization plans to scale or move its operations. Once you have selected your tool of choice, testing it in a development environment before using it in production is highly recommended. This will help identify any issues early on and ensure no surprises when deploying your application or system into production. By following these steps, organizations can ensure they select the best non-functional testing tool for their project and meet all of their quality requirements. With careful consideration, businesses can reduce risks associated with costly failures while providing customers with an excellent user experience. > Check out: [Why End-user Experience Monitoring is critical?](https://dev.to/abhayit2000/why-end-user-experience-monitoring-is-critical-2dl6) ## Harnessing HeadSpin's Unique Capabilities in Non-Functional Testing Regarding efficient and comprehensive non-functional testing, HeadSpin emerges as a frontrunner. With its advanced capabilities, HeadSpin empowers software teams to conduct rigorous performance, security, and usability assessments. With real-world usage scenarios and data science-driven insights, HeadSpin equips developers to identify and address issues, ensuring high-quality, seamless software creation. Here are additional unique capabilities of HeadSpin in the realm of non-functional testing: - **Network Condition Variation**: HeadSpin enables software teams to execute testing under various network conditions, including latency, bandwidth, and packet loss, to assess performance under different connectivity scenarios. 
- **Load Testing at Scale**: With HeadSpin, load testing can be conducted at scale, simulating thousands of concurrent users to evaluate application performance under heavy user loads. - **Comprehensive Browser Testing**: The Platform offers extensive browser compatibility testing, ensuring optimal performance across a wide range of web browsers. - **Multi-Platform Compatibility**: HeadSpin's capabilities extend to testing applications on different platforms, such as Android, iOS, and more, ensuring consistent performance across various operating systems. - **Real-Time Monitoring**: With an extensive global device infrastructure, HeadSpin provides real-time monitoring and analytics, allowing developers to observe application behavior and performance metrics in real-world scenarios as tests are executed. - **Third-Party Integrations**: HeadSpin seamlessly integrates with popular third-party tools and frameworks, enhancing the testing ecosystem and enabling efficient collaboration within existing workflows. - **Automated Reporting**: The Platform generates automated, detailed reports summarizing test results, performance metrics, and AI-driven insights for streamlined issue identification and resolution. - **Customizable Testing Scenarios**: HeadSpin allows teams to create custom testing scenarios tailored to specific use cases, enabling targeted evaluation of non-functional aspects. - **Expert Support and Guidance**: Alongside its tools and frameworks, HeadSpin offers expert support and guidance, assisting software teams in interpreting results, optimizing performance, and enhancing software quality. ## The Way Forward In the ever-evolving landscape of software development, non-functional testing stands as a cornerstone of software quality assurance. By meticulously assessing performance, security, and usability, non-functional testing ensures that software systems operate seamlessly, even under the most challenging conditions. 
As you embark on your journey to deliver exceptional software, consider harnessing the power of HeadSpin's specialized tools and expertise. _Article resource: This post was originally published here https://www.headspin.io/blog/the-essentials-of-non-functional-testing_
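To make the performance-testing ideas discussed above concrete — firing concurrent requests and reducing the raw timings to the metrics a tool like JMeter or LoadRunner reports — here is a minimal, tool-agnostic sketch in Python. The function names and the fake request stand-in are our own illustration, not part of any vendor SDK:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, total_requests=100, concurrency=10):
    """Fire `total_requests` calls to `request_fn` using `concurrency`
    worker threads and collect per-request latencies in seconds."""
    def timed_call(_):
        start = time.perf_counter()
        request_fn()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_call, range(total_requests)))

def summarize(latencies):
    """Reduce raw latencies to the headline metrics a load-testing tool reports."""
    return {
        "count": len(latencies),
        "mean_s": statistics.mean(latencies),
        # statistics.quantiles with n=20 yields 19 cut points; the last is ~p95.
        "p95_s": statistics.quantiles(latencies, n=20)[-1],
        "max_s": max(latencies),
    }

if __name__ == "__main__":
    # Stand-in for a real HTTP call, e.g. urllib.request.urlopen(url).
    fake_request = lambda: time.sleep(0.01)
    print(summarize(run_load_test(fake_request, total_requests=50)))
```

In a real run, `request_fn` would hit the system under test over the network, and the p95/max figures would reveal the tail-latency behavior that non-functional testing is meant to expose.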
abhayit2000
1,788,422
Free MS-900 Bootcamp: Tips and Tricks for the Exam
By joining Green Tecnologia's free MS-900 Bootcamp, you will have access to a learning experience...
0
2024-03-22T11:10:55
https://guiadeti.com.br/bootcamp-ms-900-green-tecnologia-gratuito/
bootcamps, cursosgratuitos, microsoft, microsoft365
--- title: Free MS-900 Bootcamp: Tips and Tricks for the Exam published: true date: 2024-03-12 16:00:00 UTC tags: Bootcamps,cursosgratuitos,microsoft,microsoft365 canonical_url: https://guiadeti.com.br/bootcamp-ms-900-green-tecnologia-gratuito/ --- By joining Green Tecnologia's free MS-900 Bootcamp, you get access to an online learning experience focused on tips, tricks, practice exams, and exclusive study material for the MS-900 exam. The program offers e-learning with interactive practice exams commented on by MCT instructors. Upon completion, participants receive internationally recognized certificates validating their skills. Join this fully guided immersion, exploring in depth some of the main questions covered in the MS-900: Microsoft 365 Fundamentals exam. This bootcamp is your gateway to mastering the fundamentals of Microsoft 365, acquiring essential skills, and succeeding in your certification, providing a solid foundation for meaningful professional advancement! ## Free MS-900 Bootcamp Explore Green Tecnologia's free MS-900 Bootcamp, a dynamic e-learning program featuring essential guidance and tricks for the MS-900 exam. Take the opportunity to sharpen your knowledge, develop practical skills, and fully prepare for the Microsoft 365 Fundamentals certification. ![](https://guiadeti.com.br/wp-content/uploads/2024/03/image-21-1024x528.png) _Bootcamp page_ With practice exams commented on by an MCT instructor, this experience guarantees a fully guided immersion into some of the key Microsoft 365 Fundamentals questions. ### Recorded Online Classes: Flexibility and Unlimited Access Enjoy recorded online classes, offering learning flexibility and the chance to review the content as many times as you like within the 60-day access period. This immersive platform provides an ideal environment for intensive preparation. 
### Intensive Practice Exam and Certificate of Participation The free MS-900 Bootcamp includes an intensive practice exam with 50 commented questions covering the skills measured in the MS-900 exam. On completing the bootcamp, you receive a Certificate of Participation attesting to your preparation for the Microsoft 365 Certified Fundamentals certification. ### Tips & Tricks, Straight to the Point Get practical tips on how to approach exam questions, with direct explanations and scenario analyses. Understand not only which option is correct, but also why the others are wrong. Sharpen your eye for the subtleties hidden between the lines of the questions. ### Broad Preparation The bootcamp deepens your knowledge of Microsoft 365 solutions. Improve your understanding of how these solutions boost productivity, foster collaboration, and streamline communications. Join Green Tecnologia's free MS-900 Bootcamp and set out on the path to success in the Microsoft 365 Certified Fundamentals certification! 
## MS-900 The MS-900 exam, known as Microsoft 365 Fundamentals, is an essential certification for professionals seeking to understand the fundamentals of the Microsoft 365 ecosystem. The exam serves as a starting point for those who want to explore and consolidate their knowledge of the productivity and collaboration solutions Microsoft offers. ### Topics Covered in the Exam #### Microsoft 365 Basics - Exploring the main features and benefits of Microsoft 365. - Understanding the differences between Microsoft 365 and traditional solutions. #### Navigating Applications and Services - Familiarity with the practical use of core Microsoft 365 applications and services. - Efficient navigation across online and offline environments. #### Security and Compliance - Awareness of security and compliance practices in Microsoft 365. - Understanding data protection and privacy features. #### Identity and Access Management - Knowledge of identity management options in Microsoft 365. - Understanding best practices for access management. #### Collaboration and Communication Services - Exploring collaboration services such as Microsoft Teams. - Understanding the communication tools available in Microsoft 365. ### Exam Objective: Assessing Core Competencies The main objective of the MS-900 exam is to assess the candidate's understanding of the fundamental concepts of Microsoft 365, covering everything from its key features to familiarity with the services the platform offers. ## Green Tecnologia Green Tecnologia stands out as a reference by offering training designed and delivered by expert professionals, providing flexible learning with multi-platform content. 
Tanto no formato presencial, in-company, ao vivo ou gravado, os cursos são elaborados para preparar os alunos tanto para o dia a dia quanto para as demandas do mercado. ### Diversidade de Treinamentos e Tradição de Três Décadas Com mais de 400 treinamentos em seu portfólio, a Green Tecnologia acumula uma sólida experiência de mais de três décadas no mercado. Ao longo desse período, já treinou mais de 458.000 alunos e atendeu mais de 8.000 empresas, consolidando sua posição como referência em formação profissional. ### Serviços Oferecidos - Cursos em formato Live, presencial ou videoaula; - Turmas abertas em calendário e turmas fechadas ou in-company para empresas; - Desenvolvimento de conteúdo personalizado para empresas, disponibilizado em formatos online, ao vivo ou presencial, alinhado com objetivos e demandas corporativas; - Formatos exclusivos de capacitação: convênio educacional, programa de vouchers e assinatura de cursos Live; - Exames de certificação online e presencial, com opção de venda em lotes para empresas; - Webinars gratuitos: cursos para iniciantes em AWS, Microsoft e ITIL. ### Certificações Oficiais Microsoft, AWS, LPI e EXIN A Green Tecnologia orgulha-se de ser um centro de treinamento e certificação oficial de grandes nomes como Microsoft, AWS, LPI e EXIN, proporcionando aos alunos uma formação alinhada com as mais altas exigências da indústria de tecnologia. ### Compromisso Social e Educação de Qualidade A Green Tecnologia não apenas busca excelência acadêmica, mas também se coloca como agente de uma sociedade mais justa. Compreendendo que a qualidade de vida na sociedade é responsabilidade coletiva, a instituição busca contribuir positivamente por meio de sua oferta educacional e compromisso social. ## Potencialize sua Jornada com o Bootcamp Gratuito! As [inscrições para o Bootcamp MS-900 Gratuito](https://www.green.com.br/infogreen/Landing_page/Bootcamp-MS900/index.html?) devem ser realizadas no site da Green Tecnologia. 
## Compartilhe o Conhecimento! Gostou do conteúdo sobre o bootcamp preparatório para o exame MS-900? Então compartilhe com a galera! O post [Bootcamp De MS-900 Gratuito: Dicas E Truques Para O Exame](https://guiadeti.com.br/bootcamp-ms-900-green-tecnologia-gratuito/) apareceu primeiro em [Guia de TI](https://guiadeti.com.br).
guiadeti
1,788,049
JS core concepts
Closures example: The closure is a javascript nested function where we can access outerfunction...
0
2024-03-12T13:54:02
https://dev.to/uthirabalan/js-core-concepts-2n3d
**_Closures example:_** A closure is a nested JavaScript function that keeps access to its outer function's variables even after the outer function has returned. Here, `outerfunction` returns `innerFunction`, and every call to the returned function updates the same captured `count` variable.

```javascript
let outerfunction = () => {
  var count = 0;
  let innerFunction = () => {
    count++;
    console.log(count);
  };
  return innerFunction;
};

let increment = outerfunction();
increment(); // 1
increment(); // 2
increment(); // 3
increment(); // 4
```
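A sketch (not from the original post, using a hypothetical `makeCounter` factory that mirrors the pattern above) showing that each call to the factory produces an independent closure with its own captured state:

```javascript
// Hypothetical counter factory: each call captures a fresh `count`
const makeCounter = () => {
  let count = 0;
  return () => ++count;
};

const a = makeCounter();
const b = makeCounter();

a();
a();
console.log(a()); // 3 — a's closure has been incremented three times
console.log(b()); // 1 — b has its own separate count
```

Because `count` lives in the closure rather than in any shared scope, the two counters cannot interfere with each other.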
uthirabalan
1,788,078
How to see someone's first post on Instagram
Most times we are just curious to see the earliest posts of a person on Instagram. Seeing the old...
0
2024-03-12T14:36:20
https://dev.to/fasakinhenry/how-to-see-someones-first-post-on-instagram-21m8
Most times we are just curious to see the earliest posts of a person on Instagram. Scrolling back to those old posts becomes a nightmare, especially if the person is an active user. In this article, you get to know different ways of carrying out that interesting sneaky act😂😎

1. The first approach involves placing a physical object on your keyboard's space bar and returning after a while. Damn, I shouldn't be telling you that🤦‍♀️😝.
2. You can long-press the scrolling wheel of your mouse (if you are using one) and scroll towards the end of the page, which automatically continues scrolling until it reaches the end of the user's posts.
3. The third approach, which is the best approach for a programmer like me, is running code in the browser's console. I mean, this saves your space bar and mouse scroll wheel from extensive work😅🤦‍♀️. To achieve this, right-click the page on your laptop or desktop, inspect it, and open the console, or use the appropriate shortcut (I think it is CTRL+Shift+I🤔🤔). Then paste this code there:

```javascript
window.setInterval(function(){window.scrollTo(0,document.body.scrollHeight);},1000);
```

The downside of this method is that it can't be done on a mobile phone, unless you are smart enough to find your way around that. If you can, please share with others in the comment section. Also, ensure that the rate limiter does not mess with your progress🤦‍♀️ (You can't even stop that if it occurs. Can you?).

I remain Henry Fasakin, a software engineer who loves clean UI and designs. You can reach out to me on [Twitter](https://twitter.com/henqsoft) or [LinkedIn](https://linked.com/in/fasakin-henry). I am most active on my GitHub. Do well to check that out. [GitHub](https://github.com/fasakinhenry)
fasakinhenry
1,788,116
Disorder (scramble algorithm)
#include <iostream> int *arreglo (int n){ int *a = new int[n]; for (int i=0; i<n; i++)...
0
2024-03-12T15:11:51
https://dev.to/imnotleo/disort-scramble-algorithm-3h0f
```cpp
#include <iostream>
#include <cstdio>   // printf
#include <cstdlib>  // rand, srand
#include <ctime>    // time

// Allocate an array of n random values in [1, n*10]
int *arreglo(int n){
    int *a = new int[n];
    for (int i = 0; i < n; i++)
        a[i] = rand() % (n * 10) + 1;
    return a;
}

void print(int *a, int n){
    for (int i = 0; i < n; i++)
        std::cout << a[i] << " ";
    printf("\n");
}

void swap(int &a, int &b){
    int c = a;
    a = b;
    b = c;
}

// Bubble sort (not called from main; kept for reference)
void bsort(int *a, int n){
    for (int k = n - 1; k > 0; k--)
        for (int i = 0; i < k; i++)
            if (a[i] > a[i+1])
                swap(a[i], a[i+1]);
}

// Fisher–Yates shuffle: produces a uniformly random permutation in place
void shuffle(int *a, int n){
    for (int k = n - 1; k > 0; k--)
        swap(a[rand() % (k + 1)], a[k]);
}

int main(){
    srand((unsigned) time(nullptr));
    int n = 10;
    int *a = arreglo(n);
    print(a, n);
    printf("\n");
    for (int i = 0; i < 24; i++){
        shuffle(a, n);
        print(a, n);
    }
    delete [] a;
    return 0;
}
```
imnotleo
1,788,249
Introducing Enhance Image
Although we in web development (and certainly those of us at Begin) do a lot of talking about the...
0
2024-03-12T16:26:59
https://dev.to/begin/introducing-enhance-image-hh8
webdev, webcomponents, html, frontend
Although we in web development (and certainly those of us at Begin) do a lot of talking about the cost of JavaScript, there’s another type of content on the web that has a huge bearing on performance: images. In fact, images are by far [the largest contributor to page weight](https://almanac.httparchive.org/en/2022/page-weight#fig-8), while also being in close competition with JavaScript for [the highest number of resource requests per page](https://almanac.httparchive.org/en/2022/page-weight#fig-3). Images have a significant impact on multiple aspects of performance, including [Cumulative Layout Shift](https://web.dev/articles/cls) and [Largest Contentful Paint](https://web.dev/articles/lcp). Furthermore, with [median image payloads hovering around 1MB](https://almanac.httparchive.org/en/2022/page-weight#fig-13), images can also cost end users a lot of data and money to access (and considering that [just over 1 in 5 people in the UK still use Pay As You Go](https://www.ofcom.org.uk/phones-telecoms-and-internet/advice-for-consumers/advice/pay-as-you-go-mobile-use-it-or-lose-it) — and [as of 2020, 2.45 million people in Canada did, too](https://www.statista.com/statistics/460086/total-number-of-prepaid-mobile-subscribers-canada/) — lowering data access requirements is decidedly not just a problem for emerging cellular markets). This makes images an obvious and important target for performance optimization. In 2024, we’re fortunate to have more options available to us than ever to reduce the negative impacts of using images on the web. Both the Image and Picture elements now have APIs in place for [implementing responsive images](https://developer.mozilla.org/en-US/docs/Learn/HTML/Multimedia_and_embedding/Responsive_images), which allow web developers to send the most appropriately sized or formatted image to a particular viewport or device. Using these techniques can greatly reduce the overhead of sending oversized images to a multitude of displays (and users). 
Unfortunately, the APIs for implementing responsive images can be difficult to wrap your head around. While these techniques are effective, they’re not exactly a breeze to internalize — as evidenced by [this (very good) *ten part* series of articles on responsive images](https://cloudfour.com/thinks/responsive-images-101-definitions/). Meanwhile, preparing multiple variants (based on size, quality, and image format) of every single image on a website can be time consuming at best, and certainly designers and engineers could imagine spending their time on more enjoyable tasks. With all this in mind, one of the first side projects I spun up for myself after joining Begin in 2022 was to investigate how we could make responsive images easier for users to author in their [Enhance](https://enhance.dev) projects. While by no means revolutionary, the core concept was to make a configurable, standards based, single file component available to users, which would simplify the implementation of responsive images in addition to eliminating the need to generate arbitrary image variants by hand. I (and my colleagues at Begin) went through multiple iterations and proposals for this project, weighing everything from the pros, cons, and most compelling use cases of the Image and Picture elements, to different component signatures, options, and patterns for configuration. Today, we’re excited to announce that the results of this project are now available in the form of our beta release of [Enhance Image](https://enhance.dev/docs/enhance-ui/image), the first in a suite of standards based UI components we’re calling Enhance UI. ## Simplified authoring Generally speaking, responsive images solve one of two problems: resolution switching and art direction. 
‘Resolution switching’ refers to providing *different resolutions* of a given image to different viewports (for example: sending small images to mobile devices, and large images to external monitors), while ‘art direction’ refers to providing *different aesthetic variants* of an image to different viewports (for example: rendering a landscape crop of an image on a shorter, wider screen, and a portrait crop of an image to a taller, narrower screen). Enhance Image is built for the former use case: providing different resolutions of the same image to different viewports. This use case is supported primarily via two attributes of the native Image element — those being the `srcset` and `sizes` attributes. In short, the `srcset` attribute allows an author to declare URLs to multiple different variants of an image, each of which must have its width declared with a width descriptor (effectively the image’s width in pixels followed by the letter `w`). Meanwhile, the `sizes` attribute allows an author to specify various media conditions for the viewport and the width of the content area that the image should occupy under that condition. With this information from the `sizes` attribute, the browser will examine the list of images in the `srcset` attribute and select the image it determines to be the best available option. Let’s review an example of how this works in practice before demonstrating what Enhance Image brings to the table. Let’s say we’ve got a nice big hero image we want to render at the top of a web page. We have an asset for this image from our designer all ready to go in our Enhance project. But hero images can be pretty large, and we’d like to render smaller versions of this image on smaller viewports, so as not to send those users more data than needed. 
Before implementing this image responsively, our code for our hero section might look something like this:

```html
<section>
  <img
    src="/_public/hero.jpg"
    alt="Axol the axolotl galavanting through the world of Enhance" />
</section>
```

To implement this image responsively, we could have our designer carve out multiple variations of this image in different sizes, and then use the `srcset` and `sizes` attributes to provide the browser with the ability to serve the best option to a given display:

```html
<section>
  <img
    src="/_public/hero.jpg"
    srcset="/_public/hero-large.jpg 2400w, /_public/hero-medium.jpg 1200w, /_public/hero-small.jpg 800w"
    sizes="100vw"
    alt="Axol the axolotl galavanting through the world of Enhance" />
</section>
```

A few things are going on in the example above: we’re using the `srcset` attribute to declare multiple variants of our image, along with a width descriptor for each; we’re leaving the `src` attribute in place as a fallback for browsers that may not have the `srcset` attribute available; and we’re setting the `sizes` attribute to `100vw` to tell the browser that it should select an image from the options in `srcset` that will be best suited for filling up 100% of the current viewport’s width. Note that the browser will make this determination by factoring in not just the pixel width of the viewport and content areas, but also the display’s pixel density and zoom level, and possibly other factors such as network conditions (these factors can vary between browser vendors).

This implementation works great, but the code is admittedly not necessarily the easiest to follow. If we want to provide a larger number of variants in the `srcset` attribute, things can get gnarly pretty fast (both for authors writing the code and designers creating multiple variants). This is also just a single image on our site — as previously noted, most sites have many more than a single image per page, thus multiplying the amount of work required.
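As a further illustration of the media conditions that the `sizes` attribute supports (a hypothetical variant of the hero example, not taken from the Enhance docs), an image that occupies half the viewport on wide screens and the full viewport elsewhere could be declared like this:

```html
<img
  src="/_public/hero.jpg"
  srcset="/_public/hero-large.jpg 2400w, /_public/hero-medium.jpg 1200w, /_public/hero-small.jpg 800w"
  sizes="(min-width: 800px) 50vw, 100vw"
  alt="Axol the axolotl galavanting through the world of Enhance" />
```

Here the browser assumes the image will fill 50% of any viewport at least 800px wide, and 100% of narrower viewports, and picks from `srcset` accordingly.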
This is where Enhance Image comes in. First, it provides a familiar but simplified component signature — for example, the equivalent of the above code using Enhance Image would look like this: ```html <section> <enhance-image src="/_public/hero.jpg" alt="Axol the axolotl galavanting through the world of Enhance" ></enhance-image> </section> ``` Where have our `srcset` and `sizes` attributes gone? Well, I’ve been a bit sneaky and previously specified a list of images formatted to Enhance Image’s defaults — that is, Enhance Image will make a 2400px, 1200px, and 800px wide variant available on demand for each source image it’s provided. We also default to `100vw` for the `sizes` attribute, and it can thus be omitted. (The same is true for the native `sizes` attribute, but it was shown previously for clarity). Thus, using just the default configuration, every image in your Enhance project can now be delivered responsively by swapping out the `img` tag for the `enhance-image` custom element tag. In the above example, we don’t just deliver the generated variants of your source image in different sizes. By default, we also provide further optimizations by delivering these variants in [`webp` format](https://developers.google.com/speed/webp), at an 80% quality setting. And — like all other Enhance elements — Enhance Image renders its content as HTML on the server, with no client side JavaScript required. Of course, the defaults that ship with Enhance Image won’t work for everyone — in fact, we strongly encourage you to experiment with these values, which is why we’ve made configuring Enhance Image a breeze. ## Configuration options Enhance Image’s single file component provides a simple but powerful component primitive for authors, but this is only half the story. Enhance Image itself is powered by a versatile, fully configurable, on demand image transformation service that powers the creation of your source image variants. 
Here, all credit goes to the brains behind this service, fellow Beginner [Ryan Bethel](https://indieweb.social/@ryanbethel). Our image transformation service allows authors to request generated variants of a given image using three different configuration options, which need to be set in your project’s [Preflight file](https://enhance.dev/docs/conventions/preflight): <dl> <dt><code>widths</code> (optional)</dt> <dd> The `widths` option takes an array of pixel widths, specified as unitless integers. A variant of your source image will be generated for every width specified, with a height corresponding to the source image's intrinsic aspect ratio. The default widths are 2400, 1200, and 800. </dd> <dt><code>format</code> (optional)</dt> <dd> The format option takes one of the following format strings: `webp`, `avif`, `jxl`, `jpeg`, `png`, or `gif`. Generated images will be returned in the given format. `webp` is recommended for compatibility and performance, and is the default option. [Read more about image formats on the web here.](https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Image_types) </dd> <dt><code>quality</code> (optional)</dt> <dd> The quality setting takes a number between 0–100. Generated images will be returned at the quality level specified. It's best to choose a quality level that results in the smallest possible file size without significant degradation in image quality — this can vary based on the content of the images being processed, and you may need to experiment a bit to find the best setting based on your content. The quality option defaults to 80. </dd> </dl> For each image passed to the `enhance-image` single file component, our image transformation service will return one generated variant per `width` specified in the configuration, formatted and optimized based on the `format` and `quality` settings. 
This saves authors from having to manually create and optimize images ahead of deployment, and allows different variants to be iteratively added or removed as your needs change over time. ## Pushing performance further One important thing to note is that, since Enhance applications don’t have a build step but are rather deployed as cloud functions, the generated variants for each image are created at the time those images are requested by a browser. Each image variant will be cached after its creation; however, you may experience a slight delay when the images are first requested, especially for large or complex images that may take longer to process. This is why Enhance Image also ships with a cache warming script that can be run either locally or via a CI service like GitHub Actions. This script will recursively scan a directory in your project for image files, and then — based on your configuration options — will make requests for each of your image variants, effectively ‘warming’ their caches before an end user even requests them. The cache warming script can be run like this: ```shell npx @enhance/image warm --directory /public/images --domain https://example.org ``` The script takes two arguments: <dl> <dt><code>--directory</code></dt> <dd> The path to the directory in your project containing the images you’ll be using with Enhance Image, for which you’d like variants (and caches) generated, e.g. `/public/images`. This path **must start with `/public`**. The directory will be scanned recursively, so only the top most directory needs to be provided. </dd> <dt><code>--domain</code></dt> <dd> The URL of your application’s deployment, e.g. `https://example.org` or `https://image-4ab.begin.app`. </dd> </dl> For further details, see [the Enhance Image docs](https://enhance.dev/docs/enhance-ui/image). ## Beta available today! We’ve spent a surprisingly long time working to bring Enhance Image to a public beta, and we’re pretty excited about the results. 
It’s available for you to try today — [check out the docs to get started](https://enhance.dev/docs/enhance-ui/image)! If you have any questions, problems, or other feedback, feel free to [join us on Discord](https://enhance.dev/discord) or [open an issue](https://github.com/enhance-dev/enhance-image/issues)! Possible breaking changes, however, are in our sights — primarily as concerns the method of providing configuration options, as well as the possibility of using Enhance Image with private S3 buckets (eliminating the need to track image assets in your project’s git repository). While we always aim to make changes additively rather than destructively, we’re not sure how these changes will land once we get to work on them — and it’s for this reason that we’re labelling Enhance Image as a beta release for now. That said, we’ve been using Enhance Image (first in its alpha form, and more recently in its current beta form) on our own production domains (including this blog) for several weeks now, and we’re confident both in its abilities and its results. We hope you enjoy using Enhance Image to more easily author responsive images!
colepeters
1,788,252
List Comprehension em Python
List Comprehension is a powerful and elegant feature of Python that lets you create lists...
0
2024-03-12T16:27:41
https://dev.to/viana/list-comprehension-em-python-313m
List Comprehension is a powerful and elegant feature of Python that allows you to create lists concisely and efficiently. This syntactic construct simplifies the process of building lists from iterables such as lists, tuples, sets, or even strings. In this article, we will explore the concept of List Comprehension in Python in detail, understand its syntax, and discover how to make the most of it in your code.

**Understanding List Comprehension**

In its simplest form, a List Comprehension in Python follows this structure:

`[expression for item in iterable]`

This expression creates a new list by applying an expression to each item in an iterable, such as a list, and adding the result to the new list. The expression can be any operation or calculation you want to apply to each item of the iterable.

**Syntax**

The basic syntax of a List Comprehension includes the following elements:

- The initial part, inside the brackets, is the expression that defines each element of the new list.
- Next comes the `for` clause, followed by a variable (or variables) representing the items of the iterable.
- Finally, we specify the iterable from which we are taking the elements.

Let's examine a simple example to illustrate how this works:

```
# Creating a list of even numbers from 0 to 9
pares = [num for num in range(10) if num % 2 == 0]
print(pares) # Output: [0, 2, 4, 6, 8]
```

In this example, we are creating a list of even numbers from 0 to 9. The expression `num for num in range(10)` iterates over the numbers from 0 to 9, and the condition `if num % 2 == 0` checks whether each number is even. If it is, it is added to the list.

**Advanced Uses**

List Comprehension in Python also supports multiple iterations and conditions, allowing greater flexibility. For example:

```
# Creating a list of tuples with all combinations of pairs (x, y) where x is a number from 1 to 3 and y is a number from 1 to 3
combinacoes = [(x, y) for x in range(1, 4) for y in range(1, 4)]
print(combinacoes) # Output: [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)]
```

In this example, we are creating a list of tuples representing all possible combinations of pairs (x, y), where x and y are numbers from 1 to 3.

**List Comprehension vs. Traditional Loop**

Although List Comprehensions can make code more concise and readable in many cases, it is important to understand that they are not always the best choice. Sometimes a traditional loop can be clearer, especially when the logic is complex or involves multiple steps.

**In Closing**

With List Comprehension we can create lists concisely and efficiently in Python. By mastering this syntactic construct, you can write cleaner, more expressive code, saving time and making your code more readable. However, remember that code clarity is essential, and use List Comprehensions only when they make the code more understandable and efficient.
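To make the comparison with a traditional loop concrete, here is the same even-numbers example written both ways (a sketch not present in the original post):

```python
# Comprehension version
pares = [num for num in range(10) if num % 2 == 0]

# Equivalent traditional loop
pares_loop = []
for num in range(10):
    if num % 2 == 0:
        pares_loop.append(num)

print(pares == pares_loop)  # True: both produce [0, 2, 4, 6, 8]
```

For a simple filter like this, the comprehension says the same thing in one line; as the body grows, the loop version often becomes the clearer choice.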
viana
1,788,718
Vardhman amrante showrooms for sale in Ludhiana | Vardhman Amrante
Invest wisely with showrooms for sale in Ludhiana at Vardhman Amrante Boulevard. Enjoy an...
0
2024-03-13T05:17:57
https://dev.to/vardhmanamrante/vardhman-amrante-showrooms-for-sale-in-ludhiana-vardhman-amrante-361b
Invest wisely with [showrooms for sale in Ludhiana](https://www.vardhmanamrante.com/amrante-boulevard) at Vardhman Amrante Boulevard. Enjoy an unparalleled location boasting an elite pin code, with approximately 60,000 daily commuters on canal road. The area is home to around 2,00,000 inhabitants on canal road and approximately 6,00,000 residents in nearby areas. Choose Vardhman Amrante Boulevard for a strategic investment opportunity in a prime location with high footfall and a thriving community.

For more project details, visit https://www.vardhmanamrante.com/amrante-boulevard
vardhmanamrante